The State of Infrastructure-from-Code 2023
Infrastructure-from-Code (IfC) is a new way of thinking about cloud infrastructure, and represents the next step in a line of innovations that make spinning up infrastructure easier and more seamless for developers. In this article, we’ll talk about where the industry stands today, and where we think it’s going next.
The evolution of Infrastructure-as-Code
The infrastructure landscape has shifted in recent years, with Infrastructure-as-Code (IaC) becoming the go-to solution for defining infrastructure. IaC is the latest major, mainstream pattern for making infrastructure easy to provision and tear down, a trend that arguably started with the commoditization of virtualization in the early-to-mid 2000s.
The first wave of IaC tools introduced new DSLs aimed at creating, configuring, and managing cloud resources in a repeatable way. Chef, Ansible, Puppet, and Terraform were some of the most popular tools of this wave. After that, the second wave of IaC tools replaced the DSL with existing programming languages like TypeScript, Python, and Go to express the same ideas. Pulumi and CDK are some of the popular examples of this wave.
While these tools are constantly improving and adding higher-level features, they all fundamentally require a human to declare the specific infrastructure components they need, usually in a fair amount of detail. This means that a developer needs to understand not just what resources their application needs, but the permission models, dependent resources, and communication links between those resources.
In the IaC world, a developer or operator who wants to expose an API to the internet needs to set up an API Gateway, connect it to the web server or framework, translate traffic, configure a private network and security rules, and make sure role permissions (like IAM) are all properly configured — and that’s before we even consider other resources, like a database, secrets manager or messaging system. Doing this in code makes it easier to repeat and audit, but it also means that your developers or devops essentially need to write two separate applications (the service-level application, and the infrastructure) that work in concert.
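To make that cost concrete, here is a hypothetical Terraform sketch covering just the "expose an API" slice described above. It assumes an aws_lambda_function.handler defined elsewhere, and it still omits the route, stage, networking, and IAM role definitions a real deployment needs:

```hcl
# An internet-facing HTTP API Gateway.
resource "aws_apigatewayv2_api" "api" {
  name          = "my-service"
  protocol_type = "HTTP"
}

# Wire the gateway to the Lambda function running the web server.
resource "aws_apigatewayv2_integration" "lambda" {
  api_id                 = aws_apigatewayv2_api.api.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.handler.invoke_arn
  payload_format_version = "2.0"
}

# Explicitly permit the gateway to invoke the function.
resource "aws_lambda_permission" "allow_gateway" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.handler.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.api.execution_arn}/*"
}
```

Every one of these blocks is a detail the developer must get right, separately from the application code itself.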
It’s time for the next pattern, and we think that pattern is to derive your infrastructure from the application code, rather than defining it as code.
What is Infrastructure-from-Code?
Infrastructure-from-Code (IfC) is a process that analyzes your application code to infer the cloud resources you need, and then creates and maintains them without you having to manually define them.
Rather than relying on manual configuration, an IfC tool detects the web server in your code and exposes it to the internet simply by virtue of its presence. The setup of components becomes an implementation detail handled by the tool. As with any new level of abstraction, letting go of the reins can feel scary at first. But we think it’ll unlock a generational leap in productivity, as companies are able to focus on their core service rather than undifferentiated work.
There are a few different approaches to IfC, with different providers taking different bets on how it’ll look. All share the same high-level vision of letting the service code do most of the talking, and having the IfC tool turn its requirements into infrastructure. They run the gamut in how tightly they couple to the service-level code.
New programming languages
In this approach, startups such as Wing and DarkLang are introducing new programming languages that aim to be cloud-centric. These language-based approaches can introduce new constructs that cannot be simply and similarly modeled in existing languages like Python, Go, or Java.
DarkLang provides different building blocks like cloud data stores and the means to expose APIs to the Internet:
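Dark code lives in DarkLang’s structured editor rather than in files, so any textual rendering is approximate. A heavily simplified sketch of an HTTP handler writing to a built-in datastore (the handler shape and function names here are illustrative, not exact Dark stdlib signatures):

```
// HTTP handler: POST /users  (created in Dark's editor, not in a file)
let user = request.body
DB::set user user.id Users   // "Users" is a Dark-managed datastore
user                         // the handler's result becomes the HTTP response
```

There is no infrastructure definition anywhere: the datastore and the public endpoint exist because the handler does.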
Wing, on the other hand, is more Infrastructure-AND-Code than FROM code, combining cloud constructs and classic programming constructs in the same fabric. For example, in Wing, developers can define a compute element with the cloud.Function construct, and define a Bucket for storing blob data, in a few lines of code:
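A minimal sketch of what that looks like, based on Wing’s early syntax (the file contents and names are illustrative):

```wing
bring cloud;

// A bucket for blob storage.
let bucket = new cloud.Bucket();

// A compute element; the inflight closure runs in the cloud.
new cloud.Function(inflight (event: str): str => {
  bucket.put("greeting.txt", "hello, wing");
  return "ok";
});
```

The bucket and function are first-class language constructs, so the compiler can synthesize both the application and its infrastructure from the same program.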
The language-based approach has the potential to deliver a superior user experience, as it allows for the introduction of new concepts that would be difficult or impossible to achieve in existing programming languages. Features such as interactivity and distributed computing could be more readily implemented, making the process simpler and more intuitive for software developers.
There are three main tradeoffs with this approach:
- Software developers have to commit to learning an entirely new language, which means saying goodbye to years of practice and expertise in the programming languages they already know.
- Starting a new language means starting a new ecosystem from scratch or having a stellar interoperability story.
- Finding and hiring developers with expertise in a newer language is difficult both technically and organizationally (as with, say, Haskell).
SDKs
In this approach, tools like Ampt and Nitric introduce their own SDK that developers use in their code. At deployment time, these tools analyze how the service code uses the SDK and generate the infrastructure from that. A purpose-built SDK makes inferring usage from code more predictable, and well-tailored to the scenarios it was designed for, but it also means the SDK is always one step behind in leveraging new underlying cloud features.
For example, to expose an endpoint to the Internet with Nitric, we import the api package from the @nitric/sdk package, and define routes based on its specific syntax:
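A sketch of that shape, based on Nitric’s documented TypeScript API at the time (the route and handler here are illustrative):

```typescript
import { api } from '@nitric/sdk';

// Declaring the API is what causes Nitric to provision
// an internet-facing gateway for it at deploy time.
const main = api('main');

main.get('/hello/:name', async (ctx) => {
  const { name } = ctx.req.params;
  ctx.res.body = `Hello ${name}`;
  return ctx;
});
```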
Another example is a collection package that serves as a document store for data persistence:
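Roughly, again following Nitric’s TypeScript API (collection and document names are illustrative):

```typescript
import { collection } from '@nitric/sdk';

// Requesting 'reading' and 'writing' access is also how Nitric
// derives the permissions for the backing document store.
const profiles = collection('profiles').for('reading', 'writing');

await profiles.doc('alice').set({ email: 'alice@example.com' });
const alice = await profiles.doc('alice').get();
```

Note that the same declaration drives both the code’s behavior and the provisioning of the store and its access policy.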
Platform-specific SDKs like the ones Ampt and Nitric provide require developers to learn and adopt new libraries, but they have the potential of unlocking unique capabilities only those platforms can offer. This comes at the cost of sacrificing the breadth and depth of features that popular community-driven libraries offer. While those libraries can still be used, they won’t benefit from the unique capabilities only available in the provider SDK. Ultimately, developers have to trade off the feature depth and breadth of popular community-driven libraries against the power provided by the platform-specific SDKs.
Annotations + Frameworks
With this approach, tools like Encore and Shuttle let developers annotate parts of their code, and the tools then incorporate those into the tool’s framework. Depending on the tool and your deployment target, this framework may be hosted on the IfC vendor’s cloud infrastructure, or it may integrate more directly with a third-party cloud provider like AWS, GCP, or Azure. These often come with their own deployment tools.
For example, in Encore, instead of importing a service discovery SDK or a service wrapper, developers write plain functions with pre-defined function signatures, and then annotate those functions to tell Encore how to translate those to its hosted counterpart. The input and output types used become the API request/response schema, and the annotation specifies the URL path. Encore then automatically provisions the relevant infrastructure in local, preview, and cloud environments in AWS/GCP/Azure.
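That flow might look like this, following Encore’s documented Go syntax (the endpoint and types are illustrative):

```go
package hello

import "context"

type Response struct {
	Message string
}

// Hello is a plain Go function; the annotation below is what
// tells Encore to expose it as a public API endpoint, with
// Response becoming the API's response schema.
//
//encore:api public path=/hello/:name
func Hello(ctx context.Context, name string) (*Response, error) {
	return &Response{Message: "Hello, " + name}, nil
}
```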
For other infrastructure resources, Encore takes a more SDK-like approach. Declaring infrastructure resources like relational databases, Pub/Sub, caches, cron jobs, secrets and configuration are done via Encore-provided SDKs:
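For instance, a Pub/Sub topic is declared through Encore’s SDK (the event type and topic name are illustrative):

```go
package events

import "encore.dev/pubsub"

type SignupEvent struct {
	UserID string
}

// Declaring the topic is what causes Encore to provision the
// underlying Pub/Sub infrastructure in each environment.
var Signups = pubsub.NewTopic[*SignupEvent]("signups", pubsub.TopicConfig{
	DeliveryGuarantee: pubsub.AtLeastOnce,
})
```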
Shuttle implements a dependency-injection pattern with annotations to add route context to a function and inject request-scoped objects such as headers, parameters, and body into the function. Shuttle’s approach wraps the use of popular open source libraries. For example, to use Rust’s Rocket framework, you define and annotate a method that creates a ShuttleRocket instance.
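A sketch of that, based on Shuttle’s 2023-era shuttle_service crate (handler names are illustrative):

```rust
#[macro_use]
extern crate rocket;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

// The annotated entrypoint returns a ShuttleRocket; Shuttle's
// runtime takes care of hosting and exposing the Rocket app.
#[shuttle_service::main]
async fn init() -> shuttle_service::ShuttleRocket {
    let rocket = rocket::build().mount("/", routes![index]);
    Ok(rocket)
}
```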
Shuttle again utilizes dependency-injection and an annotated #[shared::Postgres] in order to enable data persistence, which allows it to be coupled with another popular open source library called sqlx:
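Sketched out, under the same shuttle_service-era assumptions (the table and route are illustrative):

```rust
#[macro_use]
extern crate rocket;

use rocket::State;
use sqlx::PgPool;

#[get("/todos")]
async fn count(pool: &State<PgPool>) -> String {
    // Plain sqlx against the injected pool.
    let (n,): (i64,) = sqlx::query_as("SELECT count(*) FROM todos")
        .fetch_one(pool.inner())
        .await
        .unwrap();
    format!("{n} todos")
}

// The #[shared::Postgres] annotation asks Shuttle to provision a
// Postgres database and inject a ready connection pool at startup.
#[shuttle_service::main]
async fn init(
    #[shared::Postgres] pool: PgPool,
) -> shuttle_service::ShuttleRocket {
    let rocket = rocket::build().manage(pool).mount("/", routes![count]);
    Ok(rocket)
}
```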
Shuttle and Encore, however, rely on both an SDK approach and an annotation approach, doubling the concept count and the downsides developers have to consider. The SDK approach requires developers to learn and use custom libraries, sacrificing the depth and breadth of features offered by popular community-driven libraries, while the annotation approach can become its own language/DSL, requiring developers to understand the annotation processor and often to write code manually to work around any issues. Supporting multiple approaches also makes it more difficult to create a coherent and predictable language design, especially where the two interplay.
See also: AWS Chalice
Pure annotations
This approach is based only on in-code annotations, and leans on existing, open-source libraries for things like web frameworks and persistence. This establishes a stricter separation of concerns: the IfC tool isn’t responsible for hosting or even choosing a framework, but instead focuses on understanding the developer’s use of frameworks and tools. The leading tool in this space, Klotho, goes beyond Infrastructure-from-Code and into Architecture-as/from-Code, to emphasize that its job is to understand the application’s architecture, not to define it.
Klotho purposely introduces only a few key annotations, called capabilities, that make existing programming languages cloud native:
- expose: web APIs to the Internet
- persist: multi-modal data into different types of databases
- static_unit: package static assets and upload them to a CDN for distribution
For example, in a Python application, annotating the popular FastAPI module exposes all the routes defined on its routers to the Internet. Klotho is able to trace back and understand how developers defined routes using FastAPI’s native router.
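A sketch of what that might look like, with the annotation shape modeled on Klotho’s comment-based capabilities (the id and routes are illustrative):

```python
from fastapi import FastAPI

# @klotho::expose {
#   id = "my-api"
# }
app = FastAPI()

# Routes are defined with FastAPI's own router, not a Klotho API;
# Klotho traces them to generate the gateway configuration.
@app.get("/hello/{name}")
async def hello(name: str):
    return {"message": f"Hello {name}"}
```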
When you deploy, Klotho will not only infer the infrastructure it needs to deploy, but it’ll also rewrite the service code to wire that infrastructure’s connection strings to the variables, consts, or functions you’ve annotated.
Klotho also has two professional, advanced capabilities:
- exec_unit: delineate service boundaries and facilitate cross-execution-unit calls
- pubsub: enable event-driven message passing across execution units
For example, annotating a plain NodeJS EventEmitter with the pubsub capability allows two separate execution units (in this case, two separate modules or files) to communicate over plain events, backed in a cloud setting by an appropriate conduit like SQS, SNS, or Redis Streams.
Similarly, calls to functions in separate execution units are automatically transformed into over-the-wire calls backed by API calls, gRPC, Linkerd, or a similarly appropriate conduit.
The tradeoff, however, is the expanding need for the tooling to understand existing and growing sets of libraries, languages, design patterns, clouds, and the underlying services they provide; the dimensionality of the problem becomes large, potentially too large. And as with the annotation+SDK approach, but with no SDK to complement it, there is a temptation to grow the annotation system until it becomes a fully-fledged language/DSL, requiring developers to understand the annotation processor and often to write code manually to work around any issues.
Key takeaways
- There are four main approaches to Infrastructure-from-Code: SDK-based (Ampt, Nitric), annotation-based (Klotho), a combination of the two (Encore, Shuttle), and explicitly defined through a new programming language (Wing, DarkLang).
- Companies that adopt Infrastructure-from-Code are expected to have a significant advantage in terms of productivity and efficiency in shipping cloud-powered software.
- Infrastructure-from-Code is expected to become more popular in the coming years and establish itself as an alternative to existing cloud development approaches.
- Infrastructure-from-Code (IfC) is a process that enables the automated creation, configuration, and management of cloud resources by understanding the source code of a software application, without the need for an explicit description.
- The second wave of IaC tools, such as Pulumi and CDK, used existing programming languages like TypeScript, Python, and Go to express the same ideas as the first wave of tools.
- Infrastructure-as-Code (IaC) tools, such as Chef, Ansible, Puppet, and Terraform, were some of the first tools to enable the creation and management of cloud infrastructure using domain-specific languages (DSLs).