By the end of this article, you'll know how to deploy Rust applications in Linux containers on the AWS Fargate service. By deploying Rust in Linux containers, you can avoid dependency conflicts, develop decoupled microservices, and easily deploy to multiple cloud platforms.
AWS Fargate is a service, launched in 2017, that allows AWS users to deploy containers in the cloud. One of its primary benefits is that you don't have to manage the underlying server infrastructure yourself.
Container hosting services are ubiquitous in the industry, supported by all three major cloud vendors (Microsoft Azure, AWS, Google Cloud), and many smaller, specialized vendors. Some alternative cloud vendors that support container hosting include Railway, Northflank, and Render.
As an AWS Partner, StratusGrid has extensive experience with container-based workloads on AWS, including AWS Fargate, Lambda, EKS, and EC2. We can assist with planning migrations and net-new container workloads on AWS, and can architect cost optimization and security for those workloads.
You'll need a few different tools on your local development system to follow along with this article.
If you install the docker.io package on a headless Linux development server, you will need to install the Docker “buildx” plugin separately. Docker is moving from the “build” command to the “buildx” command, and buildx is not currently distributed as part of the core Docker package.
The following command is confirmed to install the Docker Buildx plugin on Ubuntu 23.10 Mantic Minotaur.
apt-get install docker-buildx --yes
First, let's build a simple Rust web server. We'll use the popular actix-web framework to create a small API that sends some random text back to the caller. For the random text, we'll use the uuid crate to generate a random universally unique identifier (UUID).
Start by creating a new project.
cargo new trevoruuidgen
cd trevoruuidgen
Open up the project directory in VSCode. Next, install the actix-web and uuid crates from the Integrated Terminal in VSCode.
cargo add uuid --features=v4
cargo add actix-web
In the main.rs file, we'll start off by defining an API route that returns a new UUID.
use actix_web as aw;

// Route handler: GET /new_uuid returns a freshly generated UUID as the response body
#[aw::get("/new_uuid")]
async fn new_uuid() -> impl aw::Responder {
    aw::HttpResponse::Ok().body(uuid::Uuid::new_v4().to_string())
}
Next, we will create our asynchronous main function. Inside this function's body, we'll launch an HTTP server and bind it to a particular IP address (0.0.0.0, meaning all interfaces) and port.
#[aw::main]
async fn main() -> std::io::Result<()> {
    println!("Starting web server...");

    // Register the route handler and bind the server to all interfaces on port 18163
    aw::HttpServer::new(|| {
        aw::App::new()
            .service(new_uuid)
    })
    .bind(("0.0.0.0", 18163))
    .unwrap()
    .run()
    .await
}
You can customize the port number, of course. Generally, choosing a high, random number is a good plan. This helps to avoid port conflicts with other applications.
Now, test out your application with this command:
cargo run
You should see the “Starting web server…” text printed to your terminal, and then the process will keep running in the foreground. You can open a web browser and navigate to the IP address of your development system with this URL: http://<ip_address>:18163/new_uuid. Inside the browser, you should see a new UUID returned in the HTTP body payload. See the next screenshot for an example.
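If you're working from a terminal instead of a browser, you can also hit the endpoint with curl; a quick sanity check from the development system itself might look like this:
curl http://localhost:18163/new_uuid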
Awesome! Now that we've built a Rust web server, we need to package it up as a container for deployment. Move on to the next section!
Now that our application has been built, let’s package it up as a container image.
Thankfully, the Rust project includes an official container image, with the Rust toolchain pre-installed, that we can easily build off of! We’ll start with that as our base image, and then inject our code into a new container image.
Start by creating a file in the root of your project called Dockerfile. This file contains the instructions used by the Docker Engine to package up your application into a custom container image. Inside that file, we will add the following instructions.
# Start from the official Rust image, which has the Rust toolchain pre-installed
FROM docker.io/rust:latest
# Copy the project source into /app inside the image
WORKDIR /app/
ADD [".", "/app/"]
# Compile and run the application when the container starts
ENTRYPOINT ["cargo", "run"]
This isn’t the preferred method of building a production-grade container image for Rust applications, but it will function for now. We’ll discuss building production-grade container images in a separate article.
It’s time to build the container image and run it, to make sure it works!
docker buildx build --tag trevoruuidgen .
Now, you can run a container from the image locally, and ensure the web server spins up.
docker run --rm --publish 18163:18163 trevoruuidgen
You’ll notice that your application compiles on startup, and then the web server kicks off. When you’re done, just hit CTRL+C to kill the container.
Now that we validated our container image works locally, we can publish it to the Docker Hub for deployment!
To deploy our container image to AWS Fargate, we need a publicly accessible location to store the container image. Docker Hub is a container image “registry” that hosts your application container images for deployment to various cloud platforms, including AWS Fargate.
You should already have registered an account with Docker Hub, so log in and choose the Create Repository option. Provide a name and description for your repository, similar to the screenshot below, and then click on Create.
Now that the repository is created, we need to authenticate to the Docker Hub on our developer workstation, using the Docker CLI tool. Open your Docker Hub Account Settings, select the Security tab, and create a new Access Token. Make sure the access token has the write capability, as we’ll need that in order to push our container image to the repository that we just created.
Copy the access token to your clipboard and run the Docker login command that is displayed. When you’re prompted for the Password input, use your access token.
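For reference, the command looks something like the following; substitute your own Docker Hub username, and paste the access token when prompted for the password:
docker login --username trevorstr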
Now all we have to do is “tag” our container image for the Docker Hub, and then run the push command to upload it! My username is trevorstr and my repository name is trevoruuidgen, but replace these values with your personal Docker Hub username and repository name.
docker tag trevoruuidgen docker.io/trevorstr/trevoruuidgen
docker push docker.io/trevorstr/trevoruuidgen
NOTE: Because we aren’t currently following best practices for production-grade container images, you’ll probably have a bloated container image layer. That’s happening because of all the build artifacts present in the image, but don’t worry about that for now. In my case, the container image layer was about 800-900 megabytes, but that’ll get much slimmer in a production quality build process.
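If you're curious how large the image is before pushing, you can check the size of the locally built image like so:
docker image ls trevoruuidgen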
Alright, if you’ve gotten this far, you’ve uploaded your container image into the Docker Hub! Congratulations! Now it’s time to actually deploy the container image into your AWS account, so put your cloud hat on and move to the next section.
There are a few things you'll need to do in your AWS account to prepare for container deployment in AWS Fargate. Let's summarize these steps, and then we'll break each of them down into the specific actions you'll need to follow!
To deploy AWS Fargate tasks (containers), you must have an Amazon Virtual Private Cloud (VPC). A VPC is essentially a software-defined network where you can connect virtual machines, containers, and related network resources.
If you already have a VPC environment in AWS, with an Internet Gateway and public IPs enabled, you can use that. However, if you are new to AWS and need a simple VPC to run an AWS Fargate container, you can deploy one from a template. We’ll run through those directions below.
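As an alternative sketch, if you prefer the AWS CLI and your account no longer has a default VPC, you can recreate one; the default VPC includes an Internet Gateway and public subnets with public IP assignment enabled, which is enough for this walkthrough:
aws ec2 create-default-vpc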
Now that you have a VPC, it’s time to create an ECS Cluster!
An ECS Cluster for AWS Fargate doesn't have any provisioned capacity. Rather, it's a logical construct that must be created before you can launch containers with AWS Fargate. You can also join EC2 compute instances to an ECS Cluster, but that's outside the scope of this article.
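If you'd rather skip the console, the equivalent AWS CLI command is a one-liner; the cluster name below is just an example:
aws ecs create-cluster --cluster-name rust-fargate-demo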
That was easy! You’ll only need to do this once, and then you can deploy as many AWS Fargate containers as your heart desires.
Our next step to deploy the Rust container is to create an ECS Task Definition. This Task Definition resource contains the parameters for how to provision compute capacity and which containers you actually want to run.
Newcomers to AWS, or containers, might be overwhelmed by the sheer number of configuration options available in Task Definitions. If this is you, my recommendation is to learn container concepts before trying to deploy containers on a cloud provider. Once you’re comfortable with container concepts on your local development workstation, or in a local virtual machine, then go ahead and come back to Amazon ECS.
Keep in mind that an ECS Task Definition can actually deploy multiple containers as a single unit. However, most of the time, you’ll probably just deploy one container in each ECS Task Definition. Keeping services decoupled helps you scale certain services independently from others in your solution.
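For reference, here is a minimal sketch of what a Fargate-compatible Task Definition for our image could look like, expressed as JSON input for the AWS CLI. The family name and CPU size are example values, the 1 GB memory setting follows the note further down, and your environment may additionally require an execution role or log configuration:
{
    "family": "trevoruuidgen",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "1024",
    "containerDefinitions": [
        {
            "name": "trevoruuidgen",
            "image": "docker.io/trevorstr/trevoruuidgen",
            "portMappings": [{ "containerPort": 18163, "protocol": "tcp" }],
            "essential": true
        }
    ]
}
Save this as taskdef.json and register it with:
aws ecs register-task-definition --cli-input-json file://taskdef.json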
Now that the ECS Task Definition has been created, the final step is to deploy a new AWS Fargate Task using it!
NOTE: If you select a memory quantity that is insufficient for compiling certain Rust libraries, you may receive compilation errors during AWS Fargate task startup. This will cause the container / task to crash, and your application will not run. In my testing, compilation succeeded with 1 GB of memory, but failed with 0.5 GB.
With the ECS Task Definition created, we can now deploy the container to AWS Fargate! The AWS Management Console provides a detailed, but straightforward, wizard to run a new standalone Fargate task.
Follow the steps below to run your Rust web server container on Fargate!
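If you prefer the AWS CLI over the console wizard, a standalone Fargate task can be launched with a command along these lines; the cluster name matches the example above, and the subnet and security group IDs are placeholders for values from your own VPC (make sure the security group allows inbound TCP on port 18163):
aws ecs run-task \
    --cluster rust-fargate-demo \
    --launch-type FARGATE \
    --task-definition trevoruuidgen \
    --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<security-group-id>],assignPublicIp=ENABLED}"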
Now that we've deployed an AWS Fargate task with our Rust application, we need to validate that we can access the application across the internet! To do this, all we need to do is find the publicly routable IPv4 address of the AWS Fargate task, and then attempt to access it with our web browser.
Follow these steps to validate access to the Rust web server.
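If you'd rather look the address up from the command line, one way (sketched here with placeholder values) is to find the task's elastic network interface ID and then query that interface for its public IP:
aws ecs describe-tasks --cluster rust-fargate-demo --tasks <task-arn> \
    --query 'tasks[0].attachments[0].details'
aws ec2 describe-network-interfaces --network-interface-ids <eni-id> \
    --query 'NetworkInterfaces[0].Association.PublicIp'
The first command prints the networkInterfaceId; plug that value into the second command to retrieve the public IPv4 address.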
You should be presented with a new UUID each time you refresh this page. Congratulations, you’ve deployed your Rust web server to the AWS Fargate service!
After following along with this article, you should now know how to build a Rust web server with actix-web, package it up as a container image, publish that image to Docker Hub, and deploy it as a task on the AWS Fargate service.
With this base set of knowledge, you can customize your Rust application further. For example, you can add additional routes to your Rust application that perform different functions, as sketched below. Feel free to create additional Rust microservices, each with its own separate container image, and deploy them as their own separate Fargate tasks.
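As a minimal sketch, an extra route might be a hypothetical health check endpoint; define another handler and register it alongside new_uuid:
// Hypothetical example route: a simple health check at GET /health
#[aw::get("/health")]
async fn health() -> impl aw::Responder {
    aw::HttpResponse::Ok().body("OK")
}
Then add .service(health) next to .service(new_uuid) inside the HttpServer::new closure.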
Another concept you’ll want to be familiar with is how to use Fargate services to ensure that a certain number of replicas of your service are always running. In this article, we only deployed a standalone Fargate task, but once that task exits, it will not automatically be restarted. This is where AWS Fargate “service” resources can ensure high availability and scalability of your application.
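As a rough sketch of what that looks like with the AWS CLI (the service name is an example, and the network values are placeholders as before), an ECS Service keeps a desired number of task replicas running and replaces them if they exit:
aws ecs create-service \
    --cluster rust-fargate-demo \
    --service-name trevoruuidgen-svc \
    --task-definition trevoruuidgen \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<security-group-id>],assignPublicIp=ENABLED}"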
There are many other concepts, both in Rust, and AWS, that you’ll want to learn. Be sure to follow the StratusGrid blog for more knowledge in this space. In any case, hopefully you’ve learned something new from this article, so now it’s time to go forth and cloud!
Looking to enhance your existing cloud infrastructure? StratusGrid is here to guide you every step of the way. Our expertise in deploying Rust applications in Linux containers on AWS Fargate is just the beginning. We understand the complexities of cloud environments and are dedicated to helping you optimize your cloud strategy for efficiency, scalability, and security.
Contact us today, and let's build a robust, scalable, and secure cloud infrastructure together!