An Intro to Building a Product Catalog with Amazon DynamoDB and Rust
Explore Amazon DynamoDB and Rust in building a product catalog for online retail businesses. Perfect for developers seeking cloud-based solutions.
If you’re building an online retail business, you probably need a place to store your product data. While relational databases are an excellent, traditional option for storing product and order data, another approach to consider is a document database.
Document databases generally do not have linked tables with foreign keys, as relational databases do. Rather, each document in a collection (the document-database equivalent of a table) contains the full description of a single item. This approach can result in duplicated data, but it also improves read performance, since related data can be fetched in a single lookup, and it can simplify application code.
Amazon DynamoDB is a fully managed document storage service that abstracts server management away entirely. There are no servers to log into, patch, or monitor. Another reason DynamoDB is a great database option is that it can scale along with your business. For small applications, you only pay for the capacity you’re actually using. As traffic increases, you can provision “Capacity Units” for reading and writing data, independently of each other.
Although DynamoDB is a proprietary AWS service, there are open-source alternatives for document storage, including MongoDB, ArangoDB, and Apache CouchDB (Apache Cassandra is sometimes grouped with these, although it is a wide-column store rather than a document database). You can host these alternatives on any cloud platform or on your own, self-managed hardware.
For the purposes of this article, we’ll stick with DynamoDB, since it’s easy to get started. We’ll start out by creating a DynamoDB table, and then explore how to use the AWS SDK for Rust to store new documents into a DynamoDB table.
Rust is an excellent software development language to use when you’re building cloud applications. It is well known for its high performance and efficiency with hardware resources. A small Rust program can consume just a couple megabytes of memory, and very few CPU cycles.
As a statically typed language, Rust also improves developer efficiency. Catching code errors early in the development process increases the likelihood that compilation and testing will succeed further down the pipeline. The Rust toolchain, including the compiler (rustc), package manager (Cargo), and linter (Clippy), does a great job of reporting errors and warning about potentially problematic code before it ever runs.
Another reason Rust is a great language for building cloud applications is its built-in borrow checker. Unlike low-level languages such as C and C++, Rust guarantees that references cannot outlive the data they point to. A common coding error this prevents is “use after free,” where memory has been freed but is still being referenced by the application.
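As a quick illustration (not from the original article) of what the borrow checker prevents, consider this sketch; the compiler rejects it with a “does not live long enough” error, because the reference would otherwise point to freed memory:
fn main() {
    let reference;
    {
        let data = String::from("product");
        reference = &data; // borrow `data`
    } // `data` is dropped (freed) here
    println!("{}", reference); // compile error: `data` does not live long enough
}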
Let’s set up a new Rust project and install the crate for the Amazon DynamoDB service APIs. Typically, you use the Cargo CLI tool to scaffold a new Rust application. Once we’ve created the new crate, we’ll open the project in Microsoft Visual Studio Code, a cross-platform development environment. You’ll also want to install the Rust Analyzer (rust-analyzer) extension for VSCode, which provides a rich set of tools for Rust developers.
mkdir $HOME/git; cd $HOME/git
cargo new aws-product-data
code aws-product-data
Now that you’ve got your Rust project open in VSCode, let’s add the necessary dependencies. You can open the integrated terminal from the VSCode Command Palette by hitting the F1 key and searching for View: Toggle Terminal.
The tokio asynchronous runtime is necessary to run code with the AWS SDK.
cargo add tokio --features=full
cargo add aws-config --features=behavior-version-latest
cargo add aws-sdk-dynamodb
In the src/main.rs file, update your main function to look like the following.
#[tokio::main]
async fn main() {
    println!("Running DynamoDB test program");
}
To ensure your program works correctly, run cargo run from your project directory.
Now it’s time to set up your AWS credentials.
Any time you’re building an application that utilizes AWS APIs, you’ll need to set up your AWS credentials. AWS provides a built-in managed policy (AmazonDynamoDBFullAccess) that allows broad access to the DynamoDB service. In a production environment, you’ll want to create a custom policy, with conditional statements, that limits access to a specific DynamoDB table.
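As a rough sketch of what a scoped-down policy might look like (the region, account ID, and exact action list below are placeholders you’d adapt to your own environment):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:CreateTable", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/trevor-products"
        }
    ]
}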
To set up your AWS credentials locally, create (or edit) the ~/.aws/credentials file so that it contains a default profile with your access key pair:
[default]
aws_access_key_id=xxxxxxxxx
aws_secret_access_key=xxxxxxxx
Inside your Rust main() function, use the following line to create an SDK configuration and load the credentials.
let sdk_config = aws_config::from_env().load().await;
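If you want to pin the SDK to a specific region, rather than relying on environment variables or your AWS profile, the config loader accepts one as well. A minimal sketch (us-west-2 here is just an example region):
let sdk_config = aws_config::from_env()
    .region(aws_config::Region::new("us-west-2"))
    .load().await;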
Now that we have our Rust project and AWS credentials set up, we can start calling APIs against the DynamoDB service.
In order to write code against Amazon DynamoDB, we need to have a table. Oftentimes, you’ll end up using an infrastructure-as-code (IaC) tool to provision DynamoDB tables; AWS CloudFormation and Terraform are common IaC tools used for this purpose. However, you can also create a table programmatically using the AWS SDK for Rust. Let’s start by taking a look at how to accomplish this.
Inside your main function, insert the following lines of code.
use aws_sdk_dynamodb as ddb;
use ddb::types::{KeySchemaElement, AttributeDefinition, AttributeValue};
let ddb_client = ddb::Client::new(&sdk_config);
let key_hash = KeySchemaElement::builder()
    .attribute_name("category")
    .key_type(ddb::types::KeyType::Hash)
    .build().unwrap();
let key_range = KeySchemaElement::builder()
    .attribute_name("name")
    .key_type(ddb::types::KeyType::Range)
    .build().unwrap();
First, after importing the crate types we need, we create a DynamoDB client using the SDK configuration that we loaded from default options earlier.
Next, we prepare to build out our DynamoDB table schema. As you may know, DynamoDB tables can have a hash (partition) key and range (sort) key. The Rust KeySchemaElement type allows you to define the name of the document attribute, and the type of key that it represents (hash or range). We will create two of these, one for the hash key and one for the range key. Since the KeySchemaElement type uses the builder pattern, we must call the build() function and then unwrap() the final result.
Add the following code to your main function, after the previous section, and then we’ll describe what it’s doing.
let attr_hash = AttributeDefinition::builder()
    .attribute_name("category")
    .attribute_type(ddb::types::ScalarAttributeType::S)
    .build().unwrap();
let attr_range = AttributeDefinition::builder()
    .attribute_name("name")
    .attribute_type(ddb::types::ScalarAttributeType::S)
    .build().unwrap();
The AttributeDefinition type determines which data type the hash key and range key will use. There are several different data types supported by DynamoDB for attribute keys, including String (S), Number (N), and Binary (B). In our example, the product category and name attributes will both be string values.
After defining the KeySchemaElement and AttributeDefinition types, you can finally build the table creation request, send it, and retrieve the result.
let create_result = ddb_client.create_table()
    .table_name("trevor-products")
    .key_schema(key_hash)
    .key_schema(key_range)
    .attribute_definitions(attr_hash)
    .attribute_definitions(attr_range)
    .billing_mode(ddb::types::BillingMode::PayPerRequest)
    .send().await;
The DynamoDB client object has a variety of methods, including the create_table() method, which returns a “request builder.” The request builder in turn has a variety of methods that allow you to configure the request, before you actually send it to the DynamoDB API.
As you can see, we are building the API call before we actually invoke it with the send() function. Once we call send(), a Rust future is returned, which must be awaited.
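Since create_result is a Result, it’s worth checking before moving on. Keep in mind that table creation is asynchronous on the AWS side: the table takes a few seconds to reach ACTIVE status before it accepts writes. A minimal sketch of the check:
match create_result {
    Ok(_) => println!("Table creation initiated"),
    Err(e) => println!("Failed to create table: {:#?}", e),
}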
Once your DynamoDB table has been created, you can start writing documents into it. Let’s break down the following code snippet, to understand what’s happening.
let put_result = ddb_client.put_item()
    .table_name("trevor-products")
    .item("category", AttributeValue::S(category))
    .item("name", AttributeValue::S(name))
    .item("price", AttributeValue::N(price))
    .send().await;
We use the DynamoDB client’s put_item() method to create a new PutItemFluentBuilder struct instance. Just like any API, we’ll need to provide a set of input parameters. For this specific API, we need the name of the DynamoDB table that we want to write to, along with the individual key-value pairs that we want the new document to have. The table_name() method specifies the table where we’ll write the new item, using a string value. The item() method accepts a key-value pair, with the field name and field value. Notice that there’s an enum in the aws_sdk_dynamodb crate called AttributeValue, which wraps the data that you want to write into the document.
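To make the wrapping explicit, here are a few AttributeValue variants; note that the numeric variant N takes its value as a String, since DynamoDB transmits numbers as strings:
let s = AttributeValue::S("electronics".to_string()); // String
let n = AttributeValue::N("24.99".to_string());       // Number (passed as a string)
let b = AttributeValue::Bool(true);                    // Boolean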
Now that we’ve called the put_item() API, we need to check the result to see if there’s an error or not. The put_result variable will now contain a Result that wraps the successful result or error.
if put_result.is_ok() {
    println!("Successfully added item");
} else {
    println!("Failed to add item");
    println!("{:#?}", put_result.err());
}
Some developers prefer using the Rust match construct to handle error checking, but an if statement works just fine.
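For comparison, here’s the same check written as a match; this sketch is equivalent to the if statement above:
match put_result {
    Ok(_) => println!("Successfully added item"),
    Err(e) => println!("Failed to add item: {:#?}", e),
}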
To make our application a bit more interactive, we can prompt for user input from the terminal. Let’s build a simple Rust function that requests user input, and returns a String value.
The get_value() function below accepts a single String argument, which determines the name of the field we’re prompting the user to enter. When the function executes, we write some prompt text and then flush() the stdout buffer. Flushing stdout is necessary to ensure the prompt is actually displayed before we read from stdin, because print! does not emit a newline and stdout is typically line-buffered.
Next, we allocate a buffer by creating a new, empty String. This will hold the user input when it’s entered. The call to the read_line() function will populate the buffer with the user-inputted text.
Finally, we check whether the Result from the read operation is Ok; if it isn’t, we just return an empty String. If the Result is Ok, we trim off the line-ending characters and return the user’s input to the caller.
use std::io::{stdout, Write};

fn get_value(value: String) -> String {
    print!("Enter {}: ", value);
    _ = stdout().flush();
    let mut new_value = String::new();
    let result = std::io::stdin().read_line(&mut new_value);
    if result.is_ok() {
        new_value = new_value.trim_end().to_string();
        println!("{}", new_value);
        return new_value;
    }
    "".to_string()
}
That’s our simple function that can be used to prompt the user for various data fields! Now we can call this new function from our main() function, and populate several variables for each new data field in the new DynamoDB document.
let category = get_value("category".to_string());
let name = get_value("name".to_string());
let price = get_value("price".to_string());
Make sure you put the above lines before the call to put_item(), so that the category, name, and price variables are populated before the item is written. Your put_item() call should look like the snippet below.
let put_result = ddb_client.put_item()
    .table_name("trevor-products")
    .item("category", AttributeValue::S(category))
    .item("name", AttributeValue::S(name))
    .item("price", AttributeValue::N(price))
    .send().await;
Go ahead and run your program with cargo run, and make sure the prompting works as expected! Once you’ve entered some data into the prompts, you can head over to the AWS Management Console for DynamoDB and validate that your new item has been written successfully.
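If you prefer the terminal, you can also verify the write with the AWS CLI (assuming your credentials and default region are configured):
aws dynamodb scan --table-name trevor-products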
Now that you’ve learned how to write a Rust application that interacts with DynamoDB, there’s still a lot more you can do! For example, you could read items back with the get_item() or query() APIs, modify existing items with update_item(), or remove them with delete_item().
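As a starting point, here’s a rough sketch of reading a single item back. Retrieving one item requires the full primary key (both hash and range), and the key values below are just hypothetical examples:
let get_result = ddb_client.get_item()
    .table_name("trevor-products")
    .key("category", AttributeValue::S("electronics".to_string()))
    .key("name", AttributeValue::S("Wireless Mouse".to_string()))
    .send().await;
if let Ok(output) = get_result {
    println!("{:#?}", output.item);
}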
Ready to take your cloud computing to the next level? StratusGrid is here to guide you every step of the way. With our expertise in Amazon DynamoDB and the power of Rust, you're setting the foundation for scalable, efficient, and robust cloud-based solutions.
Whether you're a startup or an established enterprise, our tailored services ensure your cloud infrastructure meets your unique needs. Let's innovate together - contact StratusGrid for a consultation.