AWS Cloud Blog & News | StratusGrid

Automate Software Development with Aider and Amazon Bedrock

Written by Trevor Sullivan | Mar 3, 2025 3:33:06 PM

The world of artificial intelligence has been rapidly evolving in the 2020s. While OpenAI's ChatGPT release in November 2022 is frequently recognized as one of the first publicly available releases of a general-purpose Large Language Model (LLM), many other similar services have come online. Common competitors to ChatGPT include Anthropic's Claude, Grok from xAI, and Google Gemini, among others. These services are able to answer questions, process data, generate code snippets, create project plans, and perform a variety of other day-to-day tasks.

In addition to accessing LLMs via managed services, you can also host "open" models like Microsoft Phi, Meta's Llama, Google's Gemma, and many others. You can deploy these models on a self-hosted bare metal server with an NVIDIA GPU, or you can host them in a cloud provider, like Amazon EC2. There's a variety of open source software that you can use to self-host models, one of the most popular being Ollama. Check out this video on our YouTube channel to learn more about running Ollama on Amazon EC2.

What can I use LLMs for?

While LLMs are useful for general purpose data processing, they are especially useful for software developers. LLMs can generate code snippets for most popular programming languages, based on the datasets they're trained on. In fact, there are specialized models designed for software development tasks, such as DeepSeek Coder or OpenCoder. These models can be used in conjunction with extensions for Integrated Development Environments (IDEs), such as Microsoft Visual Studio Code (VSCode), to automate code generation. Continue is a popular VSCode extension for this purpose.

What is Aider?

While VSCode is an excellent tool, you may also wish for a greater level of automation directly in your terminal. Wouldn't it be cool if you could write software by simply expressing your intent in your terminal? Wouldn't it be helpful to have an application automatically generate git commit messages for you, and actually commit the changes it makes? That's where the open source tool called Aider comes into play.

Aider is an interactive command line tool, written in Python, that helps you automate software development by leveraging the power of LLMs. Aider works inside the context of a git repository and will add new application source files, modify existing source files, generate commit messages, and commit your changes. This allows you, as the software developer, to focus on expressing your intent: the functionality you want to add or change in your application.

How do I set up Aider?

Since Aider is a command line tool running on a Python runtime, you can install and run it almost anywhere. You can run it natively on Windows, macOS, or Linux systems, or inside a Linux container on Linux servers using Docker Engine or Podman. On systems with a desktop GUI, you can also use Docker Desktop to run Aider as a Linux container atop Windows or macOS.

Bear in mind that Aider has compatibility requirements and may not work with the latest version of the Python runtime. When you're ready to use Aider, check the latest documentation for installation instructions, or just run it from a pre-built container image.

If you choose not to go with a pre-built container image, using the aider-install utility to install aider will ensure that a compatible Python runtime is used to run aider.

Due to the dynamic nature of the aider documentation, we won't reproduce the installation steps here.

Getting Started

Go ahead and create a new project directory, change into that directory, and then simply launch aider. If your directory isn't already a git repository, aider will detect this and prompt you to initialize a new git repository. You'll also want to ensure that your git configuration has your correct username and e-mail. See the screenshot below for an example.

As you can see, aider informed us that it's not connected to an LLM service. Let's use Amazon Bedrock, which is a managed service providing access to a variety of LLMs, including Anthropic Claude Opus, Claude Sonnet, and others. In my personal experience, I have had good success using Claude Sonnet through Bedrock.

Configure Amazon Bedrock for Aider

To set up access to Amazon Bedrock, you'll of course need an Amazon Web Services (AWS) account first. Log into the AWS Management Console and run through these steps:

  • Navigate to the AWS IAM console
  • Create an IAM user
  • Attach the built-in AmazonBedrockFullAccess IAM policy to the user
  • Create an Access Key for the IAM User
  • Configure the Access Key ID and Secret Access Key as environment variables
  • Specify an AWS region with the AWS_REGION environment variable

These commands are valid for a Bash shell:

export AWS_ACCESS_KEY_ID=your-value-here
export AWS_SECRET_ACCESS_KEY=your-value-here
export AWS_REGION=us-west-2

In PowerShell on macOS, Windows, or Linux, you can use these commands to configure the same environment variables.

$env:AWS_ACCESS_KEY_ID = 'your-value-here'
$env:AWS_SECRET_ACCESS_KEY = 'your-value-here'
$env:AWS_REGION = 'us-west-2'

Now that you've configured your AWS credentials, go ahead and launch aider again, but this time, specify a valid Amazon Bedrock model.

Find Amazon Bedrock Models

If this is your first time using the Amazon Bedrock service, you will need to enable access to certain models. Follow this documentation to enable access to the necessary models in the Amazon Bedrock console.

⚠️ NOTE: If you don't enable access to a model in Bedrock, but try to call the model from aider, you will receive an error message similar to the following: Invocation of model ID anthropic.claude-3-7-sonnet-20250219-v1:0 with on-demand throughput isn't supported. Retry your request with the ID or ARN of an inference profile that contains this model.

After you've enabled the necessary models, you can list out the model names using AWS CloudShell, embedded in the AWS Management Console.

  • Log into the AWS Management Console
  • Click the Terminal / Shell icon in the toolbar, to launch CloudShell
  • Run PowerShell in the terminal: pwsh
  • Run the PowerShell command below
Get-BDRFoundationModelList | `
Format-Table -AutoSize -Property `
ProviderName,ModelName,ModelId,InputModalities,OutputModalities

This will output a list of all the models that you can select from, provided you've enabled them in the Model Access area of the Amazon Bedrock console. Check out the screenshot below for some example output. You'll want to copy one of the model names from the middle column (the ModelId property).

Once you've chosen a model to use, launch aider with the --model parameter.

IMPORTANT: Some models in Amazon Bedrock require the use of cross-region "inference profiles." In these cases, obtain the Inference Profile ID from the Bedrock console, and use that instead of the Model ID property. Check out the screenshot below for an example of Bedrock Inference Profiles.

Let's try using the Claude 3.5 Sonnet model for now.

aider --model bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0

You should now see an interactive aider prompt similar to the below screenshot.

How do I use Aider?

Now that you've got Aider fully configured, you can start entering prompts, using plain English, and Aider will attempt to do what you ask! It will even run shell commands for you to install tools, libraries, and so on.

Let's try creating a simple prompt and ask aider to create a Python program. Give the following prompt a try.

PROMPT: Create a Python program that prints out a list of the most popular LLM services and the websites to access them.

As you can see, aider will send the prompt to the LLM, generate a Python code file, and ask you to confirm whether or not to create the file. Aider also tells you the cost of the query, based on how many tokens were sent to the model and generated by it.
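For reference, the generated file might look something like the sketch below. This is purely illustrative: the exact services listed, the structure, and the function name are assumptions, and the actual code Aider produces will vary from run to run.

```python
# llm_services.py -- an illustrative sketch of the kind of program Aider
# might generate for this prompt; your generated code will differ.

# A small mapping of popular LLM services to their websites.
SERVICES = {
    "ChatGPT": "https://chat.openai.com",
    "Claude": "https://claude.ai",
    "Gemini": "https://gemini.google.com",
    "Grok": "https://grok.com",
}

def print_llm_services():
    """Print each popular LLM service alongside its website."""
    for name, url in SERVICES.items():
        print(f"{name}: {url}")

if __name__ == "__main__":
    print_llm_services()
```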

Go ahead and input "y" to confirm the file creation.

After confirming the file creation, you should also see a sensible git commit message generated, and a new commit created. You'll also be prompted to run a shell command to execute the application, and validate that it's functioning as expected. Go ahead and confirm that, and you'll see the output from the program printed out to the terminal!

Yes, it's literally that easy to build software, without even having to write any code by hand!

After execution, you can also add the output from the program to the chat context. These additional "tokens" will be fed into the next invocation of the LLM, so that it can build off of that previous knowledge. If the output is similar to what was expected, then the LLM knows that the program ran successfully. If an error occurs, that information will help the LLM to fix whatever went wrong in the code.

Adding Another Function

The last prompt we entered created a new Python code file that simply printed out some information. Let's try something different and prompt Aider to generate code that makes changes to the system. A simple task could be creating some semi-random text files on the filesystem.

Try out the following prompt:

PROMPT: Create another Python function, in a separate code file, that creates two directories, one named "boys" and one named "girls". In each of those directories, create 10 text files. Each file name should be a random first name of that gender, and include the "txt" file extension. Call this new function from llm_services.py.

As you can see in the screenshot below, the request we made is accurately reflected in the code changes. The LLM creates a new code file with a function that achieves what we asked for. Then, it goes back to the first code file, named llm_services.py, and updates it to invoke the new function from that new code file called name_generator.py.
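A hand-written sketch of what name_generator.py could look like is shown below. The function name, the name lists, and the use of random.sample are all assumptions for illustration, not Aider's actual output.

```python
# name_generator.py -- illustrative sketch only; Aider's generated code
# will differ in names, structure, and style.
import random
from pathlib import Path

BOY_NAMES = ["Liam", "Noah", "Oliver", "James", "Henry", "Lucas",
             "Levi", "Ethan", "Mason", "Logan", "Jack", "Owen"]
GIRL_NAMES = ["Olivia", "Emma", "Ava", "Sophia", "Mia", "Amelia",
              "Luna", "Harper", "Evelyn", "Ella", "Chloe", "Grace"]

def create_name_files(base_dir="."):
    """Create 'boys' and 'girls' directories, each holding 10 .txt
    files named after a random first name of that gender."""
    for dirname, names in [("boys", BOY_NAMES), ("girls", GIRL_NAMES)]:
        target = Path(base_dir) / dirname
        target.mkdir(exist_ok=True)
        # random.sample() draws without replacement; picking names with
        # random.choice() instead could produce duplicate file names
        # that silently overwrite each other, yielding fewer than 10 files.
        for name in random.sample(names, 10):
            (target / f"{name}.txt").touch()
```

Calling create_name_files() from llm_services.py, as the prompt requested, would then create both directories alongside the script.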

Once the code changes have been applied in a new git commit, you'll be prompted to run the Python command to execute the program.

After executing the program, you may see some unexpected output due to a bug in the code. Your exact results may vary depending on your prompt and any contextual tokens you've fed in as input. LLMs rarely produce the exact same results; there will always be an element of randomness, even if you use the same prompt.

In the screenshot below, you can see that our program output did not actually achieve the results that we wanted. The boys and girls directories do not contain 10 text files, like we asked for.

If you don't want to make any code changes, and simply want to ask a question about your code, or the current token context, you can use the /ask command in Aider. Let's ask Aider to check if the last command was successful or not.

As you can see from the response to our question, the LLM was able to determine that the output from the ls commands did not meet our original requirements. It even goes a step further, identifying the root cause of the problem and suggesting how it could be fixed. Since we used the /ask command, Aider will not make any changes, so let's just prompt it again to move ahead with the fix automatically.

PROMPT: Go ahead and apply the fix you recommended

You'll see that the LLM applies the changes it suggested, and creates yet another git commit, with an appropriate commit message. Once again, we run the commands to validate the correct output. The LLM took the liberty of updating the validation shell commands to get the count of items, instead of printing out the file names, which is fine. We also use the /ask command again to confirm that the LLM understands that the change it made was successful. As you can see, the LLM correctly reasoned that the fixed code has produced the desired output!

Feel free to continue exploring Aider on your own. In this article, we've implemented a couple of simple functions that produce some output, and create some text files on the local filesystem.

Caveats

While Aider with Amazon Bedrock is a powerful combination, it's important to avoid leaning on it as a crutch. Treat Aider as a productivity tool rather than blindly trusting the code it produces with the help of Bedrock or other LLM services.

You may encounter situations, especially with newer or poorly documented programming languages, where the LLM may not be able to come up with a reliable solution. In those cases, you'll need to have a fundamental understanding of the language in order to write the appropriate code.

Another caveat is that Aider is open source software. There is no official support contract for it. If you encounter problems with the software, you can file an issue on the Aider GitHub repository. Hopefully someone will help you out with the issue, but there are no guarantees.

Conclusion

Aider, in conjunction with Amazon Bedrock, is a powerful, free, and open source tool that can help you accelerate software development. While our example in this article generated some Python code, you can also use Aider to generate software projects in other languages, such as Rust, C#, PowerShell, and so on.

I encourage you to explore Aider for yourself! Try out different LLM services, and different models, to see what kind of results you get. Customize your prompts and try to be more explicit if you aren't getting the results that you initially expected. Keep an eye on your costs for managed LLM services as well, to make sure those don't get out of control!