I am a big believer in the importance of testing. It is satisfying to write a batch of tests before implementing a feature and then watch the number of green tests grow, and TDD is great. Recently my main activities changed: I started spending most of my time working with DevOps and microservices on AWS. At first it felt like fun work, but I soon realized I did not like it, since most of the time it is routine, and the sheer number of technologies makes you sick and tired. More and more often I found myself testing services by hand, because you can never be sure that after a deploy a service will work correctly with the other services.
Testing pipeline
After some time I decided that it could not go on like this any longer and that I needed a way to start testing these services. The main problem is testing how services behave on AWS infrastructure. While researching the question, I found Localstack, a tool that lets you run AWS services locally, and it was exactly what I needed.
With Localstack in place, I decided to write a testing pipeline. The first step is to run Localstack. You specify the AWS services needed for the tests, and the region if a non-default one is required.
export SERVICES=s3,lambda
export DEFAULT_REGION=$AWS_DEFAULT_REGION
localstack start
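One detail worth noting: localstack start takes a while to boot the services, and terraform apply will fail if it runs too early. A minimal readiness check, assuming Localstack's default DynamoDB port 4569, could look like this:
localstack start &
# Poll until the DynamoDB endpoint starts accepting connections
until curl -s http://localhost:4569 > /dev/null; do
  sleep 1
done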
Then you run Terraform commands inside the directory with .tf files.
terraform init
terraform apply -lock=false -auto-approve
Now you have your infrastructure running on Localstack, and you can run the tests. For example:
npm test
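The tests themselves must talk to Localstack instead of real AWS. One way is to hand the endpoints to the test process through environment variables. The variable names below are only an illustration; your test code decides what to read:
# Hypothetical variable names, read by the test code and passed
# to the AWS SDK clients as custom endpoints
export DYNAMODB_ENDPOINT=http://localhost:4569
export LAMBDA_ENDPOINT=http://localhost:4574
npm test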
Then we want to destroy our infrastructure, since we don't want stale DynamoDB or S3 data lying around the next time the tests are launched.
terraform destroy -lock=false -auto-approve
After this, you can kill the processes behind the Localstack services. For example, if our tests used AWS Lambda and DynamoDB, we would run these commands.
fuser -n tcp -k $DYNAMODB_PORT
fuser -n tcp -k $LAMBDA_PORT
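Putting it all together, the whole pipeline fits in one script. This is only a sketch: it assumes that $DYNAMODB_PORT and $LAMBDA_PORT hold the ports Localstack exposes for those services (4569 and 4574 by default at the time of writing) and that the test suite is run with npm test.
#!/usr/bin/env bash
set -e

# 1. Start Localstack with only the services the tests need
export SERVICES=lambda,dynamodb
export DEFAULT_REGION=$AWS_DEFAULT_REGION
export DYNAMODB_PORT=4569
export LAMBDA_PORT=4574
localstack start &
until curl -s http://localhost:$DYNAMODB_PORT > /dev/null; do sleep 1; done

# 2. Create the infrastructure on Localstack
terraform init
terraform apply -lock=false -auto-approve

# 3. Run the test suite
npm test

# 4. Tear everything down so no stale data survives to the next run
terraform destroy -lock=false -auto-approve

# 5. Stop the Localstack service processes
fuser -n tcp -k $DYNAMODB_PORT
fuser -n tcp -k $LAMBDA_PORT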
That’s it! This way you can run tests using Terraform and Localstack. However, I would like to take a closer look at the Terraform part.
Terraform + Localstack
We have an AWS Lambda that works with DynamoDB, and we have Terraform modules for them. If you are the only one who works with Terraform and is responsible for DevOps, it is not a bad practice to keep all the Terraform code and state in one place. This way you can pull a specific module from the repository and use the appropriate modules for testing.
provider "aws" {
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true
  access_key                  = "mock_access_key"
  secret_key                  = "mock_secret_key"

  endpoints {
    dynamodb = "http://localhost:4569"
    lambda   = "http://localhost:4574"
  }
}

module "dynamodb" {
  source = "git::ssh://<YOUR_REPO>//modules/dynamodb"
  env    = "test"
}

module "lambda" {
  source = "git::ssh://<YOUR_REPO>//modules/lambda"
  env    = "test"
}
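With a file like this in a separate test directory, the pipeline above can simply be pointed at it. Assuming the configuration lives in test/main.tf, for example:
cd test
terraform init    # fetches the modules from the repository
terraform apply -lock=false -auto-approve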
If you have a small, loosely coupled microservice and Terraform code for it, this approach to testing can work reasonably well. But if you have a large service with a bunch of dependencies, I am not sure the approach makes sense, because there are caveats. For example, when I wrote an AWS Lambda triggered by DynamoDB streams, I found that Localstack provides an endpoint for streams, but the Terraform AWS provider has no setting for it. That was a problem, and I was forced to make additional changes in my infrastructure repository so that I could turn the streams off. The second hit was IAM, and it is quite a big problem, since I already have IAM resources in the module. Because of this, I had to create the IAM part of the test environment on real AWS.
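For the streams problem, the workaround amounts to making streams configurable in the module and switching them off for the test environment. Assuming a hypothetical stream_enabled variable wired through to the module's DynamoDB table resource, the test run could disable it like this:
# stream_enabled is a hypothetical module variable,
# not part of the original module above
terraform apply -lock=false -auto-approve -var="stream_enabled=false"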