Using AWS Elastic Beanstalk to run REST APIs

  • Paul Minasian, Software Architect

When running Java APIs or other Java web application backend components and services in AWS, you need to create multiple infrastructure resources to achieve the following:

  • Manageability
  • Scalability
  • High availability
  • Load balancing
  • Monitoring, logging, and tracing
  • Auditing
  • Network security
  • Automatic deployment

AWS Elastic Beanstalk is a service that makes it easy to accomplish the above by automatically creating the necessary AWS resources and automating deployment. Among others, the following resources are created and tasks performed:

  • CloudFormation stack
  • CloudWatch log groups
  • ELB application load balancer, listener, and target group
  • Auto scaling launch configuration
  • Auto scaling group
  • Auto scaling policies
  • CloudWatch alarms for scaling in and out
  • Security groups
  • S3 object with the uploaded application’s version
  • EC2 instance(s)
    • NGINX reverse proxy
  • RDS database
  • SQS queue
  • Deployment of your application to EC2 instance(s)

This blog post explains some of the concepts of Elastic Beanstalk and zooms in on running multiple Java REST APIs in different environments behind a shared application load balancer, which allows the APIs to be accessed under the same domain name (e.g., api.service.com/<api_path>). Sample REST APIs based on the Spring Boot framework are provided as part of this blog post so that you can set up your own sandbox environment and explore the discussed Elastic Beanstalk features further.

Elastic Beanstalk

Elastic Beanstalk is a service which enables web applications to be easily deployed and scaled.

An application is designed for and targeted at a specific platform. The following platforms are supported: Docker, Go, Java, .NET, Node.js, PHP, Python, and Ruby.

For the Java platform you have the following choices:

  • Tomcat platform
  • Java SE platform

The Java Tomcat platform is meant for applications that can run in a Tomcat web container. You can deploy multiple WAR files to this platform. A Spring Boot application can be compiled to a WAR file so that it can be deployed to the Tomcat platform. It is possible to bundle multiple WAR files (thus multiple REST APIs listening for HTTP requests behind different application contexts, e.g., /api1, /api2) and deploy them to the Tomcat container. This simplifies deployments and reduces operating costs by running the components in a single environment instead of a separate environment for each component. However, this approach is only effective for lightweight applications that do not require a lot of resources. It could also be used for dev and test environments.

The Java SE platform is meant for applications that run from a compiled and executable JAR file. It is suitable for applications developed using a microservice framework such as Spring Boot or Quarkus. In this blog post, I will use this platform to deploy Java REST APIs. Running this type of application on the Docker platform is possible as well; the Docker platform allows you to run runtimes for programming languages that are not natively supported, and allows for more customization.
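
On the Java SE platform, the NGINX reverse proxy on the EC2 instance forwards incoming requests to your application, which by default is expected to listen on port 5000 (this can be changed via the PORT environment property). Below is a minimal sketch of a Spring Boot entry point for this platform; the class name is illustrative and does not necessarily match the sample repository.

// Minimal Spring Boot entry point for the Java SE platform (illustrative sketch).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class StarTrekApiApplication {

    public static void main(String[] args) {
        // Listen on port 5000, the port the platform's NGINX proxy forwards to by default.
        // Alternatively, set server.port=5000 in application.properties.
        System.setProperty("server.port", "5000");
        SpringApplication.run(StarTrekApiApplication.class, args);
    }
}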

Within Elastic Beanstalk you can upload a specific version of your application. An application version then refers to a specific, labeled iteration of deployable code for a web application. Applications can have many versions, and each application version is unique. In a running environment, you can deploy any application version that has already been uploaded as part of the application. It is also possible to upload and directly deploy a new application version.

To run an application version, AWS infrastructure resources are needed. For that, an environment needs to be created in Elastic Beanstalk to provision and manage those resources. Each environment runs only one application version at a time. However, you can run the same application version or different application versions in many environments simultaneously.
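
As a rough illustration of these concepts, the sketch below uses the AWS SDK for Java v2 to register a new application version from a bundle that has already been uploaded to S3 and then deploys it to a running environment. The bucket, key, and version label are placeholder values, not the ones used later in this post.

import software.amazon.awssdk.services.elasticbeanstalk.ElasticBeanstalkClient;
import software.amazon.awssdk.services.elasticbeanstalk.model.CreateApplicationVersionRequest;
import software.amazon.awssdk.services.elasticbeanstalk.model.S3Location;
import software.amazon.awssdk.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

public class DeployApplicationVersion {

    public static void main(String[] args) {
        try (ElasticBeanstalkClient eb = ElasticBeanstalkClient.create()) {

            // Register a new, uniquely labeled application version that points to an
            // already uploaded source bundle (bucket and key are placeholders).
            eb.createApplicationVersion(CreateApplicationVersionRequest.builder()
                    .applicationName("Star Trek API")
                    .versionLabel("v1.0.1")
                    .sourceBundle(S3Location.builder()
                            .s3Bucket("my-artifact-bucket")
                            .s3Key("startrek-api/startrek-api-v1.0.1.jar")
                            .build())
                    .build());

            // Deploy the registered version to a running environment.
            eb.updateEnvironment(UpdateEnvironmentRequest.builder()
                    .environmentName("isaac-dev-blog-startrek-api-env")
                    .versionLabel("v1.0.1")
                    .build());
        }
    }
}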

There are two types of environments available, which are called environment tiers. An environment tier determines what resources Elastic Beanstalk provisions to support your application. The web server environment tier is meant for applications that receive HTTP traffic such as a REST API. The worker environment tier is meant for applications that process background tasks asynchronously using a queue (SQS) mechanism. The messages for the tasks can be sent to a queue, for instance, from an application running in the web server environment tier. Within the worker environment tier, applications can also run scheduled tasks.
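
In a worker environment, a daemon on the instance reads messages from the SQS queue and POSTs each message body to your application over HTTP (by default at the root path). A hedged sketch of what such an endpoint could look like in Spring Boot is shown below; the path and processing logic are illustrative.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Sketch of a worker-tier endpoint: the SQS daemon POSTs each message body to the
// configured HTTP path. Returning a 2xx status acknowledges the message; any other
// status causes the message to be retried.
@RestController
public class TaskController {

    @PostMapping("/")
    public ResponseEntity<Void> handleTask(@RequestBody String messageBody) {
        // Process the background task described by the SQS message body.
        System.out.println("Processing task: " + messageBody);
        return ResponseEntity.ok().build();
    }
}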

The environment can be tweaked by means of an environment configuration. The configuration defines how an environment and its associated resources behave. When configuration settings are updated, Elastic Beanstalk takes care of applying the changes to existing resources or, if needed, deletes and deploys new resources. The configuration can be saved as a template, and saved configurations can be applied to other environments.
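
Configuration changes can also be applied programmatically as option settings. As a hedged example using the AWS SDK for Java v2, the sketch below enables streaming of instance logs to CloudWatch Logs for a running environment (the same setting I will later enable from the console); the environment name and retention period are illustrative.

import software.amazon.awssdk.services.elasticbeanstalk.ElasticBeanstalkClient;
import software.amazon.awssdk.services.elasticbeanstalk.model.ConfigurationOptionSetting;
import software.amazon.awssdk.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

public class EnableLogStreaming {

    public static void main(String[] args) {
        try (ElasticBeanstalkClient eb = ElasticBeanstalkClient.create()) {

            // Apply a configuration change to a running environment: stream the
            // instance logs to CloudWatch Logs and keep them for 7 days.
            eb.updateEnvironment(UpdateEnvironmentRequest.builder()
                    .environmentName("isaac-dev-blog-startrek-api-env")
                    .optionSettings(
                            ConfigurationOptionSetting.builder()
                                    .namespace("aws:elasticbeanstalk:cloudwatch:logs")
                                    .optionName("StreamLogs")
                                    .value("true")
                                    .build(),
                            ConfigurationOptionSetting.builder()
                                    .namespace("aws:elasticbeanstalk:cloudwatch:logs")
                                    .optionName("RetentionInDays")
                                    .value("7")
                                    .build())
                    .build());
        }
    }
}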

Elastic Beanstalk manages resources of other AWS services or uses their functionality to implement your application’s environment. It can also integrate with AWS services that it does not directly use as part of your environments. Refer to the Integrating AWS services documentation for more information.

Running Java REST APIs

Overview

In this section, I will demonstrate how to set up two environments for two different Java REST APIs. The idea is to have one domain name for all the backend platform’s APIs and route the HTTP traffic to the correct Elastic Beanstalk environment based on the path.

The Git repository for the sample Java REST APIs can be found here: aws-elasticbeanstalk-app. For simplicity, no API versioning or other production-related matters are taken care of within the sample source code.

The available endpoints are:

  • Star Trek API:
    • Health check: /startrek/
    • Films resource: /startrek/films
  • Star Wars API:
    • Health check: /starwars/
    • Films resource: /starwars/films

When you run the applications locally, prepend the following to the endpoints: http://localhost:5000
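
For reference, the endpoints above could be implemented roughly as follows; this is an illustrative sketch and the actual sample repository may structure the code differently (for example, by using a servlet context path instead of a request mapping).

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller layout for the Star Trek API endpoints.
@RestController
@RequestMapping("/startrek")
public class FilmController {

    // Health check endpoint (/startrek/), later used by the load balancer target group.
    @GetMapping("/")
    public String healthCheck() {
        return "OK";
    }

    // Films resource (/startrek/films); the data is sample data.
    @GetMapping("/films")
    public List<String> films() {
        return List.of("The Motion Picture", "The Wrath of Khan", "The Search for Spock");
    }
}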

If your domain name is api.scifi.com, then you can access the above API endpoints as follows:

  • http://api.scifi.com/startrek/films
  • http://api.scifi.com/starwars/films

The below diagram illustrates the setup of the sandbox environment.

Sandbox environment

In this blog post, I will not be using Route 53 to register a domain. Instead, I will use the public DNS name of the shared application load balancer to access the APIs. Furthermore, I will use the default environment configuration settings for high availability and update only two settings: one to send log information from the EC2 instances to CloudWatch Logs, and one to use a shared application load balancer. Within CloudWatch Logs, Elastic Beanstalk will create the following log groups for the Star Trek API environment:

  • /aws/elasticbeanstalk/isaac-dev-blog-startrek-api-env/var/log/nginx/access.log
  • /aws/elasticbeanstalk/isaac-dev-blog-startrek-api-env/var/log/nginx/error.log
  • /aws/elasticbeanstalk/isaac-dev-blog-startrek-api-env/var/log/web.stdout.log
  • /aws/elasticbeanstalk/isaac-dev-blog-startrek-api-env/var/log/eb-engine.log
  • /aws/elasticbeanstalk/isaac-dev-blog-startrek-api-env/var/log/eb-hooks.log

Similar log groups will be created for the Star Wars API environment. The web.stdout.log group will contain the log data of the web application. 
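
Spring Boot writes its log output to stdout by default, and the platform captures that stream in web.stdout.log. The snippet below is a small illustration (the service class is hypothetical); the logger statement will show up in that log group once the environment streams logs to CloudWatch.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

// Anything logged to stdout (Spring Boot's default log destination) ends up in the
// environment's web.stdout.log CloudWatch log group when log streaming is enabled.
@Service
public class FilmService {

    private static final Logger log = LoggerFactory.getLogger(FilmService.class);

    public void recordRequest(String api) {
        log.info("Films resource requested for API: {}", api); // visible in web.stdout.log
    }
}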

Create a shared application load balancer

The first step is to create the shared application load balancer. This can be done from the EC2 console in the AWS Management Console (a programmatic alternative is sketched after the list below), with the following settings:

  • Name: isaac-dev-blog-api-scifi-com-lb
  • Scheme: internet-facing
  • Listeners: Protocol: HTTP, Port: 80
  • Availability Zones: all
  • Configure security groups: choose / create the security group which allows inbound and outbound HTTP traffic on port 80.
  • Configure routing: create / add existing target group which will be updated after the creation of the load balancer.
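
For completeness, the same load balancer could roughly be created with the AWS SDK for Java v2 as sketched below; the subnet and security group IDs are placeholders, and the console remains the simpler option for a sandbox.

import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateLoadBalancerRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.LoadBalancerSchemeEnum;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.LoadBalancerTypeEnum;

public class CreateSharedLoadBalancer {

    public static void main(String[] args) {
        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {

            // Internet-facing application load balancer spanning the given subnets
            // (one per availability zone); the IDs below are placeholders.
            elb.createLoadBalancer(CreateLoadBalancerRequest.builder()
                    .name("isaac-dev-blog-api-scifi-com-lb")
                    .type(LoadBalancerTypeEnum.APPLICATION)
                    .scheme(LoadBalancerSchemeEnum.INTERNET_FACING)
                    .subnets("subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333")
                    .securityGroups("sg-0123456789abcdef0")
                    .build());
        }
    }
}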

Once the load balancer is created, you will need to update the existing HTTP listener and its default rule.

Update existing HTTP listener

The default rule below allows the load balancer to respond with 404 when no other rule matches. The other rules will be added when you create an Elastic Beanstalk environment. The configuration of this load balancer will be revisited after the creation of the environments.

Allow load balancer to respond with 404
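
Should you prefer to script this step, the listener's default action can also be changed to a fixed 404 response with the AWS SDK for Java v2, roughly as follows (the listener ARN is a placeholder).

import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Action;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.ActionTypeEnum;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.FixedResponseActionConfig;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.ModifyListenerRequest;

public class SetDefault404 {

    public static void main(String[] args) {
        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {

            // Make the listener return a fixed 404 when no other rule matches.
            elb.modifyListener(ModifyListenerRequest.builder()
                    .listenerArn("arn:aws:elasticloadbalancing:...:listener/app/...")
                    .defaultActions(Action.builder()
                            .type(ActionTypeEnum.FIXED_RESPONSE)
                            .fixedResponseConfig(FixedResponseActionConfig.builder()
                                    .statusCode("404")
                                    .contentType("text/plain")
                                    .messageBody("Not Found")
                                    .build())
                            .build())
                    .build());
        }
    }
}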

Create the environments with applications

Within the Elastic Beanstalk management console (AWS Management Console), create two new web server environments:

  • Star Trek API
    • Application name: Star Trek API
    • Environment name: isaac-dev-blog-startrek-api-env
    • Domain: leave empty for autogenerated value
    • Platform: Java
    • Platform branch: Corretto 11 running on 64bit Amazon Linux 2
    • Platform version: 3.1.6 (or most recent)
    • Application Code
      • Upload your code: startrek-api.jar
      • Version label: v1.0.0
  • Star Wars API
    • Application name: Star Wars API
    • Environment name: isaac-dev-blog-starwars-api-env
    • Domain: leave empty for autogenerated value
    • Platform: Java
    • Platform branch: Corretto 11 running on 64bit Amazon Linux 2
    • Platform version: 3.1.6 (or most recent)
    • Application Code
      • Upload your code: starwars-api.jar
      • Version label: v1.0.0

Now click on Configure more options. Choose the High availability option as the configuration preset.

To enable CloudWatch Logs, click Edit in the Software section and enable the log streaming option. Click Save.

Enable CloudWatch logs

Now edit the Load balancer section. Select the Shared option and choose the previously created application load balancer.

Modify load balancer

Update the default process’ HTTP code and Health check path. Elastic Beanstalk will create an application load balancer target group for the default process, and will use this target group in the Auto Scaling group for registration / deregistration of EC2 instances.

Use /startrek/ or /starwars/ depending on the application.

Processes

Add a rule to route the traffic to the default process based on the path condition.

Use /startrek* or /starwars* depending on the application.

Add rule to route the traffic

Click on Save.

Leave the rest of the configuration settings as is. Now click on Create environment. It may take a few minutes to create all the AWS resources and make the environment fully operational. Once the environment becomes operational, you will need to update some of the shared application load balancer’s settings.

Update the security group

In my case, Elastic Beanstalk created a security group and attached it to the shared load balancer. This security group does not become part of the environment managed by Elastic Beanstalk, and thus it is also not found as a resource in the environment’s CloudFormation stack.

That security group functions as the source for the security group of the EC2 instance on which the application runs. For reasons unknown to me, Elastic Beanstalk did not create the inbound rules, and HTTP traffic was initially denied. Thus, I had to add the HTTP inbound rules explicitly.

Update the security group
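
The same fix can be applied with the AWS SDK for Java v2, roughly as sketched below; the security group ID is a placeholder for the group Elastic Beanstalk attached to the shared load balancer.

import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;

public class AllowHttpIngress {

    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {

            // Allow inbound HTTP (port 80) from anywhere to the load balancer's
            // security group; the group ID is a placeholder.
            ec2.authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest.builder()
                    .groupId("sg-0123456789abcdef0")
                    .ipPermissions(IpPermission.builder()
                            .ipProtocol("tcp")
                            .fromPort(80)
                            .toPort(80)
                            .ipRanges(IpRange.builder().cidrIp("0.0.0.0/0").build())
                            .build())
                    .build());
        }
    }
}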

Update the listener rules

The creation of both application environments resulted in the creation of listener rules. These rules do not become part of the environment managed by Elastic Beanstalk, and thus cannot be updated within the Elastic Beanstalk environment; they need to be updated from the EC2 console.

Update the listener rules

I updated both rules and removed the Host header condition. This allows the APIs to be accessed using the DNS name of the shared application load balancer.

Update listener and API access
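
For reference, removing the Host header condition amounts to replacing a rule’s conditions with only the path-pattern condition. A hedged sketch using the AWS SDK for Java v2 is shown below; the rule ARN is a placeholder.

import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.ModifyRuleRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RuleCondition;

public class KeepPathConditionOnly {

    public static void main(String[] args) {
        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {

            // Replace the rule's conditions with a single path-pattern condition,
            // dropping the Host header condition. The rule ARN is a placeholder.
            elb.modifyRule(ModifyRuleRequest.builder()
                    .ruleArn("arn:aws:elasticloadbalancing:...:listener-rule/app/...")
                    .conditions(RuleCondition.builder()
                            .field("path-pattern")
                            .values("/startrek*")
                            .build())
                    .build());
        }
    }
}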

Now you can use the environment’s or the shared load balancer’s URL to navigate to the API’s films resource endpoint.

Routing to API endpoint

Using the shared load balancer’s URL, you can access both APIs. In my case, the URLs were the following:

  • http://isaac-dev-blog-api-scifi-com-lb-696583505.eu-central-1.elb.amazonaws.com/startrek/films
  • http://isaac-dev-blog-api-scifi-com-lb-696583505.eu-central-1.elb.amazonaws.com/starwars/films

For each environment, Elastic Beanstalk will create the following AWS resources:

  • CloudFormation stack
  • CloudWatch log groups
  • ELB application load balancer target group
  • Auto scaling launch configuration
  • Auto scaling group, and auto scaling policies
  • CloudWatch alarms for scaling in and out
  • Security groups
  • S3 object with the uploaded application’s version
  • EC2 instance
    • NGINX reverse proxy
    • Your application

You can use the AWS Management Console to navigate to each of these resources to further explore their setup and configuration. If you change the configuration of a resource which is managed by Elastic Beanstalk, and is thus also part of the environment's CloudFormation stack, you can ask CloudFormation to show the drift results. Please refer to the Elastic Beanstalk documentation to understand what is allowed to be changed / customized outside of the Elastic Beanstalk environment.
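
Drift detection can also be started programmatically on the environment’s stack. A hedged sketch with the AWS SDK for Java v2 follows; the stack name is a placeholder for the awseb-* stack that Elastic Beanstalk created, and the per-resource results are only available once the asynchronous detection has completed.

import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
import software.amazon.awssdk.services.cloudformation.model.DescribeStackResourceDriftsRequest;
import software.amazon.awssdk.services.cloudformation.model.DetectStackDriftRequest;

public class DetectDrift {

    public static void main(String[] args) {
        try (CloudFormationClient cfn = CloudFormationClient.create()) {

            // Start drift detection on the environment's CloudFormation stack
            // (the stack name is a placeholder for the one Elastic Beanstalk created).
            String detectionId = cfn.detectStackDrift(DetectStackDriftRequest.builder()
                    .stackName("awseb-e-xxxxxxxxxx-stack")
                    .build())
                    .stackDriftDetectionId();
            System.out.println("Drift detection started: " + detectionId);

            // Once detection has completed, the per-resource drift results can be listed.
            cfn.describeStackResourceDrifts(DescribeStackResourceDriftsRequest.builder()
                    .stackName("awseb-e-xxxxxxxxxx-stack")
                    .build())
                    .stackResourceDrifts()
                    .forEach(d -> System.out.println(
                            d.logicalResourceId() + ": " + d.stackResourceDriftStatus()));
        }
    }
}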

All that remains is to configure the desired domain name (e.g., api.scifi.com), for instance using the Amazon Route 53 service, so that traffic is sent to this load balancer, which will then route it to the correct Elastic Beanstalk environment.
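
As a hedged sketch, such a Route 53 alias record could be created with the AWS SDK for Java v2 as shown below; the hosted zone IDs and the load balancer DNS name are placeholders (the load balancer’s canonical hosted zone ID can be obtained via DescribeLoadBalancers).

import software.amazon.awssdk.services.route53.Route53Client;
import software.amazon.awssdk.services.route53.model.AliasTarget;
import software.amazon.awssdk.services.route53.model.Change;
import software.amazon.awssdk.services.route53.model.ChangeAction;
import software.amazon.awssdk.services.route53.model.ChangeBatch;
import software.amazon.awssdk.services.route53.model.ChangeResourceRecordSetsRequest;
import software.amazon.awssdk.services.route53.model.RRType;
import software.amazon.awssdk.services.route53.model.ResourceRecordSet;

public class CreateApiAlias {

    public static void main(String[] args) {
        try (Route53Client route53 = Route53Client.create()) {

            // Alias A record for api.scifi.com pointing to the shared application
            // load balancer; the IDs and DNS name below are placeholders.
            route53.changeResourceRecordSets(ChangeResourceRecordSetsRequest.builder()
                    .hostedZoneId("ZAAAAAAAAAAAAA") // hosted zone of scifi.com
                    .changeBatch(ChangeBatch.builder()
                            .changes(Change.builder()
                                    .action(ChangeAction.UPSERT)
                                    .resourceRecordSet(ResourceRecordSet.builder()
                                            .name("api.scifi.com")
                                            .type(RRType.A)
                                            .aliasTarget(AliasTarget.builder()
                                                    .hostedZoneId("ZBBBBBBBBBBBBB") // the ALB's canonical hosted zone ID
                                                    .dnsName("isaac-dev-blog-api-scifi-com-lb-696583505.eu-central-1.elb.amazonaws.com")
                                                    .evaluateTargetHealth(false)
                                                    .build())
                                            .build())
                                    .build())
                            .build())
                    .build());
        }
    }
}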

Summary

In this blog post, you have learned more about Elastic Beanstalk and running Java REST APIs using this service. You saw how to deploy the provided sample Java REST APIs and access their endpoints. You have also learned how to create a shared application load balancer, integrate this load balancer into each Elastic Beanstalk environment, and route the traffic to the right environment based on the URL path. Using Route 53, you can also send traffic to the shared load balancer when using a custom domain name such as api.scifi.com.