Improving Application Performance Using DAX

When we develop serverless applications, Lambda functions can respond slowly, which increases overall API latency and hurts application performance. Reading data from DynamoDB and returning the result is often one of the slower steps. To address this, AWS offers a solution: Amazon DynamoDB Accelerator (DAX).

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB. It can improve read performance by up to 10 times, reducing response times from milliseconds to microseconds, even at millions of requests per second. DAX is used together with Amazon DynamoDB.

How does DAX work?

DAX is designed to run within an Amazon Virtual Private Cloud (Amazon VPC) environment, which means you cannot access DAX clusters directly over the internet. Amazon VPC provides a virtual network that closely resembles a traditional data center: you control its IP address range, subnets, route tables, network gateways, and security settings. To learn more, see DAX Cluster Components in the DynamoDB Developer Guide.

First, launch a DAX cluster in your virtual network, then control access to the cluster using Amazon VPC security groups. Most importantly, any Lambda function that needs to reach the DAX cluster must be attached to a VPC that can access the cluster.

The request flow works as follows:

  1. The client sends an HTTP request to Amazon API Gateway.
  2. API Gateway forwards the request to the appropriate Lambda function.
  3. The Lambda functions are running inside your VPC, which allows them to access VPC resources such as your DAX cluster.
  4. The DAX cluster is also inside your VPC, which means it is accessible by the Lambda functions.
  5. The DAX client directs all of your application’s DynamoDB API requests to the DAX cluster. If DAX can process one of these API requests directly, it does so. Otherwise, it passes the request through to DynamoDB.
  6. Finally, the DAX cluster returns the results to your application.

DAX performs all the required operations to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.

One more thing: if you have already built an application on DynamoDB, you do not need to modify your application logic, because DAX is compatible with existing DynamoDB API calls. Learn more in the DynamoDB Developer Guide.
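To make this concrete, here is a minimal sketch of the swap, assuming the DAX client library is on the classpath; the cluster endpoint shown is a made-up placeholder. Both builders produce an AmazonDynamoDB handle, so the rest of the data-access code stays the same:

import com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

public class ClientSwapSketch {
    public static void main(String[] args) {
        // Without DAX: the plain DynamoDB client.
        AmazonDynamoDB dynamoDb = AmazonDynamoDBClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .build();

        // With DAX: a different builder, same AmazonDynamoDB interface.
        // The endpoint below is a placeholder for your own cluster endpoint.
        AmazonDaxClientBuilder daxClientBuilder = AmazonDaxClientBuilder.standard();
        daxClientBuilder.withRegion(Regions.US_EAST_2)
                .withEndpointConfiguration("my-cluster.xxxx.dax-clusters.us-east-2.amazonaws.com:8111");
        AmazonDynamoDB dax = daxClientBuilder.build();

        // Both handles satisfy the same interface, so higher-level code such as
        // DynamoDBMapper works unchanged against either one.
        DynamoDBMapper plainMapper = new DynamoDBMapper(dynamoDb);
        DynamoDBMapper cachedMapper = new DynamoDBMapper(dax);
    }
}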

You can enable DAX with just a few clicks in the AWS Management Console or by using the AWS SDK. Let’s see how we do that.

Steps to Create a DAX Cluster

There are two ways to create a DAX cluster:

  1. Using the AWS CLI
  2. Using the AWS Management Console

Here we use the console. To learn how to do this with the AWS CLI, refer to the DynamoDB Developer Guide.

Step 1: Create a Subnet Group

First, we need to create a subnet group. A subnet group is a collection of one or more subnets within your VPC. When the cluster is created, its nodes are deployed to the subnets in the subnet group.

To create a subnet group:

    1. Go to the DynamoDB console.
    2. In the navigation pane, select DAX.
    3. Select Create subnet group.
    4. In the Create subnet group window, add the following:
      a. Name—Add a short name for the subnet group.
      b. Description—Add a description for the subnet group.
      c. VPC ID—Select the identifier for your Amazon VPC environment.
      d. Subnets—Select one or more subnets from the list.

Once you have filled in these fields, click the Save button.
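If you would rather script this step than click through the console, the DAX management API in the AWS SDK for Java offers the same operation. Here is a minimal sketch, assuming the aws-java-sdk-dax module is on the classpath; the group name, description, and subnet IDs are placeholders:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dax.AmazonDax;
import com.amazonaws.services.dax.AmazonDaxClientBuilder;
import com.amazonaws.services.dax.model.CreateSubnetGroupRequest;

public class CreateSubnetGroupSketch {
    public static void main(String[] args) {
        // Management-plane DAX client (note: a different class from the
        // data-plane com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder).
        AmazonDax dax = AmazonDaxClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .build();

        // "my-dax-subnet-group" and the subnet IDs below are placeholders.
        dax.createSubnetGroup(new CreateSubnetGroupRequest()
                .withSubnetGroupName("my-dax-subnet-group")
                .withDescription("Subnets for the DAX cluster")
                .withSubnetIds("subnet-11111111", "subnet-22222222"));
    }
}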

Step 2: Create a DAX Cluster

Next, we create a DAX cluster in the default Amazon VPC.

To create a DAX cluster:

  1. Go to the DynamoDB console.
  2. In the navigation pane, under DAX, select Clusters.
  3. Select Create cluster.
  4. In the Create cluster window, add the following:
    1. Cluster name—Add a short name for your DAX cluster.
    2. Cluster description—Add a description for the cluster.
    3. Node type—Select the node type for all of the nodes in the cluster.
    4. Cluster size—Select the number of nodes in the cluster. A cluster consists of one primary node and up to nine read replicas.
    5. Encryption—Enable encryption for your DAX cluster to help protect data at rest. For more information, see DAX Encryption at Rest.
    6. IAM service role for DynamoDB access—Select Create new, and enter the following information:
      • IAM role name—Enter a name for an IAM role, for example, DAXServiceRole. The console creates a new IAM role, and your DAX cluster assumes this role at runtime.
      • IAM policy name—Enter a name for an IAM policy, for example, DAXServicePolicy. The console creates a new IAM policy and attaches the policy to the IAM role.
      • IAM role policy—Choose Read/Write. This allows the DAX cluster to perform read and write operations in DynamoDB.
      • Target DynamoDB table—Choose All tables.
    7. Subnet group—Select the subnet group that you created in Step 1.
    8. Security Groups—Choose default.
  5. Once you have filled in these fields, click Launch cluster.

The new DAX cluster then appears on the Clusters dashboard with the status Creating.

Cluster creation takes a few minutes; when the cluster is ready, its status changes to Available.
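The cluster itself can be created the same way. Here is a hedged sketch with the same management-plane client; all values are placeholders, and a replication factor of 3 yields one primary node plus two read replicas:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dax.AmazonDax;
import com.amazonaws.services.dax.AmazonDaxClientBuilder;
import com.amazonaws.services.dax.model.CreateClusterRequest;

public class CreateClusterSketch {
    public static void main(String[] args) {
        AmazonDax dax = AmazonDaxClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .build();

        // Cluster name, node type, role ARN, and subnet group name are placeholders.
        dax.createCluster(new CreateClusterRequest()
                .withClusterName("my-dax-cluster")
                .withNodeType("dax.r4.large")
                .withReplicationFactor(3)
                .withIamRoleArn("arn:aws:iam::123456789012:role/DAXServiceRole")
                .withSubnetGroupName("my-dax-subnet-group"));
    }
}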

Step 3: Configure Security Group Inbound Rules

Your Amazon DynamoDB Accelerator (DAX) cluster uses TCP port 8111 for communication, so you must authorize inbound traffic on that port. This allows resources in your Amazon VPC, such as EC2 instances and Lambda functions, to access your DAX cluster.

To configure the security group inbound rules:

  1. Go to the Amazon EC2 console.
  2. In the navigation pane, select Security Groups.
  3. Select the default security group. On the Actions menu, select Edit inbound rules.
  4. Select Add Rule, and enter the following information:
    • Port Range—Enter 8111.
    • Source—Enter default, and then choose the identifier for your default security group.

When the settings are as you want them, click Save.
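This rule can also be added programmatically. Here is a minimal sketch using the EC2 API from the AWS SDK for Java; the security group ID is a placeholder:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import com.amazonaws.services.ec2.model.IpPermission;
import com.amazonaws.services.ec2.model.UserIdGroupPair;

public class OpenDaxPortSketch {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .build();

        // "sg-12345678" is a placeholder for your default security group ID.
        // The rule lets members of that same group reach TCP port 8111.
        ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                .withGroupId("sg-12345678")
                .withIpPermissions(new IpPermission()
                        .withIpProtocol("tcp")
                        .withFromPort(8111)
                        .withToPort(8111)
                        .withUserIdGroupPairs(new UserIdGroupPair()
                                .withGroupId("sg-12345678"))));
    }
}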

That completes the DAX cluster configuration. Now let's look at the programming part.

First, you need to add the DAX client Maven dependency to your application's Project Object Model (POM) file.

<!--Dependency:-->
<dependencies>
    <dependency>
     <groupId>com.amazonaws</groupId>
     <artifactId>amazon-dax-client</artifactId>
     <version>x.x.x.x</version>
    </dependency>
</dependencies>

Here we consider an example in which we have a User table and fetch user details through the DAX client.

We first create a DaxClientManager class whose getDaxMapperClient method returns a DynamoDBMapper backed by the DAX client. The DAX cluster endpoint is passed in as a parameter.

package com.demo.user.manager;

import com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig.TableNameOverride;

public class DaxClientManager {

    private static final Regions REGION = Regions.US_EAST_2;
    private static DynamoDBMapper daxMapper;
    private static DynamoDBMapperConfig daxMapperConfig;

    public static DynamoDBMapper getDaxMapperClient(String daxEndpoint) {
        System.out.println("Creating a DAX client with cluster endpoint " + daxEndpoint);

        // Build the data-plane DAX client; it implements the AmazonDynamoDB
        // interface, so DynamoDBMapper can use it transparently.
        AmazonDaxClientBuilder daxClientBuilder = AmazonDaxClientBuilder.standard();
        daxClientBuilder.withRegion(REGION).withEndpointConfiguration(daxEndpoint);
        AmazonDynamoDB client = daxClientBuilder.build();

        // Map all operations onto the "User" table.
        daxMapperConfig = new DynamoDBMapperConfig.Builder()
                .withTableNameOverride(TableNameOverride.withTableNameReplacement("User"))
                .build();
        daxMapper = new DynamoDBMapper(client, daxMapperConfig);

        return daxMapper;
    }
}
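A caller obtains the mapper like this (the endpoint string is a placeholder; your cluster's actual endpoint is shown on its details page in the DynamoDB console):

DynamoDBMapper mapper = DaxClientManager.getDaxMapperClient(
        "my-dax-cluster.xxxx.dax-clusters.us-east-2.amazonaws.com:8111");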

UserDaoImpl.java

package com.demo.user.dao;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBQueryExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.QueryResultPage;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.demo.user.domain.User;
import com.demo.user.manager.DaxClientManager;
import com.google.gson.Gson;

public class UserDaoImpl {

    private static final Logger log = LogManager.getLogger(UserDaoImpl.class);
    private static final String daxEndpoint = "pass your dax endpoint";
    private static final DynamoDBMapper mapper = DaxClientManager.getDaxMapperClient(daxEndpoint);

    private static volatile UserDaoImpl instance;
    private static final int pageSize = 10;

    private UserDaoImpl() {
    }

    public static UserDaoImpl instance() {
        // Double-checked locking creates the singleton lazily and safely.
        if (instance == null) {
            synchronized (UserDaoImpl.class) {
                if (instance == null) {
                    instance = new UserDaoImpl();
                }
            }
        }
        return instance;
    }

    /**
     * Gets all users' details.
     *
     * @return List<User> : details of all users
     */
    public List<User> getAllUsers(String metadata) {
        String currentDate = java.time.Clock.systemUTC().instant().toString();
        List<User> userList = new ArrayList<>();

        Map<String, String> expressionAttributesNames = new HashMap<>();
        expressionAttributesNames.put("#metadata", "metadata");
        expressionAttributesNames.put("#updatedAt", "updatedAt");

        Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
        expressionAttributeValues.put(":metadata", new AttributeValue().withS(metadata));
        expressionAttributeValues.put(":updatedAt", new AttributeValue().withS(currentDate));

        DynamoDBQueryExpression<User> queryExpression = new DynamoDBQueryExpression<>();
        queryExpression.withKeyConditionExpression("#metadata = :metadata and #updatedAt <= :updatedAt");
        queryExpression.setIndexName("metadata-updatedAt-index");
        queryExpression.withExpressionAttributeNames(expressionAttributesNames);
        queryExpression.withExpressionAttributeValues(expressionAttributeValues);

        // DAX only caches eventually consistent reads, so consistent reads
        // must be disabled for the query to be served from the cache.
        queryExpression.setConsistentRead(false);
        queryExpression.withScanIndexForward(false);
        queryExpression.setLimit(pageSize);
        log.info("queryExpression " + new Gson().toJson(queryExpression));

        try {
            // Page through the index until there are no more results.
            do {
                QueryResultPage<User> queryPage = mapper.queryPage(User.class, queryExpression);
                if (queryPage.getResults() != null && !queryPage.getResults().isEmpty()) {
                    userList.addAll(queryPage.getResults());
                }
                queryExpression.setExclusiveStartKey(queryPage.getLastEvaluatedKey());
            } while (queryExpression.getExclusiveStartKey() != null);
        } catch (Exception e) {
            log.error("Exception while fetching users", e);
        }

        log.info("userList " + new Gson().toJson(userList));
        return userList;
    }
}

Our User table model is shown below.
User.java

package com.demo.user.domain;

import java.io.Serializable;
import java.util.Date;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAutoGenerateStrategy;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAutoGeneratedKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAutoGeneratedTimestamp;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBIndexHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBIndexRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName = "User")
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    // GSI names; METADATA_INDEX matches the index queried in UserDaoImpl.
    private static final String METADATA_INDEX = "metadata-updatedAt-index";
    private static final String EMAIL_INDEX = "ByEmailIndex";

    private String userId;
    private String metadata;
    private Date updatedAt;
    private String name;
    private String email;
    private String phone;
    private String address;

    @DynamoDBHashKey(attributeName = "userId")
    @DynamoDBAutoGeneratedKey
    public String getUserId() {
        return userId;
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }

    @DynamoDBRangeKey
    @DynamoDBIndexHashKey(globalSecondaryIndexName = METADATA_INDEX)
    public String getMetadata() {
        return metadata;
    }

    public void setMetadata(String metadata) {
        this.metadata = metadata;
    }

    @DynamoDBAttribute
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @DynamoDBIndexHashKey(globalSecondaryIndexName = EMAIL_INDEX)
    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    @DynamoDBAttribute
    public String getPhone() {
        return phone;
    }

    public void setPhone(String phone) {
        this.phone = phone;
    }

    @DynamoDBAttribute
    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    // Range key of the metadata-updatedAt-index GSI; refreshed on every save.
    @DynamoDBIndexRangeKey(globalSecondaryIndexName = METADATA_INDEX)
    @DynamoDBAutoGeneratedTimestamp(strategy = DynamoDBAutoGenerateStrategy.ALWAYS)
    public Date getUpdatedAt() {
        return updatedAt;
    }

    public void setUpdatedAt(Date updatedAt) {
        this.updatedAt = updatedAt;
    }
}

And finally, our lambda function is as follows,
GetAllUsersFunction.java

package com.demo.user.function;

import java.util.ArrayList;
import java.util.List;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.demo.user.dao.UserDaoImpl;
import com.demo.user.domain.User;
import com.demo.user.pojo.GetPaginatedRequest;
import com.demo.user.pojo.GetPaginatedResponse;
import com.demo.user.util.Constants;

/**
 * This class provides the Lambda handler for the GetAllUsers API call.
 */
public class GetAllUsersFunction implements
        RequestHandler<GetPaginatedRequest, GetPaginatedResponse<List<User>>> {

    private static final Logger log = LogManager.getLogger(GetAllUsersFunction.class.getName());
    private static final UserDaoImpl userDao = UserDaoImpl.instance();

    @Override
    public GetPaginatedResponse<List<User>> handleRequest(GetPaginatedRequest request, Context context) {
        GetPaginatedResponse<List<User>> response = new GetPaginatedResponse<>();
        List<User> userList = new ArrayList<>();
        try {
            userList = userDao.getAllUsers(Constants.PERSONDETAILS);
            if (userList != null && !userList.isEmpty()) {
                response.setData(userList);
                response.setMessage("All Users fetched successfully");
                response.setCount(userList.size());
            } else {
                response.setMessage("No Users Data");
                response.setData(userList);
                response.setCount(0);
            }
        } catch (Exception e) {
            log.error("EXCEPTION::" + e.getMessage());
            throw new RuntimeException("Internal Server Error");
        }
        return response;
    }
}
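For a quick local smoke test outside Lambda, you could invoke the handler directly, assuming the DAX endpoint is reachable from your machine (for example, through a bastion host or VPN, since the cluster is not publicly accessible). Passing null for the Context is acceptable here only because this handler never uses it:

GetPaginatedResponse<List<User>> response =
        new GetAllUsersFunction().handleRequest(new GetPaginatedRequest(), null);
System.out.println(response.getMessage());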

To capture the request data and return the response, we created the GetPaginatedRequest and GetPaginatedResponse POJO classes, as follows.
GetPaginatedRequest.java

package com.demo.user.pojo;

public class GetPaginatedRequest {

    private String firstIndex;
    private String lastIndex;
    private Boolean isForward;

    public String getFirstIndex() {
        return firstIndex;
    }

    public void setFirstIndex(String firstIndex) {
        this.firstIndex = firstIndex;
    }

    public String getLastIndex() {
        return lastIndex;
    }

    public void setLastIndex(String lastIndex) {
        this.lastIndex = lastIndex;
    }

    public Boolean getIsForward() {
        return isForward;
    }

    public void setIsForward(Boolean isForward) {
        this.isForward = isForward;
    }
}

GetPaginatedResponse.java

package com.demo.user.pojo;

public class GetPaginatedResponse<T> {

    private String message;
    private T data;
    private String error;
    private String firstIndex;
    private String lastIndex;
    private String lastEvaluatedKey;
    private int count;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public String getError() {
        return error;
    }

    public void setError(String error) {
        this.error = error;
    }

    public String getFirstIndex() {
        return firstIndex;
    }

    public void setFirstIndex(String firstIndex) {
        this.firstIndex = firstIndex;
    }

    public String getLastIndex() {
        return lastIndex;
    }

    public void setLastIndex(String lastIndex) {
        this.lastIndex = lastIndex;
    }

    public String getLastEvaluatedKey() {
        return lastEvaluatedKey;
    }

    public void setLastEvaluatedKey(String lastEvaluatedKey) {
        this.lastEvaluatedKey = lastEvaluatedKey;
    }

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        this.count = count;
    }
}

Now we need to upload the Lambda function. After it is uploaded successfully, we attach it to the VPC as follows (a scripted alternative is shown after the list).
Integration of Lambda with VPC:

  1. Go to the AWS Lambda console.
  2. In the navigation pane, select Functions.
  3. Select your Lambda function.
  4. Scroll down to the VPC section to configure the VPC.
  5. Click Edit. This opens the Edit VPC window.
  6. In the Edit VPC window, select the VPC for your function.
  7. Select the VPC subnets for Lambda to use to set up your VPC configuration.
  8. Choose the VPC security groups for Lambda to use to set up your VPC configuration. There is a table that shows the inbound and outbound rules for the security groups that you choose.
  9. Then click on the Save button.
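The same VPC attachment can be scripted. Here is a minimal sketch using the AWS SDK for Java; the function name, subnet IDs, and security group ID are placeholders and should match the subnets and security group that can reach the DAX cluster (the function's execution role also needs the usual EC2 network-interface permissions):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;
import com.amazonaws.services.lambda.model.VpcConfig;

public class AttachLambdaToVpcSketch {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .build();

        // Function name, subnet IDs, and security group ID are placeholders.
        lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
                .withFunctionName("GetAllUsersFunction")
                .withVpcConfig(new VpcConfig()
                        .withSubnetIds("subnet-11111111", "subnet-22222222")
                        .withSecurityGroupIds("sg-12345678")));
    }
}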

With that, all of the configuration is done and your API performance should improve. You can verify that the time required for API processing is reduced: reads that DAX serves from its in-memory cache return in microseconds rather than milliseconds, cutting your API's overall processing time.

Content Team

This blog is from Mindbowser's content team – a group of individuals coming together to create pieces that you may like. If you have feedback, please drop us a message on contact@mindbowser.com
