Configure DynamoDB in Serverless
For our Serverless Framework app, we had previously created our DynamoDB table through the AWS console. This can be hard to do when you are creating multiple apps or environments. Ideally, we want to be able to do this programmatically. In this section we’ll look at how to use infrastructure as code to do just that.
Create the Resource
Serverless Framework supports CloudFormation to help us configure our infrastructure through code. CloudFormation is a way to define our AWS resources using YAML or JSON, instead of having to use the AWS Console. We’ll go into this in more detail later in this section.
Let’s create a directory to add our resources.
$ mkdir resources/
Add the following to resources/dynamodb-table.yml.
Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      # Set the capacity to auto-scale
      BillingMode: PAY_PER_REQUEST
Let’s quickly go over what we are doing here.
- We are describing a DynamoDB table resource called NotesTable.
- We get the table name from the custom variable ${self:custom.tableName}. This is generated dynamically in our serverless.yml. We will look at this in detail below.
- We are also configuring the two attributes of our table, userId and noteId, and specifying them as our primary key: userId is the partition (HASH) key and noteId is the sort (RANGE) key.
- Finally, we are setting the BillingMode to PAY_PER_REQUEST, so instead of provisioning the read/write capacity for our table ahead of time, we pay per request. We will go over this setting in detail below.
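As a quick sanity check, once this resource has been deployed you can look up the table it creates. Assuming you end up deploying the dev stage (where, as we'll see below, the table will be called dev-notes) and have the AWS CLI configured, something like this should describe it:

$ aws dynamodb describe-table --table-name dev-notes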
Add the Resource
Now let’s add a reference to this resource in our project.
Add the following resources: block to the bottom of our serverless.yml:
# Create our resources with separate CloudFormation templates
resources:
  # DynamoDB
  - ${file(resources/dynamodb-table.yml)}
Add the following custom: block at the top of our serverless.yml, above the provider: block.
custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set the table name here so we can use it while testing locally
  tableName: ${self:custom.stage}-notes
We added a couple of things here that are worth spending some time on:
- We first create a custom variable called stage. You might be wondering why we need a custom variable for this when we already have stage: dev in the provider: block. This is because we want to set the current stage of our project based on what is passed in through the serverless deploy --stage $STAGE command. And if a stage is not set when we deploy, we want to fall back to the one we have set in the provider block. So ${opt:stage, self:provider.stage} is telling Serverless Framework to first look for opt:stage (the one passed in through the command line), and then fall back to self:provider.stage (the one in the provider block). There's a quick example of this right after this list.
- The table name is based on the stage we are deploying to: ${self:custom.stage}-notes. The reason this is dynamically set is because we want to create a separate table when we deploy to a new stage (environment). So when we deploy to dev we will create a DynamoDB table called dev-notes, and when we deploy to prod, it'll be called prod-notes. This allows us to clearly separate the resources (and data) we use in our various environments.
- Finally, we are using the PAY_PER_REQUEST setting for the BillingMode. This tells DynamoDB that we want to pay per request and use the On-Demand Capacity option. With DynamoDB in On-Demand mode, our database is now truly serverless. This option can be very cost-effective, especially if you are just starting out and your workloads are not very predictable or stable. On the other hand, if you know exactly how much capacity you need, the Provisioned Capacity mode can work out to be cheaper; there's a sketch of that alternative after the deploy example below.
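For example, this is how we'd deploy to the prod stage. Here opt:stage resolves to prod, so the table that gets created will be called prod-notes:

$ serverless deploy --stage prod

And if we run a plain serverless deploy, opt:stage is not set, so self:provider.stage (dev in our case) is used instead.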
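And for comparison, here is a minimal sketch of what the capacity settings in resources/dynamodb-table.yml could look like if we went with the Provisioned Capacity mode instead. Note that we are not using this in our app, and the 1 read/write unit values below are just placeholders:

      # A sketch of the Provisioned Capacity alternative (not used here)
      BillingMode: PROVISIONED
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1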
A lot of the above might sound tricky and overly complicated right now. But we are setting it up so that we can automate and replicate our entire setup with ease. Note that Serverless Framework (and CloudFormation behind the scenes) will be completely managing our resources based on the serverless.yml. This means that if you have a typo in your table name, the old table will be removed and a new one will be created in its place. To prevent accidentally deleting resources that hold data (like DynamoDB tables), you need to set the DeletionPolicy: Retain flag. We have a detailed post on this over on the Seed blog.
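For example, here's a minimal sketch of where that flag would go in our resources/dynamodb-table.yml. DeletionPolicy sits at the same level as Type and Properties (we're leaving it out of our app for now):

Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    # Keep the table (and its data) around even if this resource
    # is removed from the CloudFormation stack
    DeletionPolicy: Retain
    Properties:
      TableName: ${self:custom.tableName}
      # ... rest of the properties from above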
We are also going to make a quick tweak to reference the DynamoDB resource that we are creating.
Update our environment variables with the newly generated table name. Replace the environment: block with the following:
# These environment variables are made available to our functions
# under process.env.
environment:
  tableName: ${self:custom.tableName}
  stripeSecretKey: ${env:STRIPE_SECRET_KEY}
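As a quick aside, the ${env:STRIPE_SECRET_KEY} syntax tells Serverless Framework to read the value from your shell environment when a command runs. So a deploy would look something like this, where the key value is a placeholder for your own:

$ export STRIPE_SECRET_KEY=<your Stripe secret key>
$ serverless deploy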
Replace the iamRoleStatements: block in your serverless.yml with the following.
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:DescribeTable
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    # Restrict our IAM role permissions to
    # the specific table for the stage
    Resource:
      - "Fn::GetAtt": [ NotesTable, Arn ]
Make sure to copy the indentation properly. These two blocks fall under the provider block and need to be indented as such.
A couple of interesting things we are doing here:
- The environment: block here is basically telling Serverless Framework to make the variables available as process.env in our Lambda functions. For example, process.env.tableName would be set to the DynamoDB table name for this stage. We will need this later when we are connecting to our database; there's a quick sketch of it after this list.
- For the tableName specifically, we are getting it by referencing our custom variable from above.
- In the case of our iamRoleStatements: we are now specifically stating which table we want to connect to. The Fn::GetAtt function resolves our NotesTable resource to its ARN. This block is telling AWS that these are the only resources our Lambda functions have access to (see the note after this list about secondary indexes).
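To make the process.env part concrete, here's a minimal sketch of how a Lambda function in our app could read the table name. The get.js file name, the hard-coded userId, and the handler shape are just assumptions for illustration; we'll write the real version when we connect to our database.

// get.js — a hypothetical handler, just to illustrate process.env
import AWS from "aws-sdk";

const dynamoDb = new AWS.DynamoDB.DocumentClient();

export async function main(event) {
  const params = {
    // Set by the environment: block in our serverless.yml;
    // resolves to dev-notes, prod-notes, etc. depending on the stage
    TableName: process.env.tableName,
    Key: {
      userId: "123", // placeholder user id
      noteId: event.pathParameters.id,
    },
  };

  const result = await dynamoDb.get(params).promise();

  return {
    statusCode: 200,
    body: JSON.stringify(result.Item),
  };
}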
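One thing to keep in mind with the Fn::GetAtt approach is that it only grants access to the table itself. If you were to later add a secondary index to the table (we aren't doing that here), its ARN would need to be granted separately. A common pattern for that is a sketch like the following, where index/* is appended to the table ARN:

    Resource:
      - "Fn::GetAtt": [ NotesTable, Arn ]
      # Hypothetical: also grant access to any of the table's indexes
      - "Fn::Join":
          - "/"
          - - "Fn::GetAtt": [ NotesTable, Arn ]
            - "index/*"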
Next, let’s add our S3 bucket for file uploads.