Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
It makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB tables do not have fixed schemas; a table consists of items, and each item may have a different number of attributes.
DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
DynamoDB supports fast in-place updates. A numeric attribute can be atomically incremented or decremented in an item using a single API call.
DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access.
Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
DynamoDB supports two different kinds of primary keys:
Partition Key (previously called the Hash key)
A simple primary key, composed of one attribute
DynamoDB uses the partition key’s value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
No two items in a table can have the same partition key value.
Partition Key and Sort Key (previously called the Hash and Range key)
A composite primary key composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
DynamoDB uses the partition key value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
All items with the same partition key are stored together, in sorted order by sort key value.
It is possible for two items to have the same partition key value, but those two items must have different sort key values.
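For example, the composite key described above could be defined at table creation; a minimal sketch using the AWS CLI (table and attribute names are illustrative, and on-demand billing is used so no capacity needs to be specified):
aws dynamodb create-table \
--table-name Notes \
--attribute-definitions AttributeName=UserId,AttributeType=S AttributeName=NoteId,AttributeType=N \
--key-schema AttributeName=UserId,KeyType=HASH AttributeName=NoteId,KeyType=RANGE \
--billing-mode PAY_PER_REQUEST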
DynamoDB Secondary indexes
add flexibility to the queries, without impacting performance.
are automatically maintained as sparse objects: items appear in an index only if they exist in the table on which the index is defined, making queries against an index very efficient.
DynamoDB throughput and single-digit millisecond latency makes it a great fit for gaming, ad tech, mobile, and many other applications.
ElastiCache can be used in front of DynamoDB to offload a high volume of reads for infrequently changed data.
When a question emphasizes scalability, think of DynamoDB: it is the recommended database solution for handling huge amounts of data/records.
TTL
TTL allows you to define a per-item timestamp to determine when an item is no longer needed.
Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
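TTL is enabled per table by naming the attribute that holds the expiry timestamp (in epoch seconds); a minimal sketch, assuming a hypothetical ExpireAt attribute on the Notes table:
aws dynamodb update-time-to-live \
--table-name Notes \
--time-to-live-specification "Enabled=true, AttributeName=ExpireAt"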
By default, a Query operation does not return any data on how much read capacity it consumes.
However, you can specify the ReturnConsumedCapacity parameter in a Query request to obtain this information. Settings:
NONE—no consumed capacity data is returned. (This is the default).
TOTAL—the response includes the aggregate number of read capacity units consumed.
INDEXES—the response shows the aggregate number of read capacity units consumed, together with the consumed capacity for each table and index that was accessed.
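For example, a Query against the Notes table can ask for the total capacity it consumed:
aws dynamodb query \
--table-name Notes \
--key-condition-expression "UserId = :u" \
--expression-attribute-values '{":u":{"S":"StudentA"}}' \
--return-consumed-capacity TOTAL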
AWS Management Console
NoSQL Workbench
Cross-platform, client-side GUI application for modern database development and operation.
It currently supports both DynamoDB and Amazon Keyspaces (for Apache Cassandra).
DynamoDB Local
With the downloadable version of Amazon DynamoDB, you can develop and test applications without accessing the DynamoDB web service.
Instead, the database is self-contained on your computer.
When you're ready to deploy your application in production, remove the local endpoint in the code. It then points to the DynamoDB web service.
DynamoDB Local is available as a download (requires JRE), as an Apache Maven dependency, or as a Docker image.
Example:
>> aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": [
"Notes"
]
}
PartiQL
PartiQL is a SQL-compatible query language used to select, insert, update, and delete data in Amazon DynamoDB.
Using PartiQL, you can easily interact with DynamoDB tables and run ad hoc queries.
Example
// Build the PartiQL statements for a DynamoDB ExecuteTransaction request.
private static List<ParameterizedStatement> getPartiQLTransactionStatements() {
    List<ParameterizedStatement> statements = new ArrayList<ParameterizedStatement>();
    // Condition check: the transaction succeeds only if an item with UserId 'StudentA' exists in Notes.
    statements.add(new ParameterizedStatement()
            .withStatement("EXISTS(SELECT * FROM Notes WHERE UserId='StudentA')"));
    return statements;
}
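The same kind of statement can also be run ad hoc from the AWS CLI with execute-statement; a minimal sketch reusing the Notes table above:
aws dynamodb execute-statement \
--statement "SELECT * FROM Notes WHERE UserId='StudentA'"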
AWS Command Line Interface (AWS CLI)
Example:
>> aws dynamodb put-item --table-name Notes --item '{"UserId":{"S":"StudentA"},"NoteId":{"N":"11"},"Note":{"S":"HelloWorld!"}}'
SDKs
Low-level interface
Document interface
High-level interface
Automatically scales horizontally
Runs exclusively on solid state drives (SSDs) to deliver predictable low latency and serve high-scale request workloads cost-efficiently.
For applications where database utilization cannot be predicted, Amazon DynamoDB can be used with Auto Scaling, which helps scale dynamically with the load. Auto Scaling must be applied to the DynamoDB table and to each Global Secondary Index, since they use separate read/write capacity.
Allows provisioned table reads and writes
Scale up throughput when needed
Scale down throughput up to four times per UTC calendar day.
Automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the table size grows or provisioned throughput is increased.
Global Secondary indexes (GSI) can be created upfront or added later.
To manage intermittent high load, use SQS to buffer the database requests instead of overloading the DynamoDB table, then have a service asynchronously pull the messages and write them to DynamoDB.
Each DynamoDB table is automatically stored in three geographically distributed locations for durability.
Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item.
DynamoDB allows the user to specify whether a read should be eventually consistent or strongly consistent at the time of the request.
Eventually Consistent Reads (Default)
The eventual consistency option maximizes read throughput.
Consistency across all copies is usually reached within a second
However, an eventually consistent read might not reflect the results of a recently completed write.
Repeating a read after a short time should return the updated data.
Strongly Consistent Reads
A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default.
Query and GetItem operations can be forced to be strongly consistent.
Query operations cannot perform strongly consistent reads on Global Secondary Indexes.
BatchGetItem operations can be forced to be strongly consistent on a per-table basis.
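For example, a GetItem request can be forced to be strongly consistent from the CLI (reusing the Notes item written earlier):
aws dynamodb get-item \
--table-name Notes \
--key '{"UserId":{"S":"StudentA"},"NoteId":{"N":"11"}}' \
--consistent-read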
Fine Grained Access Control (FGAC) gives a high degree of control over data in the table.
FGAC helps control who (the caller) can access which items or attributes of the table, and what actions (read/write) they can perform.
FGAC is integrated with IAM, which manages the security credentials and the associated permissions.
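FGAC is commonly expressed with the dynamodb:LeadingKeys IAM condition key, which restricts a caller to items whose partition key matches the caller's identity. A minimal sketch of such a policy, assuming a Notes table partitioned on UserId and callers authenticated through Amazon Cognito (the account ID is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem", "dynamodb:UpdateItem"],
    "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Notes",
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
      }
    }
  }]
}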
Data in Transit Encryption
can be done by encrypting sensitive data on the client side or using encrypted connections (TLS).
DynamoDB supports Encryption at rest
Encryption at rest enables encryption for the data persisted (data at rest) in the DynamoDB tables.
Encryption at rest includes the base tables, secondary indexes.
Encryption at rest automatically integrates with AWS KMS for managing the keys used for encrypting the tables.
Encryption at rest is enabled by default for all DynamoDB tables and cannot be disabled.
You can choose whether tables are encrypted with an AWS owned key, an AWS managed key, or a customer managed key in AWS KMS.
Encryption at rest also applies to DynamoDB Streams, global tables, and backups.
On-Demand Backups of encrypted DynamoDB tables are encrypted using S3’s Server-Side Encryption.
Encryption at rest encrypts your data using 256-bit AES encryption.
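For example, the key used for encryption at rest can be switched on an existing table; a minimal sketch that moves the Notes table to the AWS managed KMS key:
aws dynamodb update-table \
--table-name Notes \
--sse-specification Enabled=true,SSEType=KMS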
DynamoDB provides fast access to items in a table by specifying primary key values
DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
DynamoDB Secondary indexes
is a data structure that contains a subset of attributes from a table
is associated with exactly one table, from which it obtains its data
requires an alternate key for the index partition key and sort key
additionally can define projected attributes which are copied from the base table into the index along with the primary key attributes.
is automatically maintained by DynamoDB
when items are added, modified, or deleted in the base table, any indexes on that table are also updated to reflect these changes.
helps reduce the size of the data as compared to the main table, depending upon the projected attributes, and hence helps improve provisioned throughput performance.
They are automatically maintained as sparse objects. Items will only appear in an index if they exist in the table on which the index is defined, making queries against an index very efficient.
DynamoDB Secondary indexes supports two types
Global secondary index – an index with a partition key and a sort key that can be different from those on the base table
This index can span all the data in a table across all partitions.
Key values do not need to be unique.
It can be created when a table is created, or it can be added to an existing table.
It can be deleted.
It supports eventual consistency only.
It has its own provisioned throughput settings for read and write operations.
Queries return only attributes that are projected into the index.
If eventual consistency is suitable for your application, we recommend that you use global secondary indexes.
Local secondary index – an index that has the same partition key as the base table, but a different sort key
The total size of an item collection cannot exceed 10 GB.
Index is located on the same table partition as the items that have a given partition key value.
It can be created only when a table is created.
It cannot be deleted.
It supports eventual consistency and strong consistency.
Queries can return attributes that are not projected into the index.
Global Secondary Indexes (GSI) are indexes that contain partition or composite partition-and-sort keys that can be different from the (primary) keys in the table on which the index is based.
Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
Multiple secondary indexes can be created on a table, and queries issued against these indexes.
Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
GSIs support non-unique attributes, which increases query flexibility by enabling queries against any non-key attribute in the table.
GSIs support eventual consistency. DynamoDB automatically and asynchronously propagates item additions, updates, and deletes to a GSI when corresponding changes are made to the table.
Data in a secondary index consists of GSI alternate key, primary key and attributes that are projected, or copied, from the table into the index.
Attributes that are part of an item in a table, but are not part of the GSI key, the table's primary key, or the projected attributes, are not returned when querying the GSI.
GSIs are sparse by default. When you create a global secondary index, you specify a partition key and optionally a sort key. Only items in the parent table that contain those attributes appear in the index.
When you create a GSI, you must specify read and write capacity units for the expected workload on that index.
If GSIs don’t have enough write capacity, table writes are throttled!
GSIs manage throughput independently of the table they are based on and the provisioned throughput for the table and each associated GSI needs to be specified at creation time
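For illustration, a GSI with its own provisioned throughput can be added to an existing table; a minimal sketch assuming a hypothetical NoteType attribute on the Notes table:
aws dynamodb update-table \
--table-name Notes \
--attribute-definitions AttributeName=NoteType,AttributeType=S \
--global-secondary-index-updates '[{"Create":{"IndexName":"NoteType-index","KeySchema":[{"AttributeName":"NoteType","KeyType":"HASH"}],"Projection":{"ProjectionType":"ALL"},"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":5}}}]'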
Read provisioned throughput
one Read Capacity Unit provides two eventually consistent reads per second for items < 4KB in size.
Write provisioned throughput
one Write Capacity Unit provides one write per second for items < 1KB in size.
consumes 1 write capacity unit if,
new item is inserted into table
existing item is deleted from table
an existing item is updated for projected (non-key) attributes
consumes 2 write capacity units if
an existing item is updated for key attributes, which results in deletion and addition of a new item into the index.
LSI or GSI?
If data size in an item collection > 10 GB, use GSI.
If eventual consistency is acceptable for your scenario, use GSI.
You gain more flexibility with GSI. You can define a maximum of 5 local secondary indexes. There is an initial limit of 20 global secondary indexes per table, and you can increase this limit.
With GSI, you have the flexibility to create them after the table is created. You must create the LSI when the table is defined.
Sparse Indexes
Local secondary indexes (LSI) are indexes that have the same partition key as the table, but a different sort key.
A local secondary index is "local" because every partition of the index is scoped to a table partition that has the same partition key value.
LSI allows searching using a secondary index in place of the sort key, expanding the number of attributes that can be queried efficiently.
LSIs are updated automatically when the primary index is updated, and reads support both strongly consistent and eventually consistent options.
LSIs can only be queried via the Query API
LSIs cannot be added to existing tables at this time
LSIs cannot be modified once it is created at this time.
LSI cannot be removed from a table once they are created at this time
LSI consumes provisioned throughput capacity as part of the table with which it is associated
Read Provisioned throughput
if the data read consists of the index key and projected attributes only
provides one Read Capacity Unit with one strongly consistent read (or two eventually consistent reads) per second for items < 4KB
data size includes the index and projected attributes only
if the data read includes attributes that are not projected into the index
consumes double the read capacity: one read from the index and one read from the table to fetch the entire item, not just the non-projected attribute.
Write provisioned throughput
consumes 1 write capacity unit if,
new item is inserted into table
existing item is deleted from table
an existing item is updated for projected (non-key) attributes
consumes 2 write capacity units if
an existing item is updated for key attributes, which results in deletion and addition of a new item into the index
AWS DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
There are two types of read/write capacity modes:
On-demand
Provisioned
Provisioned Mode
Provisioned mode requires you to specify the number of reads and writes per second required by the application.
DynamoDB Auto Scaling (enabled by default for new provisioned-mode tables):
Automatically adjusts read and write throughput capacity in response to dynamically changing request volumes with zero downtime
Just set your desired throughput utilization target, minimum and maximum limits
Continuously monitors actual throughput consumption using Amazon CloudWatch
No additional cost to use
Available in all AWS Regions
Best for general scaling needs for most applications with relatively predictable scaling needs
Provisioned throughput is the maximum amount of capacity that an application can consume from a table or index.
If the provisioned throughput capacity on a table or index is exceeded, it is subject to request throttling.
Provisioned mode is a good option for applications with
predictable application traffic
consistent traffic
ability to forecast capacity requirements to control costs
Provisioned mode provides the following capacity units
Read Capacity Units (RCU)
Total number of read capacity units required depends on the item size, and the consistent read model (eventually or strongly)
one RCU represents
one strongly consistent read per second for an item up to 4 KB in size
two eventually consistent reads per second, for an item up to 4 KB in size (i.e., up to 8 KB of reads per second)
Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB.
DynamoDB must consume additional read capacity units for items greater than 4 KB. For example, for an 8 KB item, 2 read capacity units are required to sustain one strongly consistent read per second, 1 read capacity unit for an eventually consistent read, or 4 read capacity units for a transactional read request.
Item size is rounded up to the next 4 KB multiple; for example, a 6 KB and an 8 KB item require the same RCUs.
Write Capacity Units (WCU)
Total number of write capacity units required depends on the item size only
one write per second for an item up to 1 KB in size
Transactional write requests require 2 write capacity units to perform one write per second for items up to 1 KB.
DynamoDB must consume additional write capacity units for items greater than 1 KB. For example, for a 2 KB item, 2 write capacity units are required to sustain one write request per second, or 4 write capacity units for a transactional write request.
Item size is rounded up to the next 1 KB multiple; for example, a 0.5 KB and a 1 KB item need the same WCUs.
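Worked example of the capacity math above: a 7 KB item rounds up to 8 KB (two 4 KB units), so one strongly consistent read per second needs 2 RCUs, an eventually consistent read needs 1 RCU, and a transactional read needs 4 RCUs; a 2.5 KB item rounds up to 3 KB, so one standard write per second needs 3 WCUs and a transactional write needs 6 WCUs. The chosen capacity is then provisioned on the table, for example (a sketch with a hypothetical Scores table and illustrative values):
aws dynamodb create-table \
--table-name Scores \
--attribute-definitions AttributeName=PlayerId,AttributeType=S \
--key-schema AttributeName=PlayerId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5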
On-demand Mode
On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning.
Uses pay-per-request pricing instead of a provisioned pricing model.
DynamoDB adapts rapidly to accommodate new peaks in level of traffic
Best for spiky, unpredictable workloads
There is no need to specify the expected read and write throughput
Charged for only the reads and writes that the application performs on the tables in terms of read request units and write request units.
DynamoDB adapts rapidly to accommodate the changing load.
Read & Write Capacity Units performance remains the same as provisioned mode.
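For example, an existing provisioned table can be switched to on-demand mode (a sketch reusing the Notes table):
aws dynamodb update-table \
--table-name Notes \
--billing-mode PAY_PER_REQUEST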
DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
Writes to the table will be automatically propagated to all replicas.
Cross-region replication currently supports single master mode. A single master has one master table and one or more replica tables.
Read replicas are updated asynchronously as DynamoDB acknowledges a write operation as successful once it has been accepted by the master table. The write will then be propagated to each replica with a slight delay.
Cross-region replication can be helpful in scenarios such as:
Efficient disaster recovery, in case a data center failure occurs.
Faster reads, for customers in multiple regions by delivering data faster by reading a DynamoDB table from the closest AWS data center.
Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
Easy regional migration, by promoting a read replica to master
Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
Cross-region replication costing depends on
Provisioned throughput (Writes and Reads)
Storage for the replica tables.
Data Transfer across regions
Reading data from DynamoDB Streams to keep the tables in sync.
Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.
NOTE: Before DynamoDB Streams and the out-of-the-box cross-region replication support, cross-region replication for DynamoDB was performed by defining an AWS Data Pipeline job, which internally used EMR to transfer the data.
DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
Global Tables help in building applications that take advantage of data locality to reduce overall latency.
Global Tables provide eventual consistency across regions.
Global Tables replicates data among regions within a single AWS account, and currently does not support cross account access.
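A minimal sketch using the legacy (2017.11.29) Global Tables API, assuming identical empty Notes tables with streams enabled already exist in both regions (the newer 2019.11.21 version adds replicas through update-table instead):
aws dynamodb create-global-table \
--global-table-name Notes \
--replication-group RegionName=us-east-1 RegionName=eu-west-1 \
--region us-east-1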
DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table in the last 24 hours, after which they are erased.
DynamoDB Streams maintains an ordered sequence of events per item; however, ordering across items is not maintained.
Example
Suppose that you have a DynamoDB table tracking high scores for a game and that each item in the table represents an individual player. If you make the following three updates in this order:
Update 1: Change Player 1’s high score to 100 points
Update 2: Change Player 2’s high score to 50 points
Update 3: Change Player 1’s high score to 125 points
DynamoDB Streams will maintain the order of the Player 1 score events. However, it does not maintain order across players, so the Player 2 score event is not guaranteed to appear between the two Player 1 events.
Can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
DynamoDB Streams writes a stream record with the primary key attribute(s) of the modified items.
A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream to capture additional information, such as the "before" and "after" images of modified items.
DynamoDB Streams APIs helps developers consume updates and receive the item-level data before and after items are changed.
Allows reads at up to twice the rate of the provisioned write capacity of the DynamoDB table.
Have to be enabled on a per-table basis.
DynamoDB Streams is designed so that every update made to the table will be represented exactly once in the stream. No Duplicates
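A stream is enabled per table by choosing a view type; a minimal sketch for the Notes table that captures both the "before" and "after" images:
aws dynamodb update-table \
--table-name Notes \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES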
DynamoDB Triggers (just like database triggers) are a feature that allows execution of custom actions based on item-level updates on a table.
DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources.
DynamoDB Trigger flow
Custom logic for a DynamoDB trigger is stored in an AWS Lambda function as code.
A trigger for a given table can be created by associating an AWS Lambda function to the stream (via DynamoDB Streams) on a table.
When the table is updated, the updates are published to DynamoDB Streams.
In turn, AWS Lambda reads the updates from the associated stream and executes the code in the function.
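A sketch of wiring up such a trigger, assuming a hypothetical processNotesStream Lambda function and a placeholder stream ARN:
aws lambda create-event-source-mapping \
--function-name processNotesStream \
--event-source-arn arn:aws:dynamodb:us-east-1:111122223333:table/Notes/stream/2024-01-01T00:00:00.000 \
--starting-position LATEST \
--batch-size 100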
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
DAX does all the heavy lifting required to add in-memory acceleration to the tables, without requiring developers to manage cache invalidation, data population, or cluster management.
DAX is fault-tolerant and scalable.
DAX is intended for applications that require high-performance reads.
A DAX cluster has a primary node and zero or more read-replica nodes. Upon failure of the primary node, DAX automatically fails over and elects a new primary. For scaling, add or remove read replicas.
Cluster based, Multi-Availability Zone (Multi-AZ).
DAX is a read-through cache because it is API compatible with DynamoDB read APIs and caches GetItem, BatchGetItem, Scan, and Query results if they don’t currently reside in DAX.
A read-through cache is effective for read-heavy workloads.
As a write-through cache, DAX allows you to issue writes directly, so that your writes are immediately reflected in the item cache. You do not need to manage cache invalidation logic, because DAX handles it for you.
For strongly consistent read requests from an application, a DAX cluster passes all requests through to DynamoDB and does not cache the results.
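A minimal sketch of creating a small DAX cluster (the cluster name, node type, and role ARN are placeholders; the IAM role must allow DAX to access the DynamoDB tables):
aws dax create-cluster \
--cluster-name notes-dax \
--node-type dax.r5.large \
--replication-factor 3 \
--iam-role-arn arn:aws:iam::111122223333:role/DAXServiceRole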
Write-through cache
Write-through caches are advantageous, especially in conjunction with a read-through cache.
They greatly simplify the use of caches—you no longer need to write or test cache population or invalidation logic.
Because a write-through cache automatically caches the update, it introduces a slight amount of latency as compared to writing directly to an underlying data store itself.
However, the advantage is that the data that is written is consistent with the underlying data store and is now available for reads.
Write-through caching is advantageous for read-heavy workloads but does not help with latency or throughput for write-heavy workloads.
Some workloads in the IoT or ad tech space have a considerable amount of data that is written once and never read. In these scenarios, it often doesn’t make sense to use a write-through cache.
Write - Around
You can employ a write-around pattern in which writes go directly to DynamoDB.
Only the data that is read—and thus has a higher potential to be read again—is cached.
Side-cache
When you’re using a cache for a backend data store, a side-cache is perhaps the most commonly known approach. Canonical examples include both Redis and Memcached.
These are general-purpose caches that are decoupled from the underlying data store and can help with both read and write throughput, depending on the workload and durability requirements.
Read-through cache
This pattern of loading data into the cache only when the item is requested is often referred to as lazy loading.
The advantage of this approach is that data that is populated in the cache has been requested and has a higher likelihood of being requested again.
The disadvantage of lazy loading is the cache miss penalty on the first read of the data, which takes more time to retrieve the data from the table instead of directly from the cache.
[Diagram: Read Side-Cache vs. Read-Through Cache]
VPC Endpoints
VPC endpoints for DynamoDB improve privacy and security, especially for sensitive workloads with compliance and audit requirements, by enabling private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
VPC endpoints for DynamoDB support IAM policies to simplify DynamoDB access control, where access can be restricted to a specific VPC endpoint.
VPC endpoints can be created only for Amazon DynamoDB tables in the same AWS Region as the VPC.
DynamoDB Streams cannot be accessed using VPC endpoints for DynamoDB.
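A minimal sketch of creating a gateway VPC endpoint for DynamoDB (the VPC and route table IDs are placeholders):
aws ec2 create-vpc-endpoint \
--vpc-id vpc-0abc1234 \
--service-name com.amazonaws.us-east-1.dynamodb \
--route-table-ids rtb-0abc1234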
Adaptive capacity redistributes read and write capacity to partitions that become "hot" (when their usage exceeds their allotted capacity).
Enabled automatically for every DynamoDB table
Hot partitions are accommodated until total traffic exceeds total table capacity, at which point throttling starts
Generally speaking, the more distinct partition key values your workload accesses, the more those requests will be spread evenly across the partitioned space, and the less likely you'll have a hot key or a hot partition.
You can address hot partitions by choosing the right partition key value and using hashed keys if your ideal partition key has poor uniformity, as provided in the following examples:
User ID, where the application has many users: Good
Status code, where there are only a few possible status codes: Bad
Item creation date, rounded to the nearest time period (for example, day, hour, or minute): Bad
Device ID, where each device accesses data at relatively similar intervals: Good
Device ID, where even if there are many devices being tracked, one is by far more popular than all of the others: Bad
Index Storage
DynamoDB is an indexed data store
Billable Data = Raw byte data size + 100 byte per-item storage indexing overhead
Provisioned throughput
Pay flat, hourly rate based on the capacity reserved as the throughput provisioned for the table.
One Write Capacity Unit provides one write per second for items < 1KB in size.
One Read Capacity Unit provides one strongly consistent read (or two eventually consistent reads) per second for items < 4KB in size.
Provisioned throughput charges for every 10 units of Write Capacity and every 50 units of Read Capacity.
Reserved capacity
Significant savings over the normal price
Pay a one-time upfront fee
Keep item size small
Store metadata in DynamoDB and large BLOBs in Amazon S3
Use a table per day, week, month, etc. for storing time-series data.
Use conditional or Optimistic Concurrency Control (OCC) updates
Optimistic Concurrency Control is like optimistic locking in an RDBMS.
OCC is generally used in environments with low data contention, where conflicts are rare and transactions can be completed without the expense of managing locks.
OCC assumes that multiple transactions can frequently complete without interfering with each other.
Transactions are executed using data resources without acquiring locks on those resources and without waiting for other transactions' locks to clear.
Before a transaction is committed, it is verified that the data was not modified by any other transaction. If it was, the transaction is rolled back and needs to be restarted with the updated data.
OCC leads to higher throughput as compared to other concurrency control methods like pessimistic locking, as locking can drastically limit effective concurrency even when deadlocks are avoided.
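A sketch of an optimistic (conditional) update, assuming the Notes items carry a hypothetical numeric Version attribute; the request fails with a ConditionalCheckFailedException if another writer changed the item first:
aws dynamodb update-item \
--table-name Notes \
--key '{"UserId":{"S":"StudentA"},"NoteId":{"N":"11"}}' \
--update-expression "SET Note = :new, Version = Version + :one" \
--condition-expression "Version = :expected" \
--expression-attribute-values '{":new":{"S":"Updated note"},":one":{"N":"1"},":expected":{"N":"3"}}'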
Avoid hot keys and hot partitions
Querying and Scanning Data
Avoiding Sudden Spikes in Read Activity
Reduce page size
Isolate scan operations
Some applications handle this load by rotating traffic hourly between two tables—one for critical traffic, and one for bookkeeping.
Other applications can do this by performing every write on two tables: a "mission-critical" table, and a "shadow" table.
Configure your application to retry any request that receives a response code that indicates you have exceeded your provisioned throughput.
Taking Advantage of Parallel Scans
A parallel scan can be the right choice if the following conditions are met:
The table size is 20 GB or larger.
The table's provisioned read throughput is not being fully used.
Sequential Scan operations are too slow.
Choosing TotalSegments
The best setting for TotalSegments depends on your specific data, the table's provisioned throughput settings, and your performance requirements.
We recommend that you begin with a simple ratio, such as one segment per 2 GB of data.
For example, for a 30 GB table, you could set TotalSegments to 15 (30 GB / 2 GB). Your application would then use 15 workers, with each worker scanning a different segment.
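For example, a worker responsible for segment 0 of a four-segment parallel scan of the Notes table would issue:
aws dynamodb scan \
--table-name Notes \
--total-segments 4 \
--segment 0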
In Amazon DynamoDB, you use expressions to denote the attributes that you want to read from an item.
You also use expressions when writing an item to indicate any conditions that must be met (also known as a conditional update), and to indicate how the attributes are to be updated.
Types:
Projection Expressions:
To read data from a table, you use operations such as GetItem, Query, or Scan. Amazon DynamoDB returns all the item attributes by default.
A projection expression is a string that identifies the attributes that you want.
Example:
aws dynamodb get-item \
--table-name ProductCatalog \
--key file://key.json \
--projection-expression "Description, RelatedItems[0], ProductReviews.FiveStar"
Update Expressions:
An update expression specifies how UpdateItem will modify the attributes of an item—for example, setting a scalar value or removing elements from a list or a map.
Syntax:
update-expression ::=
[ SET action [, action] ... ]
[ REMOVE action [, action] ...]
[ ADD action [, action] ... ]
[ DELETE action [, action] ...]
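A minimal sketch combining SET and REMOVE actions on the ProductCatalog item used in this section's examples (InStock is a hypothetical attribute):
aws dynamodb update-item \
--table-name ProductCatalog \
--key '{"Id":{"N":"123"}}' \
--update-expression "SET Price = :p REMOVE InStock" \
--expression-attribute-values '{":p":{"N":"275"}}'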
Expression attribute name:
It's a placeholder that you use in an Amazon DynamoDB expression as an alternative to an actual attribute name.
Example:
aws dynamodb get-item \
--table-name ProductCatalog \
--key '{"Id":{"N":"123"}}' \
--projection-expression "#c" \
--expression-attribute-names '{"#c":"Comment"}'
Condition Expression
To determine which items should be modified.
If the condition expression evaluates to true, the operation succeeds; otherwise, the operation fails.
Example:
aws dynamodb put-item \
--table-name ProductCatalog \
--item file://item.json \
--condition-expression "attribute_not_exists(Id)"
Filter Expression
A filter expression determines which items within the Query results should be returned to you.
All of the other results are discarded.
A filter expression is applied after a Query finishes, but before the results are returned.
(A Query still requires a key condition expression that names the partition key attribute and a single value for it; a filter expression can be added with --filter-expression, as sketched after the example below.)
Example
aws dynamodb query \
--table-name Thread \
--key-condition-expression "ForumName = :name and Subject = :sub" \
--expression-attribute-values file://values.json
The arguments for --expression-attribute-values are stored in the values.json file.
{
":name":{"S":"Amazon DynamoDB"},
":sub":{"S":"DynamoDB Thread 1"}
}
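The query above shows only the required key condition; a filter expression is added with --filter-expression, for example (a sketch assuming the Thread items carry a numeric Views attribute as in the AWS sample data):
aws dynamodb query \
--table-name Thread \
--key-condition-expression "ForumName = :fn" \
--filter-expression "#v >= :num" \
--expression-attribute-names '{"#v":"Views"}' \
--expression-attribute-values '{":fn":{"S":"Amazon DynamoDB"},":num":{"N":"3"}}'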