Command Query Responsibility Segregation (CQRS)
CQRS is an architectural pattern that separates reads from writes.
A query reads from the data source, whereas a command writes or mutates the state.
CQRS is an architectural pattern derived from the Command-Query Separation (CQS) principle coined by Bertrand Meyer.
6 Rs of migration
Twelve-Factor App methodology
Principle I. Codebase
“One codebase tracked in revision control, many deploys”
Your code base should have a logical version control system that’s easy to understand.
Principle II. Dependencies
“Explicitly declare and isolate dependencies”
This principle maintains that you should never rely on the implicit existence of system-wide packages. Instead, declare all dependencies explicitly in a dependency manifest.
Principle III. Config
“Store config in the environment”
An application and its configuration should be completely independent. Storing config as constants in the code should be avoided entirely.
Your configurations should have a separate file and shouldn’t be hosted within the code repository.
Solution:
Use AWS Secrets Manager
Principle IV. Backing services
“Treat backing services as attached resources”
In a 12-factor app, any services that don't support the core app must be accessed as a service. These non-core services might include databases, external storage, message queues, and so on.
The code for a twelve-factor app makes no distinction between local and third-party services.
Solution:
AWS: Default model for Lambda
Principle V. Build, release, run
“Strictly separate build and run stages”
A 12-factor app is strict about separating the three stages of building, releasing, and running.
Start the build process by storing the app in source control, then build out its dependencies.
Separating the config information means you can combine it with the build for the release stage.
Then it's ready for the run stage. It's also important that each release has a unique ID.
Solutions:
CodePipeline
CodeDeploy
CodeCommit
CodeBuild
Principle VI. Processes
“Execute the app as one or more stateless processes”
Store any data that is required to persist in a stateful backing service, such as databases. The idea is that the process is stateless and shares absolutely nothing.
Solutions:
Treat Lambda functions as stateless.
Principle VII. Port binding
“Export services via port binding”
12-factor apps must always be independent from additional applications. Every function should be its own process—in full isolation.
Example: Add a web server library or similar to the core app so the app can listen for requests on a defined port, whether that's HTTP or a different protocol.
It does not rely on runtime injection of a web server into the run environment to create a web-facing service.
Solution:
Not applicable to AWS Lambda
Applicable for container or Amazon EC2 based applications
Principle VIII. Concurrency
“Scale out via the process model”
A true 12-factor app is designed for scaling.
Principle IX. Disposability
“Maximize robustness with fast startup and graceful shutdown”
The concept of disposable processes means that an application can die at any time, but it won't affect the user—the process can be replaced by another instance, or it can start right up again.
Building disposability into your app ensures that the app shuts down gracefully: it should clean up all utilized resources and shut down smoothly.
Principle X. Dev/prod parity
“Keep development, staging, and production as similar as possible”
Principle XI. Logs
“Treat logs as event streams”
Unlike monolithic and traditional apps that store log information in a file, this principle maintains that you should stream logs to a chosen location—not simply dump them into a log file.
Solution:
Correlation IDs can play an important role when creating log files for microservice applications. They help you track issues in a complex distributed system.
Example:
09-02-2015 15:03:24 ui-svc INFO [uuid-123] ……
09-02-2015 15:03:25 catalog-svc INFO [uuid-123] ……
09-02-2015 15:03:26 checkout-svc ERROR [uuid-123] ……
09-02-2015 15:03:27 payment-svc INFO [uuid-123] ……
09-02-2015 15:03:27 shipping-svc INFO [uuid-123] ……
To do this, and to debug and track services, use a Mapped Diagnostic Context (MDC) with your logging. MDC is important for debugging and understanding microservices.
Principle XII. Admin processes
“Run admin/management tasks as one-off processes”
The final 12-factor app principle proposes separating administrative tasks from the rest of your application. These tasks might include migrating a database or inspecting records.
Though the admin processes are separate, you must continue to run them in the same environment and against the base code and config of the app itself. Shipping the admin tasks code alongside the application prevents drift.
Domain driven design
Placing the project's primary focus on the core domain and domain logic.
Basing complex designs on a model of the domain.
Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.
Bounded Context
DDD deals with large models by dividing them into different bounded contexts and being explicit about their interrelationships.
A bounded context encapsulates the details of a single domain, such as domain model, data model, application services, and defines the integration points with other bounded contexts/domains.
This matches perfectly with the definition of a microservice, which is autonomous, has well-defined interfaces, and implements a business capability.
This makes context mapping (and DDD, in general) an excellent tool in the architect’s toolbox for identifying and designing microservices.
A bounded context encapsulates a single domain.
DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.
Anti-corruption Layer
An Anti-corruption Layer is used to communicate between bounded contexts.
It translates from one context to the other, so that data in each context reflects the language and the way that context treats the data.
When one system needs to talk to another system and the two systems are not compatible, you can use an intermediate layer that translates the first system's communication into the protocol used by the second system, and vice versa. This intermediate layer is called an Anti-corruption Layer because it helps prevent the data corruption that would otherwise occur.
This is not a new concept; traditionally, an Enterprise Service Bus (ESB) has been used as this intermediate layer.
A domain-driven design
Defines the integration points with other domains
Aligns well with characteristics of microservices
Ubiquitous Language
Practice of building up a common, rigorous language between developers and users.
A model acts as a Ubiquitous Language to help communication between software developers and domain experts.
Various factors draw boundaries between contexts, the dominant being human culture.
Bounded contexts have unrelated concepts (ex. support tickets in a customer support context).
They also share concepts (ex. products and customers).
Polyglot Persistence
When storing data, it is best to use multiple data storage technologies, chosen based upon the way data is being used by individual applications or components of a single application.
Each service must own its own data.
Polyglot Programming
It is the idea that applications should be written in a mix of languages to take advantage of the fact that different languages are suitable for tackling different problems.
All interactions with AWS services are via the AWS API.
Use Signature Version 4 (SigV4) to sign all interactions with the AWS API, using an access key ID and secret access key.
Create a user in IAM, download the access key/secret access key
Use the aws configure command to configure the CLI tools to interface with your AWS account using the IAM user's access key/secret access key and a default Region (the default output format can be left blank)
Configured credentials can be found in ~/.aws/credentials
Region and other configuration parameters can be found in ~/.aws/config
The default credential provider chain looks for credentials in this order:
Specified in the code
Environment variables
Default credential profile in the credentials file
Amazon ECS container credentials
Amazon EC2 instance role
Example Java:
public static void main(String[] args) throws IOException {
    AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    try {
        System.out.println("Uploading a new object to S3 from a file\n");
        File file = new File(uploadFileName);
        s3client.putObject(new PutObjectRequest(
            bucketName, keyName, file));
    } catch (AmazonServiceException ase) {
        System.out.println("Error Message: " + ase.getMessage());
    } catch (AmazonClientException ace) {
        System.out.println("Error Message: " + ace.getMessage());
    }
}
How do users/applications authenticate?
If running in an integrated development environment (IDE), add credentials to the AWS Toolkit setup.
If using the AWS Management Console, sign in as an IAM user.
Amazon Cognito vends scoped, temporary credentials to untrusted environments such as web and mobile apps.
With the AWS SDK, you can use AWS services in your application by using your preferred programming language.
SDKs are available in a number of programming languages and technology platforms, such as Android, iOS, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby.
Levels
High-level API
Provides the methods you need to perform an operation; use the high-level API for its simplicity.
Has one class per conceptual resource
Defines service resources and individual resources
Example: when uploading a file to an Amazon S3 bucket, the high-level API uses the file size to determine whether to upload the file in a single operation or in multiple parts.
Some AWS SDKs, such as the AWS SDK for Python (Boto3), provide higher-level APIs called resource APIs.
Example:
def listResource():
    s3resource = boto3.resource('s3')
    bucket = s3resource.Bucket('mybucket')
    for obj in bucket.objects.all():
        print(obj.key, obj.last_modified)
Example Java:
// High Level API – TransferManager Class
File file = new File(file_path);
TransferManager xfer_mgr = TransferManagerBuilder.standard().build();
try {
Upload xfer = xfer_mgr.upload(bucketname, keyName, file);
XferMgrProgress.showTransferProgress(xfer);
XferMgrProgress.waitForCompletion(xfer);
} catch (AmazonServiceException e) {
System.err.println(e.getErrorMessage());
System.exit(1);
}
Low-Level API
Set of client classes, each exposing a direct mapping of an AWS service’s API.
These client objects have a method for each operation that the service supports, with corresponding objects representing the request parameters and the response data.
Full control over the requests
Example:
def listClient():
    s3client = boto3.client('s3')
    response = s3client.list_objects_v2(Bucket='mybucket')
    for content in response['Contents']:
        print(content['Key'], content['LastModified'])
Example Java:
// Low level API
File file = new File(file_path);
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
try {
    System.out.println("Uploading a new object to S3 from a file\n");
    s3client.putObject(new PutObjectRequest(bucketName, keyName, file));
} catch (AmazonServiceException ase) {
    System.out.println("Error Message: " + ase.getMessage());
} catch (AmazonClientException ace) {
    System.out.println("Error Message: " + ace.getMessage());
}
APIs can change. Therefore, if you rely on a particular version of the API in your code, AWS recommends that you lock the service to the version number that you are using.
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
api_versions =
ec2 = 2015-03-01
cloudfront = 2015-09-17
Switching profiles is also supported through your IDE when paired with AWS Toolkits.
You can use AWS services by running the AWS Command Line Interface (AWS CLI) commands from your terminal program.
By default, the access key ID and the secret access key are associated with the IAM account that you use to access AWS resources.
You can also specify a default AWS Region for all commands.
Invoke locally:
Linux shells
Windows command line
macOS
Invoke remotely:
EC2 over SSH/PuTTY
AWS Cloud9, which is automatically authenticated with the permissions of the logged-in AWS user.
AWS CloudShell
AWS Systems Manager Session
Example: Creating a lambda function
aws lambda create-function --function-name ProcessDynamoDBRecords \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs12.x \
--role arn:aws:iam::123456789012:role/lambda-dynamodb-role
A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command.
Using AWS CLI, you can configure multiple named profiles as a collection of settings and credentials used to interact with your AWS services.
With named profiles, you can switch between accounts, users, roles, and regions.
IDEs, such as VS Code, Eclipse, Visual Studio, and JetBrains, support named profiles.
To add additional profiles to your config and credentials files, run $ aws configure --profile profile-name.
Global settings affect all services. By contrast, environment variables affect only AWS SDKs and tools.
You can also store settings specific to Amazon S3 in the config file.
Locations for credentials and config files:
Linux, macOS, or Unix:
~/.aws/credentials
~/.aws/config
Windows:
C:\Users\USERNAME\.aws\credentials
C:\Users\USERNAME\.aws\config
Credentials configuration example:
>> aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
>> aws configure --profile user1
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: PsdaswtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
Config file example:
.aws/config
[default]
region=us-west-2
output=json
[profile dev]
region=us-east-1
output=json
retry_mode=standard
max_attempts = 4
s3 =
max_concurrent_requests = 20
max_queue_size = 10000
multipart_threshold = 64MB
api_versions =
ec2 = 2015-03-01
cloudfront = 2015-09-17
Example: Using a named profile with a Lambda function:
aws lambda create-function --function-name
someFunctionName --profile aws-lab-env
Operation calls on AWS services can be synchronous or asynchronous.
Examples:
Lambda can be invoked synchronously or asynchronously.
Amazon S3 invokes functions asynchronously.
The CreateTable operation for DynamoDB is an asynchronous operation
Synchronous/Blocking
Client makes a request and waits for the command to complete.
Asynchronous/Non-Blocking
Client makes a request but does not wait for the command to be complete.
The waiter utility provides an API for handling the polling task.
You can configure waiters with a maximum number of attempts and the back-off strategy between each attempt.
Examples:
aws dynamodb describe-table --table-name Notes --query "Table.TableStatus"
aws dynamodb wait table-exists --table-name Notes
aws cloudformation cancel-update-stack --stack-name myteststack
Example 2: checking a table (.NET)
// Creates a table with "request" information such as the table name
var response = client.CreateTable(request);
string status = null;
do
{ System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
try
{
var res = client.DescribeTable(new DescribeTableRequest
{ TableName = tableName });
Console.WriteLine("Table name: {0}, status: {1}",
res.Table.TableName,
res.Table.TableStatus);
status = res.Table.TableStatus;
}
catch (ResourceNotFoundException)
{ /* handle the potential exception. */ }
} while (status != "ACTIVE");
Example: checking a table with a waiter (Python)
# Create table and wait until it is ready
# Get the service resource.
dynamodb = boto3.resource('dynamodb')
# Create the DynamoDB table.
table = dynamodb.create_table(
TableName='Notes',
...)
# Wait until the table exists.
table.meta.client.get_waiter('table_exists').wait(TableName='Notes')
# Print out some data about the table.
print(table.item_count)
Example: checking a table with a waiter (Java)
//Create waiter to wait on successful creation of table.
Waiter waiter = client.waiters().tableExists();
try{
    waiter.run(new WaiterParameters<>(new DescribeTableRequest(tableName)));
}
catch (WaiterUnrecoverableException e) {
    // Explicit short circuit when in an undesired state.
}
catch (WaiterTimedOutException e) {
    // Failed to transition into the desired state after polling.
}
catch (DynamoDBException e) {
    // Unexpected service exception.
}
AWS:
400 series: Handle error in application
500 series: Retry operation
Java and .NET SDK
AmazonServiceException or subclass: Request was correctly transmitted to the service. However, the service was not able to process it and returned an error response instead.
AmazonClientException: This exception indicates that a problem occurred inside the Java client code, either while trying to send a request to AWS or while trying to parse a response from AWS
IllegalArgumentException – This exception is thrown if you pass an illegal argument when performing an operation on a service.
AWS SDK for Python (Boto3)
botocore.exceptions.ClientError
AWS Toolkits are available for multiple IDEs (Rider, Eclipse, WebStorm, Visual Studio Code, PyCharm, IntelliJ, AWS Cloud9, Azure DevOps)
Make it easier to create, debug, and deploy applications using AWS
Considerations:
The Java Virtual Machine (JVM) caches DNS name lookups. Because AWS resources use DNS name entries that occasionally change, we recommend that you configure your JVM with a TTL value of no more than 60 seconds.
Environment variables provide another way to specify configuration options and credentials. They can be useful for scripting or temporarily setting a named profile as the default.
AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses.
AWS Cloud9 is an IDE that supports developers who develop for the cloud.
With AWS Cloud9 you can write, run, and debug code using only a browser.
You can run your development environment on a managed EC2 instance or an existing Linux server that supports Secure Shell (SSH).
Isolate your project’s resources by maintaining multiple development environments.
Programmers can simultaneously edit the same code because it has features that support pair programming.
AWS Cloud9 comes prepackaged with tools for popular programming languages, including Java, JavaScript, Python, PHP, .NET, and more.
Credentials Options:
In an EC2 environment, we recommend creating IAM roles that provide temporary credentials and follow AWS best practices.
Attach an IAM instance profile to the AWS Cloud9 instance.
When using an SSH environment, you can store your access credentials within the environment. However, this method is considered less secure.
Use AWS Security Token Service (AWS STS) to request limited permissions, temporary credentials for IAM users, or for users that you authenticate through identity federation.
Temporary credentials are the basis for roles. With AWS STS, you can specify an expiration interval for the temporary credentials. Any service requests you make with expired credentials fail, so you must request a new set of temporary credentials.
Temporary security credentials are not stored with the user. Temporary security credentials are generated dynamically and provided to the user only when requested.
Web identity federation – You can allow users to sign in using a well-known third-party identity provider, such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC)-compatible provider.
Example: Obtaining role credentials
sts_client = boto3.client('sts')
# Call the assume_role
assumed_role_object=sts_client.assume_role(
RoleArn="arn:aws:iam::account-of-role-to-assume:role/name-of-role",
RoleSessionName="AssumeRoleSession1"
)
# Get the temporary credentials
credentials=assumed_role_object['Credentials']
# Use the temporary credentials
s3_resource=boto3.resource(
's3',
aws_access_key_id=credentials['AccessKeyId'],
aws_secret_access_key=credentials['SecretAccessKey'],
aws_session_token=credentials['SessionToken'],
)
# Use the Amazon S3 resource object
for bucket in s3_resource.buckets.all():
print(bucket.name)
Amazon S3 defines two sets of commands for working with the AWS CLI: s3 and s3api.
>> aws configure get region
us-east-2
>> aws s3 ls
2021-04-20 10:41:08 notes-bucket
>> aws s3 mb s3://lab-bucket --region us-west-1
make_bucket: lab-bucket
>> aws s3api list-buckets --query 'Buckets[].Name'
[
"lab-bucket",
"notes-bucket"
]
>> aws s3api get-bucket-location --bucket lab-bucket
{ "LocationConstraint": "us-west-1“ }
>> aws s3api get-bucket-location --bucket notes-bucket
{ "LocationConstraint": null }
Bucket versioning
>>aws s3api get-bucket-versioning --bucket notes-bucket --generate-cli-skeleton output
{ "Status": "Status", "MFADelete": "MFADelete"}
>>aws s3api put-bucket-versioning --bucket notes-bucket --versioning-configuration Status=Enabled
>>aws s3api get-bucket-versioning --bucket notes-bucket
{ "Status": "Enabled"}
Retrieve Metadata:
aws s3api head-object --bucket notes-bucket --key index.html
{
"AcceptRanges": "bytes",
"ContentType": "text/html",
"LastModified": "Thu, 16 Apr 2021 18:19:14 GMT",
"ContentLength": 77,
"VersionId": "null",
"ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"",
"Metadata": {}
}
SDK
Steps
Configure Amazon S3 settings for the SDK (configuration details, credentials information).
[default]
region = us-east-2
output = json
[profile Beta]
region = us-west-2
output = json
s3 =
max_concurrent_requests = 20
max_queue_size = 10000
multipart_threshold = 64MB
multipart_chunksize = 16MB
max_bandwidth = 50MB/s
addressing_style = path
Define dependencies (Python & Java Example).
boto3.s3
boto3.s3.transfer
-----------------
software.amazon.awssdk.services.s3
software.amazon.awssdk.services.s3.model
software.amazon.awssdk.services.s3.S3Client
software.amazon.awssdk.services.s3.S3AsyncClient
Create an S3 client (service reference) to make service requests (Python & Java Example). A session is an object that stores the configuration state, and you can pull this information to create the client
# create an S3 "low level" client interface
s3client = boto3.client('s3')
# create an S3 "high level" resource interface
s3resource = boto3.resource('s3')
----------You can also manage your own session and create low-level clients or resource clients from it:
# Create S3 resource using custom session
session = boto3.session.Session(profile_name='staging')
# Retrieve region from session object
current_region = session.region_name
# Create a high-level resource from custom session
resource = session.resource('s3')
bucket = resource.Bucket('notes-bucket')
# Region-specific endpoints require the LocationConstraint parameter (Create a Bucket)
bucket.create(
CreateBucketConfiguration={
'LocationConstraint': current_region
}
)
----------------JAVA
S3Client s3 = S3Client.builder()
.region(region)
.credentialsProvider(ProfileCredentialsProvider.create("profile_name"))
.build();
//create a bucket (AWS SDK for Java 2.x)
s3.createBucket(CreateBucketRequest.builder().bucket(bucketName).build());
Perform operations.
“Close” the S3 client when the operations are completed.
HeadBucket API
The HeadBucket operation is an HTTP HEAD request to the bucket. It returns the headers that an HTTP GET operation would return.
A HeadBucket request is useful in determining whether a bucket exists and if you have permission to access it.
If the bucket exists and you have permission to access it, the action returns a code 200 OK.
If the bucket does not exist, or you do not have permission to access it, the action returns a generic 404 Not Found or 403 Forbidden code.
Example Java:
public static boolean bucketExisting(S3Client s3, String bucketName) {
try {
//Create HeadBucket request to determine if bucket exists and you have permissions
HeadBucketRequest request = HeadBucketRequest.builder()
.bucket(bucketName)
.build();
        HeadBucketResponse result = s3.headBucket(request);
        if (result.sdkHttpResponse().statusCode() == 200) {
            System.out.println("Bucket exists!");
            return true;
        }
    }
    catch (AwsServiceException awsEx) {
        switch (awsEx.statusCode()) {
            case 404:
                System.out.println("No such bucket exists.");
                break;
            case 400:
                System.out.println("Attempted to access a bucket from a Region other than where it exists.");
                break;
            case 403:
                System.out.println("Permission errors in accessing bucket.");
                break;
        }
    }
    return false;
}
Python Example
def verifyBucketName(s3Client, bucket):
    try:
        # Check whether a bucket with this name already exists in AWS
        s3Client.head_bucket(Bucket=bucket)
        # If the previous command succeeds, the bucket is already in your account.
        raise SystemExit('This bucket has already been created')
    except botocore.exceptions.ClientError as e:
        error_code = int(e.response['Error']['Code'])
        if error_code == 404:
            # A 404 error code means a bucket with that name
            # does not exist anywhere in AWS.
            print('Existing bucket not found, please proceed')
        if error_code == 403:
            # A 403 error code means a bucket with that name exists
            # in another AWS account.
            raise SystemExit('This bucket is already owned by another AWS account')
Create a Bucket:
Python
s3_client = boto3.client('s3', region_name=region)
location = {'LocationConstraint': region}
s3_client.create_bucket(Bucket=bucket_name, CreateBucketConfiguration=location)
---Waiter
waiter = s3Client.get_waiter('bucket_exists')
waiter.wait(Bucket=bucket)
Java
if (!s3Client.doesBucketExistV2(bucketName)) {
s3Client.createBucket(new CreateBucketRequest(bucketName));
String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
System.out.println("Bucket location: " + bucketLocation); }
---Waiter
WaiterResponse<HeadBucketResponse> waiterResponse =s3Waiter.waitUntilBucketExists(bucketRequestWait);
waiterResponse.matched().response().ifPresent(System.out::println);
Manage Objects
Get
Get object and metadata: GetObject (can also be used to return a range of bytes)
Get object metadata: HeadObject, GetObjectAcl, GetObjectTagging
Python Example:
def get_object(bucket, object_key):
    try:
        obj = bucket.Object(object_key)
        body = obj.get()['Body'].read()
        logger.info("Got object '%s' from bucket '%s'.", object_key, bucket.name)
    except ClientError:
        logger.exception("Couldn't get object '%s' from bucket '%s'.",
                         object_key, bucket.name)
        raise
    else:
        return body
Java Example:
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
.withCacheControl("No-cache")
.withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new GetObjectRequest(bucketName, key)
.withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
Amazon S3 Select:
Amazon S3 Select analyzes and processes data in an object in Amazon S3 buckets faster and cheaper than the GET method.
With GET, you retrieve the whole object in an Amazon S3 bucket.
With SELECT, you can retrieve a subset of data from an object by using simple SQL expressions.
Your applications do not have to use compute resources to scan and filter the data from an object.
Python Example:
import boto3
s3 = boto3.client('s3')
r = s3.select_object_content(
Bucket='jbarr-us-west-2',
Key='sample-data/airportCodes.csv',
ExpressionType='SQL',
Expression="select * from s3object s where s.\"Country (Name)\" like '%United States%'",
InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}},
OutputSerialization = {'CSV': {}},
)
for event in r['Payload']:
if 'Records' in event:
records = event['Records']['Payload'].decode('utf-8')
print(records)
elif 'Stats' in event:
statsDetails = event['Stats']['Details']
print("Stats details bytesScanned: ")
Generate Presigned URL
Python URL:
import boto3
url = boto3.client('s3').generate_presigned_url(
ClientMethod='get_object',
Params={'Bucket': 'notes-bucket', 'Key': 'OBJECT_KEY'},
ExpiresIn=3600)
Java URL:
GeneratePresignedUrlRequest generatePresignedUrlRequest =
new GeneratePresignedUrlRequest("notes-bucket", objectKey)
.withMethod(HttpMethod.GET)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
Bulk Operations
High-level commands automatically handle multipart uploads and cleanup of incomplete uploads.
Examples:
aws s3 cp ./aFile.txt s3://notes-bucket/docs/
aws s3 sync s3://notes-bucket s3://other-bucket --exclude "*another/*"
Iterate through the Bucket
Example Java:
List<Bucket> buckets = s3.listBuckets();
System.out.println("Your {S3} buckets are:");
for (Bucket b : buckets) {
System.out.println("* " + b.getName());}
Example Python:
# Retrieve the list of existing buckets
response = s3.list_buckets()
# Output the bucket names
print('Existing buckets:')
for bucket in response['Buckets']:
print(f' {bucket["Name"]}')
Paging
Some AWS operations return results that are paginated, meaning that the results are listed one page at a time.
Using a continuation token, you can process the next set of paginated results.
For example, listing the contents of your Amazon S3 bucket returns paginated results, up to 1000 objects at a time.
In the response, pay attention to this value:
IsTruncated – A value of true indicates that the response is incomplete, and it was truncated to the number of objects specified by MaxKeys.
ContinuationToken – If sent with the request, it is included in the response.
NextContinuationToken – Sent when IsTruncated is true, which means the bucket contains more keys that can be listed. In the next request, use the value of NextContinuationToken as the continuation-token to continue listing objects from Amazon S3.
Example Java:
ListObjectsV2Request listReq = ListObjectsV2Request.builder()
.bucket(bucketName).maxKeys(1).build();
ListObjectsV2Iterable listRes = s3.listObjectsV2Paginator(listReq);
// Process response pages
listRes.stream()
.flatMap(r -> r.contents().stream())
.forEach(content -> System.out.println(" Key: " + content.key() ));
Example Python:
paginator = client.get_paginator('list_objects')
page_iterator = paginator.paginate(Bucket='notes-bucket',
PaginationConfig={'MaxItems': 10})
for page in page_iterator:
print(page['Contents'])
Static Website
You can configure an S3 bucket as a static website. A static website contains static resources such as HTML or images, but it does not contain server-side processing or scripting.
Command: aws s3 website s3://notes-bucket/ --index-document index.html --error-document error.html
Interfaces
Object persistence interface
The AWS SDK for .NET provides an object persistence model that you can use to map your client-side classes to Amazon DynamoDB tables.
Each object instance then maps to an item in the corresponding tables.
To save your client-side objects to the tables, the object persistence model provides the DynamoDBContext class, an entry point to DynamoDB. This class provides you with a connection to DynamoDB so that you can access tables, perform various CRUD operations, and run queries.
Document interface
Many AWS SDKs provide a document interface that you can use to perform data plane operations (create, read, update, delete) on tables and indexes. With a document interface, you do not need to specify data type descriptors.
The data types are implied by the semantics of the data itself.
These AWS SDKs also provide methods to convert JSON documents to and from native Amazon DynamoDB data types.
Low-level interface
Every language-specific AWS SDK provides a low-level interface for Amazon DynamoDB, with methods that closely resemble low-level DynamoDB API requests.
DynamoDB Client Example
Java Normal Client Builder
DynamoDbClient client = DynamoDbClient.builder()
.region(Region.US_WEST_2)
.credentialsProvider(ProfileCredentialsProvider.builder()
.profileName("myProfile")
.build())
.build();
Java Local Client Builder
DynamoDbClient client = DynamoDbClient.builder()
.endpointOverride(URI.create("http://localhost:8000"))
// The region is meaningless for local DynamoDb but required for client builder validation
.region(Region.US_EAST_1)
.credentialsProvider(StaticCredentialsProvider.create(
AwsBasicCredentials.create("dummy-key", "dummy-secret")))
.build();
Java Default Client
DynamoDbClient client = DynamoDbClient.create();
Python Client
import boto3
# Get the service client.
dynamodb = boto3.client('dynamodb')
# Create the DynamoDB table.
table = dynamodb.create_table(
TableName='Notes',
KeySchema=[…],
AttributeDefinitions=[…],
ProvisionedThroughput={…}
)
DynamoDB Create Table
Java
//Set Key attributes and values to be used in request
List<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(KeySchemaElement.builder().attributeName("UserId").keyType(KeyType.HASH).build());
keySchema.add(KeySchemaElement.builder().attributeName("NoteId").keyType(KeyType.RANGE).build());
//Set throughput on table to be used in request
ProvisionedThroughput provisionedThroughput = ProvisionedThroughput.builder()
.writeCapacityUnits(5L)
.readCapacityUnits(5L)
.build();
//Build CreateTable request
CreateTableRequest request = CreateTableRequest.builder()
.attributeDefinitions(attributeDefinitions)
.keySchema(keySchema)
.provisionedThroughput(provisionedThroughput)
.tableName("Notes")
.build();
//Create table using request object
CreateTableResponse response = ddb.createTable(request);
Command
>> aws dynamodb create-table --cli-input-json file://notestable.json --region us-west-2
JSON
{
"AttributeDefinitions": [
{"AttributeName": "UserId",
"AttributeType": "S"},
{"AttributeName": "NoteId",
"AttributeType": "N"},
{"AttributeName": "Is_Incomplete",
"AttributeType": "S"}],
"TableName": "Notes",
"KeySchema": [
{"AttributeName": "UserId",
"KeyType": "HASH"},
{"AttributeName": "NoteId",
"KeyType": "RANGE"}],
"ProvisionedThroughput": {
"ReadCapacityUnits": 1,
"WriteCapacityUnits": 1},
"LocalSecondaryIndexes": [
{"IndexName": "Review",
"KeySchema": [
{"AttributeName": "UserId",
"KeyType": "HASH"},
{"AttributeName": "Is_Incomplete",
"KeyType": "RANGE"}],
"Projection":
{"ProjectionType": "ALL"
}}]}
Using a waiter (JAVA)
//Define a waiter on DynamoDB client
DynamoDbWaiter dbWaiter = ddb.waiter();
//Create table using request object
CreateTableResponse response = ddb.createTable(request);
// Wait until the Amazon DynamoDB table is created
WaiterResponse<DescribeTableResponse> waiterResponse =
    dbWaiter.waitUntilTableExists(DescribeTableRequest.builder().tableName("Notes").build());
waiterResponse.matched().response().ifPresent(System.out::println);
Update Table
Java
Table table = dynamoDB.getTable("Notes");
ProvisionedThroughput provisionedThroughput = new ProvisionedThroughput()
    .withReadCapacityUnits(15L)
    .withWriteCapacityUnits(15L);
table.updateTable(provisionedThroughput);
Delete Table
Java
Table table = dynamoDB.getTable("Notes");
table.delete();
List Table
Java
TableCollection<ListTablesResult> tables = dynamoDB.listTables();
Iterator<Table> iterator = tables.iterator();
while (iterator.hasNext()) {
    Table table = iterator.next();
    System.out.println(table.getTableName());
}
Create Item
Command PUT
>> aws dynamodb put-item
--table-name Notes
--item \
'{"Userid":{"S":"StudentD"}, "NoteId":{"N":"42"}, "Notes":{"S":"Test note"}}'
Command Batch Write
Batch operations read and write items in parallel to minimize response latencies.
BatchGetItem – Read up to 100 items from one or more tables up to 16 MB of data.
BatchWriteItem – Create or delete up to 25 items in one or more tables up to 16 MB of data.
Example:
aws dynamodb batch-write-item \
--request-items file://request-items.json
Read Items
Command GET: To read an item from a DynamoDB table, use the GetItem operation.
>> aws dynamodb get-item
--table-name Notes
--key '{"UserId": {"S": "StudentA"}, "NoteId": {"N": "11"}}'
{
"Item": {
"Note": {
"S": "Hello World\n"
},
"NoteId": {
"N": "11"
},
"UserId": {
"S": "StudentA"
}
}
}
Query: If you only need one item, use GetItem. For a collection of items, use Query with filter expressions or Limit.
Must include a key condition expression
=, <, >, <=, >=, AND, BETWEEN, and begins_with
The Query operation in Amazon DynamoDB finds items based on primary key values.
DynamoDB paginates the results from Query operations.
By default, DynamoDB returns a result only 1 MB in size or less.
Results are divided into pages.
Determine whether there are more results.
Check for LastEvaluatedKey element.
Absence of LastEvaluatedKey indicates that there are no additional items to retrieve.
Command:
aws dynamodb query
--table-name Notes
--key-condition-expression "UserId = :userid"
--expression-attribute-values '{":userid":{"S":"StudentA"}}'
{
"Count": 8,
"Items": [
{"UserId": {"S": "StudentA"}},
{"UserId": {"S": "StudentB"}},
{"UserId": {"S": "StudentC"}},
{"UserId": {"S": "StudentD"}},
{"UserId": {"S": "StudentE"}},
{"UserId": {"S": "StudentF"}},
{"UserId": {"S": "StudentG"}},
{"UserId": {"S": "StudentH"}}
],
"LastEvaluatedKey": {
"UserId": {"S": "StudentH"},
"NoteId": {"N": "88"}
},
"ScannedCount": 8
}
By default, a Query operation does not return any data on how much read capacity it consumes.
However, you can specify the ReturnConsumedCapacity parameter in a Query request to obtain this information. The following are the valid settings for ReturnConsumedCapacity.
NONE—no consumed capacity data is returned. (This is the default).
TOTAL—the response includes the aggregate number of read capacity units consumed.
INDEXES—the response shows the aggregate number of read capacity units consumed, together with the consumed capacity for each table and index that was accessed.
Scan:
Returns a result set
Maximum of 1 MB of data retrieved
Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the scan operation's impact by setting a smaller page size.
Filter expressions are applied after a scan finishes but before results are returned
A Scan operation in Amazon DynamoDB reads every item in a table or a secondary index.
In general, scans are less efficient than queries.
Therefore, a Scan consumes the same amount of read capacity, regardless of whether a filter expression is present.
The typical DynamoDB scan operation processes data sequentially. For large tables, this can cause the scan operation to take a long time to complete.
Amazon DynamoDB provides a parallel Scan operation that allows multiple threads or workers to scan different segments of a table simultaneously. This can improve scan performance.
Command Example:
aws dynamodb scan
--table-name Notes
--filter-expression "UserId = :userid"
--expression-attribute-values '{":userid":{"S":"StudentA"}}'
Update
Command:
UpdateItem updates only passed attributes.
>> aws dynamodb update-item
--table-name Notes
--key '{"UserId": {"S": "StudentC"}, "NoteId": {"N": "12"}}'
--update-expression "SET Favorite = :fav"
--expression-attribute-values '{":fav": {"S": "Yes"}}'
Conditional writes succeed only if item attributes meet one or more expected conditions.
>> aws dynamodb update-item
--table-name Notes
--key '{"UserId": {"S": "StudentD"}, "NoteId": {"N": "42"}}'
--update-expression "SET Notes = :newnote"
--condition-expression "Favorite <> :favorite"
--expression-attribute-values file://expression-attribute-values.json
Delete items
Command:
>> aws dynamodb delete-item
--table-name Notes
--key
'{"UserId": {"S": "StudentB"}, "NoteId": {"N": "23"}}'
Mapping a Table with a Class:
In this code example, the @DynamoDBTable annotation maps the NotesItems class to the Notes table.
Java
@DynamoDBTable(tableName="Notes")
public static class NotesItems {
//Set up Data Members that correspond to columns in the Notes table
private String UserId;
private Integer NoteId;
private String Notes;
@DynamoDBHashKey(attributeName="UserId")
public String getUserId() { return this.UserId; }
public void setUserId(String UserId) { this.UserId = UserId; }
@DynamoDBRangeKey(attributeName="NoteId")
public Integer getNoteId() { return this.NoteId; }
public void setNoteId(Integer NoteId) { this.NoteId = NoteId; }
@DynamoDBAttribute(attributeName="Notes")
public String getNotes() { return this.Notes; }
public void setNotes(String Notes) { this.Notes = Notes; }
@Override
public String toString() {
return "Notes [User=" + UserId + ", NoteId=" + NoteId + ", Notes=" + Notes + "]";
}
}
Complete Example
Domain Object
@DynamoDBTable(tableName = "CloudAirTripSectors")
public class TripSector implements Serializable {
@DynamoDBHashKey
private Long date;
@DynamoDBRangeKey
@DynamoDBIndexHashKey(globalSecondaryIndexName = ORIGIN_CITY_INDEX)
private String originCity;
public TripSector() { }
}
DynamoDB mapper
@Override
public List<TripSector> findTripsToCity(String city) {
    Map<String, AttributeValue> eav = new HashMap<>();
    eav.put(":v1", new AttributeValue().withS(city));
    DynamoDBQueryExpression<TripSector> query = new DynamoDBQueryExpression<TripSector>()
        .withIndexName(TripSector.DESTINATION_CITY_INDEX).withConsistentRead(false)
        .withKeyConditionExpression("destinationCity = :v1").withExpressionAttributeValues(eav);
return mapper.query(TripSector.class, query);
}
Lambda handler
public class FindTripsToCityHandler
extends LambdaHandlerWithXRayBase
implements RequestHandler<TripSearchRequest, LambdaResult<List<TripSector>>>
{
@Override
public LambdaResult<List<TripSector>> handleRequest(TripSearchRequest input, Context context) {
LambdaResult<List<TripSector>> result = new LambdaResult<List<TripSector>>();
LambdaLogger logger = context.getLogger();
logger.log("Starting " + this.getClass().getName() + " Lambda with city " + input.getCity() + "\n");
try {
List<TripSector> listTripSector = DynamoDBTripDao.instance().findTripsToCity(input.getCity());
result.setData(listTripSector);
logger.log("Found " + listTripSector.size() + " total trips.\n");
for(TripSector item : listTripSector){
System.out.println(item.toString());
}
result.setSucceeded(true);
return result;
} catch (Exception e) {
result.setSucceeded(false);
result.setErrorCode(1);
result.setMessage(e.getMessage());
e.printStackTrace();
}
return result;
}
}
DynamoDB Accelerator (DAX)
Supported read operations
GetItem, BatchGetItem, Query, Scan
Supported write operations
BatchWriteItem, UpdateItem, DeleteItem, PutItem
DAX maintains an item cache to store the results from GetItem and BatchGetItem operations.
For strongly consistent read requests from an application, a DAX cluster passes all requests through to DynamoDB and does not cache the results.
Sharding write-heavy partition keys
One way to better distribute writes across a partition key space in Amazon DynamoDB is to expand the space.
You can do this in several different ways.
You can add a random number to the partition key values to distribute the items among partitions.
Example
For a partition key that represents today's date, you might choose a random number between 1 and 200 and concatenate it as a suffix to the date.
This yields partition key values like 2014-07-09.1, 2014-07-09.2, and so on, through 2014-07-09.200.
However, to read all the items for a given day, you would have to query the items for all the suffixes and then merge the results.
For example, you would first issue a Query request for the partition key value 2014-07-09.1. Then issue another Query for 2014-07-09.2, and so on, through 2014-07-09.200. Finally, your application would have to merge the results from all those Query requests.
Or you can use a number that is calculated based on something that you're querying on.
Now suppose that each item has an accessible OrderId attribute, and that you most often need to find items by order ID in addition to date.
Before your application writes the item to the table, it could calculate a hash suffix based on the order ID and append it to the partition key date. The calculation might generate a number between 1 and 200 that is fairly evenly distributed, similar to what the random strategy produces.
You can easily perform a GetItem operation for a particular item and date because you can calculate the partition key value for a specific OrderId value.
To read all the items for a given day, you still must Query each of the 2014-07-09.N keys (where N is 1–200), and your application then has to merge all the results. The benefit is that you avoid having a single "hot" partition key value taking all of the workload.
Scaling read-heavy partition keys
Provisioned I/O capacity for a table is divided evenly among its physical partitions.
Therefore, a partition key design that doesn't distribute I/O requests evenly can create "hot" partitions that result in throttling and use your provisioned I/O capacity inefficiently.
Example:
Some products might be popular among customers, so those items would be consistently accessed more frequently than the others.
As a result, the distribution of read activity on ProductCatalog would be highly skewed toward those popular items.
Best Practice
Each Query or Scan request with a smaller page size uses fewer read operations and creates a "pause" between each request.
For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations.
AWS Serverless Application Model (AWS SAM)
The AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications.
AWS SAM command line interface (AWS SAM CLI)
Verify that AWS SAM template files are written according to the specification.
Invoke Lambda functions locally.
Step through and debug Lambda functions.
Package and deploy serverless applications to the AWS Cloud.
A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks.
It provides shorthand syntax to express functions, APIs, databases, and event source mappings.
With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.
AWS SAM template specification.
You use this specification to define your serverless application.
It provides you with a simple and clean syntax to describe the functions, APIs, permissions, configurations, and events that make up a serverless application.
AWS SAM command line interface (AWS SAM CLI).
You use this tool to build serverless applications that are defined by AWS SAM templates.
The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step through and debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on.
Tests serverless applications locally
Generates code blueprints to bootstrap development
Provides response object and function logs
Uses open source Docker-Lambda images
Emulates timeout, memory limits, and runtimes
Install/upgrade
pip install --upgrade aws-sam-cli
Use AWS SAM with AWS Toolkits and debuggers to test and debug locally.
Consistently provision resources in multiple environments using the same AWS SAM model.
Resource types:
Lambda functions
API Gateway APIs
DynamoDB tables
Variables
Lambda environment variables
API stage variables
Open specification (Apache 2.0)
The optional Transform section specifies one or more macros that AWS CloudFormation uses to process your template.
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources: …
Sam resources mapping
Serverless API = Amazon API Gateway
Serverless Function = AWS Lambda
Serverless SimpleTable = Amazon DynamoDB
Serverless StateMachine = AWS Step Functions
Resources
AWS::Serverless::Api
Creates a collection of Amazon API Gateway resources and methods that can be invoked through HTTPS endpoints.
An AWS::Serverless::Api resource need not be explicitly added to an AWS SAM template. A resource of this type is implicitly created from the union of Api events defined on AWS::Serverless::Function resources in the template that do not refer to an AWS::Serverless::Api resource.
AWS::Serverless::Application
Embeds a serverless application from the AWS Serverless Application Repository or from an Amazon S3 bucket as a nested application.
Nested applications are deployed as nested AWS::CloudFormation::Stack resources, which can contain multiple other resources including other AWS::Serverless::Application resources.
AWS::Serverless::Function
Creates an AWS Lambda function, an (IAM) execution role, and event source mappings that trigger the function.
The AWS::Serverless::Function resource also supports the Metadata resource attribute, so you can instruct AWS SAM to build custom runtimes that your application requires.
AWS::Serverless::HttpApi
Creates an Amazon API Gateway HTTP API, which enables you to create RESTful APIs with lower latency and lower costs than REST APIs.
AWS::Serverless::LayerVersion
Creates a Lambda LayerVersion that contains library or runtime code needed by a Lambda Function.
AWS::Serverless::SimpleTable
Creates a DynamoDB table with a single-attribute primary key. It is useful when data only needs to be accessed via a primary key.
To use the more advanced functionality of DynamoDB, use an AWS::DynamoDB::Table resource instead.
AWS::Serverless::StateMachine
Creates an AWS Step Functions state machine, which you can use to orchestrate AWS Lambda functions and other AWS resources to form complex and robust workflows.
AWS SAM translates the resources in its template (YAML or JSON) into an equivalent AWS CloudFormation template.
The CloudFormation template is then applied to create and update AWS resources.
Example:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Globals: # Global variables
Api:
MethodSettings:
- LoggingLevel: INFO
Resources: # Lambda function
listFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: list-function/
Handler: app.lambda_handler
Runtime: python3.8
Policies: AmazonDynamoDBReadOnlyAccess
Events:
listNotes:
Type: Api
Properties:
Path: /notes
Method: get
Example - AWS::Serverless::Function
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
MySimpleFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
CodeUri: s3://<source-bucket>/MyCode.zip
Policies: AmazonDynamoDBFullAccess
Environment:
Variables:
TABLE_NAME: !Ref Table
Events:
MyUploadEvent:
Type: S3
Properties:
Bucket: !Ref MyBucket
Events: s3:ObjectCreated:*
MyBucket:
Type: AWS::S3::Bucket
Example 2 - AWS::Serverless::API - Implicit Expression :
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
GetFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.get
Runtime: nodejs12.x
CodeUri: s3://bucket/api_backend.zip
Policies: AmazonDynamoDBReadOnlyAccess
Environment:
Variables:
TABLE_NAME: !Ref Table
Events:
GetResource:
Type: Api
Properties:
Path: /resource/{resourceId}
Method: get
Example 3 - AWS::Serverless::API - Explicit Expression:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
Api:
Type: AWS::Serverless::Api
Properties:
StageName: prod
DefinitionUri: swagger.yml
Example 4 - AWS::Serverless::SimpleTable - Simple Table:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
MyTable:
Type: AWS::Serverless::SimpleTable
Properties:
PrimaryKey:
Name: id
Type: String
ProvisionedThroughput:
ReadCapacityUnits: 5
WriteCapacityUnits: 5
Deploying Application
ZIP the function files.
Upload it to an Amazon S3 bucket.
Add a CodeUri property specifying the location of the .zip file in the bucket for each function in app_spec.yml.
Call the AWS CloudFormation CreateChangeSet operation with app_spec.yml.
aws cloudformation package --template-file app_spec.yml --output-template-file new_app_spec.yml --s3-bucket <your-bucket-name>
Call the AWS CloudFormation ExecuteChangeSet operation with the name of the change set created in the previous step.
aws cloudformation deploy --template-file new_app_spec.yml --stack-name <your-stack-name> --capabilities CAPABILITY_IAM
Canary Deployment Example:
Resources:
MyLambdaFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
CodeUri: s3://bucket/codezip
AutoPublishAlias: Live
DeploymentPreference:
Type: Canary10Percent10Minutes
Alarms:
# A list of alarms that you want to monitor
- !Ref AliasErrormetricGreaterThanZeroAlarm
- !Ref latestVersionErrormetricGreaterThanZeroAlarm
Hooks:
# Validation Lambda functions that are run
# before & after traffic shifting
PreTraffic: !Ref PreTrafficLambdaFunction
PostTraffic: !Ref PostTrafficLambdaFunction
Lambda Deployment Types:
Canary: Traffic is shifted in two increments.
The options specify the percentage of traffic that's shifted to your updated Lambda function version in the first increment, and the interval, in minutes, before the remaining traffic is shifted in the second increment.
Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment.
All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.
Types:
Canary10Percent30Minutes
Canary10Percent5Minutes
Canary10Percent10Minutes
Canary10Percent15Minutes
Linear10PercentEvery10Minutes
Linear10PercentEvery1Minute
Linear10PercentEvery2Minutes
Linear10PercentEvery3Minutes
AllAtOnce
SAM deployment and Build process
Step 1: Prepare the SAM application
Place the function code at the root level of the working directory along with the YAML file
sam-app/
├── README.md
├── events/
│ └── event.json
├── hello_world/
│ ├── __init__.py
│ ├── app.py #Lambda handler logic.
│ └── requirements.txt #Dependencies
├── template.yaml #AWS SAM template
└── tests/
└── unit/
├── __init__.py
└── test_handler.py
Step 2: Build your application
First, change into the project directory, where the template.yaml file for the sample application is located.
The sam build command builds any dependencies that your application has, and copies your application source code to folders under .aws-sam/build
Command: sam build
Output:
.aws-sam/
└── build/
├── HelloWorldFunction/
└── template.yaml
Step 3: Deploy your application to the AWS Cloud
Command: sam deploy --guided
Step 4: (Optional) Test your application locally
Command: sam local start-api
curl http://127.0.0.1:3000/hello
Local Testing
sam local invoke
Run an AWS Lambda function locally in a Docker container.
sam local start-api
Replicate an Amazon API Gateway endpoint locally.
sam local generate-event
Generate sample payloads from different event sources.
Supported services include Amazon S3, API Gateway, and Amazon SNS
Workflow
1.Init – To initialize a new AWS SAM project, use the sam init command. You can choose a built-in application template or a custom template.
>>sam init
>>sam init --runtime python3.8 --app-template hello-world
>>sam init --location /path/to/template.zip
>>sam init --location https://example.com/path/to/template.zip
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice:_
Package (opt) - Packages an AWS SAM application. This command creates a .zip file of your code and dependencies, and uploads the file to Amazon Simple Storage Service (Amazon S3).
sam package [OPTIONS] [ARGS]...
Build – To build your AWS SAM application, run the sam build command. You can build inside a container.
>>sam build
Building codeuri: C:\my_app\hello_world runtime: python3.9 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction'] ...
Test – To run your AWS SAM application locally for testing purposes, run the sam local invoke command.
>>sam local invoke "HelloWorldFunction" -e event.json
>>sam build --use-container
Deploy – To deploy through an interactive prompt, if you are using a configuration file, run the sam deploy command.
>>sam deploy
>>sam deploy --template-file deploy.yml
>> sam build && sam package --s3-bucket <bucket_name>
Handler
When a Lambda function is invoked, the handler code runs.
The handler is a specific code method (Java, C#) or function (Node.js, Python) that you’ve created and included in your package.
You specify the handler when creating a Lambda function.
Arguments
Event object
Passed to your code when your Lambda function is invoked.
The event differs in structure and contents, depending on which event source created it.
For example:
An event created by API Gateway contains details about the HTTPS request made by the API client, such as the path, query string, and request body.
An event created by Amazon S3 when a new object is created includes details about the bucket and the new object.
Example:
{ "Records": [ { "eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "us-east-2",
"eventTime": "2019-09-03T19:37:27.192Z",
"eventName": "ObjectCreated:Put",
"userIdentity": { "principalId": "AWS:AIDAINPONIXQXHT3IKHL2" },
"requestParameters": {"sourceIPAddress": "205.255.255.255" },
"responseElements": {"x-amz-request-id": "D82...", "x-amz-id-2": "vl...="},
"s3": { "s3SchemaVersion": "1.0", "configurationId": "828. . .", "bucket": { "name": “notes-bucket",
"ownerIdentity": { "principalId": "A3I... }, "arn": "arn:aws:s3:::lambda-artifacts-dea..." }, "object": { "key": "b2...", "size": 1305107, "eTag": "b21…", "sequencer": "0C0F6F405D6ED209E1“ }
}}}]}
Context object (optional)
Allows your function code to interact with the Lambda runtime environment.
The contents and structure of the context object vary, based on the language runtime your Lambda function is using.
However, at a minimum, it will contain the following elements:
AWS RequestId – Used to track specific invocations of a Lambda function (important for error reporting or when contacting AWS Support).
Timeout – The amount of time, in milliseconds, that remains before your function times out. (Lambda functions can run a maximum of 900 seconds as of this publishing, but you can configure a shorter timeout.)
Logging – Each language runtime provides the ability to stream log statements to CloudWatch Logs. The context object contains information about which CloudWatch Logs stream your log statements will be sent to.
Example:
{
"callbackWaitsForEmptyEventLoop": true,
"functionVersion": "$LATEST",
"functionName": "printContext",
"memoryLimitInMB": "128",
"logGroupName": "/aws/lambda/printContext",
"logStreamName": "2019/12/02/[$LATEST]b3352ee98b4048eb8266cf5d4379693d",
"invokedFunctionArn": "arn:aws:lambda:ap-southeast-2:180814441145:function:printContext",
"awsRequestId": "501444a8-6f91-4d6d-a088-179e0df01a31"
}
Example (Methods Java)
getRemainingTimeInMillis()
getFunctionName()
getFunctionVersion()
getInvokedFunctionArn()
getMemoryLimitInMB()
getAwsRequestId()
getLogGroupName()
getLogStreamName()
getIdentity()
getClientContext()
getLogger()
Context example (Node.js):
exports.handler = async function(event, context) {
console.log('Remaining time: ', context.getRemainingTimeInMillis())
console.log('Function name: ', context.functionName)
return context.logStreamName
}
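For comparison, a minimal sketch of the same calls in the Python runtime (attribute and method names follow the Python context object):
def lambda_handler(event, context):
    print('Remaining time:', context.get_remaining_time_in_millis())
    print('Function name:', context.function_name)
    return context.log_stream_name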
Handler Switching
Our handler reference looks like filename.function-name.
For example, if we export a function named "handler" in a file named index.js, our handler name would be index.handler. This means we can ship different files within our function and export different methods.
Another useful feature is that we can dynamically tell AWS which of these filename / export name permutations to choose from, on the fly, without the need to redeploy code. The command to dynamically reconfigure a Lambda handler is:
$ aws lambda update-function-configuration \
--function-name ${FUNCTION_NAME} \
--handler ${FILE_NAME}.${METHOD_NAME}
When your application is reconfigured in this manner, AWS will begin creating new instances of your application with the appropriate environment variables set.
AWS will also ensure all the currently running requests finish.
New requests will then begin going to the new instances. The old instances will then all be destroyed once they have no more requests to handle. None of the old instances will be reused, so any state within the processes will be lost.
Function Handler Examples
Python (1)
def handler_name(event, context):
    ...
    return some_value

def my_handler(event, context):
    message = 'Hello {} {}!'.format(event['first_name'],
                                    event['last_name'])
    return {
        'message': message
    }
Python (2)
# Initializations outside of the handler make a more unit-testable function
import os
import boto3

dynamoDBResource = boto3.resource('dynamodb')
pollyClient = boto3.client('polly')
s3Client = boto3.client('s3')

def lambda_handler(event, context):
    # Extract the user parameters from the event and environment
    userId = event["userId"]
    noteId = event["noteId"]
    voiceId = event['voiceId']
    mp3Bucket = os.environ['MP3_BUCKET_NAME']
    ddbTable = os.environ['Notes_Table']
    # Get the note text from the database (helper defined elsewhere in this module)
    text = getNote(dynamoDBResource, ddbTable, userId, noteId)
    # Save an MP3 file locally with the output from Polly
    filePath = createMP3File(pollyClient, text, voiceId, noteId)
    # Host the file on S3, accessed through a presigned URL
    signedURL = hostFileOnS3(s3Client, filePath, mp3Bucket, userId, noteId)
    return signedURL
Java
In the example Java code, the first handler parameter is the input to the handler (myHandler), which can be any of the following:
Event data (published by an event source, such as Amazon S3)
Custom input you provide, such as an Integer object (as in this example) or any custom data object.
MyOutput handlerName(MyEvent event, Context context) { ... }
package example;

import com.amazonaws.services.lambda.runtime.Context;

public class Hello {
    public String myHandler(int myCount, Context context) {
        return String.valueOf(myCount);
    }
}
Environment Variables
Use environment variables to pass operational parameters to your function.
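For example, a minimal Python sketch (TABLE_NAME is a placeholder variable name set in the function configuration):
import os

def lambda_handler(event, context):
    table_name = os.environ['TABLE_NAME']  # operational parameter, not hard-coded
    ...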
Lambda API
CreateFunction
Creates a Lambda function from a deployment package
Package type can be a ZIP or a container image
UpdateFunctionCode
Update the code of the Lambda function
Only the unpublished version ($LATEST) of the code can be updated
UpdateFunctionConfiguration
Modify the version-specific settings of the Lambda function
Invoke
Invokes a Lambda function synchronously or asynchronously
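A minimal boto3 sketch of these operations; the function name, role ARN, file names, and payload below are placeholders rather than values from this course:
import json
import boto3

client = boto3.client('lambda')

# CreateFunction: create a function from a .zip deployment package
with open('dictate-function.zip', 'rb') as f:
    client.create_function(
        FunctionName='dictate-function',
        Runtime='python3.8',
        Role='arn:aws:iam::123456789012:role/lambdaPollyRole',
        Handler='app.lambda_handler',
        Code={'ZipFile': f.read()},
    )

# UpdateFunctionCode: replace the unpublished ($LATEST) code
with open('dictate-function-v2.zip', 'rb') as f:
    client.update_function_code(FunctionName='dictate-function', ZipFile=f.read())

# UpdateFunctionConfiguration: modify version-specific settings
client.update_function_configuration(
    FunctionName='dictate-function',
    Environment={'Variables': {'TABLE_NAME': 'Notes'}},
)

# Invoke: synchronous (RequestResponse) invocation
response = client.invoke(
    FunctionName='dictate-function',
    InvocationType='RequestResponse',
    Payload=json.dumps({'userId': 'newbie'}),
)
print(response['Payload'].read())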
Creating a Lambda function Command
>> aws lambda create-function --function-name dictate-function --handler app.lambda_handler --runtime python3.8 --role arn:aws:iam::563926481938:role/lambdaPollyRole --environment Variables={TABLE_NAME=$notesTable} --zip-file fileb://dictate-function.zip
{ "LastUpdateStatus": "Successful",
"FunctionName": "dictateABC",
"LastModified": "2021-08-18T20:46:28.341+0000",
"RevisionId": "a76cada1-ee56-4b0b-924e-d2c38fbdf9ab",
"MemorySize": 128,
"Environment": { "Variables": { "TABLE_NAME": "Notes" } },
"State": "Active",
"Version": "$LATEST",
"Role": "arn:aws:iam::563926481938:role/lambdaPollyRole",
"Timeout": 3,
"Handler": "app.lambda_handler",
"Runtime": "python3.8",
"TracingConfig": { "Mode": "PassThrough“ },
"CodeSha256": "Oow63xlj1nCVbbfY2rJGTjAAclBrufctZxzjfXt0MOU=",
"Description": "",
"CodeSize": 1273,
"FunctionArn": "arn:aws:lambda:us-west-2:563926481938:function:dictateABC",
"PackageType": "Zip"
}
Update Config Command
>> aws lambda update-function-configuration --function-name dictate-function
--environment Variables="{MP3_BUCKET_NAME=$apiBucket, Notes_Table=$notesTable}"
{
"FunctionName": "dictate-function",
. . .
"Environment": {
"Variables": {
"MP3_BUCKET_NAME": "labstack-ykantos-qbtxaus339lj-pollynotesapibucket",
"Notes_Table": "pollynotes_table"
}
},
. . .
}
Versions
Lambda functions are versioned.
When you publish a new version, the version number is incremented, and the earlier version is stored.
You can publish one or more versions of your Lambda function. Each version has a unique ARN.
Aliases
You can use aliases to invoke specific versions of the function.
A Lambda alias is like a pointer to a specific function version. Aliases can be updated to point to other versions.
Examples:
arn:aws:lambda:aws-region:acct-id:function:dictate-function:Prod
arn:aws:lambda:aws-region:acct-id:function:dictate-function:Test
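A hedged boto3 sketch of publishing a version and managing an alias (the function and alias names are placeholders):
import boto3

client = boto3.client('lambda')

# Publish the current $LATEST code as an immutable, numbered version
version = client.publish_version(FunctionName='dictate-function')['Version']

# Point the Prod alias at that version
client.create_alias(
    FunctionName='dictate-function',
    Name='Prod',
    FunctionVersion=version,
)

# Later, repoint the alias at a newer version without changing the ARN callers use
client.update_alias(FunctionName='dictate-function', Name='Prod', FunctionVersion='2')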
Layers
A layer is a .zip file archive that contains code that is beyond the business logic, such as libraries, custom runtimes, and other dependencies.
With layers, you can share such code across different versions of the same function or across multiple functions in a workload.
The functions continue to have the same access to the dependencies but use less storage.
Layers help keep your function deployment package small. That means that during a cold start, your deployment package can download faster.
A deployed layer is an immutable version. Each time a new layer is published, the version number increments.
When you include a layer in a function, you specify the layer version that you want to use.
You can include up to five layers per function, which count towards the standard Lambda deployment size limits.
Limits:
250 MB
5 layers
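A hedged boto3 sketch of publishing a layer version and attaching it to a function (the layer name, zip file, and function name are placeholders):
import boto3

client = boto3.client('lambda')

# Publish an immutable layer version containing shared dependencies
with open('python-dependencies.zip', 'rb') as f:
    layer = client.publish_layer_version(
        LayerName='shared-dependencies',
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.8'],
    )

# Attach that specific layer version to the function
client.update_function_configuration(
    FunctionName='dictate-function',
    Layers=[layer['LayerVersionArn']],
)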
Invoke a Lambda function
At a minimum, you need the function name, the function’s ARN, or the function’s partial ARN.
If you are invoking a specific version number, qualify the ARN with the version prefix.
Other parameters you can invoke the function with include:
invocation-type – Specifies how your function should run.
Event: Asynchronous
RequestResponse: (default invocation) Synchronous
DryRun: Do not run the function but perform some checks, such as checking if the caller has permissions and if the inputs are valid.
log-type – If the invocation type is RequestResponse, set to Tail to include the execution log in the response. This populates the x-amz-log-result header with log data from your function.
client-context – In a JSON format, pass client-specific information that you can then process through the context variable.
payload – JSON input to the function.
qualifier – Invoke a specific version of the Lambda function using the version number or alias. To invoke the most current version of the Lambda function, qualify the function's ARN with $LATEST, or keep the ARN unqualified.
outfile – File name where the output content will be saved.
Structure:
invoke
--function-name <value>
[--invocation-type <value>]
[--log-type <value>]
[--client-context <value>]
[--payload <value>]
[--qualifier <value>]
<outfile>
Example:
>> aws lambda invoke --function-name dictate-function --payload '{"UserId": "newbie","NoteId": "2","VoiceId": "Joey"}' response.txt
{"ExecutedVersion": "$LATEST", "StatusCode": 200 }
Errors
Request errors – The request is too large or otherwise invalid.
Caller errors – The caller is missing the required permissions.
Account errors – The maximum number of function instances has been reached.
Too many requests (1,000 concurrent execution limit): To avoid errors caused by the account's 1,000 concurrent execution limit and to minimize throttling, you can reserve part of that concurrency for your Lambda function.
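For example, reserving concurrency for a function is a single API call; a minimal boto3 sketch (the function name and value are placeholders):
import boto3

client = boto3.client('lambda')

# Reserve 100 of the account's concurrent executions for this function
client.put_function_concurrency(
    FunctionName='dictate-function',
    ReservedConcurrentExecutions=100,
)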
Deployment Package
Node.js and Python
To create a Lambda function, you first create a Lambda function deployment package, a .zip file consisting of your code and any dependencies.
The contents of the Zip file are available as the current working directory of the Lambda function.
Use npm/pip to install libraries
All dependencies must be at root level.
Java
Your deployment package can be a .zip file or a standalone jar; it is your choice.
Use Maven / Eclipse IDE plugins
Compiled class and resource files at root level, required jars in /lib directory
C#
A .NET Core Lambda deployment package is a zip file of your function's compiled assembly along with all its assembly dependencies.
It also contains a proj.deps.json file, which tells the .NET Core runtime about all your function's dependencies, and a proj.runtimeconfig.json file, which is used to configure the .NET Core runtime.
The .NET CLI's publish command can create a folder with all these files, but by default the proj.runtimeconfig.json file is not included, because a Lambda project is typically configured to be a class library.
To force the proj.runtimeconfig.json file to be written as part of the publish process, pass the command-line argument /p:GenerateRuntimeConfigurationFiles=true to the publish command.
Use NuGet / Microsoft Visual Studio plugins
All assemblies (.dll) at root level
Deploy
Using Containers
Create your project and code
Build the container using a base image from AWS (which includes the Lambda Runtime Interface Client)
Upload the container image to Amazon ECR (image size limit of 10 GB)
Create Lambda function using container image
Best Practices
Separate the Lambda handler (entry point) from your core logic
You can make a more unit-testable function.
Avoid using recursive code
Use environment variables to pass operational parameters to your function.
Separate shared dependencies and upload them as a layer. This way, you can test your function code more effectively.
Performance test your Lambda function for memory
Load test your Lambda function for timeouts
Understand Lambda limits
Control the dependencies in your function's deployment package
Minimize your deployment package size to its runtime necessities
Minimize the complexity of your dependencies
Reuse the run context.
Use environment variables.
Control the function's deployment package dependencies.
Share common dependencies with layers.
Test your Lambda function performance for memory utilization.
Delete Lambda functions that are no longer in use.
Mapping Template
It is a script expressed in Velocity Template Language (VTL) and applied to the payload using JSONPath expressions.
Use mapping templates to do the following:
Map parameters one-to-one.
Map a family of integration response status codes (matched by a regular expression) to a single response status code.
Example:
Payload:
{
  "UserId": "StudentA",
  "Notes": [
    {
      "Note": "Hello World!",
      "NoteId": "11"
    }
  ]
}
Request Model
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "NotesInputModel",
  "type": "object",
  "required": ["UserId", "Notes"],
  "properties": {
    "UserId": { "type": "string" },
    "Notes": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Note": { "type": "string" },
          "NoteId": { "type": "integer" }
        }
      }
    }
  }
}
Request Validation
{
"name" : "params-only",
"validateRequestBody" : "false",
"validateRequestParameters" : "true"
}
Mapping Template
#set($inputRoot = $input.path('$'))
{
  "Environment": "$stageVariables.environment",
  "Notes": [
#foreach($elem in $inputRoot.Notes)
    {
      "NoteId": "$elem.NoteId",
      "Note": "$elem.Note"
    }#if($foreach.hasNext),#end
#end
  ]
}
Output
{
  "Environment": "prod",
  "Notes": [
    {
      "NoteId": "11",
      "Note": "Hello World!"
    }
  ]
}
OR
{
  "Environment": "dev",
  "Notes": [
    {
      "NoteId": "11",
      "Note": "Hello World!"
    }
  ]
}
REST APIs invoke URL
You can test a REST API in various ways.
You can use the REST API’s invoke URL, the API Gateway console, or third-party tools such as Postman.
https://{restapi_id}.execute-api.{region}.amazonaws.com/{stage_name}/
where:
{restapi_id} is the API identifier
{region} is the Region
{stage_name} is the stage name of the API deployment
Example Using command:
>> aws apigateway test-invoke-method --rest-api-id 81jpgj2f0j --resource-id Prq5yc5aq6 --http-method GET --path-with-query-string '/'
{
"status": 200,
"body": "",
"headers": {
"Content-Type": "application/json"
},
"multiValueHeaders": {
"Content-Type": [
"application/json"
]
},
"log": "Execution log for request c6b6f73a-1\nWed Aug 18 19:48:34 UTC 2021 : Starting execution for request: c6b6f73a-1\nWed Aug 18 19:48:34 UTC 2021 : HTTP Method: GET, Resource Path: /\nWed Aug 18 19:48:34 UTC 2021 : Method request path: {}\nWed Aug 18 19:48:34 UTC
2021 : Method request body before transformations: \nWed Aug 18 19:48:34 UTC 2021 : Method response body after transformations: \nWed Aug 18 19:48:34 UTC 2021 : Method response headers: {Content-Type=application/json}\nWed Aug 18 19:48:34 UTC 2021 : Successfully completed execution\nWed Aug 18 19:48:34 UTC 2021 : Method completed with status: 200\n",
"latency": 12
}
In-place deployment
The application on each instance in the deployment group is stopped.
The latest application revision is installed, and the new version of the application is started and validated.
Only deployments that use the EC2/On-Premises compute platform can use in-place deployments.
Blue/green deployment
The traffic is shifted from your current compute environment to a new one with your updated application revisions.
Canary
Traffic is shifted in two increments.
Linear
Traffic is shifted in equal increments.
All-at-once
All traffic is shifted from the original Lambda function to the updated Lambda function version all at once.
Configuring the application
.NET
Instrument all of your AWS SDK for .NET clients by calling RegisterXRayForAllServices before you create them.
AWSSDKHandler.RegisterXRayForAllServices();
To specify a specific service, use RegisterXRay
AWSSDKHandler.RegisterXRay<IAmazonDynamoDB>()
Java
Instrument an Amazon DynamoDB client, and pass a tracing handler to AmazonDynamoDBClientBuilder
Python
The X-Ray SDK for Python has a class named xray_recorder that provides the global recorder.
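A minimal Python sketch (assumes the aws_xray_sdk package is installed; the subsegment name is a placeholder):
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (for example, boto3) so downstream AWS calls are traced
patch_all()

@xray_recorder.capture('process_note')  # records a subsegment for this function
def process_note(note_id):
    ...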
Example tracing code with .NET (C#)
using Amazon.XRay.Recorder.Handlers.AwsSdk;
using Amazon.XRay.Recorder.Handlers.SqlServer;
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
AWSSDKHandler.RegisterXRayForAllServices();
app.UseXRay("SampleApp"); // name of the app
// rest of app configuration
}
public async Task RunSqlQuery(string connectionString)
{
using (var connection = new SqlConnection(connectionString))
{
var query = "SELECT * FROM Products FOR XML AUTO, ELEMENTS";
var command = new TraceableSqlCommand(query, connection);
command.Connection.Open();
await command.ExecuteXmlReaderAsync();
}
}
Content
Update your App
Update your code.
Push the new code.
$ git add .
$ git commit -m "v2.0"
$ eb deploy
Monitor the deployment progress.
$ eb status
Create an Application Source Bundle
When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle.
Your source bundle must meet the following requirements:
Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
Not exceed 512 MB
Not include a parent folder or top-level directory (subdirectories are fine)
If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file.
This code snippet demonstrates how to retrieve a secret from AWS Secrets Manager (Python).
import boto3
from botocore.exceptions import ClientError

def get_secret():
    secret_name = "<<{{MySecretName}}>>"
    region_name = "<<{{MyRegionName}}>>"

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        # Handle or re-raise errors such as ResourceNotFoundException or AccessDeniedException
        raise e

    # The secret value is returned in SecretString (SecretBinary for binary secrets)
    return get_secret_value_response['SecretString']
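A hypothetical usage sketch, assuming the secret was stored as a JSON string (the key name below is only for illustration):
import json

db_credentials = json.loads(get_secret())
password = db_credentials["password"]  # key name is an assumption, not from this course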
Access secrets in the Parameter Store from AWS CloudFormation.
Create parameters in your CFN/SAM template.
Set the Type to:
AWS::SSM::Parameter::Value<String>
Use the Default property to define the path
in Parameter Store.
Use references ( Ref: ) in your template to make use of the values.
AWS Cloud Development Kit (AWS CDK)
It is an open-source software development framework to define your cloud application resources using familiar programming languages.
AWS CDK uses the familiarity and expressive power of programming languages for modeling your applications.
It provides high-level components called constructs that preconfigure cloud resources with proven defaults, so you can build cloud applications with ease.
AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation.
Example (TypeScript) - Creating a Fargate Service
export class MyEcsConstructStack extends cdk.Stack {
constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const vpc = new ec2.VpcNetwork(this, 'MyVpc', {
maxAZs: 3 // Default is all AZs in region
});
const cluster = new ecs.Cluster(this, 'MyCluster', {
vpc: vpc
});
// Create a load-balanced Fargate service and make it public
new ecs.LoadBalancedFargateService(this, 'MyFargateService', {
cluster: cluster, // Required
cpu: '512', // Default is 256
desiredCount: 6, // Default is 1
image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"), // Required
memoryMiB: '2048', // Default is 512
publicLoadBalancer: true // Default is false
    });
  }
}
Amazon Kinesis Client Library (KCL)
The Amazon Kinesis Client Library (KCL) is compiled into your application to enable fault-tolerant consumption of data from the stream.
The KCL ensures that for every shard, there is a record processor running and processing that shard.
The library also simplifies reading data from the stream. The KCL uses an Amazon DynamoDB table to store control data.
It creates one table per application that is processing data.
The KCL is a Java library; however, support for other languages, such as Python, is available through a multi-language interface called the MultiLangDaemon.
This daemon is Java-based and runs in the background when you are using a KCL language other than Java.
Therefore, if you install the KCL for Python and write your consumer application entirely in Python, you still need Java installed on your system because of the MultiLangDaemon.
The KCL performs the complex tasks associated with distributed computing, like load-balancing, responding to failures, checkpointing, and resharding.
Kinesis Producer Library (KPL)
The KPL is an easy-to-use, highly configurable library that helps you write to a Kinesis data stream.
Kinesis Connector Library
The Amazon Kinesis Connector Library helps Java developers integrate Amazon Kinesis with other AWS and non-AWS services.
The current version of the library provides connectors for Amazon DynamoDB, Amazon Redshift, Amazon S3, and Amazon Elasticsearch.
Content
Using Cognito with Lambda
public class HelloWorld implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context)
    {
        String callerIdentityId = context.getIdentity().getIdentityId();
        String callerIdentityPoolId = context.getIdentity().getIdentityPoolId();
        //
        // TODO: perform checks against the PoolId and Identity Id
        //
        return "Ok";
    }
}
IAM Policy Using Cognito
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:Query",
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:12345678:table/PowerWeek2019App4_Assets"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
        }
      }
    }
  ]
}
When you initialize a new service client without supplying any arguments, the AWS SDK for Java attempts to find AWS credentials using the default credential provider chain implemented by the DefaultAWSCredentialsProviderChain class.
The default credential provider chain looks for credentials in this order:
Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The AWS SDK for Java uses the EnvironmentVariableCredentialsProvider class to load these credentials.
Java system properties - aws.accessKeyId and aws.secretKey. The AWS SDK for Java uses the SystemPropertiesCredentialsProvider to load these credentials.
Web Identity Token credentials from the environment or container.
The default credential profiles file - typically located at ~/.aws/credentials (location can vary per platform), and shared by many of the AWS SDKs and by the AWS CLI. The AWS SDK for Java uses the ProfileCredentialsProvider to load these credentials.
Amazon ECS container credentials - loaded from Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. The AWS SDK for Java uses the ContainerCredentialsProvider to load these credentials. You can specify the IP address for this value.
Instance profile credentials - used on EC2 instances, and delivered through the Amazon EC2 metadata service. The AWS SDK for Java uses the InstanceProfileCredentialsProvider to load these credentials. You can specify the IP address for this value.
Content