Clone the following git repository: https://github.com/owolabioromidayo/hs2
Navigate to the lambda folder.
Rename .app.py.example to app.py and fill in your values for DB_NAME, CONNECTION_STRING, and TRAINING_PASSWORD.
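For reference, the edited app.py might look something like this. This is only a sketch; the placeholder values are illustrative, not real credentials:

```python
# app.py -- sketch only; substitute your own values
DB_NAME = "ml"  # name of the MongoDB database the trainer writes to
CONNECTION_STRING = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net"  # MongoDB connection URI
TRAINING_PASSWORD = "choose-a-strong-secret"  # shared secret that guards the training endpoint
```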
Open this folder in your terminal and run the following command:
docker build -t model-trainer .
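The repository already ships the Dockerfile this build uses. For orientation, a Lambda container-image Dockerfile typically follows the shape below; the base-image tag and handler name here are assumptions, not necessarily the repo's actual values:

```dockerfile
# Sketch of a typical AWS Lambda Python container image -- not the repo's exact file
FROM public.ecr.aws/lambda/python:3.9

# Install dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the function code and point Lambda at its handler
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
```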
Install the AWS CLI using the following commands:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Now we authenticate the AWS CLI.
Log in to the AWS Website.
Click the drop-down on the top right (your name) and select Security Credentials.
Select Access keys and create a new access key. Download and save the .csv file, as the secret key is shown only once.
With this new information in hand, run aws configure and provide the ID and Key.
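The configure step is interactive. It prompts for four values; the session below is illustrative (the key shown is a fake placeholder, and region/output are just common choices):

```shell
$ aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json
```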
Push your Docker image to AWS ECR so we can attach it to our AWS Lambda function later. In the following commands, replace 123456789012 with your AWS account ID, and change us-east-1 to the region where you want to create the Amazon ECR repository.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
aws ecr create-repository --repository-name model-trainer --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE
docker tag model-trainer:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/model-trainer:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/model-trainer:latest
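To avoid editing the account ID and region in four places, you can derive the image URI once and reuse it in the login, tag, and push commands above. A small sketch, assuming the same placeholder account ID and region:

```shell
# Derive the ECR registry and image URIs once (placeholders: substitute your own)
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
IMAGE_URI="${ECR_REGISTRY}/model-trainer:latest"
echo "$IMAGE_URI"
```

You can then write, for example, `docker push "$IMAGE_URI"` instead of repeating the full address.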
For a better understanding of the above commands, visit https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
Now we are going to create our Lambda function and link it to the image.
Open the AWS Lambda console while signed in to your AWS account.
Create a new Lambda function using the Container Image option.
Name the function and select the model image you have uploaded.
Click Create function.
Go to Configuration -> General configuration -> Edit and increase Memory to 3GB so the function does not run out of memory as the dataset grows.
Rename your .env.example file to .env and fill in your configuration. Then run lambda/test_lambda.py and check the "ml" collection in your MongoDB database to confirm the service works.
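A sketch of what the filled-in .env might contain. The key names below are assumptions based on the variables this guide mentions; copy the exact names from your own .env.example:

```shell
# .env -- illustrative values only
ML_ENDPOINT="https://<your-lambda-url>.lambda-url.us-east-1.on.aws/"
CONNECTION_STRING="mongodb+srv://<user>:<password>@<cluster>.mongodb.net"
DB_NAME="ml"
TRAINING_PASSWORD="the-password-you-set-in-app.py"
```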
Then go back to the folder for the main service, update your ML_ENDPOINT environment variable, and run the tests/test_ml_endpoints.py file. All responses should return 200 if everything is set up correctly. Ensure your SERVER_ENDPOINT points to the URL of the deployed Heroku server.
Now you are done deploying both services on the backend. Head over to the Frontend Service page to continue.