Intelehealth Cloud server setup on AWS

Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow. Explore how millions of customers are currently leveraging AWS cloud products and solutions to build sophisticated applications with increased flexibility, scalability and reliability.

Create an AWS account: - First, you have to create an AWS account. There are two ways to create one:

      • Create your own account yourself.
      • Your company has a parent account, whose administrator creates an IAM user for you and grants the required permissions.


Here is a list of the important AWS features to work with, all of which are relevant to the Intelehealth requirements:

  • IAM (Identity and Access Management)
  • EC2 (Elastic Compute Cloud)
    1. Instances
    2. AMI
    3. EBS (Elastic Block Storage)
    4. Elastic IP
    5. Load Balancer
  • S3 (Simple Storage Service)
  • RDS (Relational Database Service)
  • CloudWatch
  • VPC (Subnet, Route 53, Security Group, Key Pair)

IAM (Identity & Access Management)

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources. Different people in your group may need varied access and authorization permissions. With IAM users, it is easier to assign policies to specific users that access specific services and associated resources.

  • Users
  • Groups
  • Roles
  • Policies

Users: - You can create an IAM user from the IAM console.

  • Using the root account for day-to-day work is not secure.
  • Create IAM users to work on separate functions (EC2, S3, ....).
  • You can grant specific permissions to an IAM user through policies.
  • Policies contain all the required permissions granted by the root account.

Roles: - You can create roles in IAM and manage permissions to control which operations can be performed by the entity, or AWS service, that assumes the role. You can also define which entity is allowed to assume the role. In addition, you can use service-linked roles to delegate permissions to AWS services that create and manage AWS resources on your behalf. IAM roles allow you to delegate access to users or services that normally don't have access to your organization's AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don't have to share long-term credentials or define permissions for each entity that requires access to a resource. Create a role that contains the policy details.

Groups: - A group is a collection of IAM users. Groups let you assign permissions to a collection of users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and should have administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old group and add him or her to the new group.

Policies: - You manage access in AWS by creating policies and attaching them to IAM identities or AWS resources. A policy is an object in AWS that, when associated with an entity or resource, defines their permissions. AWS evaluates these policies when a principal, such as a user, makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents.

Policy Type: -

        • Identity-based policies
        • Resource-based policies
        • Service control policies (SCP)
        • Access control lists (ACLs)

The following identity-based policy allows the implied principal to list a single Amazon S3 bucket named example_bucket:

{ "Version": "2012-10-17",

"Statement": {

"Effect": "Allow",

"Action": "s3:ListBucket",

"Resource": "arn:aws:s3:::example_bucket"

}

} click_here & click_here
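
The same identity-based policy can also be created and attached programmatically. The following is a minimal sketch using the AWS SDK for Python (boto3); the policy name and user name are placeholders, not part of the Intelehealth setup.

    import json
    import boto3

    iam = boto3.client("iam")

    # The same policy document shown above.
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example_bucket",
        },
    }

    # Create a managed policy, then an IAM user, and attach the policy to the user.
    policy = iam.create_policy(PolicyName="ListExampleBucket",
                               PolicyDocument=json.dumps(policy_doc))
    iam.create_user(UserName="intelehealth-dev")
    iam.attach_user_policy(UserName="intelehealth-dev",
                           PolicyArn=policy["Policy"]["Arn"])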

EC2 (Elastic Compute Cloud)

Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.

INSTANCE: -

An EC2 instance is a virtual server in Amazon’s Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure. AWS is a comprehensive, evolving cloud computing platform; EC2 is a service that allows business subscribers to run application programs in the computing environment. EC2 can serve as a practically unlimited set of virtual machines. Amazon provides a variety of instance types with different configurations of CPU, memory, storage, and networking resources to suit user needs. Each type is also available in multiple sizes to address workload requirements.

    • Instance type: -

Each virtual machine, called an "instance", functions as a virtual private server. Amazon sizes instances based on "Elastic Compute Units". The performance of otherwise identical virtual machines may vary. On November 28, 2017, AWS announced a bare-metal instance type offering, marking a departure from exclusively offering virtualized instance types.

      1. General Purpose: M5, M4, T2
      2. Compute Optimized: C5, C4
      3. Memory Optimized: X1e, X1, R4
      4. Accelerated Computing: P3, P2, G3, F1
      5. Storage Optimized: H1, I3, D2
    • Cost

Amazon charged about $0.0058/hour ($4.176/month) for the smallest "Nano Instance" (t2.nano) virtual machine running Linux or Windows. Storage-optimized instances cost as much as $4.992/hour (i3.16xlarge). "Reserved" instances can go as low as $2.50/month for a three-year prepaid plan. The data transfer charge ranges from free to $0.12 per gigabyte, depending on the direction and monthly volume

    • Free Tier

Amazon offered a bundle of free resource credits to new account holders. The credits are designed to run a "micro" sized server, storage (EBS), and bandwidth for one year. Unused credits cannot be carried over from one month to the next.

    • Reserved Instances

Reserved instances enable EC2 or RDS service users to reserve an instance for one or three years. The corresponding hourly rate charged by Amazon to operate the instance is 35-75% lower than the rate charged for on-demand instances. Reserved Instances can be purchased in three different ways: All Upfront, Partial Upfront and No Upfront. The different purchase options allow for different structuring of payment models.

    • Spot Instance

Cloud providers maintain large amounts of excess capacity they have to sell or risk incurring losses. Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available at up to 90% discount compared to On-Demand prices. As a trade-off, AWS offers no SLA on these instances and customers take the risk that it can be interrupted with only two minutes of notification when Amazon needs the capacity back. Researchers from the Israel Institute of Technology found that "they (Spot instances) are typically generated at random from within a tight price interval via a dynamic hidden reserve price".

    • Data Transfer charge

Data transfer costs are fees for moving data across AWS services to and from EC2. There’s a fee for transferring data into EC2 from another AWS service, and a fee for transferring data out of EC2 to another AWS service; these fees differ for each AWS service. In N. Virginia, the approximate cost is $0.002/GB/month.

    • Operating System

The EC2 service offered Linux and later Sun Microsystems' OpenSolaris and Solaris Express Community Edition. In October 2008, EC2 added the Windows Server 2003 and Windows Server 2008 operating systems to the list of available operating systems. In March 2011, NetBSD AMIs became available. In November 2012, Windows Server 2012 support was added.

    • Reliability

To make EC2 more fault tolerant, Amazon engineered Availability Zones that are designed to be insulated from failures in other Availability Zones. Availability Zones do not share the same infrastructure. Applications running in more than one Availability Zone can achieve higher availability. EC2 provides users with control over the geographical location of instances, which allows for latency optimization and high levels of redundancy. For example, to minimize downtime, a user can set up server instances in multiple zones that are insulated from each other for most causes of failure, such that one backs up the other.

AMI: -

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch multiple instances of an AMI, as shown in the following figure. Your instances keep running until you stop or terminate them, or until they fail. If an instance fails, you can launch a new one from the AMI.

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations. An AMI includes the following:

      • A template for the root volume for the instance (for example, an operating system, an application server, and applications)
      • Launch permissions that control which AWS accounts can use the AMI to launch instances
      • A block device mapping that specifies the volumes to attach to the instance when it's launched

AMI TYPES: -

              • Region
              • Operating system
              • Architecture (32-bit or 64-bit)
              • Launch Permissions
              • Storage for the Root Device

Launch Permissions : -

              • Public: an AMI that can be used by anyone.
              • Paid: a for-pay AMI that is registered with Amazon DevPay and can be used by anyone who subscribes for it. DevPay allows developers to mark-up Amazon's usage fees and optionally add monthly subscription fees.
              • Shared: a private AMI that can only be used by Amazon EC2 users who are allowed access to it by the developer.
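
For illustration, you can also create your own private AMI from an existing instance and control who may launch it. The sketch below uses boto3; the instance ID and image name are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a private AMI from a running instance (placeholder instance ID).
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="intelehealth-base",
        Description="Base image for Intelehealth servers",
    )
    print(image["ImageId"])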

EBS (Elastic Block Storage): -

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.

Amazon EBS is designed for application workloads that benefit from fine tuning for performance, cost and capacity. Typical use cases include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata). EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. With Amazon EBS, you pay only for what you use.

EBS Volume Type: -

              • General Purpose SSD (gp2)
              • Provisioned IOPS SSD (io1)
              • Throughput Optimized HDD (st1)
              • Cold HDD (sc1)
              • Magnetic (standard, previous generation)
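
As an illustration, the sketch below creates and attaches a gp2 volume with boto3; the Availability Zone, size, instance ID, and device name are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a 20 GiB General Purpose SSD (gp2) volume, wait until it is
    # available, then attach it to an instance in the same Availability Zone.
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0",
                      Device="/dev/sdf")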

Elastic IP: -

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. An Elastic IP address is a public IPv4 address, which is reachable from the internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet; for example, to connect to your instance from your local computer.

You can associate an Elastic IP address with any instance or network interface for any VPC in your account. With an Elastic IP address, you can mask the failure of an instance by rapidly remapping the address to another instance in your VPC. Note that the advantage of associating the Elastic IP address with the network interface instead of directly with the instance is that you can move all the attributes of the network interface from one instance to another in a single step.

      • It's associated with your AWS account.
      • The elastic IP address is also accessible over the internet.
      • The elastic IP address is persistent. It's associated with your account until you choose to release it.
      • You can redirect traffic from one instance to another by detaching the Elastic IP address from the old instance and attaching it to the new one.
      • AWS does not support Elastic IP addresses for IPv6.
      • You can detach an Elastic IP from one instance and attach that same IP to a different instance.
      • If you stop and restart the instance, the Elastic IP address remains associated with the instance.

The problem with the public IP address of an instance is that it changes every time you stop and restart the instance. If you use an Elastic IP address instead of the public IP, it does not change when you stop and restart the instance. The public IP address behaves as follows:

      1. It's not associated with your AWS account.
      2. When you stop and restart the instance, AWS allocates a new public IP address to that instance each time.
      3. You can't manually attach or detach the public IP from your instance.
      4. The public IP is not persistent. The public IP address of the instance is released when you:
                    • Stop and Start the instance
                    • Terminate the instance
                    • Associate an Elastic IP address with the instance.

So, if you want to attach a persistent and re-attachable IP address to your instance, then you can use the Elastic IP address.
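
The allocate/associate workflow described above can also be scripted. A minimal boto3 sketch, with a placeholder instance ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP for use in a VPC and associate it with an instance.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId="i-0123456789abcdef0",
                          AllocationId=eip["AllocationId"])

    # To stop paying for an unused address, release it when it is no longer needed:
    # ec2.release_address(AllocationId=eip["AllocationId"])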

Load Balancer: -

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant. It monitors the health of registered targets and routes traffic only to the healthy targets.

Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers. Amazon ECS services can use any of these load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or Layer 7) traffic. Network Load Balancers and Classic Load Balancers are used to route TCP (or Layer 4) traffic.

      • Application Load Balancer

It makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each container instance in your cluster. Application Load Balancers support dynamic host port mapping.

      • Network Load Balancer

It makes routing decisions at the transport layer (TCP/SSL). It can handle millions of requests per second. After the load balancer receives a connection, it selects a target from the target group for the default rule using a flow hash routing algorithm. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

      • Classic Load Balancer

It makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed relationship between the load balancer port and the container instance port.
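
For reference, an Application Load Balancer with one target group and an HTTP listener can be created with boto3 as sketched below; all IDs and names are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Application Load Balancer spanning two subnets.
    lb = elbv2.create_load_balancer(
        Name="intelehealth-alb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-0123456789abcdef0"],
        Type="application",
    )["LoadBalancers"][0]

    # Target group for the backend instances, plus an HTTP listener that forwards to it.
    tg = elbv2.create_target_group(
        Name="intelehealth-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
    )["TargetGroups"][0]

    elbv2.register_targets(TargetGroupArn=tg["TargetGroupArn"],
                           Targets=[{"Id": "i-0123456789abcdef0"}])

    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )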

S3 (Simple Storage Service)

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported cloud storage service available, with integration from the largest community of third-party solutions, systems integrator partners, and other AWS services.

Benefits: -

        • UNMATCHED DURABILITY, AVAILABILITY, & SCALABILITY
        • MOST COMPREHENSIVE SECURITY & COMPLIANCE CAPABILITIES
        • QUERY IN PLACE
        • FLEXIBLE MANAGEMENT
        • MOST SUPPORTED BY PARTNERS, VENDORS, & AWS SERVICES
        • EASY, FLEXIBLE DATA TRANSFER

Use Cases: -

        • BACKUP & RECOVERY
        • DATA ARCHIVING
        • DATA LAKES & BIG DATA ANALYTICS
        • HYBRID CLOUD STORAGE
        • CLOUD-NATIVE APPLICATION DATA
        • DISASTER RECOVERY

The basic storage units of Amazon S3 are objects, which are organized into buckets and identified within each bucket by a unique, user-assigned key. Buckets and objects can be created, listed, and retrieved using either a REST-style HTTP interface or a SOAP interface; however, new Amazon S3 features are not supported for SOAP, and Amazon recommends using either the REST API or the AWS SDKs.

Bucket names and keys are chosen so that objects are addressable using HTTP URLs:

        • http://s3.amazonaws.com/bucket/key
        • http://bucket.s3.amazonaws.com/key
        • http://bucket/key
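
The bucket/object model above maps directly onto the API. A minimal boto3 sketch (the bucket name and key are placeholders; bucket names are globally unique, and the default region us-east-1 is assumed):

    import boto3

    s3 = boto3.client("s3")

    # Create a bucket, upload an object, then list and download it.
    s3.create_bucket(Bucket="example-intelehealth-bucket")
    s3.put_object(Bucket="example-intelehealth-bucket",
                  Key="backups/hello.txt",
                  Body=b"hello from Intelehealth")

    for obj in s3.list_objects_v2(Bucket="example-intelehealth-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])

    s3.download_file("example-intelehealth-bucket", "backups/hello.txt", "hello.txt")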

S3 Storage Classes

        1. S3 Standard is the default class.
        2. S3 Standard Infrequent Access (IA) is used for less frequently accessed data.
        3. S3 Reduced Redundancy Storage (RRS) is designed for noncritical, reproducible data at lower levels of redundancy.
        4. Amazon Glacier is designed for long-term storage of data that is infrequently accessed and for which a retrieval latency of minutes or hours is acceptable.

Pricing

Amazon S3 pricing varies depending on the different S3 storage classes. Prices vary from storage usage, number of requests, and data transfers. At its inception, Amazon charged end users $0.15 per gigabyte-month, with additional charges for bandwidth used in sending and receiving data, and a per-request (get or put) charge.

S3 logs

Amazon S3 allows users to enable or disable logging. If enabled, the logs are stored on Amazon S3 buckets which can then be analyzed. These logs contain useful information such as:

        • Date / time of access to requested content
        • Protocol used (HTTP, FTP, etc.)
        • HTTP Status Code
        • Turnaround time
        • HTTP Request

S3 API

The S3 API allows operations on the different components of the Amazon S3 solution, such as buckets, objects, and the service itself.

Access Permissions

By default, all Amazon S3 resources—buckets, objects, and related sub resources (for example, lifecycle configuration and website configuration)—are private: only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.

Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources. The introductory topics provide general guidelines for managing permissions.
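
A resource-based (bucket) policy can be attached in the same way as the identity-based policy shown earlier. A sketch with boto3; the bucket name and account ID are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Bucket policy granting another account read access to all objects in the bucket.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-intelehealth-bucket/*",
        }],
    }

    s3.put_bucket_policy(Bucket="example-intelehealth-bucket",
                         Policy=json.dumps(bucket_policy))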

RDS (RELATIONAL DATABASE SERVICE)

Amazon Relational Database Service (or Amazon RDS) is a distributed relational database service by Amazon Web Services (AWS). It is a web service running "in the cloud" designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases, and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed with a single API call, since AWS does not offer an SSH connection to RDS instances.

Amazon RDS supports the following database engines (with the year support was added):

        • MySQL (2009)
        • Oracle Database (2011)
        • Microsoft SQL Server (2012)
        • PostgreSQL (2013)
        • Amazon Aurora (2014)
        • MariaDB (2015)

Overview of Amazon RDS:-

      • When you buy a server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon RDS, these are split apart so that you can scale them independently. If you need more CPU, less IOPS, or more storage, you can easily allocate them.
      • Amazon RDS manages backups, software patching, automatic failure detection, and recovery.
      • To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges.
      • You can have automated backups performed when you need them, or manually create your own backup snapshot. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently.
      • You can get high availability with a primary instance and a synchronous secondary instance that you can fail over to when problems occur. You can also use MySQL, MariaDB, or PostgreSQL Read Replicas to increase read scaling.
      • You can use the database products you are already familiar with: MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server.
      • In addition to the security in your database package, you can help control who can access your RDS databases by using AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud.

DB Instance: -

The basic building block of Amazon RDS is the DB instance. A DB instance is an isolated database environment in the cloud. A DB instance can contain multiple user-created databases, and you can access it by using the same tools and applications that you use with a stand-alone database instance. You can create and modify a DB instance by using the AWS Command Line Interface, the Amazon RDS API, or the AWS Management Console.
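
Creating a DB instance through the API looks like the sketch below (boto3; the identifier, instance class, and credentials are placeholders chosen for illustration only).

    import boto3

    rds = boto3.client("rds")

    # Launch a small MySQL DB instance and wait until it is available.
    rds.create_db_instance(
        DBInstanceIdentifier="intelehealth-db",
        DBInstanceClass="db.t2.medium",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=20,
        MultiAZ=False,
    )
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="intelehealth-db")

    # Print the endpoint that applications should connect to.
    db = rds.describe_db_instances(DBInstanceIdentifier="intelehealth-db")["DBInstances"][0]
    print(db["Endpoint"]["Address"], db["Endpoint"]["Port"])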

Regions and Availability Zone: -

Amazon RDS Multi-Availability Zone (AZ) allows users to automatically provision and maintain a synchronous physical or logical "standby" replica, depending on the database engine, in a different Availability Zone (independent infrastructure in a physically separate location). A Multi-AZ database instance can be provisioned at creation time or modified to run as a Multi-AZ deployment later. Multi-AZ deployments aim to provide enhanced availability and data durability for MySQL, MariaDB, Oracle, PostgreSQL, and SQL Server instances and are targeted at production environments.

Security: -

A security group controls access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance inside a VPC, and an Amazon EC2 security group controls access to an EC2 instance and can be used with a DB instance.

Performance: -

        • RDS Metrics
        • Instance Metrics
        • Slow query log

Cost management: -

        • Instance type
        • Instance count
        • Auto-Scaling

CloudWatch

Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications and services that run on AWS, and on-premises servers. You can use CloudWatch to set high resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to optimize your applications, and ensure they are running smoothly.

Benefits: -

      • ACCESS ALL YOUR DATA FROM A SINGLE PLATFORM
      • RICHEST AND DEEPEST INSIGHTS FOR AWS RESOURCES
      • VISIBILITY ACROSS YOUR APPLICATIONS, INFRASTRUCTURE, AND SERVICES
      • REDUCE MEAN TIME TO RESOLUTION (MTTR) AND IMPROVE TOTAL COST OF OWNERSHIP (TCO)
      • DRIVE INSIGHTS TO OPTIMIZE APPLICATIONS AND OPERATIONAL RESOURCES
      • PAY AS YOU GO

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources.

API: -

      • Action
        1. DeleteAlarms
        2. DescribeAlarms, .....
      • Data Type
        1. AlarmHistoryItem
        2. Datapoint, .....
      • Common Parameter
      • Common Error
      • Regions & Endpoints

VPC (Virtual Private Cloud)

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.

By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:

      • Assign static private IPv4 addresses to your instances that persist across starts and stops
      • Optionally associate an IPv6 CIDR block to your VPC and assign IPv6 addresses to your instances
      • Assign multiple IP addresses to your instances
      • Define network interfaces, and attach one or more network interfaces to your instances
      • Change security group membership for your instances while they're running
      • Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
      • Add an additional layer of access control to your instances in the form of network access control lists (ACL)
      • Run your instances on single-tenant hardware

AWS VPC allows users to connect to the Internet, a user's corporate data center, and other users' VPCs.

Users are able to connect to the Internet by adding an Internet Gateway to their VPC, which assigns the VPC a public IPv4 Address.

Users are able to connect to a data center by setting up a Hardware Virtual Private Network connection between the data center and the VPC. This connection allows the user to "interact with Amazon EC2 instances within a VPC as if they were within [the user's] existing network."

VPC Limits: -

SUBNET

A subnet is a part of the network; in other words, a portion of the VPC's IP address range placed in a single Availability Zone. Each subnet must reside entirely within one Availability Zone and cannot span zones.
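
The VPC, subnet, and Internet Gateway pieces described above can be created programmatically. A minimal boto3 sketch with placeholder CIDR ranges and Availability Zone:

    import boto3

    ec2 = boto3.client("ec2")

    # A VPC with a /16 range and one /24 subnet in a single Availability Zone.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"],
                               CidrBlock="10.0.1.0/24",
                               AvailabilityZone="us-east-1a")["Subnet"]

    # Attach an Internet Gateway so instances in the VPC can reach the internet.
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                                VpcId=vpc["VpcId"])
    print(vpc["VpcId"], subnet["SubnetId"])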

ROUTE 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

SECURITY GROUP

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.

      • By default, security groups allow all outbound traffic.
      • Security group rules are always permissive; you can't create rules that deny access.
      • Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
      • Each security group rule specifies:
          1. Protocol
          2. Port Range
          3. ICMP type & code
          4. Source and destination
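
The rule fields listed above (protocol, port range, source) appear directly in the API call. A boto3 sketch, with a placeholder VPC ID and CIDR ranges:

    import boto3

    ec2 = boto3.client("ec2")

    # Security group allowing SSH from one range and HTTPS from anywhere;
    # all outbound traffic is allowed by default.
    sg = ec2.create_security_group(GroupName="intelehealth-web",
                                   Description="Web server access",
                                   VpcId="vpc-0123456789abcdef0")
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        ],
    )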

KEY-PAIR

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.
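
A key pair can be generated up front so it is available when launching instances. A boto3 sketch; the key name and file path are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # AWS stores only the public key; the private key is returned once and must be saved.
    key = ec2.create_key_pair(KeyName="intelehealth-key")
    with open("intelehealth-key.pem", "w") as f:
        f.write(key["KeyMaterial"])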

SETUP

Create Instance:-

    1. Search for EC2, open the Instances dashboard page, and click on Launch Instance.
    2. Choose the AMI (Amazon Machine Image). You can select an AMI provided by AWS, the user community, or the AWS Marketplace, or you can select one of your own AMIs. As per the Intelehealth requirement, you can select (ami-04ea996e7a3e7ad6b).
    3. Choose the Instance Type (general/compute family, RAM, vCPU). As per the Intelehealth requirement, you can select (t2.large, m5.large, m4.large, c5.xlarge, c4.xlarge).
    4. Configure Instance details. You have to look into Network, Subnet, Auto-assign Public IP, Shutdown Behavior, and Monitoring.
    5. Add storage as per your requirement.
    6. Place a tag on the instance, which helps to monitor the cost of the instance.
    7. Configure the security group. You can create one for this instance or use one group for all instances.
    8. Review the instance configuration and click the Launch button. (NOTE: this Launch button will not yet launch your instance.)
    9. A pop-up will ask for a key pair; select one option.
    10. Click the check box and click on "Launch Instance". (NOTE: this will launch the instance.)

IMP: -

    • If you have not reserved an instance and are launching on demand, there is no guarantee that your instance will be created, because AWS checks the availability of that instance type in the particular AZ.
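
The console steps above can also be scripted. A minimal boto3 sketch assuming the AMI and instance type mentioned in steps 2-3; the key pair, security group, and subnet IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one instance with the choices made in the console wizard.
    response = ec2.run_instances(
        ImageId="ami-04ea996e7a3e7ad6b",
        InstanceType="t2.large",
        MinCount=1,
        MaxCount=1,
        KeyName="intelehealth-key",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        SubnetId="subnet-aaaa1111",
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "intelehealth-server"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])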

Reserved Instances: -

To purchase an RI, be careful to select the Platform, Instance type, Tenancy, Term, Offering class, and Payment option.

After finding the best price, add that instance to the cart and then buy it. The instance type is the key point here.

After you reserve an instance, the reservation applies to that instance type in the specific region and is applied to any running instance of the same type.

For example, if you have an m4.large instance running, the cost is about $0.1/hour. After you reserve an m4.large instance, your running instance is automatically matched to the RI and the cost is reduced to the RI rate.

Cross Account Sign in: -

Please follow the steps below to set up cross-account sign-in.

NOTE: - 038132173145 is Amal's AWS account ID.

Step 1: Create a policy in the prod account (Amal) that contains all the permissions needed by the dev account. For now, we have a policy named 'Intelehealth_policy' in the Amal account. If you want to create a new policy, you have to select the required permissions, such as EC2, S3, RDS, etc.

Step 2: Create a role in the prod account (Amal) that grants the permissions needed by the dev account. For now, we have a role named 'CrossAccountSignin_Intelehealth' in the Amal account. If you want to create a new role, select 'Another AWS account', provide the dev account ID, and attach the respective policy.

Step 3: Now, from the dev account, create a policy that allows access to the cross-account role:

{ "Version": "2012-10-17",

"Statement": [ { "Sid": "VisualEditor0",

"Effect": "Allow",

"Action": "*",

"Resource": "arn:aws:iam::038132173145:role/CrossAccountSignin_Intelehealth"

} ]

}

Step 4: Create a user who needs the cross-account sign-in, attach the above policy to the user, along with any other permissions you want.

Step 5: Log in as the dev user. At the top right you can see your username; click on it, and from the drop-down list select 'Switch Role'.

Step 6: Enter the prod account ID (038132173145) and the role name (CrossAccountSignin_Intelehealth), then click Switch Role.

This will show you the prod account console. Now you have the respective control of the prod account.
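
The same cross-account access can also be obtained programmatically with temporary credentials. A boto3 sketch using the role created in Steps 1-2 (the session name is a placeholder):

    import boto3

    sts = boto3.client("sts")

    # Assume the cross-account role in the prod account (038132173145).
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::038132173145:role/CrossAccountSignin_Intelehealth",
        RoleSessionName="intelehealth-dev-session",
    )["Credentials"]

    # Use the temporary credentials to call services in the prod account.
    prod_ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(prod_ec2.describe_instances()["Reservations"])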