Getting Started
Contents
Tips (Japanese only)
Relion3.1 AWS ParallelCluster benchmark results
Relion4.0 AWS ParallelCluster benchmark results
First, navigate to the Cloud9 console page.
You can use this link (Cloud9 console), find Cloud9 in the Services menu at the top left of the navigation bar, or enter "cloud9" in the search box and search for Cloud9.
After Cloud9 console opens, click "Create environment".
Enter "Project name" (e.g. protein240101) into the "Name" box.
The project name must meet the following requirements to satisfy the naming rules of S3 bucket.
Bucket names must be between 3 (min) and 63 (max) characters long.
Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-).
Bucket names must begin and end with a letter or number.
Bucket names must not contain two adjacent periods.
Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
For more information, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
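As a quick sanity check, you can test a candidate name against most of these rules with a shell regular expression (a rough sketch; it covers the length and allowed-character rules but not the adjacent-period or IP-address rules):
$ name="kek-gtc-user01-123456789-protein240101"
$ [[ ${#name} -ge 3 && ${#name} -le 63 && "$name" =~ ^[a-z0-9][a-z0-9.-]*[a-z0-9]$ ]] && echo valid || echo invalid
valid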
(Note: For each project, the GTC scripts create an S3 bucket, an EC2 key pair, and at least one ParallelCluster instance. To significantly reduce the number of input parameters required from the user, the GTC scripts use a naming format specific to the GoToCloud platform, "[IAM user name]-[AWS account ID]-[project name]", as the unified name of all of the above service instances in each project. Importantly, the project name used as part of the S3 bucket name must comply with the AWS naming requirements because each S3 bucket name must be globally unique.)
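For illustration, the unified name can be composed from values the AWS CLI reports for your session (a hypothetical sketch; the GTC scripts derive these values internally):
$ IAM_USER=$(aws sts get-caller-identity --query 'Arn' --output text | awk -F'/' '{print $NF}')
$ ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
$ echo "${IAM_USER}-${ACCOUNT_ID}-protein240101"
kek-gtc-user01-123456789-protein240101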
Proceed to "New EC2 instance".
Select "Amazon Linux 2" in "Platform".
Proceed to "Network settings".
Select "Secure Shell (SSH)" in "Connection"
Open ▶︎VPC settings
Select the VPC and Subnet as below.
Amazon Virtual Private Cloud (VPC) : "Name - kek-gtc-user-vpc"
Subnet : "Name - kek-gtc-user-subnet1"
If "Name - kek-gtc-user-vpc" does not exist in the dropdown list of VPC, the region selected or something is wrong, so cancel Cloud9 creation (scroll to the bottom and select "Cancel")
Scroll to the bottom and select "Create"
Click "Open" in the Cloud9 IDE column of the Environments list to start up Cloud9 IDE.
Cloud9 IDE starts.
This step prepares for the creation of ParallelCluster instances.
Note: The script used in this step performs the following eight setup tasks:
(1) Mounting the shared EFS (Elastic File System)
(2) Installing the ParallelCluster libraries
(3) Installing dependencies (e.g., jq and Node.js)
(4) Checking for the existence of the role required for managing spot instance startup (i.e., the EC2Spot service-linked role, AWSServiceRoleForEC2Spot), and creating it if it does not exist
(5) Setting the Cloud9 tags for cost management
key: gtc:account, value: AWS account ID (e.g. 123456789)
key: gtc:iam-user, value: IAM user ID (e.g. kek-gtc-user01)
key: gtc:project, value: Project name (e.g. protein240101)
key: gtc:method, value: Method name (e.g. cryoem)
(6) Creating an S3 bucket for uploading the input cryo-EM dataset and for long-term storage of the analysis outputs
The S3 bucket is named automatically in the naming format "[IAM user name]-[AWS account ID]-[project name]" (e.g. kek-gtc-user01-123456789-protein240101).
To check S3 buckets from the AWS console, see [Viewing S3 buckets from the AWS console(Japanese only)]
(7) Generating an EC2 key pair for SSH connection to the head nodes of ParallelCluster instances.
The key pair is automatically named in the same naming format as the S3 bucket, "[IAM user name]-[AWS account ID]-[project name]".
(8) Creating the pcluster configuration file (“config.yaml”)
If you wish to change the configuration settings, see [Generating AWS ParallelCluster configuration file(Japanese only)]
This step needs to be done in the Cloud9 terminal at the bottom of the Cloud9 IDE.
Get the GoToCloud script stored in the shared S3 bucket.
$ cd ~/environment/
$ wget https://kek-gtc-master-s3-bucket.s3.ap-northeast-1.amazonaws.com/gtc_setup_gotocloud_environment.sh
Change the file permission.
$ chmod 755 ~/environment/gtc_setup_gotocloud_environment.sh
Start setting up the GoToCloud environment.
$ ~/environment/gtc_setup_gotocloud_environment.sh
Advanced: this script accepts arguments in the following format.
$ ~/environment/gtc_setup_gotocloud_environment.sh [-p PROJECT_NAME] [-v SCRIPT_VERSION] [-o OWN_SHARED_S3_BUCKET] [--latest]
PROJECT_NAME: Project name. By default, the project name taken from the Cloud9 environment name is used. Usually not used.
SCRIPT_VERSION: Specifies the GoToCloud script version. The default is the latest version. Usually not used.
OWN_SHARED_S3_BUCKET: Used to specify a shared S3 bucket that you own. The default is the shared S3 bucket owned by KEK.
--latest: Used to install the latest version of ParallelCluster. Usually not used. The version of ParallelCluster installed by default is currently 3.7.0.
Execution example
$ ~/environment/gtc_setup_gotocloud_environment.sh
GoToCloud: Run setup with following settings...
GoToCloud: Project name : cloud9-name
GoToCloud: gtc_sh script version : latest
GoToCloud: mounting the file system /efs ...
<<Snip>>
Saving to: ‘gtc_efs_setting.json’ #download efs setting file
2022-08-01 07:12:24 (52.4 MB/s) - ‘gtc_efs_setting.json’ saved [1421/1421]
fs-0c16cddddca8c3ed5:/ /efs efs _netdev,noresvport,tls,mounttargetip=10.1.11.52 1 1
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 492M 0 492M 0% /dev/shm
tmpfs 492M 524K 491M 1% /run
tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/xvda1 10G 8.0G 2.1G 80% /
tmpfs 99M 0 99M 0% /run/user/1000
127.0.0.1:/ 8.0E 81G 8.0E 1% /efs #check if efs is mounted
GoToCloud: /efs is mounted.
GoToCloud: Installing jq ...
<<SNIP>>
Complete!
GoToCloud: Installing parallelcluster ...
Collecting aws-parallelcluster
Downloading aws_parallelcluster-3.0.2-py3-none-any.whl (393 kB)
|████████████████████████████████| 393 kB 4.3 MB/s
<<SNIP>>
GoToCloud:
GoToCloud: Check PATH settings for parallelcluster
GoToCloud: which pcluster
/home/ec2-user/.local/bin/pcluster #Path of Parallelcluster installed
GoToCloud:
GoToCloud: Parallelcluster 3.0.3 is installed #Version of Parallelcluster installed
GoToCloud: Installing Node.js ...
GoToCloud: Node.js version v16.13.1 is already installed
GoToCloud: Checking if Service-Linked Role 'AWSServiceRoleForEC2Spot' exists...
GoToCloud: OK! Service-Linked Role 'AWSServiceRoleForEC2Spot' exists.
GoToCloud: project name 'protein240101' is specified
GoToCloud: Calling gtc_utility_setup_global_variables...
GoToCloud: Creating GoToCloud application directory /home/ec2-user/.gtc...
GoToCloud: Creating Cloud9 system environment variable settings for GoToCloud global variables as /home/ec2-user/.gtc/global_variables.sh...
GoToCloud: Activating Cloud9 system environment variable settings for GoToCloud global variables defined in /home/ec2-user/.gtc/global_variables.sh...
GoToCloud: Note that this activation is effective only for caller of this script file.
GoToCloud: Making a backup of previous /home/ec2-user/.bashrc as /home/ec2-user/.gtc/.bashrc_backup_20211217_072945...
GoToCloud: Appending GoToCloud system environment variable settings to /home/ec2-user/.bashrc...
GoToCloud:
GoToCloud: To apply GoToCloud system environment variables, open a new terminal.
GoToCloud: OR use the following command in this session:
GoToCloud:
GoToCloud: source /home/ec2-user/.gtc/global_variables.sh
GoToCloud:
GoToCloud: Done
GoToCloud: Setting tags to Cloud9...
GoToCloud: Cloud9 Instance ID: i-05f5d33afa26b2a62
GoToCloud: Making sure that S3 bucket kek-gtc-user01-123456789-protein240101 does not exist yet...
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist
GoToCloud: OK! S3 bucket kek-gtc-user01-123456789-protein240101 does not exist yet!
GoToCloud: Creating S3 bucket kek-gtc-user01-123456789-protein240101... # Creating S3 bucket: [S3 bucket name] = [cluster name] (= [IAM user name]-[AWS account ID]-[project name])
make_bucket: kek-gtc-user01-123456789-protein240101
GoToCloud: Setting tags to S3 bucket kek-gtc-user01-123456789-protein240101...
GoToCloud: Created S3 bucket kek-gtc-user01-123456789-protein240101
GoToCloud: Making sure that Key-pair kek-gtc-user01-123456789-protein240101 does not exist yet...
An error occurred (InvalidKeyPair.NotFound) when calling the DescribeKeyPairs operation: The key pair 'kek-gtc-user01-123456789-protein240101' does not exist
GoToCloud: OK! Key-pair kek-gtc-user01-123456789-protein240101 does not exist yet!
GoToCloud: Generating Key-pair kek-gtc-user01-123456789-protein240101... # Creating key pair: [key pair name] = [cluster name] (= [IAM user name]-[AWS account ID]-[project name])
GoToCloud: Saved Key file as /home/ec2-user/environment/kek-gtc-user01-123456789-protein240101.pem
GoToCloud: FSX (Lustre) storage capacity '2400' GByte is specified
GoToCloud: Maximum number of EC2 instances '16' is specified
GoToCloud: Creating config with following parameters... # Creating the config file
GoToCloud: FSX (Lustre) storage capacity (GByte) : 2400
GoToCloud: Maximum number of EC2 instances : 16
GoToCloud: AWS Parallel Cluster Instance ID : GTC_INVALID
GoToCloud: Creating config from template...
GoToCloud: Saved config as /home/ec2-user/.parallelcluster/config.yaml
GoToCloud: Done
GoToCloud:
GoToCloud: Checking config settings...
Region: ap-northeast-1
Image:
Os: ubuntu2004
<<SNIP>>
KeyName: kek-gtc-user01-123456789-protein240101 # key pair name (= [cluster name])
<<SNIP>>
S3Access:
- BucketName: kek-gtc-user01-123456789-protein240101 # S3 bucket name (= [cluster name])
EnableWriteAccess: true
Scheduling:
<<SNIP>>
ComputeResources:
<<SNIP>>
MaxCount: 16 # MAX_COUNTS
<<SNIP>>
SharedStorage:
- MountDir: /fsx
<<SNIP>>
FsxLustreSettings:
StorageCapacity: 2400 # STRAGE_CAPACITY
DeploymentType: SCRATCH_2
ExportPath: s3://kek-gtc-user01-123456789-protein240101 # ExportPath = s3://[S3 bucket name]
ImportPath: s3://kek-gtc-user01-123456789-protein240101 # ImportPath = s3://[S3 bucket name]
<<SNIP>>
Tags:
- Key: gtc:iam-user
Value: kek-gtc-user01
- Key: gtc:method
Value: cryoem
- Key: gtc:project
Value: protein240101
- Key: gtc:account
Value: "123456789"
- Key: User
Value: kek-gtc-user01
- Key: Service
Value: cryoem
- Key: Team
Value: protein240101
GoToCloud:
GoToCloud: -- Overall Results : Success -------------------------------------------------------
GoToCloud: Mount /efs : Success
GoToCloud: Install jq : Success
GoToCloud: Install pcluster : Success
GoToCloud: Install node.js : Success
GoToCloud: Check EC2Spot Role : Success
GoToCloud: Setup Cloud9 tags : Success
GoToCloud: Create s3 bucket : Success
GoToCloud: Create key-pair : Success
GoToCloud: Create config file : Success
GoToCloud: ------------------------------------------------------------------------------------
GoToCloud: Done!
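Optionally, before exiting, you can spot-check the newly created resources with the standard AWS CLI (a sketch using the example names from above; output illustrative):
$ aws s3 ls | grep protein240101
2024-01-01 07:15:00 kek-gtc-user01-123456789-protein240101
$ aws ec2 describe-key-pairs --key-names kek-gtc-user01-123456789-protein240101 --query 'KeyPairs[].KeyName' --output text
kek-gtc-user01-123456789-protein240101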
Exit from the terminal.
$ exit
Open a new terminal by clicking the "+" button.
The dataset needs to be uploaded to the S3 bucket. For more details, see [How to upload data to s3 bucket(Japanese only)]
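For reference, a typical upload from the Cloud9 terminal uses aws s3 sync (a sketch; substitute your own local data directory and bucket name):
$ aws s3 sync ./relion40_tutorial s3://kek-gtc-user01-123456789-protein240101/relion40_tutorial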
This step needs to be done in the Cloud9 terminal.
Create a new AWS ParallelCluster instance.
$ gtc_pcluster_create.sh [-i INSTANCE_ID]
INSTANCE_ID: Used to attach an ID to a cluster with specific cluster configuration settings. Usually not used. By default, no instance ID is attached.
Execution example
$ gtc_pcluster_create.sh
GoToCloud: Creating pcluster instance with following parameters...
GoToCloud: AWS Parallel Cluster Instance ID : GTC_INVALID
GoToCloud: Making sure that pcluster instance kek-gtc-user01-123456789-protein240101 is not running...
GoToCloud: OK! Pcluster instance kek-gtc-user01-123456789-protein240101 is not running yet!
GoToCloud: Creating pcluster instance kek-gtc-user01-123456789-protein240101...
<<SNIP>>
Status: CREATE_IN_PROGRESS
Status: CREATE_IN_PROGRESS
<<SNIP>>
Status: CREATE_COMPLETE
GoToCloud: Creation of pcluster instance kek-gtc-user01-123456789-protein240101 is completed.
GoToCloud: Executing head-node startup script...
Warning: Permanently added '35.72.243.165' (ECDSA) to the list of known hosts.
GoToCloud: Installing basic dependencies for RELION ...
<<SNIP>>
GoToCloud: Changing owners of directories and files in fsx (Lustre) ...
GoToCloud: Creating GoToCloud application directory /home/ubuntu/.gtc...
GoToCloud: Creating pcluster head node system environment variable settings for GoToCloud global variables as /home/ubuntu/.gtc/global_variables.sh...
GoToCloud: Creating pcluster head node system environment variable settings for RELION as /home/ubuntu/.gtc/relion_settings.sh...
GoToCloud: Creating pcluster head node system environment variable settings for UCSF Chimera settings as /home/ubuntu/.gtc/chimera_settings.sh...
GoToCloud: Setting up pcluster head node environment variables for GoToCloud system ...
GoToCloud: Making a backup of previous /home/ubuntu/.bashrc as /home/ubuntu/.gtc/.bashrc_backup_20240101_041356...
GoToCloud: Appending GoToCloud system environment variable settings to /home/ubuntu/.bashrc...
GoToCloud:
GoToCloud: To apply GoToCloud system environment settings, open a new terminal.
GoToCloud: OR use the following commands in this session:
GoToCloud:
GoToCloud: source /home/ubuntu/.gtc/global_variables.sh
GoToCloud: source /home/ubuntu/.gtc/relion_settings.sh
GoToCloud: source /home/ubuntu/.gtc/chimera_settings.sh
GoToCloud:
GoToCloud: Done
Creating an AWS ParallelCluster instance takes about 30 minutes.
Confirm that ParallelCluster creation has been completed successfully by displaying the list of clusters.
The created AWS ParallelCluster instance should have "clusterStatus": "CREATE_COMPLETE".
The AWS ParallelCluster instance is automatically named in the same naming format as the S3 bucket, "[IAM user name]-[AWS account ID]-[project name]".
$ pcluster list-clusters
{
"clusters": [
{
"clusterName": "kek-gtc-user01-123456789-protein240101",
"cloudformationStackStatus": "CREATE_COMPLETE",
"cloudformationStackArn": "arn:aws:cloudformation:ap-northeast-1:123456789:stack/kek-gtc-user01-123456789-protein240101/b11e07c0-5d68-11ec-a973-0ac0e36575f7",
"region": "ap-northeast-1",
"version": "3.0.2",
"clusterStatus": "CREATE_COMPLETE"
}
]
}
You can also check the status by specifying the pcluster instance name:
$ pcluster describe-cluster --cluster-name [pcluster instance name]
Execution example
$ pcluster describe-cluster --cluster-name kek-gtc-user01-123456789-protein240101
{
"creationTime": "2022-01-12T05:34:37.433Z",
"headNode": {
"launchTime": "2022-01-12T05:42:58.000Z",
"instanceId": "i-09828682fba621c25",
"publicIpAddress": "46.51.246.118",
"instanceType": "m5.xlarge",
"state": "running",
"privateIpAddress": "10.254.4.21"
},
"version": "3.0.3",
"clusterConfiguration": {
"url": "https://..."
},
"tags": [
{
"value": "123456789",
"key": "gtc:account"
},
{
"value": "3.0.3",
"key": "parallelcluster:version"
},
{
"value": "protein240101",
"key": "gtc:project"
},
{
"value": "kek-gtc-user01",
"key": "User"
},
{
"value": "cryoem",
"key": "gtc:method"
},
{
"value": "cryoem",
"key": "Service"
},
{
"value": "protein240101",
"key": "Team"
},
{
"value": "kek-gtc-user01",
"key": "gtc:iam-user"
}
],
"cloudFormationStackStatus": "CREATE_COMPLETE",
"clusterName": "kek-gtc-user01-123456789-protein240101",
"computeFleetStatus": "RUNNING",
"cloudformationStackArn": "arn:aws:cloudformation:ap-northeast-1:275704676984:stack/kek-gtc-user01-123456789-protein240101/b11e07c0-5d68-11ec-a973-0ac0e36575f7",
"lastUpdatedTime": "2024-01-01T05:34:37.433Z",
"region": "ap-northeast-1",
"clusterStatus": "CREATE_COMPLETE"
}
Note: If "clusterStatus" is "CREATE_FAILD", cluster creation has failed.
This step needs to be done in the Cloud9 terminal.
Connect to the head node of the ParallelCluster instance created in step 3.1 via SSH.
$ gtc_pcluster_ssh.sh [-i INSTANCE_ID]
INSTANCE_ID: Used to attach an ID to a cluster with specific cluster configuration settings. Usually not used. By default, no instance ID is attached.
Execution example
$ gtc_pcluster_ssh.sh
GoToCloud: Connecting pcluster instance through SSH with following parameters...
GoToCloud: AWS Parallel Cluster Instance ID : GTC_INVALID
GoToCloud: Connecting to pcluster instance kek-gtc-user01-123456789-protein240101 through SSH...
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.11.0-1020-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Tue Jan 12 10:58:35 UTC 2022
System load: 0.0 Processes: 272
Usage of /: 48.0% of 33.87GB Users logged in: 0
Memory usage: 5% IPv4 address for ens5: 10.3.4.27
Swap usage: 0%
* Ubuntu Pro delivers the most comprehensive open source security and
compliance features.
https://ubuntu.com/aws/pro
124 updates can be applied immediately.
78 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable
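If you need to connect without the GTC wrapper script, the standard ParallelCluster CLI provides an equivalent connection (a sketch assuming the key file generated during setup):
$ pcluster ssh --cluster-name kek-gtc-user01-123456789-protein240101 -i ~/environment/kek-gtc-user01-123456789-protein240101.pem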
FSx (Lustre) is automatically mounted and synchronized with the S3 bucket.
Check the /fsx and /efs mounts.
$ df -h
<<SNIP>>
10.3.4.127@tcp:/d56efbmv 2.2T 9.0M 2.2T 1% /fsx
<<SNIP>>
127.0.0.1:/ 8.0E 81G 8.0E 1% /efs
<<SNIP>>
This step needs to be done in the Cloud9 terminal.
Connect to the head node of the ParallelCluster instance created in step 3.1 via NICE DCV.
$ gtc_pcluster_dcv_connect.sh [-i INSTANCE_ID]
INSTANCE_ID: Used to attach an ID to a cluster with specific cluster configuration settings. Usually not used. By default, no instance ID is attached.
Execution example
$ gtc_pcluster_dcv_connect.sh
GoToCloud: Connecting pcluster instance through NiceDCV with following parameters...
GoToCloud: AWS Parallel Cluster Instance ID : GTC_INVALID
GoToCloud: Connecting to pcluster instance kek-gtc-user01-123456789-protein240101 through NiceDCV...
Unable to open the Web browser.
Please use the following one-time URL in your browser within 30 seconds:
https://52.68.79.209:8443?authToken=6EcNzqfTFZHyZqpuzTM45OGYP0eBX-YfWNeAHauGaN-8MIDFdW86X0ppRGV0VU69coIdWN5ApjhmuA_P8JDNo4O6flYCWnWz0yMu8NkVDCmGWdkse-ApJGa8zwNVAGsyrpU_nPhSqL0fpOnPGVALchmjrRW_I5wKM-z0g3QdQOF6dhp9gXQzFIx_wY7bRAtEJ6JHjL4lKah2DUC2ruBHztdgxDsjqR703rT7lTnMMxn3anyq0cdCPChaDrycC7q9#MzuhzOCUlnLXg5kSukXn
Note: When using Cloud9, a new window/tab in the web browser will not open automatically, so open the output URL in the web browser of the local PC you are using to connect to the Ubuntu desktop. The URL is valid for only 30 seconds and must be opened within that time; if it times out, redo step 3.3.
If a warning occurs when opening in a web browser, see [If a security warning occurs with the DCV connection(Japanese only)]
For the startup procedure when connecting to Ubuntu for the first time, see [Startup procedure for Ubuntu(Japanese only)]
After the ParallelCluster instance starts, only the metadata of the data uploaded to S3 is copied to FSx (Lustre). Execute the following command to restore the actual file contents:
$ nohup find /fsx/<path_to_data_directory> -type f -print0 | xargs -0 -n 1 -P 8 sudo lfs hsm_restore &
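The restore runs in the background; you can check its progress by counting files that still have a pending HSM action (a sketch mirroring the export check in step 6; an output of "0" means the restore has finished):
$ find /fsx/<path_to_data_directory> -type f -print0 | xargs -0 -n 1 -P 8 sudo lfs hsm_action | grep "RESTORE" | wc -l
0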
Open the Terminal in Ubuntu.
Navigate to the RELION project directory under /fsx and run RELION. Environment modules for RELION are preloaded.
Execution example
$ cd /fsx/relion40_tutorial
$ module list
Currently Loaded Modulefiles:
1) relion/4.0-beta-2(default)
$ which relion
/efs/em/relion-v40/relion-4.0-beta-2-cuda75/bin/relion
$ relion &
Jobs are executed via the job scheduler "SLURM".
[Running] Tab
Submit to queue? Yes
Queue submit command: sbatch
Partition: g4dn-vcpu48-gpu4 (specifies the instances to use; check the available partition names with the "sinfo" command, as shown below)
Standard submission script: /efs/em/aws_slurm_relion.sh
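For example, sinfo lists the available partitions on the cluster (output illustrative; partition names depend on your cluster configuration):
$ sinfo
PARTITION          AVAIL  TIMELIMIT  NODES  STATE NODELIST
g4dn-vcpu48-gpu4*     up   infinite     16  idle~ <<SNIP>>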
The analysis data are stored under the RELION project folder on the Lustre file system (FSx), but all data on the Lustre file system will be deleted when the AWS ParallelCluster instance is deleted. It is therefore recommended to export the data on the Lustre file system to S3 regularly.
Execute the following command on the head node of the AWS ParallelCluster instance created in step 3.1.
$ nohup find /fsx -type f -print0 | xargs -0 -n 1 sudo lfs hsm_archive &
Execute the following command to check whether the export has completed. If the output is "0", the export is complete.
$ find /fsx -type f -print0 | xargs -0 -n 1 -P 8 sudo lfs hsm_action | grep "ARCHIVE" | wc -l
0
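If you prefer to wait for completion in a single command, a simple polling loop also works (a sketch; adjust the 60-second interval as needed):
$ while [ $(find /fsx -type f -print0 | xargs -0 -n 1 -P 8 sudo lfs hsm_action | grep -c "ARCHIVE") -ne 0 ]; do sleep 60; done; echo "Export completed"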
This step needs to be done in a Cloud9 terminal.
Delete the AWS ParallelCluster instance.
$ gtc_pcluster_delete.sh [-i INSTANCE_ID]
INSTANCE_ID: Used to attach an ID to a cluster with specific cluster configuration settings. Usually not used. By default, no instance ID is attached.
This command automatically exports the data on the Lustre file system to the S3 bucket before deleting the AWS ParallelCluster instance.
Execution example
$ gtc_pcluster_delete.sh
GoToCloud: Deleting pcluster instance with following parameters...
GoToCloud: AWS Parallel Cluster Instance ID : GTC_INVALID
GoToCloud: Making sure that pcluster instance kek-gtc-user01-123456789-protein240101 is still running...
GoToCloud: OK! pcluster instance kek-gtc-user01-123456789-protein240101 is still running.
GoToCloud: Exporting (archiving) data from Lustre to S3 bucket...
GoToCloud: 0
GoToCloud: Exporting (archiving) is completed.
GoToCloud: Deleting pcluster instance kek-gtc-user01-123456789-protein240101...
{
"cluster": {
"clusterName": "kek-gtc-user01-123456789-protein240101",
"cloudformationStackStatus": "DELETE_IN_PROGRESS",
"cloudformationStackArn": "arn:aws:cloudformation:ap-northeast-1:123456789:stack/kek-gtc-user01-123456789-protein240101/f55f30f0-6243-11ec-95cf-0e59336ed71f",
"region": "ap-northeast-1",
"version": "3.0.2",
"clusterStatus": "DELETE_IN_PROGRESS"
}
}
GoToCloud: DELETE_IN_PROGRESS
<<SNIP>>
GoToCloud: DELETE_IN_PROGRESS
GoToCloud: null
GoToCloud: Deletion of pcluster instance kek-gtc-user01-123456789-protein240101 is completed.
GoToCloud: Done
Deleting an AWS ParallelCluster instance takes 15 to 30 minutes.
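You can confirm that the deletion has finished with pcluster list-clusters; the cluster disappears from the list once deletion completes (output illustrative):
$ pcluster list-clusters
{
    "clusters": []
}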
Note: When you execute the following command, all data stored in the S3 bucket will be forcibly deleted. Execute the command only after confirming that any data you want to keep has been moved to other storage.
This step needs to be done in a Cloud9 terminal.
Delete the AWS S3 bucket and EC2 key pair.
$ gtc_aws_delete_s3_and_key_pair.sh
Execution example
$ gtc_aws_delete_s3_and_key_pair.sh
GoToCloud: Making sure that Key-pair 'kek-gtc-user01-123456789-protein240101' exist ...
{
"KeyPairs": [
{
"Tags": [],
"KeyName": "kek-gtc-user01-123456789-protein240101",
"KeyFingerprint": "f7:1b:b3:2a:96:b7:e5:0c:23:a4:86:6d:aa:1a:7a:d6:45:c4:32:76",
"KeyPairId": "key-090afb6dd68d275a0"
}
]
}
GoToCloud: Key-pair 'kek-gtc-user01-123456789-protein240101' and
GoToCloud: Key file '/home/ec2-user/environment/kek-gtc-user01-123456789-protein240101.pem' exist in your environment.
GoToCloud: Deleting Key-pair 'kek-gtc-user01-123456789-protein240101' and
GoToCloud: key file '/home/ec2-user/environment/kek-gtc-user01-123456789-protein240101.pem' ...
GoToCloud: Deleted Key-pair 'kek-gtc-user01-123456789-protein240101' and
GoToCloud: Key file '/home/ec2-user/environment/kek-gtc-user01-123456789-protein240101.pem'.
GoToCloud:
GoToCloud: Making sure that S3 bucket 'kek-gtc-user01-123456789-protein240101' exists ...
PRE relion40_tutorial/
GoToCloud: S3 bucket 'kek-gtc-user01-123456789-protein240101' exists in your account.
GoToCloud: Deleting S3 bucket 'kek-gtc-user01-123456789-protein240101'...
GoToCloud: DELETE_IN_PROGRESS
<<SNIP>>
GoToCloud: Deleted S3 bucket 'kek-gtc-user01-123456789-protein240101'.
GoToCloud:
GoToCloud: Done
Navigate to "Cloud9" console page in AWS console.
Select Cloud9 to delete in Environments list and click the "Delete" button at the top.
Confirm that Cloud9 name to delete is correct, enter the "Delete" in the text box, and click "Delete".