If in doubt, ask:
Ruben Lopez Coto ruben.lopezcoto@pd.infn.it
Click on https://cloudveneto.ict.unipd.it/ and log in with your INFN credentials. The first time you log in, you will see a page where you have to insert your credentials and the project you are working on. Submit this information and wait until the project administrator accepts your participation.
Current admins: Michele Doro and Mose’ Mariotti
Once accepted, go to https://cloudveneto.ict.unipd.it/dashboard/ or https://cloud-areapd.pd.infn.it/dashboard/ and check the free space, CPUs and other resources available in the Cloud.
a/ Easy way (not recommended)
To create your own virtual machine, go to Instances and select “Launch instances”. Then you can decide the name of the VM (suggested name: vm_your-surname, since “VM” is reserved for common virtual machines) and its properties: build from image (SL69-Padova-x86...), size large or xlarge (use xlarge with care because it needs a large fraction of the CPUs).
As a last step before creating your VM, go to the tab “Access and security”, write the name of the Key Pair (“MAGIC_key_pair”) and insert the MAGIC “admin password”.
Once your VM has been created, you have to create your own account in it. Go to the directory on your machine where you downloaded the “Key pair” and write:
ssh root@10.64.29.x
where x is the final number of the VM.
And once you are in the VM, type:
adduser <yourusername>
chown -R <yourusername> /home/<yourusername>
where <yourusername> is the same username as in your INFN credentials.
Then exit from the VM and type
ssh <yourusername>@10.64.29.x
and enter your INFN password: you should be able to log in to your VM as a normal user.
If it does not work, go to the "Long way".
b/ Long way (recommended)
If you need to create a virtual machine (VM), you first have to create a “Key pair”. Go to “Access and Security” in the “Compute” menu and select “Create Key Pair”. Give it a new name, <yourname_key>, create it and download it into a folder on your computer (no need to open it). You will need it later. Once created, change the privileges of the key pair:
chmod go-r <yourname_key>
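Note that the downloaded key file normally carries a .pem extension (this is what the ssh command below assumes), so the chmod has to be applied to that file name, for example:
chmod go-r <yourname_key>.pem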
To create your own virtual machine, go to Instances and select “Launch instances”. Then you can decide the name of the VM (suggested name: vm_your-surname, since “VM” is reserved for common virtual machines) and its properties: build from image (SL69-Padova-x86...), size large or xlarge (use xlarge with care because it needs a large fraction of the CPUs).
As a last step before creating your VM, go to the tab “Access and security” and write the name of the Key Pair that you created before (you don't need to set a password, but you can).
Once your VM has been created, you have to create your own account in it. Go to the directory on your machine where you downloaded the “Key pair” and write:
ssh -i <yourname_key>.pem root@10.64.29.x
where x is the final number of the VM.
And once you are in the VM, type:
adduser <yourusername>
where <yourusername> is the same username as in your INFN credentials.
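If, in the next step, the login with your INFN password is refused, a possible extra fix (a suggestion, not part of the original recipe) is to set a local password for the new account while you are still logged in as root:
passwd <yourusername>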
Then exit from the VM and type
ssh <yourusername>@10.64.29.x
and enter your INFN password: you should be able to log in to your VM as a normal user.
If you need to attach a volume, you can create it at
https://cloud-areapd.pd.infn.it/dashboard-shib/project/volumes/
by selecting “Create volume”.
4.2 Attach disk
Once that is done, go to the properties in the menu “Edit volume” and select “Manage attachment”: there you can select the VM to which the new volume will be attached.
4.3 Format disk
Then go to your terminal and log in as root
ssh -i <yourname_key>.pem root@10.64.29.x
and check where your volume is with fdisk -l (usually /dev/vdb).
Then write
mkfs.ext4 /dev/vdb1
to format the newly created partition.
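Note that mkfs.ext4 /dev/vdb1 assumes a partition has already been created on /dev/vdb (e.g. interactively with fdisk /dev/vdb). If there is no partition table, a simpler option is to format the whole device, assuming the volume really is /dev/vdb:
mkfs.ext4 /dev/vdb
In that case mount /dev/vdb in the next step; whichever you choose, the device you format and the device you mount must match.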
4.4 Mount
Create a directory on your machine, e.g. with
mkdir /my_data
and mount the disk into that directory
mount -t ext4 /dev/vdb /my_data
WARNING:
If you want your disk to be automatically mounted after every reboot of your machine, you have to edit the /etc/fstab file following this guide:
http://www.pd.infn.it/cloud/Users_Guide/html-desktop/#idm140626413439056
and then you can mount it with
mount -a
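As a sketch, assuming the volume is /dev/vdb, formatted as ext4 and mounted on /my_data, the /etc/fstab entry would look like:
/dev/vdb /my_data ext4 defaults 0 0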
The directory D on the server S (the other machine) where the volume is attached has to be exported to the public or to your machine. To do this, log in to S as root (if the NFS server is not installed, yum whatprovides "*/rpc.nfsd" tells you which package provides it) and edit the exports file:
nano /etc/exports
with a line like:
/path/to/directoryD 10.64.29.0/24(rw,sync,no_root_squash,no_subtree_check)
Then type:
exportfs -ra
If no errors appear, the directory where the volume is attached is now exported.
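Optionally, you can list what is currently exported to double-check:
exportfs -v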
Finally, write:
service nfs start
Now log in to your machine (as root), create a directory on it, e.g. with
mkdir /my_data
and write
service nfs start
mount -t nfs 10.64.29.xx:/path/to/directoryD <where you want to attach the volume>
Connect to your instance as root
ssh -X root@10.64.29.xx
Update your installation with (may take some time)
yum update
Install some packages with
yum install nfs-utils nfs-utils-lib bind-utils gcc-c++
yum install nano
Create the directory where you want the MAGIC disk to be mounted, for example
mkdir /sw_magic
Mount the disk
mount -t nfs 10.64.29.25:/sw_magic /sw_magic
Copy these lines into your ~/.bashrc:
export ROOTSYS=/sw_magic/root/root_v5.34.36
export MARSSYS=/sw_magic/root/root_v5.34.36/Mars_V2-19-9
source $ROOTSYS/bin/thisroot.sh
export LD_LIBRARY_PATH=$ROOTSYS/lib:$MARSSYS:/sw_magic/gsl-2.1/lib:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$LD_LIBRARY_PATH
export PATH=$ROOTSYS/bin:$MARSSYS:$PATH
Exit and log in to your machine again. Try to run:
melibea
and it should work.
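Instead of logging out and back in, you can also reload the configuration in the current shell with:
source ~/.bashrc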
In case you are installing on SL (Scientific Linux) you should not find any problem; in case you install on CentOS, please follow some additional steps: https://www.ibm.com/support/pages/startapache-gives-error-libpcreso0-cannot-open. Also consider adding the following settings to your ROOT resource file (~/.rootrc):
Unix.*.Root.MacroPath: .:$(HOME)/macros:$(MARSSYS)/macros
Unix.*.Root.DynamicPath: .:$(MARSSYS)/lib
Unix.*.Gui.IconPath: $(MARSSYS)
Rint.Logon: rootlogon.C
Connect to your instance as root
ssh -X root@10.64.29.xx
Install some packages with
yum install nfs-utils nfs-utils-lib bind-utils
Create the directory where you want the MAGIC (and ROOT) disk to be mounted, for example
mkdir /sw_magic
Mount the disk
mount -t nfs 10.64.29.25:/sw_magic /sw_magic
Copy these lines into your ~/.bashrc:
export ROOTSYS=/sw_magic/root/root_v5.34.36
source $ROOTSYS/bin/thisroot.sh
export LD_LIBRARY_PATH=$ROOTSYS/lib:$MARSSYS:/sw_magic/gsl-2.1/lib:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$LD_LIBRARY_PATH
export PATH=$ROOTSYS/bin:$MARSSYS:$PATH
Exit and log in to your machine again. Try to run:
root -l
and it should work.
Download MARS, extract it with
tar xfvz Mars*.tgz
enter the directory and copy the configuration file:
cp Makefile.conf.linux Makefile.conf
edit it (nano Makefile.conf) and add the option "-fPIC" to the "OPT=...." line.
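After the edit, the OPT line could look for instance like the following (the pre-existing flags depend on the MARS version, so take this only as an illustration):
OPT = -O2 -fPIC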
then simply:
make
Note: the virtual machines with CentOS 7 give library problems, because glibc-2.12 (used in the SL69... images) seems to have been replaced by a more recent version, glibc-2.17...
To be solved -> use only machines with Scientific Linux (SL) and NOT CentOS or similar.
Connect to your instance as root
ssh -X root@10.64.29.xx
Install some packages with
yum install nfs-utils nfs-utils-lib bind-utils
Create the directory where the software disk (/sw_fermi) will be mounted, for example
mkdir /sw_fermi
mount -t nfs 10.64.29.20:/sw_fermi /sw_fermi
yum install numpy
yum install ipython
Copy these lines into your ~/.bashrc:
export FERMI_DIR=/sw_fermi/ScienceTools-v10r0p5-fssc-20150518-x86_64-unknown-linux-gnu-libc2.12/x86_64-unknown-linux-gnu-libc2.12
export ENRICO_DIR=/sw_fermi/enrico/
alias ferminit='source $FERMI_DIR/fermi-init.sh && source $ENRICO_DIR/enrico-init.sh'
Exit and log in to your machine again. Try to run:
gtselect
and it should work.
Note: every time you need this software, you must initialize it with
ferminit
Then it will work.
Up to now, the event and spacecraft files are updated every week on Saturday. However, if this does not work and you need to download the newest weekly files, you can follow these steps.
Connect as root (MAGIC password) to
ssh -X root@10.64.29.20
ferminit
enrico_download --download_data
enrico_download --download_spacecraft
and wait (it could take some minutes).
You are downloading the weekly event files (note that you cannot analyse data extremely close to today; in that case it is better to download the event data into your own folder and use those files).
Connect to your instance as root
ssh -X root@10.64.29.xx
Install some packages with
yum install nfs-utils nfs-utils-lib bind-utils
Create the directory where the software disk (/sw_fermi) will be mounted, for example
mkdir /sw_fermi
mount -t nfs 10.64.29.20:/sw_fermi /sw_fermi
yum install numpy
yum install ipython
yum install ncurses-devel
yum install readline-devel
yum install cfitsio-devel
yum install swig
yum install doxygen
Copy these lines into your ~/.bashrc:
export GAMMALIB=/sw_fermi/test/
export CTOOLS=/sw_fermi/test/
alias ctainit='source $CTOOLS/bin/ctools-init.sh && source $GAMMALIB/bin/gammalib-init.sh'
Exit and log in to your machine again. Try to run:
ctselect
and it should work.
Note: every time you need this software, you must initialize it with
ctainit
Then it will work.
a/ After first time
Normal use after the first time:
- edit the file input.txt, inserting the input data of the sources (fields separated by blank spaces only, no tabs)
- if needed, you can also edit some parameters in launch_fermi_auto_analysis_v0.sh
- launch it: ./launch_fermi_auto_analysis_v0.sh
b/ First time
A simple FERMI-LAT automatic analysis is now ready for use on the MAGIC-CTA group cloud. The analysis requires as input only the following data:
SOURCE_NAME RA DEC REDSHIFT
and generates a general analysis (including TS value, spectral index, flux ...) and SED.
The analysis allows you to launch any number of analyses; however, remember that the maximum number of CPUs is 8 on xlarge machines.
It is also possible to change the time interval of the analysis and the number of bins in the spectrum or lightcurve (edit the launcher).
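As an illustration only (the values below are for the blazar Mrk 421 and are not part of the original example file), an input.txt line would look like:
Mrk421 166.114 38.209 0.031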
HOW TO:
- enter your cloud machine (make sure you have attached the volume /sw_fermi to your machine, e.g. check it with touch /sw_fermi)
- create a folder where the analysis will take place: mkdir test
- cp /sw_fermi/auto_fermi_analysis/launch_fermi_auto_analysis_v0.sh .
- cp /sw_fermi/auto_fermi_analysis/input.txt .
- edit the input.txt file, entering your preferred source data (fields separated by a single space, see the model in the example)
- if needed, edit launch_fermi_auto_analysis_v0.sh
- launch ./launch_fermi_auto_analysis_v0.sh
The macro creates a folder for each source and, within it, creates the configuration file config.conf
required by the ENRICO software. It then launches the ENRICO software, which generates the general analysis and the SED (bin by bin). It is possible to log out from the machine during the analysis, since it is launched with "nohup" (the analysis will continue even if you are not connected), but remember to re-initialize the software with "ferminit" when you come back.
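To check whether the analysis is still running, a generic way (not specific to this macro) is to look for the ENRICO processes, or to follow the nohup log if the launcher has not redirected the output elsewhere:
ps aux | grep enrico
tail -f nohup.out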
At the end of the analysis, you can check the results in the *.results log files: cat *.results
To plot the SED (in text format and in a graph), you have to enter the source folder at the end of the analysis and write:
- enrico_plot_sed config.conf
(fast)
The ENRICO software will produce all the data in the "Spectrum" subfolder (in png, dat, ... formats).
To create a lightcurve, at the end of the analysis, launch:
- enrico_lc config.conf
(long; run it with nohup ... &)
- enrico_plot_lc config.conf
(takes several minutes)
The ENRICO software will produce all the data in the "Lightcurve-20bins" subfolder (in png, dat, ... formats).
NOTE: The macro is still in a development version (v0) and does nothing but run the ENRICO software with the most "classic" settings. For a detailed analysis, it is necessary to kill the process of the source, modify the config.conf file in its folder, and start it again with "enrico_sed config.conf".
POSSIBLE PROBLEMS: If it says something is missing, there may simply be a connection problem with the VM_FERMI machine. If you type "source $FERMI_DIR/fermi-init.sh && source $ENRICO_DIR/enrico-init.sh" and it complains, you must re-mount the /sw_fermi folder following the steps in the previous sections.
Common problems
Note: you need a lot of space: create your own volume! Space in the home directory is not enough.
After a while, the analysis crashes and no *.results output files are present in the source directory.
Don't worry, it can happen!
Just enter the source directory and type
nohup enrico_sed config.conf &
This will launch the analysis again. If it happens again, delete every file except for config.conf
and the *.xml one. Then launch again:
nohup enrico_sed config.conf &
Log in with your access key, then type
passwd
and change it.
Add the following line to the file /etc/resolv.conf:
options single-request-reopen
Suppose the NFS server has address 10.64.29.65 and that it has a disk /disk that must be exported to the clients in read-only mode.
In /etc/exports on 10.64.29.65 put:
/disk 10.64.29.0/24(ro,no_root_squash,async)
Then, for the change to take effect, run the command: exportfs -a
On the client where you need to mount this disk, add to /etc/fstab the line:
10.64.29.65:/disk /disk nfs defaults,noatime,nodiratime 0 0
and then run the command "mount /disk".