01/25/18
I found references related to my project. They are listed below.
References
MadGraph References:
Top Quark References:
Top-Quark Physics, April 2017, by U
Recent paper on top-quark physics at the LHC:
Top-quark physics at the Large Hadron Collider
Previous project with Dr. Agashe:
Recent papers by Dr. Agashe:
Kaustubh Agashe on Google Scholar
MADGRAPH naming scheme
g is the gluon, a is the photon, h is the Higgs boson, and w+/w- and z are the W and Z bosons
link to naming scheme:
http://madgraph.phys.ucl.ac.be/sm_particles.html
01/29/18
I talked to Kaustubh Agashe today. He reviewed basic concepts that would relate to our research such as the Standard Model, subatomic particles, the four fundamental interactions, and Feynman diagrams. He then outlined the basic idea of the models that we would look to validate, which propose a warped, compact (finite) fifth dimension in addition to the four-dimensional space-time that we observe. The dimension must be finite rather than infinite as the other three spatial dimensions are because otherwise forces such as gravity and EM would have a falloff of 1/r^3 rather than 1/r^2.
Next, Dr. Agashe outlined the predictions of this theory of a fifth spatial dimension that we could look to verify in our data analysis. In the general energy equation for five-dimensional space-time, E^2 = |p|^2 c^2 + |p5|^2 c^2 + m^2 c^4, where p is the spatial momentum in ordinary three-dimensional space and p5 is the momentum along the fifth dimension, the extra |p5|^2 term appears in the general equation for four-dimensional space-time, E^2 = |p|^2 c^2 + m^2 c^4, as additional mass. In this way particles may have states of greater mass, called Kaluza-Klein particles. The collection of the Standard Model (SM) particles and their corresponding Kaluza-Klein (KK) excitations is known as the Kaluza-Klein tower. He explained that a collision between quarks and their respective antiquarks can produce one such excitation, which then decays into the Standard Model particle corresponding to the KK excitation and a radion, the particle that describes the interaction with the fifth dimension; the radion subsequently decays into a pair of quarks independent of the original KK excitation. This process, in which a decay product is itself unstable and decays further, is an example of a cascade decay. To identify when this final state results from a KK excitation rather than Standard Model background, we will have to suppress the background by examining the frequency of the aforementioned final state as a function of the invariant mass of the radion and the invariant mass of the KK excitation. Bumps in the data at these mass values will indicate the signal that we are looking for.
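A compact way to write down how momentum along the fifth dimension shows up as extra mass (my own notation; the quantized-momentum step assumes a flat compact dimension of size R for simplicity, whereas our model's dimension is warped):

```latex
% 5D energy-momentum relation:
E^2 = |\vec p\,|^2 c^2 + p_5^2 c^2 + m^2 c^4
% Grouping the last two terms, a 4D observer sees
E^2 = |\vec p\,|^2 c^2 + m_{\text{eff}}^2 c^4,
\qquad m_{\text{eff}}^2 = m^2 + p_5^2 / c^2 .
% For a compact dimension, p_5 is quantized, p_5 = n \hbar / R,
% giving the tower of KK masses:
m_n^2 = m^2 + \frac{n^2 \hbar^2}{R^2 c^2}, \qquad n = 0, 1, 2, \dots
```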
01/30/18
Today I continued to learn the theory of the model that I will be studying. Mr. Du told us that we are doing phenomenology. Phenomenology is the bridge between theory and experiment, in that we build models that we know can be tested. He also introduced the Hierarchy Problem, which asks why gravity is so weak compared to the weak nuclear force. This is addressed by the extended Randall-Sundrum model, which proposes a UV brane, a Higgs brane, and an IR brane. A brane, short for membrane, is a concept I do not yet understand. The profile of the graviton (the force carrier of gravity) with respect to the fifth dimension, which describes the graviton's interaction along the fifth dimension from one brane to another, decays exponentially from the UV brane to the Higgs brane, while the profile of the Higgs boson does the opposite. This decay is due to the fifth dimension being a highly warped space-time, called an anti-de Sitter space.
In our research we will look at two specific signal channels, the Wγγ channel and the tri-photon channel.
We are changing our meeting times from 11am-12pm on Tuesdays to 3:30pm-4:30pm on Thursdays.
02/09/18
Mr. Du gave us an overview of the theory in collider physics. To make up for limited time to meet, he emailed us a paper to read over the week to get some grasp on partons and parton distributions. I have not understood most of it.
02/15/18
This week was exciting. I had already installed MadGraph the previous semester, so Mr. Du walked us through performing simulations of events on the cluster. I learned about generating processes, understanding the parameter and run cards, accessing the output, couplings, kinematic cuts, and the upper limits to what I can simulate until I learn to use Condor. Over this week I will learn more about the kinematic cuts and practice simulating events on my own.
02/22/18
We went over the simulations in MadGraph that we had done for practice over the past week. We installed Delphes and spent time trying, and failing, to install the newest version of Pythia on the cluster. After the meeting, we found that "install pythia-pgs" installs Pythia6 fine. Over this week I will continue to generate events and look more closely at the .lhe and .lhco files. I will also have Mathematica installed for the next meeting so that we can start analyzing results.
download/install mathematica
delta R is sqrt(delta(eta)^2 + delta(phi)^2), the angular separation between two particles in the collision
pt is transverse momentum, i.e., momentum perpendicular to the beam direction, because that is what actually gets measured
eta is pseudorapidity, a measure of angle from the beam axis
larger |eta| means more momentum along the z direction, which is the beam direction
.lhe: each event lists its particles with their 4-momenta
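The variables above can be sketched in a few lines of Python (my own illustration, not the exact Delphes definitions; I wrap delta(phi) into [-pi, pi]):

```python
import math

def kinematics(px, py, pz):
    """Collider kinematic variables from a 3-momentum.
    pt:  momentum transverse to the beam (z) axis
    eta: pseudorapidity; large |eta| means close to the beam direction
    phi: azimuthal angle around the beam axis"""
    pt = math.hypot(px, py)
    p = math.sqrt(px ** 2 + py ** 2 + pz ** 2)
    theta = math.acos(pz / p)            # polar angle from the beam axis
    eta = -math.log(math.tan(theta / 2))
    phi = math.atan2(py, px)
    return pt, eta, phi

def delta_r(eta1, phi1, eta2, phi2):
    """delta R = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)
```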
Today:
MadGraph gives parton-level events; next comes the hadronization/showering of the quarks and gluons. We will never see free quarks and gluons because QCD (Quantum Chromodynamics), the strong interaction between quarks, binds them together into color-neutral hadrons.
Pythia simulates the hadronization/showers.
A jet is a bundle of particles produced from a hadronizing parton:
quark jets come from quarks
gluon jets come from gluons
In the collider we cannot distinguish between the two.
Delphes does the detector simulation.
It gives us the 4-momenta of jets (the sum over their constituent particles), photons, and leptons, plus the missing transverse energy, MET (from neutrinos).
We assume momentum conservation in the transverse plane; then we can calculate
MET = -Σ p_T(visible)   (a vector sum over all visible objects)
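As a sketch of this relation (my own toy example; a real analysis would sum over all reconstructed objects in the .lhco file):

```python
def met(visible_pt):
    """Missing transverse momentum as minus the vector sum of the visible
    transverse momenta (px, py), assuming momentum conservation in the
    transverse plane."""
    mex = -sum(px for px, _ in visible_pt)
    mey = -sum(py for _, py in visible_pt)
    return (mex ** 2 + mey ** 2) ** 0.5, (mex, mey)
```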
to get Pythia:
install pythia8
install Delphes
for install pythia8, I was told to install [lhapdf6?] first
to copy a local file to the cluster:
scp [localfile] hon17_banerjee@hepcms.umd.edu:~
03/01/18
I simulated 10000 events of the process p p > bkk > a r , r > a a, which is the collision of quarks producing a KK excitation that decays into a photon and a radion, with the radion then decaying into two photons.
type 6 in the .lhco file is the missing energy
some events have 1, 2, or 3 jets, which are type 4
all events have missing energy
this may be because some mesons have very small energies that do not pass the detector threshold and so do not get detected
pta: minimum transverse momentum for photons
ptl: minimum transverse momentum for leptons
drjj: delta R between 2 jets; their angular separation must be more than .4
03/08/18
today we learn to separate background from signal
the notebook has the Feynman diagrams, as does index.html for the simulated processes
1. pp > aaa
2. pp > jaa (a jet faking a photon), meaning that to the detector the jet looks like a photon
Physical process: pions (pi superscript 0) inside the jet can decay to 2 photons, which merge together and are detected as one photon
this is background because it results in the same final state as the signal through a different process
cross section (denoted by sigma)
cross section (pp>jaa) >> cross section (pp>aaa)
If we consider the jet-faking-photon rate (denoted eta), then even with eta << 1, sigma(pp>jaa)*eta may still be >= sigma(pp>aaa)
so further background processes are:
3. pp > jja
4. pp > jjj
How many signal and background events do we need to simulate?
cross sections:
sg: sigma(sg)
bg:
sigma*L = num of real events
L is the LHC luminosity
simulated events: real events after cuts >= sigma(1)
BG: after cuts, Nmg = num of MadGraph events after cuts
1/sqrt(Nmg) is the statistical fluctuation
ideally: statistical fluctuation <= .1
so we want Nmg >= 100
Weights for MadGraph events:
W = sigma*L/(num of simulated MG events) <= 1
^num of simulated MG events is before kinematic cuts
Meaning:
1 MadGraph event corresponds to W real events
Having more MadGraph events per real event is better because we can resolve distributions
cross sections of the events:
sm aaa: .1pb (picobarn)
back2: 99pb
signal: .1fb (femtobarn, one thousandth of a pb)
SG: sigma(sg) = .11fb
L = 300 to 3000 /fb
real events: 33 to 330
in practice: we need 50k sg MG events
BG1: 1000 times larger
we need to cut out 1000 bg events to see a signal
BG2: we need to cut out 1 million events
It is not feasible or practical to generate so many events
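The bookkeeping above can be sketched in Python (my own helper functions; the numbers in the comment just re-use the sigma(sg) = .11 fb and L = 300/fb figures from these notes):

```python
def real_events(sigma_fb, luminosity_invfb):
    """Expected number of real events: sigma * L."""
    return sigma_fb * luminosity_invfb

def mg_weight(sigma_fb, luminosity_invfb, n_mg_simulated):
    """Weight of one MadGraph event: W = sigma*L / N_MG (N_MG before cuts).
    W <= 1 means each real event is represented by several MG events,
    which lets us resolve distributions."""
    return real_events(sigma_fb, luminosity_invfb) / n_mg_simulated

# sigma(sg) = 0.11 fb at L = 300/fb gives 33 expected real events;
# with 50k simulated events, each MG event carries weight 33/50000.
```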
Solution for getting bg closer to the sg region:
need to add kinematic cuts at the MG simulation stage
hw: graph the distributions of the kinematic variables, see where the bg and sg distributions cross, then put those numbers in the run card
then you get new cross sections for the bg
Apply the same cuts to both background and signal
Important note: there is a cut_decays flag in the run card
its default is false
we change it to true for every cut that we do
start by simulating 10k events
Next time we will learn to do 50k with Condor
03/27/18
cut notes
significance = sg/sqrt(sg+bg)
luminosity = 300/fb
sg = cross section * luminosity * cut efficiency (cut efficiency = cut events/total events)
bg = cross section * luminosity * cut efficiency (cut efficiency = cut events/total events)
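Putting the two formulas above together (a sketch in my own names; cross sections in fb, luminosity in /fb, efficiencies as fractions):

```python
import math

def yield_after_cuts(sigma_fb, luminosity_invfb, cut_efficiency):
    """Events surviving cuts: cross section * luminosity * cut efficiency."""
    return sigma_fb * luminosity_invfb * cut_efficiency

def significance(sg, bgs):
    """sg / sqrt(sg + sum of bgs), with sg and bgs already in events."""
    return sg / math.sqrt(sg + sum(bgs))
```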
03/29/18
weight for mg events:
w = (num of real events) / (num of mg events simulated) < 1
BG1: pp>aaa
real events:
04/05/18
absolute path commands:
convert root file to lhco file/
command/
Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/Delphes/root2lhco Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/mar29condor_bkk_aaa/Events/run_01/tag_1_delphes_events.root Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/mar29condor_bkk_aaa/Events/run_01/tag_1_delphes_events.lhco
/end of command
1st part is the executable, 2nd part is the root file, 3rd is the new lhco file
Should see:
** Reading Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/mar29condor_sm_jaa/Events/run_01/tag_1_delphes_events.root
** Input file contains 50000 events
** [################################################################] (100.00%)
** Exiting...
/end of convert root file to lhco file
copying from cluster(remote) to machine(local)/
scp hon17_banerjee@hepcms:Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/mar29condor_bkk_aa
a/Events/run_01/tag_1_delphes_events.lhco /home/Aranya/../../mnt/c/Users/arany/Downloads/
/end of command
Should see:
The authenticity of host 'hepcms (128.8.216.193)' can't be established.
RSA key fingerprint is 82:b3:db:a7:4d:34:77:5a:50:49:38:56:4e:3b:8e:59.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hepcms' (RSA) to the list of known hosts.
hon17_banerjee@hepcms's password:
tag_1_delphes_events.lhco 100% 21MB 1.9MB/s 00:11
/copying from cluster(remote) to machine(local)
submit condor job/
condor_submit condor_SG_jobs.jdl
/end of command
edit Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/command.cmd to edit condor job
/end of submit condor job
04/10/18
Cuts I used in apr10 file:
mmaa 500
mmjj 500
pta 50
ptj 50
04/11/18
for new models, put them under the models folder in MadGraph and unzip
04/12/18
For wkk analysis, first run was in file
apr12wkklaa
generate p p > wkk+ > w+ r , r > a a , w+ > l+ vl
add process p p > wkk- > w- r , r > a a , w- > l- vl~
10000 events
cross section: 7.762e-05 ± 6.081e-08 (pb)
04/17/18
In the Wγγ channel, a quark and an antiquark collide to produce a charged (+/-) KK W boson, which decays into a W boson, which in turn decays into a lepton and its neutrino, and a radion, which decays into two photons. The final state seen at the detector is 2 photons, a lepton, and missing transverse energy.
generating processes for the Wγγ channel:
Signal Process:
import model Radion_EW
/*unchanged code*/
generate p p > wkk+ > w+ r , r > a a , w+ > l+ vl
add process p p > wkk- > w- r , r > a a , w- > l- vl~
Background Process 1:
import model sm
/*unchanged code*/
generate p p > w+ a a, w+ > l+ vl
add process p p > w- a a, w- > l- vl~
Background Process 2:
import model sm
/*unchanged code*/
generate p p > w+ j a, w+ > l+ vl
add process p p > w- j a, w- > l- vl~
Tri-Photon Channel Processes:
Signal Process:
import model Radion_BKK
/*unchanged code*/
p p > bkk > a r , r > a a
Background Process 1:
import model sm
/*unchanged code*/
p p > a a a
Background Process 2:
import model sm
/*unchanged code*/
p p > j a a
Background Process 3:
import model sm
/*unchanged code*/
p p > j j a
04/18/18
generating 4 runs of 50000 events for each process
identical cuts for all of them for now (just seemed fine this way for now)
mmaa 500, pta 40, ptj 40 at generation level
may 18 bkk analysis
bkk
aaa
jaa
jja
may 18 ew analysis
ew
laa
lja
may 18 faking analysis
sm aaa
sm jaa
sm jja
04/26/18
I learned (very late) that vector boson, gauge boson, and spin-1 particle all mean the same thing.
I resolved problems with doing multiple runs in Condor. Delphes was not being turned on by the numerical commands that work on the first run. Instead, the command.cmd file must contain detector=Delphes to generate root files for each run.
I am preparing to do a large-scale analysis of both channels that I've looked at over this semester. I will be using cuts according to the distributions and significance measures that I obtained in past analyses.
04/28/18
Analyzing the triphoton channel, w gamma gamma channel, and jet-faking-photon rate with 3 processes for each analysis:
triphoton
1. p p > bkk > a r , r > a a
2. p p > a a a
3. p p > j a a
w gamma gamma
1. p p > wkk+ > w+ r , r > a a , w+ > l+ vl with p p > wkk- > w- r , r > a a , w- > l- vl~
2. p p > w+ a a, w+ > l+ vl with p p > w- a a, w- > l- vl~
3. p p > w+ j a, w+ > l+ vl with p p > w- j a, w- > l- vl~
jet-faking-photon
1. p p > a a a
2. p p > j a a
3. p p > j j a
Cuts for each process:
triphoton
Generation level: invariant mass of photon pairs above 500
Analysis level: invariant mass of photon pairs between 900 and 1100 GeV, reconstructed mass above 2500 GeV
w gamma gamma
Generation level: invariant mass of photon pairs between 500 and 1500 GeV
Analysis level: invariant mass of photon pairs between 900 and 1100 GeV, invariant mass for all photons above 2800 GeV
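The analysis-level mass-window cut can be sketched like this (my own helper functions, assuming (E, px, py, pz) four-vectors in GeV as they appear in the event files; the 900-1100 GeV defaults are the window from these notes):

```python
import math

def invariant_mass(*four_vectors):
    """Invariant mass of a set of (E, px, py, pz) four-vectors."""
    e = sum(v[0] for v in four_vectors)
    px = sum(v[1] for v in four_vectors)
    py = sum(v[2] for v in four_vectors)
    pz = sum(v[3] for v in four_vectors)
    # clamp at 0 to guard against tiny negative values from rounding
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def in_radion_window(m_aa, lo=900.0, hi=1100.0):
    """Keep photon pairs whose invariant mass falls in the cut window."""
    return lo <= m_aa <= hi
```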
Did 20 runs of 50000 events for each process in both channels, and 10 runs of 50000 events for each process in the jet-faking-photon analysis
Template condor file for 10 runs of 50000 events:
import model sm
# Define multiparticle labels
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
# Specify process(es) to run
generate p p > j a a
output apr28jaa2
launch
1
2
2
# setting run card parameters
set nevents 50000
set ebeam1 7000
set ebeam2 7000
#remember cutdecays is true when cutting
set cut_decays True
set mmaa 500
0
# run02
launch apr28jaa2
detector=Delphes
0
0
# run03
launch apr28jaa2
detector=Delphes
0
0
# run04
launch apr28jaa2
detector=Delphes
0
0
# run05
launch apr28jaa2
detector=Delphes
0
0
# run06
launch apr28jaa2
detector=Delphes
0
0
# run07
launch apr28jaa2
detector=Delphes
0
0
# run08
launch apr28jaa2
detector=Delphes
0
0
# run09
launch apr28jaa2
detector=Delphes
0
0
# run10
launch apr28jaa2
detector=Delphes
0
0
04/29/18
Several parts of this process are very slow, so I developed scripts to automate them; they were taking so much time that the investment was worth it. Scripts for converting root files to lhco files for all runs in a folder, and for copying lhco files to the local machine, are at the bottom, adapted to my own settings. The Mathematica code needs to be rewritten: it is the bottleneck of my analysis, where I must wait for long durations for the analysis to complete. I also continued using the code I developed for calculating weights, real events, cut efficiencies, and significance.
MADGRAPH NOTES
GETTING TO MADGRAPH
cd __CMSSW object with madgraph in it
cd Phys268n/CMSSW_8_0_22/src
cmsenv
^ In above, cmsenv sets up the environment
cd MADGRAPH/MG5_aMC_v2_6_0
./bin/mg5_aMC
^In above, ./ runs the executable
RUNNING MADGRAPH
Model
sm (standard model) is for backgrounds
in madgraph, we do >> import model xx
^ In above, xx is the model name (like sm)
Radion_EW.. (our signal)
If you add new model files, put them in Madgraph/models/
particle content
Defined multiparticle p = g u c d s u~ c~ d~ s~
Defined multiparticle j = g u c d s u~ c~ d~ s~
Defined multiparticle l+ = e+ mu+
Defined multiparticle l- = e- mu-
^ In above, j is a jet: a collection of the light quarks and the gluon (u,d,c,s,g); p is the proton, also defined with all the light quarks; l is a lepton. ~ means antiparticle. The gluon is its own antiparticle. g u c d s u~ c~ d~ s~ are all partons. They are distributed according to different parton distribution functions.
generate process:
>generate p p > z > e+ e-
^ Above generates a process
output process:
>output [filename]
run the process
>> launch [filename]
several options, such as:
pythia
delphes
We type 0 and enter it
we then see 2 cards
1 to get to the parameter card
the parameter card has all the properties of the particles: masses and couplings. Block mass is the mass of each particle. MT is the mass of the top quark. All numbers are in GeV.
It then shows 0 mass for the up, down, charm, and other light quarks. They are 0 in here because they are almost massless relative to the higher energies involved.
Block sminputs holds the couplings:
aEWM1 is the electromagnetic coupling
the Fermi constant
the strong coupling
Some of the above values are apparently inverses: 1/value (aEWM1 is 1/alpha_EW)
INFORMATION FOR DECAY
decay width is how fast a particle will decay
WZ is the decay width of the Z boson
For new models, if you want to test a new parameter space, you change the masses and decay widths (we change nothing for sm)
To exit a file from the vi editor:
:q quits
:q! force quits if you changed something but do not want to save it
:wq writes and quits
i enters insert mode
esc leaves insert mode
2 to get to run_card.dat
[num of simulated events] = nevents
can't run 1M+ events because the file would exceed 1GB, so make multiple files instead
We want to use 50,000 for a reasonable signal analysis
1000 or some other smaller number is good for quick testing
Don't simulate more than 10,000 events at a time right now because we only have 1 or 2 nodes in the cluster for our use. We will learn to use Condor in the future, which will allow more nodes and will keep running even when we log off.
The 0 = n setting means no parton distribution function, meaning an equal distribution of the different partons
1 beam is 6500 GeV.
The two beams together give the 13000 GeV (13 TeV) we normally see at the LHC. We can simulate higher energies.
Kinematic cuts
under the title Parton level cuts definition
False = cut_decays
means no kinematic cuts are applied to the decay products
For our signal we will always set True = cut_decays
Under the title Standard Cuts
there are the standard cuts; the pt cuts are on transverse momentum
We use cuts to get rid of a lot of the background
It is called a kinematic cut because we cut events based on their kinematics
We will have stronger cuts than what is in place by default for the collider simulation.
An important question we will look at is how to make cuts
Homework is to look at the following cuts:
min ptj pt1
max __ __
ej (energy)
rapidity
dr (delta r)
cc
exit from madgraph after running the simulation
firefox [HTML file]
I will get a webpage with info
Besides that, I can cd to Events
cd to the run directory
gunzip the unweighted events file
I will get the unweighted events.
vi the unweighted events file
scroll all the way down to the events
homework is to check the physical meaning of each column in the .lhe
first col is the name of the particles
sec
thir
the first four large numbers are the four-momentum
the fifth large number is the mass
Homework is to generate another process with pp > e+ e- (not specifying z in between, which also includes pp via a photon) and then check the Feynman diagrams
The Feynman diagrams are on the HTML webpage
Next time we will be sent a madgraph file to work with
LHE AND LHCO INSTRUCTIONS
first go to
cd [eventoutputfilename]/Events/run_01/
to unzip MADGRAPH lhe file,
gunzip unweighted_events.lhe.gz
to unzip the pythia lhe file,
gunzip tag_1_pythia_events.lhe.gz
then
cd
in the Delphes folder, execute
./root2lhco ../[processname]/Events/run_01/tag_1_delphes_events.root ../[processname]/Events/run_01/tag_1_delphes_events.lhco
PRINTING CONDOR RUNS
vi Phys268n/condor/MG5test_286304_0.stdout
where 286304_0 above should be replaced with run number
AUTOMATED SECURE COPY LHCO FROM REMOTE TO LOCAL
bash script starts/
rm -r /home/Aranya/../../mnt/c/Users/arany/OneDrive/College/umd/fresh/sem2/269/$1
mkdir /home/Aranya/../../mnt/c/Users/arany/OneDrive/College/umd/fresh/sem2/269/$1
for i in 0{1..9} 10 ; do
scp hon17_banerjee@hepcms:Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/$1/Events/run_$i/tag_1_delphes_events.lhco /home/Aranya/../../mnt/c/Users/arany/OneDriv\
e/College/umd/fresh/sem2/269/$1/$1_$i
done
/bash script ends
AUTOMATED ROOT TO LHCO CONVERSION
tcsh script begins/
#!/usr/bin/env tcsh
foreach run (*Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/$1/Events/run*)
echo $run
Phys268n/CMSSW_8_0_22/src/MADGRAPH/MG5_aMC_v2_6_0/Delphes/root2lhco $run/tag_1_delphes_events.root $run/tag_1_delphes_events.lhco
end
/tcsh script ends
CALCULATION OF SIGNIFICANCE
c file begins/
#include <stdio.h>
#include <math.h>
#define NUM_BG 2 /*num background events to calculate cut efficiency for*/
#define LUMINOSITY 300 /*in fb*/
#define MADGRAPH_EVENTS 50000 /*number of events simulated in madgraph*/
/*ORDER IN ARRAYS: sg,aaa,jaa*/
int main() {
short i = 0;
double significance,sg_and_bgs[NUM_BG+1],cut_efficiency[1+NUM_BG], num_real_events[1+NUM_BG];
double cross_section[1+NUM_BG] = {0.0001365,1.362*.00001,0.09078}; /*in pb from MADGRAPH, will convert to fb next*/
double cut_events[1+NUM_BG] = {36754,903,0}; /*num events after cut flow in Mathematica*/
for(; i < NUM_BG + 1; i++){
cross_section[i] *= 1000; /*convert cross section from pb to fb*/
num_real_events[i] = cross_section[i] * LUMINOSITY;
printf("%d: num real events is %f\n", i, num_real_events[i]);
printf("weight for mg events is %f\n", num_real_events[i] / MADGRAPH_EVENTS);
cut_efficiency[i] = cut_events[i] / MADGRAPH_EVENTS; /*efficiency = cut events (real events) / simulated events*/
sg_and_bgs[i] = cross_section[i] * LUMINOSITY * cut_efficiency[i]; /*cross-section*luminosity*cut_efficiency*/
printf("sg_and_bgs[%d]: %f\n",i, sg_and_bgs[i]);
}
/*significance = sg_and_bgs[0] / sqrt(sg_and_bgs[0] + sg_and_bgs[1] + sg_and_bgs[2]);
printf("sig is %f\n", significance);*/
return 0;
}
/c file ends
RS MODEL DIAGRAMS
FEYNMAN DIAGRAMS FOR THE CHANNELS I WORKED ON
DISTRIBUTIONS OF INVARIANCES I CUT ALONG (FROM 04/28/18 POST)
blue is always signal, green is always background, orange is always same background with one jet-faking-photon
RESULTS AT THE END OF ALL THIS (FROM 04/28/18 POST)
__________________________________________________________________
05/22/18
I want to try my hand at rewriting the Mathematica code. It was a bottleneck in the research process because it limited my analysis time to whatever local machine I had Mathematica on. I want to at least implement the analysis for the triphoton channel in python so that the code can be run on the cluster. I will be using cut parameters similar to those from my 04/28/18 runs. I will only work with 1 run of 500 events for each process and compare results from my python code and the original Mathematica code for reference. Attached to this page are the condor command files for each process. I had to attach them with txt extensions because Google Pages would not allow the cmd extension.