Top Quark Properties at LHC
The top quark is the heaviest known fundamental particle, and it is a key player in many proposed models of new physics, such as technicolor and supersymmetry. Its status as the heaviest fundamental particle also suggests that the top quark could play a special role in beyond-the-Standard-Model physics (Review of Top Quark Physics).
IceCube UMD Group
IceCube is a high-energy neutrino observatory located at the South Pole. It opens previously unexplored windows for astronomy, reaching energies up to the PeV scale. It can potentially answer questions about the origin of high-energy photons arriving from space, detect dark matter particles (such as cold dark matter candidates), and observe neutrinos at energies far beyond those produced in our particle accelerators (icecube.umd.edu).
LZ Next Generation Dark Matter Experiment
The LZ dark matter experiment combines the LUX and ZEPLIN experiments. It is between 30 and 100 times more sensitive than LUX. LZ is specifically designed to detect heavy WIMPs (Weakly Interacting Massive Particles), but it can also detect other dark matter candidates and processes (sanfordlab.org). This project makes use of data analysis techniques with Python and ROOT.
Segmented Crystal Calorimeters for Future Colliders
Crystal calorimeters are key to improving high-resolution EM calorimetry, which is necessary for detecting particles from collisions at the LHC. As we continue to push the bounds of energy ranges and collider technology, we require ever more sensitive instruments to gather data effectively. Potential new calorimeter designs suggest that a calorimeter can be optimized for the detection of both photons/electrons and hadrons, where previously a design targeted one or the other.
CMS LHC Group
This project is involved in the search for beyond-the-Standard-Model physics (related to W and Z boson physics). Various models can be explored, including "dark top", charged Higgs, F-mesons, and folded SUSY. Each analysis aims to confront theoretical work with experimental data: in essence, figuring out which BSM theories remain viable and which can be ruled out. These analyses can also simply set limits on the various theories, after which the theories can be modified and reevaluated. The specific models this project works with relate to the different modes of producing W and Z bosons at the LHC.
Working with Yuqin Wang under Dr. Kaustubh Agashe
Standard Model with Higgs - https://home.cern/science/physics/standard-model
Some Key Info:
Top Quark
Mass: 173.1 GeV
Charge: +2/3
Spin: 1/2
Bottom Quark
Mass: 4.18 GeV
Charge: -1/3
Spin: 1/2
W Boson
Mass: 80.39 GeV
Charge: ±1
Spin: 1
Leading Order Process: the process whose diagrams have the fewest intermediate particle interactions (the lowest order in the coupling)
We have a particle collision which produces some parent particle B
Parent B decays into two child particles, A and a
Child A is massive
Child a is massless
Because of the high energy of particle collisions, this is a highly relativistic situation, thus Special Relativity comes into play
Two reference frames
Rest Frame of Parent B
Lab Frame
Rest Frame of B
Children a and A are produced back-to-back (π rad apart), by conservation of momentum, since parent B is at rest
Lab Frame
Parent B is moving with a boost (speed) beta
Child a is released at an angle θ_{aβ} (the angle of a's trajectory with respect to the boost velocity β)
We can measure energy of daughter a in the lab frame
Assume that cos θ_{aβ} is distributed uniformly (flat) between -1 and +1
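These kinematics can be written out explicitly. A sketch of the standard two-body decay formulas, using my own notation: θ* is the emission angle of a in B's rest frame, which I take to be the angle whose cosine is flat:

```latex
% Rest-frame energy of the massless child a, fixed by the two masses:
E_a^{*} = \frac{m_B^{2} - m_A^{2}}{2\,m_B}
% Boost to the lab frame with speed \beta, where \gamma = 1/\sqrt{1-\beta^{2}}:
E_a = \gamma\, E_a^{*} \left( 1 + \beta \cos\theta^{*} \right)
```

For a fixed boost β, a flat cos θ* makes E_a uniformly distributed between γE_a*(1-β) and γE_a*(1+β); note that this interval always contains the rest-frame value E_a* itself.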
I spent some time making my life easier by writing some shell scripts to automate common tasks. So far these include running cmsenv inside the CMSSW directory and then changing to the madgraph directory and launching madgraph, as well as a workaround for an issue we are having with madgraph not detecting a web browser.
Startup of MADGRAPH, cmsenv:
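A minimal sketch of what such a startup wrapper can look like (the CMSSW release and MadGraph path are the ones used elsewhere in this log; everything else is an assumption, and `scramv1 runtime` is the command the cmsenv alias wraps):

```shell
#!/bin/bash
# Hypothetical wrapper: set up the CMSSW environment, then start MadGraph.
# Defined as a function so it can be sourced from ~/.bashrc on the cluster.
start_mg5() {
  cd "$HOME/CMSSW_8_0_22/src" || return 1
  eval "$(scramv1 runtime -sh)"          # this is what the cmsenv alias runs
  cd MCProduction/MG5_aMC_v2_6_0 || return 1
  ./bin/mg5_aMC "$@"                     # pass through an optional script file
}
```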
Another script I made was to automate the lhe2root.py process we used last semester:
The workaround I mentioned is for viewing the HTML output files for a given event. Since madgraph is not detecting firefox, I set up a script to compress the event's output files, and another script on my local machine to scp and extract the compressed files into a directory on my desktop so that I can open them with my local firefox:
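A minimal sketch of this compress-and-fetch workaround (all directory and host names are placeholders; the scp line is left as a comment since it needs cluster access, so a stand-in report file is created locally to demonstrate the tar steps):

```shell
# --- cluster side: bundle the event's HTML output (run name is a placeholder) ---
RUN=demo_run
mkdir -p "$RUN"
echo "<html>report</html>" > "$RUN/index.html"   # stand-in for madgraph's report
tar czf "$RUN.tar.gz" "$RUN"

# --- local side: copy the archive down, extract, and open in a local browser ---
# scp user@cluster:path/to/$RUN.tar.gz .   # network step; needs cluster access
mkdir -p reports
tar xzf "$RUN.tar.gz" -C reports
# firefox reports/$RUN/index.html &
```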
This week we have been tasked with revisiting what we did with MADGRAPH last semester, that is, to recreate the p p > z h, z > l+ l-, (h > z l+ l-, z > l+ l-) process using madgraph and then generate the correct plots.
Here are Feynman Diagrams for two processes which satisfy this process:
The left image shows the interaction beginning from an Up and an Anti-Up quark; the right, from a Down and an Anti-Down quark
Note that one of the Z's in the Higgs decay is a virtual (off-shell) Z: when you use its decay products to form the invariant mass of that "Z", you will not recover the known Z mass
This makes sense because two on-shell Z's cannot come from a Higgs decay: the sum of their masses, 2 m_Z ≈ 182.4 GeV, exceeds the Higgs mass of ~125 GeV
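In formula form, the quantity being reconstructed from each lepton pair is the invariant mass:

```latex
m_{\ell\ell}^{2} = \left( E_{1} + E_{2} \right)^{2} - \left| \vec{p}_{1} + \vec{p}_{2} \right|^{2}
```

For the on-shell Z this distribution peaks at m_Z ≈ 91.2 GeV, while the pair from the off-shell "Z" reconstructs to a value below that peak.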
I am still trying to figure out how to recreate, on my own, the plots we made last semester using the HiggsAnalysis.C file for this process. It seems the C files involved require some files from Pythia, which we have not discussed at all with Dr. Agashe, and the plots directly from madgraph's output do not show what we want them to.
This is the process we will be looking at more in depth next, and also generating the same plots as with the p p to z h event. Here are the six Feynman diagrams generated for this process: [To Be Added Later]
Using run_02 from pp_ttbar2/ with 100,000 events:
madanalysis is run independently of madgraph from within the madgraph directory: /home/abeisaw1/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0
run madanalysis with: ./madanalysis5/bin/ma5
Once madanalysis is open:
import /home/abeisaw1/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/pp_ttbar2/Events/run_01/unweighted_events.lhe
plot E(b) 250 0 300 [logX]
plot E(b~) 250 0 300 [logX]
submit
first number is # of bins
second number is xmin
third is xmax
I tried xmin values of both 0 and 1 and replicated this error each time
Yuqin also gets this error with same command sequence
We met with Dr. Jabeen to discuss this problem and try to come up with some resolution.
Looking closely at the output of the non-log scale plots, which were generated by:
import /path/to/the/lhe/file/lhefile.lhe
plot E(b) 250 0 300
plot E(b~) 250 0 300
submit
We found that in the respective output directory there is a C file generated. From madgraph directory:
cd ANALYSIS_#/Output/Histos/
In here we find selection_0.C which is responsible for creating the histogram from the datasets used in madanalysis. We find a line in the "Finalizing TCanvas" section:
canvas->SetLogx(0);
Changing the 0 to a 1 overrides the logX command in madanalysis and applies log scaling afterwards, resulting in a properly log-scaled histogram. After this line we can save the resulting histogram however we please in the "Saving the image" section.
Since this is all done outside madanalysis, we should manually run this C file using ROOT:
root -l selection_0.C
Result:
This plot is not finalized: the x range needs to be extended a bit, and we want to normalize to one so that the many datasets displayed together are easier to compare. The log scaling is apparent nonetheless.
Thank you Dr. Jabeen for this helpful workaround
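This edit can also be scripted. A hypothetical helper (the macro name and the SetLogx pattern are taken from the snippet above; the ROOT rerun is left as a comment since it needs ROOT installed, so the demo below only patches a stand-in macro file):

```shell
# Hypothetical helper: flip the SetLogx flag in a MadAnalysis-generated macro,
# then rerun it with ROOT to regenerate the image.
set_logx() {
  sed -i 's/SetLogx(0)/SetLogx(1)/' "$1"
  # root -l -b -q "$1"   # rerun in batch mode (requires ROOT)
}

# demo on a stand-in macro file:
echo 'canvas->SetLogx(0);' > selection_0.C
set_logx selection_0.C
```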
Another way to get around this issue was to install ROOT locally and copy the same selection .C files from the cluster to my local machine. From there I was able to use the ROOT GUI editor to manually add annotations to the plots, change the scales, etc.
For our final presentations, I decided to regenerate all datasets and plots to ensure everything was created the same and as intended. This was done using a script for madgraph and madanalysis:
generate p p > t t~ > w+ b w- b~
output research2021Final
launch research2021Final -n pp_630gev
set nevents 100000
set ebeam1 315
set ebeam2 315
launch -n pp_1980gev
set nevents 100000
set ebeam1 990
set ebeam2 990
launch -n pp_7tev
set nevents 100000
set ebeam1 3500
set ebeam2 3500
launch -n pp_13tev
set nevents 100000
set ebeam1 6500
set ebeam2 6500
launch -n pp_33tev
set nevents 100000
set ebeam1 16500
set ebeam2 16500
launch -n pp_100tev
set nevents 100000
set ebeam1 50000
set ebeam2 50000
launch -n ppbar_630gev
set lpp2 -1
set nevents 100000
set ebeam1 315
set ebeam2 315
launch -n ppbar_1980gev
set lpp2 -1
set nevents 100000
set ebeam1 990
set ebeam2 990
launch -n ppbar_7tev
set lpp2 -1
set nevents 100000
set ebeam1 3500
set ebeam2 3500
launch -n ppbar_13tev
set lpp2 -1
set nevents 100000
set ebeam1 6500
set ebeam2 6500
launch -n ppbar_33tev
set lpp2 -1
set nevents 100000
set ebeam1 16500
set ebeam2 16500
launch -n ppbar_100tev
set lpp2 -1
set nevents 100000
set ebeam1 50000
set ebeam2 50000
launch -n *name* creates the next run with the specified name
set lpp2 determines whether the collision is proton-proton (set lpp2 1) or proton-antiproton (set lpp2 -1). No change is needed for proton-proton, the default.
set nevents tells madgraph how large of a dataset to create, in this case we used 100,000 events each time
set ebeam1, set ebeam2 determine the energy of each beam in the collider, summing to the total collider energy in GeV
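To illustrate the ebeam bookkeeping, here is a hypothetical snippet that generates launch blocks like the ones above, with each beam set to half the total collider energy (the generated run names use GeV throughout, unlike the hand-written script, and the energies are just the ones used here):

```shell
# Hypothetical generator for MadGraph launch blocks: each beam carries half
# of the total collider energy, all in GeV.
gen_launch_blocks() {
  for total in 630 1980 7000 13000 33000 100000; do
    echo "launch -n pp_${total}gev"
    echo "set nevents 100000"
    echo "set ebeam1 $((total / 2))"
    echo "set ebeam2 $((total / 2))"
  done
}
gen_launch_blocks
```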
The MadAnalysis script is:
First for unzipping our lhe.gz files:
for run in pp_630gev pp_1980gev pp_7tev pp_13tev pp_33tev pp_100tev ppbar_630gev ppbar_1980gev ppbar_7tev ppbar_13tev ppbar_33tev ppbar_100tev; do
    gunzip -c ./research2021Final/Events/$run/unweighted_events.lhe.gz > ./research2021Final/Events/$run/unweighted_events.lhe
done
Then once in MadAnalysis, we use:
For p p
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_630gev/unweighted_events.lhe as _630GeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_1980gev/unweighted_events.lhe as _1980GeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_7tev/unweighted_events.lhe as _7TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_13tev/unweighted_events.lhe as _13TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_33tev/unweighted_events.lhe as _33TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/pp_100tev/unweighted_events.lhe as _100TeV
plot E(b) 317 0 300 [stack normalize2one]
plot PT(b) 317 0 250 [stack normalize2one]
plot E(b~) 317 0 300 [stack normalize2one]
plot PT(b~) 317 0 250 [stack normalize2one]
submit bbar_Energy_PT_Distribution
For p p~
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_630gev/unweighted_events.lhe as _630GeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_1980gev/unweighted_events.lhe as _1980GeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_7tev/unweighted_events.lhe as _7TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_13tev/unweighted_events.lhe as _13TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_33tev/unweighted_events.lhe as _33TeV
import ~/CMSSW_8_0_22/src/MCProduction/MG5_aMC_v2_6_0/research2021Final/Events/ppbar_100tev/unweighted_events.lhe as _100TeV
plot E(b) 317 0 300 [stack normalize2one]
plot PT(b) 317 0 250 [stack normalize2one]
plot E(b~) 317 0 300 [stack normalize2one]
plot PT(b~) 317 0 250 [stack normalize2one]
submit bbar_FromPPBar_Energy_PT_Distribution
And now we have generated our HTML report and selection .C files to use with ROOT.