Linden Yuan's Journal
Main
Group: Aranya Banerjee, me, Abhish Dev (grad student)
Supervisors: Prof. Kaustabh Agashe (taller/skinnier than his online portraits suggest), Mr. Peizhi Du (pinyin: Du4 Pei4zhi1)
268N-2016 People: Austin Antonacci, David Creegan, John Martyn, John ("Jack") Nolan
268N-2016 Project: Phenomenological Study of an Extension of the Randall-Sundrum Model (our project has the same scope; however, our specific model will be slightly different)
268N-2015 People: Vineet Pande, Phillip Shulman
268N-2015 Project: Top Quark Properties at LHC
Aranya and I met with Prof. Agashe and Mr. Du on Monday (Jan. 29) and with Mr. Du again on Tuesday (Jan. 30). On Monday, Prof. Agashe gave an overview of the project. He started with a review of standard model stuff.
Review of Standard Model Stuff
- "Matter particle" = quark or lepton
- Matter particles have spin 1/2
- He drew a generic Feynman diagram to explain how matter particles interact via force carriers (TO-DO: allow showing/hiding images using HTML).
- Force carriers have spin 1; "...gauge bosons"
- Forces
- Electromagnetic -- acts on: quarks, leptons; carried by: photons;
- Weak nuclear -- acts on: quarks, leptons; carried by: W/Z gauge bosons;
- Strong nuclear -- acts on: quarks; [binds quarks into protons and neutrons]; carried by: gluons
- The Higgs boson "gives mass" to W and Z gauge bosons and to quarks and leptons.
- The Higgs boson has spin 0, and so is called a "scalar" particle.
- Feynman diagrams and spin (TO-DO: format images):
- (dashed line) = spin-0
- (wavy line) = spin-1
- (straight line) = spin-1/2
- This is a thing: Higgs (h) decays to two V's, for V=photon or W boson or Z boson. (Recall: h->ZZ->4l was the "golden channel" we talked about in the first semester.)
Prof. Agashe then proceeded to talk about an extension of the Standard Model to 5 dimensions.
Standard Model in 5D
- Say the extra dimension is a spatial dimension. If it had infinite length, then gravity and the electric force would fall off as 1/r^3 (by Gauss's law, the flux spreads over a sphere whose area grows as r^{d-1} in d spatial dimensions, and here d=4); so it must have a finite length L. We will denote coordinates in 5D as follows: x_{\mu} = (the four-vector containing the three original spatial coordinates and the time coordinate), y = (a number from 0 to L representing the coordinate in the fifth dimension).
- The theory predicts new particles, called Kaluza-Klein particles. For example, for photons: it predicts that for each n=1,2,3,... there is a "Kaluza-Klein photon" with mass n/L (in natural units).
- This diagram is called a Kaluza-Klein tower. (modified from http://universe-review.ca/I15-74-KK.jpg)
- (Remark: Coincidentally, Kaluza-Klein theory was my topic for my Future of Physics project in 12th grade. I posted my final report below.)
(TO-DO: Add more details)
- Derivation of n/L quantization
- uses Schrodinger's equation for a particle in a 1D box
- background and signal
- hierarchy problem: Why is gravity so weak compared to the weak nuclear force and electromagnetic force?
- Randall-Sundrum model
- exponential suppression
- 4D branes
I skimmed through some of Griffiths's Introduction to Quantum Mechanics to understand the derivation of the n/L quantization. Griffiths has the formula "E_n=n^2\pi^2\hbar^2/(2ma^2)" (TO-DO: add MathJax support or something similar), which seems not to match the "E=n/L" that Prof. Agashe derived (since there's an n^2 instead of just an n).
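Here is my attempt at reconciling the two formulas -- a sketch assuming natural units (\hbar=c=1 at the end) and ignoring factors of \pi that depend on the boundary conditions:

\begin{align}
\text{1D box (non-relativistic):}\quad p_n &= \frac{n\pi\hbar}{a}, & E_n &= \frac{p_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2ma^2} \propto n^2,\\
\text{5D massless field:}\quad p_y &= \frac{n\pi}{L}, & E^2 &= |\vec{p}\,|^2 + p_y^2 \equiv |\vec{p}\,|^2 + m_n^2 \;\Rightarrow\; m_n = \frac{n\pi}{L} \sim \frac{n}{L} \propto n.
\end{align}

Both setups quantize the momentum in the finite direction the same way; the n^2 appears because the non-relativistic energy is quadratic in the momentum, while the Kaluza-Klein mass is linear in it.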
Peizhi said the n^2-vs-n discrepancy indeed comes from a difference in the problem setups.
We met on Tuesday and he gave us a lecture on special relativity. The symmetries of \mathbb{R}^3 are rotations, translations and reflections (a "symmetry" is defined as a function from \mathbb{R}^3 to \mathbb{R}^3 that preserves distances; that is, for any p,q\in\mathbb{R}^3, the distance from the image of p to the image of q is the same as the distance from p to q (modified from Gallian, 2006, pp. 35--36)). He said the invariants under these symmetries include norms and, more generally, dot products. I thought, "If you translate a vector..." (I never finished writing that thought down). He wrote the equations "X_{\mu}=\eta_{\mu\nu}X^{\nu}" and "\eta_{\mu\nu}=\text{diag}(1,-1,-1,-1)", where \text{diag}(a,b,c,d) denotes the 4x4 diagonal matrix with entries a,b,c,d in the first, second, third and fourth diagonal positions. He wanted to talk about some collider material as well, but we ran out of time; he told us to read Secs. 1--3 of Perelstein (2010) and try to get a general understanding of partons and parton distribution functions.
Every time:
cd CMSSW_8_0_22
cmsenv
cd src/MADGRAPH/MG5_aMC_v2_6_0
./bin/mg5_aMC
Models: sm = the Standard Model, used for our background; Radion_EW = our signal model, which Peizhi will send to us and which we put in the models folder. In MadGraph: import model xx (xx = sm, Radion_EW, ...).
Useful code:
generate p p > z > e+ e- (you need spaces between everything) (generates process);
output xxx (outputs process, xxx = file name)
launch xxx
Inside param_card.dat (vi commands):
use :q to exit (needs to be lowercase; :Q doesn't work)
use :q! to force quit (don't save)
use :wq to write and quit
use i to enter insert mode (no colon)
use [Esc] to escape from insert mode (escape key)
Inside run_card.dat: parameters for collider
nevents
max = 1000000 (after 1000000 the file size gets too big (>= ~10 GB) to store);
avoid more than 10000 simulations without Condor;
use at least 50000 to get a reasonable signal for analysis
lpp1
option "No PDF" gives uniform distribution
Our HW:
1. Look these up: ptj, ptl, ej, el, rapidity, DeltaR.
2. Find out the physical meaning of each particle in ___.lhe.
3. Try generate p p > e+ e-, save the output and look at the Feynman diagram that MADGRAPH creates for the process.
In our Feb. 22 meeting, we first went over the homework. It turns out DeltaR is _not_ the "radius of separation" (I don't remember where I got that from). DeltaR is the same quantity defined in Eno and Jabeen's Section 7.8, in their description of the anti-k_T algorithm: DeltaR=sqrt((DeltaEta)^2+(DeltaPhi)^2). Peizhi mentioned something about pseudorapidity being related to the hyperbolic sine and hyperbolic cosine, but he didn't remember the precise relation. He did say the following: "E^2-p_X^2=m^2, E=m*cosh(eta), p=m*sinh(eta)". He talked about Madgraph, Pythia and Delphes and how we would use them. Apparently, Madgraph handles "parton level events", Pythia the "hadronization/shower for q,g" and Delphes the "detector simulation". We then tried installing ExRoot (upon which Pythia depends), Pythia and Delphes. We had trouble installing Pythia 8, so we're using Pythia 6 (which we installed using "install pythia-pgs"). (QUESTION TO SELF: What does the "pgs" mean?)
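For my notes, here are the standard definitions as I later understood them (textbook collider kinematics; so the m in Peizhi's formulas is really the transverse mass m_T, and his eta is really the rapidity y, though y = \eta for massless particles):

\begin{align}
y &= \tfrac{1}{2}\ln\frac{E+p_z}{E-p_z}, & \eta &= -\ln\tan\frac{\theta}{2},\\
E &= m_T\cosh y, & p_z &= m_T\sinh y, \quad m_T = \sqrt{m^2+p_T^2},\\
\Delta R &= \sqrt{(\Delta\eta)^2+(\Delta\phi)^2}. &&
\end{align}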
Our HW:
1. (Assigned by myself) Make the description of the LHE columns (HW 2) from last time presentable; Peizhi wanted us to talk about the column meanings, and although I knew them, I didn't know them well and it was hard to show him what I knew.
2. Read a paper that he will send us.
3. Five parts (copy-pasted from e-mail):
1) Generate p p > z > j j process in Madgraph.
2) Launch it and choose shower=pythia6 and detector=delphes. Modify run_card.dat to generate 1000 events.
3) After the run finishes, go to the folder <name of the process>/Events/run_01/ and you should see three files -- unweighted_events.lhe.gz, tag_1_pythia_events.lhe.gz, and tag_1_delphes_events.root -- which are the simulated events from Madgraph, Pythia, and Delphes respectively.
4) You can vi or emacs the .lhe files after gunzipping them. However, you cannot do so for the .root file. What you can do is go to the /Delphes folder inside the Madgraph folder, where you should find an executable called root2lhco, which can convert a .root file to a .lhco file that can be viewed with vi or emacs. Then type the following to run root2lhco: ./root2lhco <name of the process>/Events/run_01/tag_1_delphes_events.root <name of the process>/Events/run_01/tag_1_delphes_events.lhco
5) vi or emacs all three .lhe/.lhco files. Try to understand the physical meaning of each column in each file. What are the differences between these files? For example, we simulated two jets by typing p p > z > j j; how many jets did you get in the .lhco file? Why?
Partial solution to HW part 1:
Screenshot of Dercks (2018)
Hyperbolas 1 (source: McGlinn 2003)
Hyperbolas 2 (source: McGlinn 2003)
Hyperbolas 3 (source: McGlinn 2003)
Last week we received a Mathematica notebook from Mr. Du.
Our homework was:
(1) Put the attached model file on the cluster under the /models folder in Madgraph and unzip it. This is one of the models we will use to simulate signal processes for our final analysis.
(2) Simulate 10K events using this model for the tri-photon channel in Madgraph, meaning you should type
import model Radion_BKK
generate p p > bkk > a r , r > a a
And then launch it to get the .lhco file of that.
(3) Simulate 10K events using the sm model for two background channels: p p > a a a and p p > j a a. Each channel should be in a separate folder, and you should get one .lhco file for each process.
(4) Modify the Mathematica file so that it can select three photons and get the invariant mass of all three photons and also the invariant mass of each pair of two photons. Basically, you should modify the "SelectionCutdijet" function in the "Additional Functions for Data Analysis" cell. If you do that successfully, you should be able to get invariant-mass plots for all the .lhco files you simulated.
Unfortunately, I was really busy this week and did not get to do part 4. One cool thing I learned was that to clear a line in a Mac terminal you can use Ctrl+U.
In our meeting we talked about background events, signal events and cuts. For our process we have the following backgrounds from the Standard Model: 1. pp > aaa; 2. pp > jaa; 3. pp > jja; 4. pp > jjj. The first two processes have cross-sections 0.1 pb and 99 pb (using the values from the simulated data from last week's homework), and the last two have negligible cross-sections (I think; Peizhi didn't talk about these processes, so that's what I assume). Fragments from my notes on "how many signal and background events are needed?":
- "background events... ideally 1/sqrt(# MadGraph events after cuts) \lesssim 0.1"
- "weight for MadGraph events \equiv W \equiv (\sigma*L)/(# simulated MG events) \lesssim 1 ... 1 MadGraph event corresponds to W real events"
- "sigma(SG)=0.11 fb, L is 300 to 3000 fb^{-1}, so real events is about 33 to 330 ... in practice, 50,000 signal MadGraph events"
- "significance... S/sqrt(S+B) \gtrsim 3 ... excluded by 3 sigma ... S = signal events after cuts, B = background events after cuts"
- "In run_card.dat: set cut_decay to True to add cuts"
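To make sure I understand the weight bookkeeping, here is a small Mathematica sketch of the formulas above; the numbers are the round figures from the meeting, used as placeholders rather than real results:

(* weight: how many real events one simulated MadGraph event represents *)
sigmaSG = 0.11; (* signal cross-section, in fb *)
lumi = 300; (* integrated luminosity, in fb^-1 *)
nMG = 50000; (* number of simulated MadGraph events *)
weight = sigmaSG*lumi/nMG (* want this \[LessTilde] 1; here 0.00066 *)
(* significance estimate from real events after cuts: *)
significance[s_, b_] := s/Sqrt[s + b]
significance[33., 50.] (* e.g. 33 signal and 50 background real events -> about 3.6 *)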
Our homework was:
(1) Understand the Mathematica code and modify it to get the pT of the photons, and the invariant mass of the two-photon pairs and of all three photons, using the data you have simulated before.
(2) Plot the distributions of both signal and backgrounds for the above kinematic variables. According to the plots for each variable, find the region to cut that will select out signal but cut down backgrounds. You might need to use the function "AllCutPGS" I defined in the "Additional functions for data analysis" cell. You need to understand how it works and how to use it. After you try adding cuts for several variables, what is the S/\sqrt{S+B} you get?
(3) If the S/\sqrt{S+B} you get from step (2) is too small, we need to modify the cuts in run_card.dat and re-simulate events. Then will you get a larger S/\sqrt{S+B}? How many events do we need to simulate then?
Unfortunately I did not get to finish part 4 of last week's homework and so didn't do this week's.
Due to Spring Break I chose not to post an update for the week of Mar. 15--Mar. 21. In our meeting on Mar. 15, Peizhi helped us modify the Mathematica code, basically showing us how to do part (4) of the homework from Mar. 1 to Mar. 7. A tip: in Mathematica, variables are color-coded: local variables are green, undefined variables blue, defined variables black.
In last time's homework, part (1) was basically the same as part (4) of the homework from Mar. 1 to Mar. 7. For part (2), I initially used one formula to calculate my sample statistical significance as approximately 84.2; I just took S = CountEvent["1_PGS_SG_info_Presel.txt"] and B = Length[PPAAASMLHCO] + Length[PPJAASMLHCO] and plugged the values into S/\sqrt{S+B}. Apparently statistical significance is defined as follows: say $b\in\mathbb{N}$ and $N\sim\text{Pois}(b)$, and say you observe $n_0$ events, for some natural number $n_0>b$. The *statistical significance* of this observation is denoted $s$ and defined by $s=\Phi^{-1}(P(N<n_0))$, where $\Phi$ is the standard normal CDF. A natural question is whether this quantity is always nonnegative, since to me the term "statistical significance" should signify something (alliteration unintentional) nonnegative. It is, since for any Poisson distribution with parameter $\lambda$ the median $m$ satisfies $\lambda-\ln 2\leq m<\lambda+1/3$ (Choi, 1994); so for $n_0>b$ we get $P(N<n_0)\geq 1/2$ and hence $s\geq 0$. IDK how to prove the median bound. However, the paper seems pretty readable. Source: http://www.phys.ufl.edu/~korytov/tmp4/lectures/note_A13_statistics.pdf. In the terminology of statistics, $S/\sqrt{S+B}$ is an estimator for the statistical significance.
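As a sanity check, here is the exact definition in Mathematica (my own sketch; note that P(N<n_0) = P(N \leq n_0 - 1) for a discrete variable):

significanceExact[b_, n0_] := InverseCDF[NormalDistribution[0, 1], CDF[PoissonDistribution[b], n0 - 1]]
N[significanceExact[10, 25]] (* e.g. observing 25 events over a background of b = 10 gives about 3.9 *)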
Apparently for S and B I should use the following quantities (source: e-mail from Mr. Du):
"I guess you might use Madgraph(MG) events to calculate significance. However, we should use real events after cuts, which is defined as cross-section*luminosity*cut efficiency. Cross-section you can get from simulation and we usually choose luminosity to be 300fb^{-1}. Cut efficiency can be obtained as (# MG events after cuts)/(total MG events you simulated). We checked last time that our signal cross-section is roughly 0.11fb, so our real signal events after cuts at most 33 events. So S/sqrt{S+B} should at most be around 6. Please check your results and calculate the S/sqrt{S+B} again from your analysis."
This gave me an S/\sqrt{S+B} of 4.88, which I thought was pretty good (we wanted roughly above 3; this quantity is almost always at most about 6, and above about 5 is very good), so I didn't do part 3.
We're still doing stuff with the triphoton channel. In our Mar. 29 meeting, Peizhi told me to print out my value of B. It was 0. That is too low, so we re-simulated. This time, to circumvent the B = 0 problem, we generated more events -- a lot more: 100000 events for each process. To do so, we had to use Condor. Peizhi gave us three files to copy for Condor usage. Here are my modified versions: condor_SG_jobs.jdl, command.cmd and condor_SG_MG5.sh.
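For reference, a minimal HTCondor submit description looks roughly like the sketch below (a generic skeleton from memory, not the actual contents of condor_SG_jobs.jdl; the file names are just examples):

# generic HTCondor submit file (sketch)
universe = vanilla
# the job script Condor runs on each worker node:
executable = condor_SG_MG5.sh
# stdout/stderr of each job, and Condor's own bookkeeping log:
output = SG_$(Process).out
error = SG_$(Process).err
log = SG.log
# submit one job; "queue 50" would submit 50 copies
queue 1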
Peizhi's e-mail from this week:
Hi all,
Let me clarify one thing that I got confused about during our last meeting. The question we asked was whether the cuts applied at generation level must be applied equally to all backgrounds and signals. Now I think the answer is the following. At generation level, it is OK to apply different cuts for different backgrounds and signals, or cuts which might perform differently for different backgrounds, as long as we apply a stronger cut at analysis level; then it will be consistent. For example, at generation level we could apply mmaa for aaa, jaa and mmjj for jja, and mmaa and mmjj could even be different. However, we need to make sure that the final cut used in the analysis is much stronger than both of them. So what Aranya did before is fine.
As we discussed last time, we should finish the following steps before the next meeting:
1) For tri-photon channel, you need to simulate jja background and apply analysis cuts to see how many left after cuts.
2) We need to start thinking about the Waa signal also. Attached is the model file for the KK W cascade decay.
2.1) Use this model to simulate the signal Waa with the W decaying to a lepton and a neutrino. To generate the signal process, you need to type:
generate p p > wkk+ > w+ r , r > a a , w+ > l+ vl
add process p p > wkk- > w- r , r > a a , w- > l- vl~
Try to understand the meaning of each line of commands above.
2.2) What are the backgrounds relevant to this signal process? One obvious background is Waa with W leptonic decay. What about other backgrounds?
Best,
Peizhi
I am confused about why we need the same cuts at the analysis level for each process. Peizhi said something about sampling and some statistics reason, but I still don't see why. I have printed out last year's group's paper and a paper by Prof. Agashe, Mr. Du and Mr. Hong. I have finished the analysis of the 100K-events data from last week. This time, I got a B of 7.6425 and an S/sqrt(S+B) of 4.25174. I will start the homework due Apr. 12; I haven't had time to yet.
Fig. 1: Why are we doing cuts: (a) at the analysis level? (b) at the generation level?
Fig. 2: Calculating B and Sqrt(S+B)
In our meeting, we talked about the W gamma gamma channel.
Fig. 1: W gamma gamma channel
At the detector, we'll see: 2 photons, one lepton and some MET. This is our "final state". The plus-minus means there are two channels, one with KKW+ and W+ and one with KKW- and W-.
Our assignment for this week:
---
Hi all,
I think you have already learned all the necessary strategies to complete a full study by now. What you need to do next is just to do the analysis more carefully, and find a better cut to maximize the signal significance. In the final presentation or paper, you should have plots for each kinematic variable that you cut on, and also the cut flow table. For example, Table 6 in the attached reference is one such cut flow table. This is the paper where we studied exactly the same channels, so you could try to read it and get some idea of how the analysis works. You should do a blind study, not just use the same cuts as in the paper, although the cuts you find might be almost the same as those presented in the paper.
---
Here's a link to the paper: https://arxiv.org/abs/1711.09920.
I don't understand what Peizhi means by "blind study"; does he mean "don't look at their cuts"? I guess I could do that by taping something over the part of the cut tables where they list actual numbers.
In our meeting, we didn't go over anything new but instead worked on the project. I have bk2 = 0. How do I fix this? Last time, we simply generated more events. This time, we need to do something different -- probably change the cuts, since that's the only thing we can do. At which level should we change the cuts?
The following figure might be related -- there's a sharp cutoff.
We asked for an extension for our paper. It was due the night of May 11 (Fri). It's now due the night of May 14 (Mon). I also asked for an extension for this logbook; the previous and extended due dates are the same as for the paper.
Here is a link to our poster.
Here is a link to a draft of our paper and to the images we used in our paper.
Here is a link to my Mathematica notebook for the triphoton channel analysis.
For our paper we ended up using Aranya's data for the triphoton channel, since his S/sqrt(S+B) was higher. I just found out (today) how to add cuts at the generation level for Maaa: add cuts on Maa, e.g. at 500 GeV, to go with an analysis-level cut of 2800 GeV on Maaa; see the sketch below.
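If I remember the run_card.dat syntax correctly, each cut is one line of the form "value = name ! comment"; the generation-level diphoton cut would look roughly like this (the comment text is my paraphrase):

 500.0 = mmaa ! min invariant mass of a photon pair, in GeV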
Tidbits copied from our paper:
To summarize what we did:
In our analysis, we attempted to maximize $S/\sqrt{S+B}$ while keeping the weights of the two background processes small and their $N^{\text{cut}}_{\text{MG}}$s (numbers of simulated events remaining after cuts) large. For each process, the weight is defined by $W=\sigma L/N$, where $\sigma$ denotes the cross-section, $L$ the integrated luminosity and $N$ the number of simulated MadGraph events. Note that $\sigma L$ is the (predicted) number of real events, so $W$ represents the ratio of (predicted) real events to simulated events. As general guidelines, we tried to proceed until we got $S/\sqrt{S+B}\geq5$, $W\leq1$ and $N^{\text{cut}}_{\text{MG}}\geq100$ for both background processes. In practice, we did not get $N^{\text{cut}}_{\text{MG}}$ above 100; we only had $N^{\text{cut}}_{\text{MG},B_2}=1$ for our final analysis (where $N^{\text{cut}}_{\text{MG},B_2}$ is the value of $N^{\text{cut}}_{\text{MG}}$ for the second background process $pp>j\gamma\gamma$). As parton-level cuts we used $M_{\gamma_1\gamma_2},M_{\gamma_1\gamma_3},M_{\gamma_2\gamma_3}\geq500$ GeV. For preselection cuts, we used $p_{T_j}\geq20$ GeV and $p_{T_\gamma}\geq$ ... At the analysis level we used the cut $M_{\gamma_1\gamma_2\gamma_3}\geq2800$ GeV.
Here is my cut flow code (from Mathematica):
Table[
 (* Apply the analysis-level cuts sequentially to each sample; each AllCutPGS
    call reads the events passing the previous cut and writes those passing
    the next. There are two batches per process: SG/SG1, BK1/BK11, BK2/BK21. *)
 (*AllCutPGS["1_PGS_" <> ToString[x] <> "_info_Presel.txt",
   "1_PGS_" <> ToString[x] <> "_Ma1a2a3_Presel.txt", {2800, \[Infinity]}, x, Ma1a2a3];*)
 AllCutPGS["1_PGS_" <> ToString[x] <> "_info_Presel.txt",
  "1_PGS_" <> ToString[x] <> "_Ma1a2_Presel.txt", {900, \[Infinity]}, x, Ma1a2];
 AllCutPGS["2_PGS_" <> ToString[x] <> "_info_Ma1a2Cut.txt",
  "1_PGS_" <> ToString[x] <> "_Ma1a3_Presel.txt", {800, \[Infinity]}, x, Ma1a3];
 AllCutPGS["2_PGS_" <> ToString[x] <> "_info_Ma1a3Cut.txt",
  "1_PGS_" <> ToString[x] <> "_Ma2a3_Presel.txt", {800, \[Infinity]}, x, Ma2a3];
 AllCutPGS["2_PGS_" <> ToString[x] <> "_info_Ma2a3Cut.txt",
  "1_PGS_" <> ToString[x] <> "_Pta1_Presel.txt", {240, \[Infinity]}, x, Pta1];
 AllCutPGS["2_PGS_" <> ToString[x] <> "_info_Pta1Cut.txt",
  "1_PGS_" <> ToString[x] <> "_Pta2_Presel.txt", {200, \[Infinity]}, x, Pta2];
 AllCutPGS["2_PGS_" <> ToString[x] <> "_info_Pta2Cut.txt",
  "1_PGS_" <> ToString[x] <> "_Pta3_Presel.txt", {60, \[Infinity]}, x, Pta3],
 {x, {SG, BK1, BK2, SG1, BK11, BK21}}];

(* Real events after all cuts = cross-section * 300 fb^-1 * cut efficiency,
   where the efficiency is (MG events passing all cuts)/(total MG events). *)
s = xsec[SG]*300*(CountEvent["2_PGS_SG_info_Pta3Cut.txt"] +
     CountEvent["2_PGS_SG1_info_Pta3Cut.txt"])/
   (Length[RadionBKKLHCO] + Length[RadionBKKLHCO1]) (* s = number of signal events *)
bk1 = xsec[BK1]*300*(CountEvent["2_PGS_BK1_info_Pta3Cut.txt"] +
      CountEvent["2_PGS_BK11_info_Pta3Cut.txt"])/
    (Length[PPAAASMLHCO] + Length[PPAAASMLHCO1]);
bk2 = xsec[BK2]*300*(CountEvent["2_PGS_BK2_info_Pta3Cut.txt"] +
      CountEvent["2_PGS_BK21_info_Pta3Cut.txt"])/
    (Length[PPJAASMLHCO] + Length[PPJAASMLHCO1]);
bk = bk1 + bk2
N[s/Sqrt[s + bk]] (* the significance estimate S/Sqrt[S+B] *)

(* Cut flow: cross-section times cut efficiency for each process after each
   successive cut (100000 simulated events per process, in two batches). *)
Table[(CountEvent["1_PGS_" <> ToString[x] <> "_info_Presel.txt"] +
    CountEvent["1_PGS_" <> ToString[x] <> "1_info_Presel.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Ma1a2Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Ma1a2Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Ma1a3Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Ma1a3Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Ma2a3Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Ma2a3Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Pta1Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Pta1Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Pta2Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Pta2Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]
Table[(CountEvent["2_PGS_" <> ToString[x] <> "_info_Pta3Cut.txt"] +
    CountEvent["2_PGS_" <> ToString[x] <> "1_info_Pta3Cut.txt"])*
   xsec[x]/100000, {x, {SG, BK1, BK2}}]

(* Per-process event weights W = sigma*L/N_MG: *)
xsec[SG]*300/100000
xsec[BK1]*300/100000
N[xsec[BK2]*300/100000]
(+ some more code to show plots)
Some more things I found out:
- In the theory we're studying (Agashe et al., 2017), the graviton's wave function is real-valued and nonnegative. It is exponential, and thus so is its probability density function.
- The extended Randall-Sundrum model improves upon the Randall-Sundrum model in that certain particles (including the Kaluza-Klein photon and W+- bosons) have lower masses (~3 TeV vs. ~10 TeV) and so are easier to detect.
- The extended Randall-Sundrum model also improves upon the Randall-Sundrum model in this way: it predicts lower rates of flavor-changing processes. Flavor-changing processes change particles from one flavor to another, e.g. from electron to muon. Current experiments have found very small rates of flavor-changing processes, and making some (I'm not clear which) particles lighter increases the predicted rates of flavor-changing processes. But the extended Randall-Sundrum model predicts that certain particles are heavier (~10 TeV vs. ~3 TeV) than the Randall-Sundrum model predicts, so it predicts lower rates of flavor-changing processes, which agrees better with our observations.
(Note for self: citation style = APA.)
Agashe, K., Du, P., Hong, S., & Sundrum, R. (2017). Flavor universal resonances and warped gravity. Journal of High Energy Physics, 2017(1), 16.
Bettini, A. (2014). Introduction to elementary particle physics. Cambridge University Press.
Choi, K. P. (1994). On the medians of gamma distributions and an equation of Ramanujan. Proceedings of the American Mathematical Society, 121(1), 245-251.
Close, F. (2004). Particle physics: A very short introduction (Vol. 109). Oxford University Press.
Dercks, D. (2018). Tools for high energy physics [PDF file]. Retrieved from https://indico.cern.ch/event/682259/contributions/2874876/attachments/1594000/2523797/Talk2.pdf
Eno, S., & Jabeen, S. (2017). Experimental particle physics. Unpublished manuscript.
Gallian, J. A. (2006). Contemporary abstract algebra (6th ed.). Houghton Mifflin.
Griffiths, D. (2005). Introduction to quantum mechanics (2nd ed.). Pearson Prentice Hall.
Griffiths, D. (2008). Introduction to elementary particles. John Wiley & Sons.
Kaplan, D. (2018, January 31). Feynman diagram representing a simple scattering of two particles [Image]. Retrieved from https://www.learner.org/courses/physics/visual/img_lrg/Feynman_diagram.jpg. Cropped by me.
McGlinn, W. D. (2002). Introduction to relativity. Johns Hopkins University Press.
Perelstein, M. (2010). Introduction to collider physics. Retrieved from https://arxiv.org/abs/1002.0274
Thomson, M. (2013). Modern particle physics. Cambridge University Press.
Yuan, L. (2017, February 22). Kaluza-Klein theory. Unpublished manuscript.
Yuan, L. (2018, January 31). Top Quark Properties at LHC by Prof. Kaustabh Agashe, Vineet Pande and Phillip Shulman [Photograph]. Poster located on window of Prof. Jabeen's office (PSC 3107) as of Jan. 30, 2018.