IceCube Neutrino Observatory
References:
http://icecube.umd.edu/Home.html
http://iopscience.iop.org/article/10.1088/1475-7516/2018/01/025?pageTitle=IOPscience
http://iopscience.iop.org/article/10.3847/1538-4357/aa9d94/meta
268N-2016:
Juan Dupuy
References:
Background on neutrino astronomy:
https://arxiv.org/abs/1007.1247
https://arxiv.org/abs/1701.03731
Background on IceCube systems, etc:
https://arxiv.org/abs/1612.05093
IceCube Point source results:
Most recent IceCube point source papers are here:
http://arxiv.org/abs/1406.6757
https://arxiv.org/abs/1609.04981
It references a good methods paper that describes how things work:
http://arxiv.org/abs/0912.1572
Matthew Kirby
268N-2015:
https://sites.google.com/a/physics.umd.edu/honrxxx/logbook/268n-2015/paul-neves/icecube-research
Paul Neves
Alison Duck
Anat Berday-Sacks
1/29/18:
* Background on neutrino astronomy
https://arxiv.org/abs/1007.1247
https://arxiv.org/abs/1701.03731
* Background on IceCube systems, etc:
https://arxiv.org/abs/1612.05093
* IceCube Point source results.
Most recent IceCube point source papers are here:
http://arxiv.org/abs/1406.6757
https://arxiv.org/abs/1609.04981
It references a good methods paper that describes how things work:
http://arxiv.org/abs/0912.1572
Also, last year we had good success with students setting up virtual machines running Linux to do data analysis in. You might consider the same.
Try VirtualBox (for Mac or Windows) and install Lubuntu 16.04:
https://www.virtualbox.org/wiki/Downloads
https://help.ubuntu.com/community/Lubuntu/GetLubuntu/LTS
Sorry it took me a bit longer to get this into a form that is reasonably independent of our IceCube tool stack, but here is a Python/NumPy data file: https://www.dropbox.com/s/6iq9c9gi66szkp7/IC86_exp_all.npy?dl=0 We can talk more about tools, plots, etc. tomorrow. To open it:
import numpy as np
# Load the year of IceCube point-source events into a NumPy array
datas = np.load("./IC86_exp_all.npy")
datas
It contains: run #, event #, RA, Dec, azimuth, zenith, energy (log10), angular error, and time (fractional MJD) for a whole year of IceCube data (our point-source sample).
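For reference, a minimal sketch for checking what is actually in the array (the exact field names aren't guaranteed, so print them first):

import numpy as np

datas = np.load("./IC86_exp_all.npy")
# Show the column names of the structured array and the number of events
print(datas.dtype.names)
print("number of events:", len(datas))
# Look at the first few rows to check units (radians vs. degrees, log10 energy, MJD)
print(datas[:5])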
2/7/18:
I ran some of Paul's code in order to get a feel for the data and make sure Python and Spyder worked correctly with the code on my computer.
For reference it would be:
Paul (python file)
Paul.py (has been edited since)
Paul_1.png
Paul_2.png
Paul_3.png
Paul_4.png (histograms from running the python file)
** All of the files for the IceCube research are in the HONR 269L folder **
Plot a histogram of the events recorded at each zenith and azimuth angle
Plot a distribution of the zenith angles in this data
Change the bin number to 100
Change it from radians
A value of -1 corresponds to going straight up (if you are standing at the South Pole), i.e. coming through the Earth.
A value of +1 corresponds to coming straight down.
The difference in what we see on the two sides is because the backgrounds are different.
On the right the neutrino counts go down because the Earth blocks more.
On the left it's because we have to cut out a bunch of atmospheric muon background.
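A minimal sketch of that plot, assuming a column literally named "zenith" in radians (check datas.dtype.names for the real name):

import numpy as np
import matplotlib.pyplot as plt

datas = np.load("./IC86_exp_all.npy")
# cos(zenith) runs from -1 (up-going, through the Earth) to +1 (down-going)
cos_zen = np.cos(datas["zenith"])   # "zenith" is an assumed column name
plt.hist(cos_zen, bins=100)
plt.xlabel("cos(zenith)")
plt.ylabel("number of events")
plt.show()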
2-D Histogram Plotting
We see the same distribution as above horizontally (that's the zenith angle), but looking vertically we get a much more homogeneous distribution.
There is little difference in how many neutrinos the detector sees depending on which direction tangent to the surface at the South Pole you look, except for a few spikes on the down-going side.
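A rough sketch of the 2-D version, again assuming columns named "zenith" and "azimuth" in radians:

import numpy as np
import matplotlib.pyplot as plt

datas = np.load("./IC86_exp_all.npy")
# 2-D histogram: zenith along the horizontal axis, azimuth along the vertical axis
plt.hist2d(np.cos(datas["zenith"]), datas["azimuth"], bins=100)
plt.xlabel("cos(zenith)")
plt.ylabel("azimuth (rad)")
plt.colorbar(label="events per bin")
plt.show()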
2/12/18
(Notes mostly based on this article: http://arxiv.org/abs/1406.6757)
Ways to Analyze IceCube Data:
All-Sky Searches
Point Source
Extended/Diffuse Source
Used to identify areas to look for a point source
Searches Among List of 44 Candidate Sources
Identified areas of likely neutrino emission
Stacking Searches
Stacking
Sources of the same type may emit fluxes that are individually below the discovery potential but detectable as a class when summed up
Select sources based on:
γ-ray observations
Astrophysical models predicting neutrino emission
Expected luminosities of these sources can be utilized to weight the contribution of each source
Position relative to IceCube can also be taken into account
Examples
127 local starburst galaxies
5 nearby clusters of galaxies
10 SNRs associated with molecular clouds
233 Galaxies with super-massive black holes
Possible Weighting Topics
Brightness of the object
Observed
Actual
Distance
“Detectability”
Size in resolution of detector
1 degree of resolution for IceCube is quite large
There are only about 41,000 square degrees in the whole sky
The “Dip”
Muon interference
Energy threshold
Horizon
Different methods of analyzing data
Zenith angle vs. band
So, to explain the "Dip" in a more complete way: it is due to the different energy thresholds required when IceCube looks at the sky above the detector versus the sky seen through the Earth. There is a ton of background from atmospheric muons when IceCube is looking up, so the energy threshold for anything in the southern sky is very high, because they need to eliminate the possibility that they are seeing a muon rather than a neutrino. However, an event from the northern sky has to come through the Earth before it is detected by IceCube, so the energy threshold can be much lower, because muons cannot pass through the Earth and anything arriving from below must be a neutrino. The difference in energy thresholds causes the dip in the data at the horizon, because there is a portion of the horizon where the threshold is high but there is not actually much muon interference.
2/21/18
We looked through a bunch of catalogs.
Here is a list of them:
http://ned.ipac.caltech.edu/level5/NED1D/ned1d.html
http://ned.ipac.caltech.edu/level5/NED0D/NED.4D.html
https://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr_catalog/3FGL-table/#aitoff
https://www.cambridge.org/core/services/aop-cambridge-core/content/view/S1323358000004707
https://arxiv.org/pdf/1802.03925.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561239/
Close AGN
https://arxiv.org/abs/1411.2596
http://www.astro.gsu.edu/AGNmass/
WISE Catalog
https://arxiv.org/abs/1706.09901
-check just close objects
Catalog Search
http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-scan?submit=Select&projshort=WISE
The best one looks like the Close AGN catalog.
It was created to calculate the masses of the AGN black holes.
The WISE Catalog is really cool, but there are way too many entries to make it work.
*AGN = active galactic nuclei
3/7/18
So, I tried to make Paul's code work.
Here is Attempt #1:
That gives:
and:
Basically this means that for some reason the data isn't making it into the plot.
I am getting a graph, but there is no data in it.
Attempt #2:
This repeats over and over again.
I didn't want to sit here for over two hours, and that obviously wasn't what I was looking for.
Attempt #3:
And we are back to having graphs but with no data in them:
3/14/18
We really went in depth into what the catalog actually gives us.
Predominantly from this journal: https://arxiv.org/abs/1411.2596
Reverberation Mapping
Active galactic nuclei are generally too distant for even the largest telescopes to spatially resolve the gravitational influence of the black hole and determine its mass with current technology
Reverberation mapping measures the time delay between changes in the continuum emission
likely arising from the accretion disk
and the response to these changes in the broad emission lines
arising from the photoionized broad line region, BLR
62 AGNs
(each AGN has this information on it)
Object name
Common alternate names
Coordinates of right ascension, declination, and redshift
AGN activity classification
Hubble Space Telescope optical (medium V) image of the host galaxy
Luminosity distance and angular diameter distance, assuming a cosmology of H0 = 71 km s^-1 Mpc^-1, ΩM = 0.30, and ΩΛ = 0.70
The luminosity isn't actually what you might think it is, and it can't be used to weight the stacking: it is the quantity they used to measure the black-hole mass, and it isn't in the right range of values to be reasonable to use as a weight.
3/28/18
We need randomized data to simulate the background, so we fixed up Paul's code and used the more current data to make the randomized skymaps.
I finally got data onto one of my skymaps using this code:
This gave me:
This is really cool because there is actually data in it, but it isn't the randomized skymap that we need to make.
So I worked on this code and got some help from Jake to fix the small errors I was getting.
They were mostly because of bad formatting in some of the arrays.
Here is the finished code:
*scrambledDataSkymap.py
This is what that outputs:
This should work for now, but we might have to get a better scrambled p-value later.
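For reference, a minimal sketch of the scrambling idea (this is not the attached scrambledDataSkymap.py; the column names and NSIDE are assumptions). Each event keeps its declination but gets a random right ascension, and the scrambled events are binned into a HEALPix map:

import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

nside = 32
datas = np.load("./IC86_exp_all.npy")

# Keep each event's declination, randomize its right ascension
dec = datas["dec"]                                  # assumed column name, radians
ra_scrambled = np.random.uniform(0.0, 2.0 * np.pi, size=len(datas))

# Bin the scrambled events into a HEALPix map (theta is the colatitude)
pix = hp.ang2pix(nside, np.pi / 2.0 - dec, ra_scrambled)
skymap = np.bincount(pix, minlength=hp.nside2npix(nside))

hp.mollview(skymap, title="Scrambled event skymap")
plt.show()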
4/1/18
Skymap of the Point Sources from the AGN Catalog
Professor Blaufuss recommended that we use astropy instead of healpix.
We primarily used the tutorial from this site:
http://astropy-tutorials.readthedocs.io/en/latest/rst-tutorials/plot-catalog.html
First, I had problems creating a table from my mbh.csv file: deleting the first two rows in Excel caused a formatting problem, so the ascii.read() command outlined in the tutorial didn't work correctly.
I was able to fix this by using the mbh.csv file that Jake uploaded on our shared Google Drive folder.
*mbh.csv
We also encountered this error when we were running the code:
This happened because the format of the right ascension in our data was wrong: instead of being in degrees, it was in hours:minutes:seconds.
So, instead of using that code we used this to change the right ascension into degrees:
After we were able to change the right ascension, we encountered this error when we tried to get the declination into the same format:
The problem was that the declination was in degrees:minutes:seconds instead of hours:minutes:seconds.
So, we changed the declination code to:
Once I had the declination and the right ascension in the correct format, I tried to plot the data and got:
This was because I had incorrectly tried to edit the graphing code.
The complete correct code is:
*catalogSkymap.py
This gives you this skymap:
We checked a few of the points on the skymap and it seems to be accurate.
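For reference, a minimal sketch of the kind of conversion and plot described above (this is not the attached catalogSkymap.py, and the column names in mbh.csv are assumptions):

import matplotlib.pyplot as plt
from astropy.io import ascii
from astropy.coordinates import SkyCoord
import astropy.units as u

tbl = ascii.read("mbh.csv")   # catalog table; the column names below are assumed

# RA is in hours:minutes:seconds and Dec in degrees:minutes:seconds,
# so give SkyCoord the units explicitly and it converts everything to degrees
coords = SkyCoord(ra=tbl["RA"], dec=tbl["Dec"], unit=(u.hourangle, u.deg))

# All-sky projection: matplotlib wants radians, with RA wrapped to [-180, 180] degrees
ra_rad = coords.ra.wrap_at(180 * u.deg).radian
dec_rad = coords.dec.radian

fig = plt.figure(figsize=(8, 4.5))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra_rad, dec_rad, s=10)
ax.grid(True)
plt.show()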
4/2/18
Jake did a proof to determine the conversion of redshift to proper distance.
D = cz/H
where H is the Hubble constant, z is the redshift, and c is the speed of light.
A lot of the proof came from this wikipedia article:
https://en.wikipedia.org/wiki/Hubble%27s_law
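A quick sketch of that conversion in Python, using H = 71 km/s/Mpc (the same value as the catalog's assumed cosmology); the linear relation is only a good approximation at low redshift:

# Approximate proper distance from redshift: D = c*z/H
c = 2.998e5    # speed of light in km/s
H = 71.0       # Hubble constant in km/s/Mpc

def redshift_to_distance_mpc(z):
    """Low-redshift approximation of the distance in Mpc."""
    return c * z / H

print(redshift_to_distance_mpc(0.01))   # roughly 42 Mpc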
Dr. Blaufuss also gave us quite a bit of information in an email that he sent before going on vacation.
I’ve got the scaling from Flux to N events.
I’ve posted a bunch of stuff, including my script that calculates effective area and convolves fluxes, here:
http://icecube.umd.edu/~blaufuss/post/umd_ug_ps/
I’ve assumed the flux to be of a form F(E) = A * E**-2
(a standard spectrum assumed in many astrophysical models)
Then I’ve picked a value of 1e-8 for the normalization A. In our standard PS analysis, they use TeV instead of GeV, so there is a value of 1e-11 when comparing to our standard point source limits as shown here:
I calculated the effective areas from our MC sample, then for each bin in energy calculated the number of expected events, and then summed them to get the total # of events at each declination: 0, +/-20, +/-40, +/-60.
The effective area plots and the NEV plots (showing expected signal as a function of energy) are posted there; the total events in all energies are shown in text written on the NEV plots.
These will scale linearly with A (double A, double the expected # of events), so you can use this to convert the N_injected you need into a flux.
It’s a lot to dump on you. I would expect your limits to be ~10 times worse than the sensitivities shown on the above plot, for two reasons:
- this is 7 years of data from several selections
- we consider energy and do a more sophisticated analysis.
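A minimal sketch of the flux-to-events idea from the email (all numbers here are placeholders, not IceCube's real effective areas; the real calculation is in the script posted on that page): for an E^-2 flux, the expected number of events is the effective area times the flux, summed over energy bins and multiplied by the livetime.

import numpy as np

# Placeholder effective-area table for one declination band
e_edges = np.logspace(2, 8, 25)       # energy bin edges in GeV (100 GeV to 1e8 GeV)
a_eff_m2 = np.logspace(-3, 2, 24)     # made-up effective areas in m^2, one per bin

A = 1e-8              # flux normalization in GeV^-1 cm^-2 s^-1 for F(E) = A * E**-2
livetime_s = 3.156e7  # about one year of livetime in seconds

e_centers = np.sqrt(e_edges[:-1] * e_edges[1:])   # log-center of each energy bin
de = np.diff(e_edges)                             # bin widths in GeV
flux = A * e_centers ** -2                        # differential flux at bin centers
a_eff_cm2 = a_eff_m2 * 1e4                        # m^2 -> cm^2

# N_expected = sum over bins of A_eff * F(E) * dE * T
n_expected = np.sum(a_eff_cm2 * flux * de) * livetime_s
print("expected events:", n_expected)

# Doubling A doubles n_expected, which is how an injected number of events
# can be converted back into a flux.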
I downloaded the data on that site and did a couple of minor edits to Professor Blaufuss's code so it worked in Spyder.
*calc_effarea_colvole.py
(P.S. it's supposed to be "convolve" - I just spelled it wrong)
There are two graphical outputs that I get from running the code, plus a bunch of numerical output.
4/9/18
Theoretical Weight
These are weights based on information that is believed to affect the detected flux of particles
Currently, we are basing these on:
Distance (based on redshift)
Mass of AGN
Detector Weight
This is a well-defined weight based on two main things:
Declination of the event
Energy of the particle
The weights are based on interpretations of background data and the known effectiveness of the detector at each declination (a rough sketch of combining the weights is shown after this list)
Dec: -20 0 20 40
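A minimal sketch of how the two kinds of weights could be combined per source (the declination bands and weight values here are hypothetical placeholders, not the real detector weights):

# Theoretical weight: closer sources count more
def theory_weight(redshift):
    d = 2.998e5 * redshift / 71.0   # distance in Mpc from D = c*z/H
    return 1.0 / d ** 2             # expected flux falls off as 1/d^2

# Detector weight, hard-coded by declination band (placeholder values)
def detector_weight(dec_deg):
    bands = [(-90.0, -20.0, 0.2), (-20.0, 0.0, 0.6), (0.0, 20.0, 1.0), (20.0, 90.0, 0.8)]
    for lo, hi, w in bands:
        if lo <= dec_deg < hi:
            return w
    return bands[-1][2]   # dec = +90 falls into the last band

# Total stacking weight for one catalog source
def total_weight(redshift, dec_deg):
    return theory_weight(redshift) * detector_weight(dec_deg)

print(total_weight(0.01, 30.0))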
4/19/18
Setiawan, S., M. Ruffert, and H.-Th Janka. “Non-Stationary Hyperaccretion of Stellar-Mass Black Holes in Three Dimensions: Torus Evolution and Neutrino Emission.” Monthly Notices of the Royal Astronomical Society 352, no. 3 (August 11, 2004): 753–58. https://doi.org/10.1111/j.1365-2966.2004.07974.x.
Effect of Mass on Neutrino Emission
The AGN Black Hole Mass Database
This is what we have been using for our catalog of stacking sources
Up to now we haven’t used the Mass aspect
Should we use mass as a weight for neutrino emission?
Is there a correlation?
Simulations show that the neutrino emission and energy deposition by νν¯-annihilation increase sensitively with the disc mass, with the black hole spin in case of a disc in corotation, and in particular with the α-viscosity
Neutrino Emission
Stellar-mass black holes
Neutrino emission associated with the dynamical phase of the merging or collision of two neutron stars is powerful but too short to provide the energy for gamma-ray bursts by neutrino-antineutrino annihilation
Neutrino luminosities rise when the stars plunge into each other
A few milliseconds later the remnant of the merger collapses into a black hole
Some matter remains in a toroidal accretion disc around the BH
Neutrinos are abundantly created by weak interactions in the very dense and hot tori and a fair fraction of the gravitational binding energy of the accreted matter can be radiated away by them
The total neutrino luminosity of the torus, Lν (i.e. the sum of the luminosities of neutrinos and antineutrinos of all flavours), increases with the torus mass, with the BH spin in case of direct rotation (corotation with disc), and with the viscosity.
This is a consequence of a higher torus temperature.
4/26/18
Healpix
Each pixel is a bin (the image has an NSIDE of 2; we use an NSIDE of 32)
#pixels = 12 * NSIDE^2 = 12 * 32^2 = 12288 (for us)
Stacking Itself
So to do this, Jake had changed each catalog right ascension and declination into a pixel using HEALPix. Instead of trying to match up the right ascensions and declinations directly, we just had to find the corresponding pixel in the skymap of events. This skymap was something we had already made, so that made finding the events quite easy. I was a bit tripped up on whether this was really the skymap used in the code, but it does work.
Here was my first draft:
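This isn't the draft itself, but a minimal sketch of the pixel-lookup idea (column names, NSIDE, and the example sources are assumptions):

import numpy as np
import healpy as hp

nside = 32
datas = np.load("./IC86_exp_all.npy")

# Event skymap: number of events per HEALPix pixel (assumed columns, in radians)
event_pix = hp.ang2pix(nside, np.pi / 2.0 - datas["dec"], datas["ra"])
skymap = np.bincount(event_pix, minlength=hp.nside2npix(nside))

# Catalog sources: RA/Dec in degrees -> pixel -> events counted at that pixel
src_ra_deg = np.array([187.7, 40.7])    # hypothetical source positions
src_dec_deg = np.array([12.4, -0.2])
src_pix = hp.ang2pix(nside, np.radians(90.0 - src_dec_deg), np.radians(src_ra_deg))
print(skymap[src_pix])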
4/29/18
Then we needed to add the neighboring pixels around each point so that it takes into account more than just the neutrino flux from the single pixel (a rough sketch of the neighbor lookup is shown below).
We also changed how we weighted the data. The redshift weighting stayed the same, but for the detector weights we hard-coded each of the declinations using the values that Prof. Blaufuss gave us. That made the code run much faster, and we didn't need to worry about the code going off the rails or about importing another file.
The semi-final stacking code is attached as: agn_pval_stacking.py
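This is not the attached agn_pval_stacking.py, just a rough sketch of how the neighboring pixels can be folded in with healpy (the toy skymap, sources, and weights are placeholders):

import numpy as np
import healpy as hp

nside = 32

def stacked_counts(skymap, src_pix, weights):
    """Weighted sum of events in each source pixel plus its neighboring pixels."""
    total = 0.0
    for pix, w in zip(src_pix, weights):
        neighbors = hp.get_all_neighbours(nside, pix)   # up to 8 neighbors
        neighbors = neighbors[neighbors >= 0]           # -1 marks a missing neighbor
        total += w * (skymap[pix] + skymap[neighbors].sum())
    return total

# Toy example: a random skymap and two hypothetical weighted sources
skymap = np.random.poisson(5.0, hp.nside2npix(nside))
src_pix = np.array([100, 2000])
weights = np.array([1.0, 0.5])
print(stacked_counts(skymap, src_pix, weights))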
4/30/18
The results we are getting: