1/28/2016
Pythia
To produce the Pythia/ROOT output file, the Pythia program should be inside the CMSSW directory.
To compile and run the Pythia code:
source PYTHIA_setup.csh
make pythiaTree
./pythiaTree >& output.txt
// Repeat these commands to recompile the code each time it changes
Added a mass leaf to the branch and compiled the file.
2/2/2016
pythia.readString("NewGaugeBoson:ffbar2gmZZprime = on"); // turn on Z' production (resets the default)
pythia.readString("32:m0 = 150."); // Z' (ID 32) mass in GeV
pythia.readString("32:mMin = 90."); // lower limit of the allowed mass range
pythia.readString("32:mMax = 300."); // upper limit of the allowed mass range; ignored if mMax < mMin
pythia.readString("PhaseSpace:mHatMin = 20."); // sets the minimum invariant mass
pythia.readString("Zprime:gmZmode = 3"); // pure Z' contribution only
pythia.readString("Beams:eCM = 14000."); // sets the center-of-mass energy
pythia.readString("32:onMode = off"); // switch off all Z' decay modes...
pythia.readString("32:onIfAny = 11"); // ...then switch back on decays to electrons (ID 11)
2/7/2016
Initialized the variable mass as a float
Changed the parameters of the mass range and the median mass
Created the mass branch and leaf by declaring mass inside the t1 tree
Filled the mass variable
Histogrammed the mass
Below is the picture of the invariant mass distribution; the highest peak is at 1500.
2/9/2016
I'll be researching with the UMD HEP CMS group. While researching, I realized that this group is a huge fan of LEGOs; I found a link (on the reference page, #4) that clearly explains the different components of CMS with LEGO models. The main aim of this research is to identify the full range of particles, fields, and forces at the quantum, atomic, and subatomic scales.
Fun facts about LHC:
1. It is 27 km in circumference, located underground on the Swiss-French border.
2. Clusters of protons collide with each other at 99.9% of the speed of light.
3. The detectors, used to track and detect particles, can be as large as a five-story building.
There are two additional groups, LHCb and BaBar, engaged in experimental studies of particles containing bottom and charm quarks, including the breaking of CP invariance in the decays of these particles. A link to their websites can be found under reference (2).
According to the latest CMS article, they are preparing their instrumentation for the high-luminosity run at 13 TeV.
Higgs Boson
What is it? Why is it useful? What can we learn from it? These are some of the questions I'll try to answer in this writing (1).
The universe is described by four fundamental forces; besides gravity, these are "electromagnetism, the weak force (which regulates nuclear phenomena like fusion within stars) and the strong force (which manifests on the scale of atomic nuclei)." For the last century, physicists have been trying to unify these forces under a single theory. They proposed that there is a relationship between the weak force and the electromagnetic force, which led to a breakthrough when physicists came up with a unified symmetry theory describing the electroweak force.
The Standard Model successfully describes how particles interact, but it doesn't explain how these particles get mass. That's where Peter Higgs comes in. He proposed that there is a field with which matter interacts, and that the mass of matter depends on how strongly it interacts with this field. A particle that interacts strongly with the field has more mass than one that barely interacts. Below is the link to a YouTube video about the Higgs boson.
2/15/2016
Working on GEANT and studying about dark matter. See reference for Dr. ENO website.
2/16/2016
Notes from Dr. Eno's website and Wikipedia (which Dr. Eno referred me to).
Dark matter is a hypothetical form of matter that astronomers believe exists and accounts for about five-sixths of the matter in the universe. It interacts only through gravity and possibly the weak force. Dark matter is transparent to electromagnetic radiation: it fails to absorb or emit enough radiation to be detectable with imaging technology.
The standard model of cosmology indicates that the total mass-energy of the universe consists of 4.9% ordinary matter, 26.8% dark matter, and 68.3% dark energy. Thus dark matter constitutes 84.5% of the total matter, since 26.8 / (4.9 + 26.8) ≈ 0.845. The dark matter hypothesis plays a central role in modeling galaxy formation and evolution. All the evidence suggests that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than what is observable via electromagnetic signals.
Galaxy rotation curve:-
A plot of the orbital speeds of visible stars or gas in a galaxy versus their radial distance from that galaxy's center. Stars revolve around their galaxy's center at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in the solar system and of moons orbiting planets decline with distance.
Cosmic microwave background: the thermal radiation left over from the time of recombination in Big Bang cosmology. It is fundamental to cosmology because it is the oldest light in the universe. The CMB is well explained as radiation left over from an early stage in the development of the universe, and its discovery is considered a landmark test of the Big Bang model.
When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with a uniform glow from a white-hot fog of hydrogen plasma. As the universe expanded, both the plasma and the radiation filling it grew cooler. When the universe cooled enough, protons and electrons combined to form neutral hydrogen atoms. These atoms could no longer absorb the thermal radiation, and so the universe became transparent instead of being an opaque fog. Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards, when photons started to travel freely through space rather than constantly being scattered by electrons and protons in plasma, is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing fainter and less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term "relic radiation." The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.
Redshifts of light from Type Ia supernovae: the observation, using this light as a probe, that the universe appears to be expanding at an increasing rate rather than slowing down.
The Bullet Cluster galaxy collision: consists of two colliding clusters of galaxies. Gravitational lensing studies of the Bullet Cluster are claimed to provide the best evidence to date for the existence of dark matter.
Above are the main ways to study dark matter, and these observations provide growing evidence for it. In my project, we explore a theory that predicts a potential dark matter candidate: dark QCD. QCD (quantum chromodynamics) is the theory of the strong force, which holds quarks together in nucleons and holds nucleons together in nuclei. Quarks also feel the electromagnetic and weak forces. The strong force is mediated by the exchange of a boson called the "gluon." Quarks and gluons can be produced in proton-proton collisions at the LHC.
As the separation between two quarks increases, the energy stored in the field between them increases. When the energy is large enough, it can turn into a quark-antiquark pair. Thus, if two quarks are initially produced flying apart from each other, they will create quark-antiquark pairs and gluons; this process is called fragmentation. Eventually, the particles arrange themselves into color-neutral states. The group of pions produced by an initial quark, which tend to go in the direction of that quark, is called a jet.
2/22/2016
Left Dark matter research
************************************************** AMAV TEAM **********************************************************
3/4/2016 (how-to) software installation instructions are in the attached document
Filter noise: convert the image from RGB to HSV (hue, saturation, value), because colored objects are easier to filter in HSV. HSV is a cylindrical-coordinate representation of the points in an RGB color space: each degree of the hue angle represents a color, with 0 degrees being red, and so on.
Erode and dilate the image to clear noise and display a white mask of the target color:
erode(thresh, thresh, erodeElement);
erode(thresh, thresh, erodeElement);
dilate(thresh, thresh, dilateElement);
dilate(thresh, thresh, dilateElement);
Track objects: the dilated image is then passed to contour finding. Use the "moments" method, which takes the vector of contours as input and outputs the x,y coordinates of each object.
3/8/2016
Successfully created a C++ program that can track objects. The code, with comments, is in my GitHub account: https://github.com/saimouli/openCV_object_detect.git
3/12/2016
http://wiki.ros.org/ROS/Tutorials/NavigatingTheFilesystem
NAVIGATION IN ROS
Learning and installing ROS program.
In ROS it is tedious to use the ls and cd commands to move through or list directories, so ROS has its own commands:
> rospack find [package_name] prints the path of a ROS package. For example, rospack find roscpp returns YOUR_INSTALL_PATH/share/roscpp
> roscd [locationname[/subdir]] changes directory. roscd can also move to a subdirectory of a package or stack, e.g. roscd roscpp/cmake
> To see what is in your ROS_PACKAGE_PATH, type: $ echo $ROS_PACKAGE_PATH
> /opt/ros/indigo/base/install/share:/opt/ros/indigo/base/install/stacks is the output; ':' separates the directories
> rosls is part of the rosbash suite. It allows you to ls directly in a package by name rather than by absolute path: rosls [locationname[/subdir]]. For example, rosls roscpp_tutorials shows the list of files inside roscpp_tutorials
CREATING A ROS PACKAGE
> Using catkin packages. Change into the source directory: cd ~/catkin_ws/src
> Now use the catkin_create_pkg script to create a new package called 'beginner_tutorials' which depends on std_msgs, roscpp, and rospy: catkin_create_pkg beginner_tutorials std_msgs rospy roscpp
> Next, build the packages in the catkin workspace: $ cd ~/catkin_ws $ catkin_make
> To add the workspace to your ROS environment, source the generated setup file: . ~/catkin_ws/devel/setup.bash
> When using catkin_create_pkg earlier, a few package dependencies were provided. They can be reviewed through rospack: $ rospack depends1 beginner_tutorials
> To view indirect dependencies (dependencies of dependencies), run rospack depends1 on each direct dependency; for example, rospack depends1 rospy lists the dependencies of rospy
The package.xml file consists of:
Description tag
<description>The beginner_tutorials package</description>
Maintainer tag - input my email so that people know who to contact
<!-- One maintainer tag required, multiple allowed, one person per tag -->
<!-- Example: -->
<!-- <maintainer email="jane.doe@example.com">Jane Doe</maintainer> -->
<maintainer email="user@todo.todo">user</maintainer>
License tag
<!-- One license tag required, multiple allowed, one license per tag -->
<!-- Commonly used license strings: -->
<!-- BSD, MIT, Boost Software License, GPLv2, GPLv3, LGPLv2.1, LGPLv3 -->
<license>TODO</license>
Dependencies tag
<!-- The *_depend tags are used to specify dependencies -->
<!-- Dependencies can be catkin packages or system dependencies -->
<!-- Examples: -->
<!-- Use build_depend for packages you need at compile time: -->
<!-- <build_depend>genmsg</build_depend> -->
<!-- Use buildtool_depend for build tool packages: -->
<!-- <buildtool_depend>catkin</buildtool_depend> -->
<!-- Use run_depend for packages you need at runtime: -->
<!-- <run_depend>python-yaml</run_depend> -->
<!-- Use test_depend for packages you need only for testing: -->
<!-- <test_depend>gtest</test_depend> -->
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>rospy</build_depend>
<build_depend>std_msgs</build_depend>
The dependencies are split into build_depend, buildtool_depend, run_depend, and test_depend. For a more detailed explanation of these tags, see the documentation about catkin dependencies.
Always source /opt/ros/jade/setup.bash first.
3/21/2016
Understanding stereoscopic vision
https://developer.oculus.com/documentation/intro-vr/latest/concepts/bp_app_imaging/
Binocular vision describes the way in which we see two views of the world simultaneously—the view from each eye is slightly different and our brain combines them into a single three-dimensional stereoscopic image, an experience known as stereopsis.
3/24/2016
Calibrating the stereo camera using a chessboard
https://github.com/upperwal/opencv/blob/master/samples/cpp/stereo_calib.cpp
Found the code; need to understand it!
3/26/2016
Trying to understand Oculus coding / Unity.
Import https://developer.oculus.com/downloads/game-engines/0.1.3.0-beta/Oculus_Utilities_for_Unity_5/ into Unity.
Edit -> Project Settings -> Player -> enable VR
3/28/2016
Image dilating, eroding, and inverting; followed tutorials online.
#include "stdafx.h"
#include <cv.h>
#include <highgui.h>
int main()
{
// load and display the original image
IplImage* img = cvLoadImage("C:/MyPic.jpg");
if (!img) return -1; // bail out if the image failed to load
cvNamedWindow("MyWindow");
cvShowImage("MyWindow", img);
// invert in place and display the inverted image
cvNot(img, img);
cvNamedWindow("Inverted");
cvShowImage("Inverted", img);
cvWaitKey(0);
// cleaning up
cvDestroyWindow("MyWindow");
cvDestroyWindow("Inverted");
cvReleaseImage(&img);
return 0;
}
4/3/2016
Learning ROS, see other logbook (site) for progress
Installed openCV on ubuntu
Using openCV with cmake http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_gcc_cmake/linux_gcc_cmake.html
mkdir displayImage
cd displayImage
gedit DisplayImage.cpp
gedit CMakeLists.txt # insert the code below into the file
cmake_minimum_required(VERSION 2.8)
project( DisplayImage )
find_package( OpenCV REQUIRED )
add_executable( DisplayImage DisplayImage.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} )
cmake . # configure and generate the build files
make
./DisplayImage lena.jpg # execute the program
4/6/2016
Contacted the ZED company for help with the code through GitHub.
Working on the object detection.
Figuring out how to upload code to GitHub (for non-Linux users).
Super useful website: https://guides.github.com/introduction/getting-your-project-on-github/#pushit
Windows users: download GitHub for Desktop and make sure to log in via Options -> Settings -> Login.
Drag and drop the code, then upload it to GitHub by pushing it in.
4/7/2016
More information on HSV: http://infohost.nmt.edu/tcc/help/pubs/colortheory/web/hsv.html
The hue of a color refers to which pure color it resembles (tints, tones, and shades of red all share the same hue).
Hues are described by a number that specifies the position of the corresponding pure color on the color wheel.
The saturation of a color describes how white the color is (white has a saturation of 0 and pure red a saturation of 1).
The value of a color describes how dark the color is (0 means black).
4/10/2016
Using the OpenCV program to create a stereoscopic image from the ZED camera.
Followed tutorials, but having trouble debugging the code.
Learning CMake: SET(execName ZED_PROJECT)
The code below is needed to link the external libraries:
CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
if(COMMAND cmake_policy)
cmake_policy(SET CMP0003 OLD)
cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)
SET(EXECUTABLE_OUTPUT_PATH ".")
IF(WIN32) # Windows
SET(ZED_INCLUDE_DIRS $ENV{ZED_INCLUDE_DIRS})
if (CMAKE_CL_64) # 64 bits
SET(ZED_LIBRARIES $ENV{ZED_LIBRARIES_64})
else(CMAKE_CL_64) # 32 bits
message("32bits compilation is no more available with CUDA7.0")
endif(CMAKE_CL_64)
SET(ZED_LIBRARY_DIR $ENV{ZED_LIBRARY_DIR})
SET(OPENCV_DIR $ENV{OPENCV_DIR})
find_package(CUDA 7.0 REQUIRED)
ELSE() # Linux
find_package(ZED REQUIRED)
find_package(CUDA 6.5 REQUIRED)
ENDIF(WIN32)
find_package(OpenCV 2.4 COMPONENTS core highgui imgproc REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
link_directories(${ZED_LIBRARY_DIR})
link_directories(${OpenCV_LIBRARY_DIRS})
link_directories(${CUDA_LIBRARY_DIRS})
SET(SRC_FOLDER src)
FILE(GLOB_RECURSE SRC_FILES "${SRC_FOLDER}/*.cpp")
ADD_EXECUTABLE(${execName} ${SRC_FILES})
add_definitions(-std=c++0x) # -m64)
TARGET_LINK_LIBRARIES(${execName}
${ZED_LIBRARIES}
${OpenCV_LIBRARIES}
${CUDA_LIBRARIES} ${CUDA_npps_LIBRARY} ${CUDA_nppi_LIBRARY}
)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O3" ) # Release Perf mode
4/14/2016
Following tutorials on cmake to create linking libraries
Basic code: cmake_minimum_required (VERSION 2.6) # minimum required version
project (Tutorial)
set (Tutorial_VERSION_MAJOR 1)
set (Tutorial_VERSION_MINOR 0)
# configure a header file to pass some of the CMake settings
configure_file (
"${PROJECT_SOURCE_DIR}/TutorialConfig.h.in"
"${PROJECT_BINARY_DIR}/TutorialConfig.h"
)
include_directories("${PROJECT_BINARY_DIR}")
# add the executable
add_executable(Tutorial tutorial.cxx)
4/18/2016
Following tutorials to add lib. for linux ubuntu
cmake_minimum_required(VERSION 2.8) # define the minimum CMake version
project( DisplayImage )
find_package( OpenCV REQUIRED )
add_executable( DisplayImage DisplayImage.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} ) # connect the libraries
4/20/2016
The Canny edge detector was developed by John F. Canny in 1986. Also known to many as the optimal detector, the Canny algorithm aims to satisfy three main criteria:
Low error rate: a good detection of only existent edges.
Good localization: the distance between detected edge pixels and real edge pixels has to be minimized.
Minimal response: only one detector response per edge.
4/25/2016
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
following tutorials
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
Mat src, src_gray;
Mat dst, detected_edges;
int edgeThresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
char* window_name = "Edge Map";
/**
 * @function CannyThreshold
 * @brief Trackbar callback - Canny thresholds input with a ratio 1:3
 */
void CannyThreshold(int, void*)
{
  /// Reduce noise with a kernel 3x3
  blur( src_gray, detected_edges, Size(3,3) );
  /// Canny detector
  Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
  /// Using Canny's output as a mask, we display our result
  dst = Scalar::all(0);
  src.copyTo( dst, detected_edges);
  imshow( window_name, dst );
}
/** @function main */
int main( int argc, char** argv )
{
  /// Load an image
  src = imread( argv[1] );
  if( !src.data )
    { return -1; }
  /// Create a matrix of the same type and size as src (for dst)
  dst.create( src.size(), src.type() );
  /// Convert the image to grayscale
  cvtColor( src, src_gray, CV_BGR2GRAY );
  /// Create a window
  namedWindow( window_name, CV_WINDOW_AUTOSIZE );
  /// Create a Trackbar for user to enter threshold
  createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
  /// Show the image
  CannyThreshold(0, 0);
  /// Wait until user exit program by pressing a key
  waitKey(0);
  return 0;
}
4/30/2016
Watson Developer Cloud. Created an account in Bluemix. Learning the JavaScript language. Joined Codecademy.
5/9/2016
Completed the beginner level. Learning the basics of cloud technology.