Type Ia Supernovae (SNeIa) are among the most important astrophysical probes of cosmology. Due to empirical calibrations, it is possible to standardize their magnitudes and use them as standard candles to measure cosmic distances and infer cosmological parameters. This method was responsible for the discovery of cosmic acceleration, and SNeIa surveys are a key component of current efforts to understand the properties of our Universe.
One of the main challenges of using SNeIa is telling them apart from other types of stellar explosions without spectroscopic information, which is quite costly to obtain. Our group investigates machine learning algorithms for the photometric classification of these objects. Our goal is to apply these methods to observations from the J-PLUS, S-PLUS and J-PAS collaborations, where we are actively building the infrastructure to perform a SNIa survey.
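As a rough illustration of the photometric-classification approach, the sketch below trains a random forest on synthetic light-curve summary features. The feature names, data and classifier choice are hypothetical placeholders, not our actual pipeline:

```python
# Minimal sketch of photometric SN classification (synthetic data,
# illustrative only -- real pipelines use light-curve fits as features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: e.g. light-curve stretch, colour, peak magnitude.
X = rng.normal(size=(n, 3))
# Synthetic labels: 1 = SN Ia, 0 = other, correlated with the features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

In practice the hard part is that training sets built from spectroscopically confirmed objects are small and unrepresentative of the full photometric sample, which is precisely what motivates the research described above.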
Our group also explores alternative ways to constrain cosmology with supernovae. We are among the pioneers in the use of gravitational lensing and peculiar velocities of supernovae to directly constrain the large-scale structure of the Universe, going beyond the standard cosmic distance measurements.
After the initial phase of rapid cosmological expansion known as inflation, primordial quantum fluctuations seeded classical adiabatic fluctuations in the total matter content of the Universe. These fluctuations, initially microscopic, grew through gravitational collapse to form the intricate web of galaxies, gas, neutrinos and dark matter that constitutes the large-scale structure we observe today. Imprinted in this distribution is not only the spectrum of the primordial fluctuations, but also the size of the sound horizon in the early plasma and the impact of the cosmic expansion, controlled by the densities of the different cosmological components. Galaxy surveys are one of the main observational tools to map the matter distribution and derive constraints on all these physical processes through advanced statistical methods.
Recently, we have used publicly available data from the Baryon Oscillation Spectroscopic Survey (BOSS), one of the largest galaxy surveys to date, to constrain cosmological parameters using alternative statistical methods based on a spherical harmonic decomposition of projected radial slices of the galaxy distribution. We are also working on the J-PAS, J-PLUS and S-PLUS cosmological surveys, which will use an innovative set of narrow-band optical filters to measure the primordial sound horizon scale imprinted in the galaxy distribution.
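Schematically, the harmonic approach expands the projected galaxy overdensity in each radial slice \(i\) in spherical harmonics and measures auto- and cross-spectra between slices (a generic sketch of the technique; the exact estimator used in the BOSS analysis may differ):
\[
\delta_i(\hat n) = \sum_{\ell m} a^{(i)}_{\ell m}\, Y_{\ell m}(\hat n),
\qquad
\hat C^{(ij)}_\ell = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} a^{(i)}_{\ell m}\, a^{(j)\,*}_{\ell m},
\]
where the cross-spectra \(\hat C^{(ij)}_\ell\) between slices \(i \neq j\) retain information about the radial clustering that a single projection would wash out.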
One of the major challenges in the cosmological analysis of galaxy surveys is obtaining accurate measurements of the shift in wavelength of galaxy light caused by the cosmic expansion, commonly called redshift. Measuring galaxy redshifts from their spectra is extremely costly, dramatically reducing the number of observable galaxies. An alternative is to measure so-called photometric redshifts. Photometric surveys observe large regions of the sky through a series of filters, usually in the optical region of the spectrum. The light integrated through each filter provides a low-resolution measure of the spectral properties of each galaxy, and this information can be used to reconstruct the galaxy's redshift. Although the uncertainties are larger than in the spectroscopic case, it is possible to observe a much greater number of galaxies.
Photometric redshift estimation was one of the pioneering applications of machine learning methods in astronomy, among which neural networks are the best-known example. We have extensive experience with these techniques, having applied them to the generation of photometric redshift catalogs for the Sloan Digital Sky Survey Coadd Stripe-82 data set and investigated possible extensions such as using morphological information and sparsity techniques from the signal processing community. Combining our joint expertise in the area, we continue to explore the use of additional observables and to incorporate modern deep learning techniques, potentially improving the accuracy of cosmological measurements for the next generation of cosmological surveys.
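A minimal sketch of the idea: a small neural network learns the mapping from band magnitudes to redshift. The five-band toy photometry and the simple linear colour-redshift relation below are invented for illustration; real catalogs use survey photometry and considerably more sophisticated networks:

```python
# Toy photometric-redshift regression: map broad-band magnitudes to
# redshift with a small neural network (synthetic data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
z_true = rng.uniform(0.0, 1.5, size=n)            # synthetic redshifts
# Hypothetical magnitudes in five optical bands; colours shift with z.
bands = np.linspace(0.0, 1.0, 5)
mags = 20.0 + np.outer(z_true, bands) + rng.normal(scale=0.05, size=(n, 5))

m_train, m_test, z_train, z_test = train_test_split(mags, z_true, random_state=0)
model = make_pipeline(
    StandardScaler(),                             # scale inputs for the MLP
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(m_train, z_train)
z_pred = model.predict(m_test)
scatter = np.std(z_pred - z_test)                 # photo-z scatter estimate
```

The scatter of `z_pred - z_test` is the standard figure of merit: reducing it (and the rate of catastrophic outliers) is what the extensions mentioned above, such as morphological features, aim at.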
The discovery of the accelerated expansion of the Universe in 1998 triggered a revolution in cosmology. This acceleration is attributed to a mysterious dark energy which, in its simplest interpretation, would be a constant energy density arising from quantum vacuum fluctuations. Observations indicate that the Universe is composed of roughly 69% dark energy, 26% dark matter and 5% ordinary matter. This standard LambdaCDM model of cosmology is capable of explaining all current observations, although important tensions have recently surfaced between different observations. Moreover, the observed value of the cosmological constant is many orders of magnitude smaller than naive estimates from high-energy theory. Overall, this situation is unsatisfactory from both a theoretical and an observational perspective, and for the past two decades cosmologists have searched for a deeper understanding.
Our group has been involved in this effort since its inception, contributing influential model proposals and exploring the observational consequences of modifications of gravity for the background expansion of the Universe and for the growth rate of large-scale structure. As part of our current efforts, we are comparing the Horndeski class of modified gravity models to observations in order to constrain their allowed properties, among other efforts [incomplete section].
Most stars form in groups, within young stellar clusters and associations. From their formation and throughout their entire existence, these stellar systems constantly lose stars as several processes act to dissolve and dismantle them into the general galactic field population. However, given the right conditions, these systems can survive for billions of years, effectively acting as fossil probes of the galactic environmental conditions across time and space. Understanding the physics of the formation, evolution and dissolution of these systems is pivotal for a proper characterization of their astrophysical properties and for contextualizing their physical state within the galactic frame.
Our group has been investigating stellar systems in the Milky Way, the Magellanic Clouds and other dwarf galaxies of the Local Group, aiming to improve our understanding of these processes and to uncover the resolved star formation history and chemical evolution across these galaxies. Ultimately, these stellar systems are used to better constrain general galactic properties, which in turn are used to calibrate the cosmological distance ladder.
We have known since the 1990s that, contrary to the expectations of the Standard Model of particle physics, neutrinos must have non-zero mass, as a result of the observation of atmospheric and solar neutrino flavour oscillations. Ongoing beta-decay experiments aim to constrain the lightest neutrino mass, but there are no immediate prospects of measuring the total sum of neutrino masses to high significance with terrestrial experiments.
Cosmological observations are contributing decisively to this scenario. Neutrinos are one of the fundamental components of the primordial Universe. As the universe expands and cools, weak interactions become too slow to keep neutrinos in equilibrium: they decouple from the baryon-photon plasma and stream freely through the universe, under the sole influence of gravity. This free streaming attenuates matter fluctuations on small scales, affecting the statistical distribution of both the cosmic background radiation and of galaxies in the recent universe.
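The attenuation described above is often summarized by the standard linear-theory approximation for the suppression of the small-scale matter power spectrum, valid for small neutrino fractions:
\[
\frac{\Delta P(k)}{P(k)} \approx -8 f_\nu,
\qquad
f_\nu \equiv \frac{\Omega_\nu}{\Omega_m},
\qquad
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\ \mathrm{eV}},
\]
so that measuring the relative suppression of clustering on small scales translates directly into a bound on the sum of the neutrino masses.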
Despite large efforts, it is not yet clear what the optimal combination of data and observables would be, nor how constraining cosmological measurements can be on the mass hierarchy. Members of our group have recently investigated the ability of current large-scale galaxy surveys to set upper limits on the lightest neutrino mass. We are advancing the study of statistical and systematic challenges and making projections for the constraining power of future surveys, such as LSST and DESI, both alone and in combination with future CMB experiments.
In the early stages of the Universe after the Big Bang, matter and radiation were in thermal equilibrium as an extremely hot plasma. Initial quantum fluctuations had grown to macroscopic sizes and the competition between thermal pressure and gravitational collapse produced oscillations that rippled through the plasma. As the Universe expanded and the temperature dropped, photons and baryons decoupled, freezing these oscillations at a specific scale and the photons began to travel unimpeded throughout the Universe. These photons cool down as the Universe expands, and today we see them as an afterglow from the decoupling moment, coming to us from all points in the sky. The spectral energy distribution of these photons is the most perfect blackbody radiation known in Nature.
We observe small angular fluctuations in the temperature amplitude that preserve information from the primordial sound horizon scale and from the subsequent evolution of the Universe. Measuring these anisotropies to exquisite accuracy allows us to constrain cosmological parameters to very high precision.
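In the standard statistical description, the temperature fluctuation field is expanded in spherical harmonics and, for a statistically isotropic sky, all the information of a Gaussian field is contained in the angular power spectrum:
\[
\frac{\Delta T}{T}(\hat n) = \sum_{\ell m} a_{\ell m}\, Y_{\ell m}(\hat n),
\qquad
C_\ell = \left\langle |a_{\ell m}|^2 \right\rangle,
\]
with the average taken over the \(2\ell+1\) values of \(m\) at each multipole \(\ell\). It is by fitting the measured \(C_\ell\) that the cosmological parameters are constrained to high precision.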
In our investigations of the CMB, we have focused on two aspects. First, we have studied some of the so-called CMB anomalies, which, if confirmed, would imply that the standard model of cosmology needs to be revised. We have also studied the aberration and Doppler effects due to the peculiar motion of the solar system in the Universe. Both effects induce correlations between different harmonic coefficients. Over the last decade we carried out a number of investigations and proposed that these correlations could be observed in Planck data, allowing an independent measurement of our peculiar velocity with respect to the CMB. The Planck collaboration later performed such a measurement. We also showed that these effects produce spurious anomalies in the data if not accounted for. More recently, we conducted an independent measurement of these effects and produced the first constraints on the primordial CMB dipole.
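Schematically, to first order in the velocity \(\beta = v/c \approx 1.23 \times 10^{-3}\), both aberration and the Doppler effect couple harmonic coefficients of adjacent multipoles:
\[
\tilde a_{\ell m} \simeq a_{\ell m} + \beta \left( c^{-}_{\ell m}\, a_{\ell-1,\, m} + c^{+}_{\ell m}\, a_{\ell+1,\, m} \right),
\]
where \(\tilde a_{\ell m}\) are the observed coefficients and the coupling coefficients \(c^{\pm}_{\ell m}\) (whose detailed form depends on the effect considered) grow with \(\ell\). It is this \(\ell \leftrightarrow \ell \pm 1\) correlation, absent for a statistically isotropic sky, that allows the peculiar velocity to be extracted from the data.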