Structure & property characterization
To evaluate structural features at different length scales and to understand the resulting properties, several complementary techniques are needed. Herein we briefly list the main techniques used in zeolite science, with Wikipedia as the main reference. The objective of this page is to introduce each technique and the information it can provide. Future developments will be added progressively.
A review summarizing most of the techniques listed below is available in ChemCatChem.
Microscopy - Morphology
Electron microscopy uses a beam of electrons as a source of illumination. Thus electron microscopes use electron optics that are analogous to the glass lenses of optical light microscopes to control the electron beam, focusing it to produce magnified images or electron diffraction patterns. As the wavelength of an electron can be up to 1e5 times smaller than that of visible light, electron microscopes have a much higher resolution of about 0.1 nm, which compares to about 200 nm for light microscopes.
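The resolution advantage follows from the de Broglie wavelength of the accelerated electrons. A minimal sketch of the estimate, assuming a typical 200 kV accelerating voltage and including the relativistic correction:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(accel_voltage):
    """Relativistic de Broglie wavelength (m) of an electron
    accelerated through accel_voltage (volts)."""
    E = e * accel_voltage  # kinetic energy, J
    p = math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c**2)))  # momentum
    return h / p

lam = electron_wavelength(200e3)  # 200 kV, a common TEM setting
print(f"{lam * 1e12:.2f} pm")     # ~2.51 pm, far below visible wavelengths
```

At 200 keV the wavelength is about 2.5 pm, which is why the practical resolution (~0.1 nm) is limited by lens aberrations rather than by the wavelength itself.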
Many developments laid the groundwork of the electron optics used in microscopes. One significant step was the work of Hertz, who in 1883 built a cathode-ray tube with electrostatic and magnetic deflection, demonstrating manipulation of the direction of an electron beam. Emil Wiechert improved this control of the beam, and the oxide-coated cathodes developed by Arthur Wehnelt in 1905 produced more electrons. The electromagnetic lens was developed by Hans Busch in 1926.
To this day, the issue of who invented the transmission electron microscope is controversial. In 1928, Prof. Adolf Matthias appointed Max Knoll to lead a team of researchers to advance research on electron beams and cathode-ray oscilloscopes. The team consisted of several PhD students including Ernst Ruska. In 1931, Max Knoll and Ernst Ruska successfully generated magnified images of mesh grids placed over an anode aperture. In 1933, Ruska and Knoll built the first electron microscope that exceeded the resolution of an optical (light) microscope.
Transmission Electron Microscope (TEM) uses a high voltage electron beam to illuminate the specimen and create an image. An electron beam is produced by an electron gun, with the electrons typically having energies in the range 20 to 400 keV, focused by electromagnetic lenses, and transmitted through a thin specimen. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is then magnified by the lenses of the microscope. The spatial variation in this information may be viewed by projecting the magnified electron image onto a detector.
Scanning electron microscope (SEM) produces images by probing the specimen with a focused electron beam that is scanned across the specimen. When the electron beam interacts with the specimen, it loses energy and is scattered in different directions by a variety of mechanisms. These interactions lead to, among other events, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission or X-ray emission. All of these signals carry information about the specimen, such as its surface topography and composition. An SEM image displays the variation in the intensity of one of these signals across the scanned area.
Scanning transmission electron microscopy (STEM) combines features of both TEM and SEM by rastering a focused incident probe across a specimen, but mainly using the electrons that are transmitted through the sample.
Diffraction - Crystallinity
X-ray diffraction refers to changes in the direction of X-ray beams, called 'diffraction', due to interactions with the electrons around atoms. It occurs through elastic scattering, without change in the energy of the waves. The resulting map of the directions of the X-rays far from the sample is called a diffraction pattern. After the discovery of X-rays by Wilhelm Röntgen in 1895, single-slit experiments, analyzed by Arnold Sommerfeld using Maxwell's equations, suggested that X-rays had a wavelength of about 1 angstrom. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular, and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation. After von Laue's pioneering research, the field developed rapidly, most notably through the work of William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal.
The incoming beam causes each scatterer to re-radiate a small portion of its intensity as a spherical wave. If the scatterers are arranged symmetrically with a separation d, as in crystals, these spherical waves add constructively only in directions where their path-length difference 2d sin θ equals an integer multiple of the wavelength λ. In that case, part of the incoming beam is deflected by an angle 2θ, producing a reflection spot in the diffraction pattern. The orientation of a particular set of planes is identified by its three Miller indices (h, k, l), and their spacing by d. Each X-ray diffraction pattern represents a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. X-ray scattering is determined by the density of electrons within the crystal. Consequently, the coherent scattering detected from an atom can be accurately approximated by analyzing the collective scattering from the electrons in the system.
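The condition above is Bragg's law, nλ = 2d sin θ, which can be rearranged to predict where a reflection appears. A small sketch, assuming Cu Kα radiation and an illustrative d-spacing:

```python
import math

wavelength = 1.5406  # Cu K-alpha wavelength, angstroms
d_spacing = 3.0      # hypothetical interplanar spacing, angstroms
n = 1                # first-order reflection

# Bragg's law: n * lambda = 2 * d * sin(theta)
theta = math.degrees(math.asin(n * wavelength / (2 * d_spacing)))
two_theta = 2 * theta  # deflection angle at which the reflection appears
print(f"2theta = {two_theta:.2f} deg")  # ~29.76 deg
```

Inverting the same relation (d = nλ / 2 sin θ) is how d-spacings are read off a measured powder pattern.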
Diffraction experiments can be done with a local X-ray tube source, typically coupled with an image plate detector. However, the wavelength of the X-rays produced is limited by the availability of different anode materials. Furthermore, the intensity is limited by the power applied and cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper producing strong Kα and Kβ lines, as it can be kept cool easily due to its high thermal conductivity. The Kβ line is often suppressed with a thin nickel foil. X-rays are generally filtered using X-ray filters to a single monochromatic wavelength and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator or with an arrangement of gently curved mirrors.
Synchrotron radiation sources are some of the brightest light sources. X-ray beams are generated in synchrotrons, which accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a circular loop using magnetic fields. Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays.
Electron diffraction. Because electrons interact with matter via the Coulomb force, their scattering is 1000 or more times stronger than that of X-rays. Hence electron beams undergo strong multiple (dynamical) scattering even in relatively thin crystals (>10 nm). While there are similarities between the diffraction of X-rays and of electrons, the treatment differs: it is based on the original approach of Hans Bethe, solving the Schrödinger equation for relativistic electrons, rather than on a kinematical or Bragg's-law approach. Information about very small regions, down to single atoms, is accessible.
Neutron diffraction. Neutron diffraction is used for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction.
Le Bail refinement extracts reflection intensities from powder diffraction data, which can then be used for structure determination, and refines the unit cell; it generally provides a quick method for the latter. The intensities in powder diffraction data are complicated by overlapping diffraction peaks with similar d-spacings. For the Le Bail method, the unit cell and the approximate space group of the sample must be predetermined, because they are included as part of the fitting. The algorithm refines the unit cell, the profile parameters, and the peak intensities to match the measured powder diffraction pattern. It is not necessary to know the structure factor and the associated structural parameters, since they are not considered in this type of analysis.
Rietveld refinement uses the height, width, and position of reflections to determine many aspects of a material's structure. The method refines a theoretical line profile by non-linear least squares until it matches the measured profile, and therefore requires reasonable initial estimates of many free parameters, including the peak shape, the unit cell dimensions, and the coordinates of all atoms in the crystal structure. Other parameters, such as phase fractions, crystallite sizes, and bond lengths, can start from rough estimates and still be refined reliably. A typical diffraction pattern can be described by the positions, shapes, and intensities of multiple Bragg reflections; each of these three properties encodes information about the crystal structure, and building a model of the pattern from them is the essence of Rietveld refinement.
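The core idea, least-squares refinement of a calculated profile against the measured one, can be illustrated on a single synthetic Bragg peak. This is only a sketch using `scipy.optimize.curve_fit` and a Gaussian peak shape; real Rietveld codes refine the full pattern, use pseudo-Voigt profiles, and tie intensities to structural parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak(two_theta, height, center, fwhm):
    """Toy Gaussian peak-shape model (Rietveld codes use pseudo-Voigt)."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return height * np.exp(-((two_theta - center) ** 2) / (2 * sigma**2))

# Synthetic "measured" pattern: one peak plus counting noise
two_theta = np.linspace(20, 24, 400)
rng = np.random.default_rng(0)
observed = (gaussian_peak(two_theta, 100.0, 22.0, 0.30)
            + rng.normal(0, 1.0, two_theta.size))

# Non-linear least squares, starting from a reasonable initial guess
p0 = [80.0, 21.8, 0.5]
popt, _ = curve_fit(gaussian_peak, two_theta, observed, p0=p0)
height, center, fwhm = popt
print(f"refined center = {center:.3f} deg, FWHM = {fwhm:.3f} deg")
```

As in full Rietveld refinement, a poor starting guess (p0) can send the non-linear fit to a false minimum, which is why reasonable initial approximations are emphasized above.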
Gas sorption - Texture
The word "adsorption" was coined in 1881 by Heinrich Kayser. Adsorption is the adhesion of atoms, ions or molecules from a gas, liquid or dissolved solid to a surface. This process creates a film of the adsorbate on the surface of the adsorbent. This process differs from absorption, in which a fluid is dissolved by, or permeates a liquid or solid absorbent. While adsorption does often precede absorption, adsorption is distinctly a surface phenomenon, wherein the adsorbate does not penetrate through the material surface and into the bulk of the adsorbent. The term sorption encompasses both adsorption and absorption, and desorption is the reverse of sorption. According to IUPAC, adsorption corresponds to an increase in the concentration of a dissolved substance at the interface of a condensed and a liquid phase due to the operation of surface forces. Adsorption can also occur at the interface of a condensed and a gaseous phase.
Like surface tension, adsorption is a consequence of surface energy. In a bulk material, all the bonding requirements of the constituent atoms of the material are fulfilled by other atoms in the material. However, atoms on the surface of the adsorbent are not wholly surrounded by other adsorbent atoms and therefore can attract adsorbates. The exact nature of the bonding depends on the details of the species involved, but the adsorption process is generally classified as physisorption (characteristic of weak van der Waals forces) or chemisorption (characteristic of covalent bonding). It may also occur due to electrostatic attraction.
The adsorption of gases and solutes is usually described through isotherms, that is, the amount of adsorbate on the adsorbent as a function of its pressure (if gas) or concentration (for liquid phase solutes) at constant temperature. The quantity adsorbed is nearly always normalized by the mass of the adsorbent to allow comparison of different materials. To date, 15 different isotherm models have been developed.
The first mathematical fit to an isotherm was published by Freundlich and Kuster (1906). It was a purely empirical formula for gaseous adsorbates. Irving Langmuir was the first to derive a scientifically based adsorption isotherm in 1918. The model applies to gases adsorbed on solid surfaces. It is a semi-empirical isotherm with a kinetic basis, and it can also be derived from statistical thermodynamics. It is the most commonly used isotherm equation due to its simplicity and its ability to fit a variety of adsorption data. It is based on four assumptions: (i) all adsorption sites are equivalent, and each site can accommodate only one molecule; (ii) the surface is energetically homogeneous, adsorbed molecules do not interact, and there are no phase transitions; (iii) at maximum adsorption, only a monolayer is formed; and (iv) adsorption occurs only at localized sites on the surface, not on other adsorbates.
These four assumptions are seldom all true: there are always imperfections on the surface, adsorbed molecules are not necessarily inert, and the mechanism is clearly not the same for the first molecules to adsorb to a surface as for the last. The fourth condition is the most troublesome, as frequently more molecules will adsorb to the monolayer; this problem is addressed by the BET isotherm for relatively flat (non-microporous) surfaces. The Langmuir isotherm is nonetheless the first choice for most models of adsorption and has many applications in surface kinetics (usually called Langmuir–Hinshelwood kinetics) and thermodynamics.
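Under those assumptions the Langmuir isotherm takes the form θ = Kp / (1 + Kp), where θ is the fractional coverage and K the adsorption equilibrium constant. A minimal sketch, with an illustrative value of K:

```python
def langmuir_coverage(pressure, K):
    """Fractional surface coverage from the Langmuir isotherm.
    K is the adsorption equilibrium constant (inverse pressure units)."""
    return K * pressure / (1 + K * pressure)

K = 0.5  # hypothetical equilibrium constant, 1/kPa
for p in [0.5, 2.0, 20.0, 200.0]:  # pressures in kPa
    print(f"p = {p:7.1f} kPa -> theta = {langmuir_coverage(p, K):.3f}")

# Coverage reaches exactly one half at p = 1/K, and saturates
# toward the monolayer (theta -> 1) at high pressure.
print(langmuir_coverage(1 / K, K))  # 0.5
```

The saturation toward θ = 1 is the monolayer limit of assumption (iii); the multilayer adsorption discussed below is precisely what this form cannot describe.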
To choose between the Langmuir and Freundlich equations, the enthalpies of adsorption must be investigated. While the Langmuir model assumes that the energy of adsorption remains constant with surface occupancy, the Freundlich equation is derived with the assumption that the heat of adsorption continually decreases as the binding sites are occupied. Choosing the model purely on the basis of best fit to the data is a common misconception.
Often molecules do form multilayers; that is, some are adsorbed on already adsorbed molecules, and the Langmuir isotherm is not valid. In 1938 Stephen Brunauer, Paul Emmett, and Edward Teller developed a model isotherm that takes that possibility into account. Their theory, based on a modification of Langmuir's mechanism, is called BET theory after the initials of their last names. The Langmuir isotherm is usually better for chemisorption, while the BET isotherm works better for physisorption.
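In practice, the BET equation is linearized over the relative-pressure range ~0.05–0.30 to extract the monolayer capacity, and hence a specific surface area. A sketch on synthetic N2 data; the monolayer capacity and C constant used to generate the isotherm are assumed values:

```python
import numpy as np

def bet_volume(x, v_m, C):
    """BET isotherm: adsorbed volume vs relative pressure x = p/p0."""
    return v_m * C * x / ((1 - x) * (1 - x + C * x))

# Synthetic N2 isotherm (volumes in cm^3 STP per gram of adsorbent)
v_m_true, C_true = 50.0, 100.0   # assumed monolayer capacity and C
x = np.linspace(0.05, 0.30, 10)  # classic BET fitting range
v = bet_volume(x, v_m_true, C_true)

# Linearized BET: x / (v (1 - x)) = 1/(v_m C) + (C - 1)/(v_m C) * x
y = x / (v * (1 - x))
slope, intercept = np.polyfit(x, y, 1)
v_m = 1 / (slope + intercept)  # recovered monolayer capacity

# Surface area from monolayer capacity (N2 cross-section 0.162 nm^2)
N_A, sigma, V_molar = 6.022e23, 0.162e-18, 22414.0  # /mol, m^2, cm^3/mol
S = v_m / V_molar * N_A * sigma
print(f"v_m = {v_m:.1f} cm^3/g, S_BET = {S:.0f} m^2/g")
```

With v_m = 50 cm³(STP)/g this yields a BET area of roughly 218 m²/g, the kind of figure routinely reported for porous solids, although strictly the BET model assumes flat, non-microporous surfaces.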
Thermal analyses - Behavior toward heat
Thermogravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena such as phase transitions, absorption, adsorption and desorption, as well as chemical phenomena including chemisorption, thermal decomposition, and solid–gas reactions.
Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis while many additional measures may be derived from these three base measurements.
A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at a constant rate to induce a thermal reaction. The thermal reaction may occur under a variety of atmospheres, including ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids, or a "self-generated atmosphere", as well as a variety of pressures: high vacuum, high pressure, constant pressure, or a controlled pressure.
The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass versus either temperature or time. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis.
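The DTG curve is simply the numerical derivative of the mass signal. A sketch on a synthetic single-step mass loss; the step temperature and magnitude are illustrative:

```python
import numpy as np

# Synthetic TGA curve: 20% mass loss centered at 600 K (sigmoid step)
T = np.linspace(300.0, 900.0, 601)                          # temperature, K
mass = 100.0 - 20.0 / (1.0 + np.exp(-(T - 600.0) / 15.0))   # mass, %

# DTG: negative first derivative of mass with respect to temperature
dtg = -np.gradient(mass, T)

T_peak = T[np.argmax(dtg)]   # inflection point of the mass-loss step
loss = mass[0] - mass[-1]    # total mass loss over the run
print(f"DTG peak at {T_peak:.0f} K, total loss {loss:.1f}%")
```

The DTG maximum marks the temperature of fastest mass loss, which is why the derivative curve resolves overlapping decomposition steps better than the raw TGA trace.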
Differential scanning calorimetry (DSC) is a thermo-analytical technique in which the difference of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment. Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well-defined heat capacity over the range of temperatures to be scanned. Additionally, the reference sample must be stable, of high purity, and must not experience much change across the temperature scan. Typically, reference standards have been metals such as indium, tin, bismuth, and lead, but other standards such as polyethylene and fatty acids have been proposed to study polymers and organic compounds, respectively.
The basic principle underlying this technique is that when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic.
For example, as a solid sample melts to a liquid, it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle physical changes, such as glass transitions.
Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.
As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature (Tc). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.
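The transition enthalpy follows from the area under the DSC peak divided by the heating rate and the sample mass. A sketch on a synthetic endothermic melting peak; the peak height, width, and sample mass are illustrative:

```python
import numpy as np

beta = 10.0 / 60.0   # heating rate: 10 K/min expressed in K/s
m = 5.0e-3           # sample mass, g

# Synthetic endothermic peak: excess heat flow (mW) vs temperature (K)
T = np.linspace(400.0, 440.0, 801)
heat_flow = 10.0 * np.exp(-((T - 420.0) ** 2) / (2 * 2.0**2))  # mW

# Trapezoidal integration of the peak area (units: mW * K)
area = np.sum((heat_flow[1:] + heat_flow[:-1]) / 2 * np.diff(T))

# Delta H = area / (heating rate * mass): mW*K / (K/s) = mJ; / mg -> J/g
dH = area / beta / (m * 1e3)
print(f"transition enthalpy ~ {dH:.1f} J/g")
```

Dividing the area by the heating rate converts the temperature axis back to time, so the integral becomes energy; baseline subtraction, omitted here for clarity, is essential with real data.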
Elemental analyses - Composition
Inductively coupled plasma mass spectrometry (ICP-MS) is a type of mass spectrometry that uses an inductively coupled plasma to ionize the sample. It atomizes the sample and creates atomic and small polyatomic ions, which are then detected. It is known and used for its ability to detect metals and several non-metals in liquid samples at very low concentrations. It can detect different isotopes of the same element, which makes it a versatile tool in isotopic labeling. An inductively coupled plasma is a plasma that is energized (ionized) by inductively heating the gas with an electromagnetic coil, and contains a sufficient concentration of ions and electrons to make the gas electrically conductive.
For coupling to mass spectrometry, the ions from the plasma are extracted through a series of cones into a mass spectrometer, usually a quadrupole. The ions are separated based on their mass-to-charge ratio and a detector receives an ion signal proportional to the concentration. The concentration of a sample can be determined through calibration with certified reference material such as single or multi-element reference standards. ICP-MS also lends itself to quantitative determinations through isotope dilution, a single point method based on an isotopically enriched standard. In order to increase reproducibility and compensate for errors by sensitivity variation, an internal standard can be added.
X-ray fluorescence (XRF) is a powerful non-destructive analytical technique. In simple terms, it works by illuminating a sample with high-energy X-rays, causing the atoms within to become excited and emit their own unique, characteristic X-rays—a process similar to how a black light makes certain colors fluoresce. By measuring the energy and intensity of these emitted "secondary" X-rays, one can identify which elements are present in the sample and in what quantities.
Energy-dispersive X-ray spectroscopy (EDS). X-ray microanalysis is a method of obtaining local chemical information within electron microscopes of all types, although it is most commonly used in scanning instruments. When high-energy electrons interact with atoms, they can knock out electrons, particularly those in the inner shells. The resulting core holes are then filled by outer-shell electrons, and the energy difference between the two states can be converted into an X-ray, which is detected by a spectrometer. The energies of these X-rays are characteristic of the atomic species, so local chemistry can be probed.
Temperature programmed desorption (TPD) - Surface binding energy
It consists of observing the molecules that desorb from a surface as its temperature is increased. When molecules or atoms come into contact with a surface, they adsorb onto it, minimizing their energy by forming a 'bond' with the surface. Upon heating, the energy transferred to the adsorbed species leads to their desorption from the surface; the temperature at which this happens is known as the desorption temperature. Thus, TPD provides information on the binding energy. TPD analyses can distinguish different adsorption sites for the same molecule through differences in desorption temperature, and the integral of the desorption peaks gives the amount of molecules adsorbed on the surface.
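For first-order desorption, the Redhead equation links the observed peak temperature to the desorption energy. A sketch with commonly assumed values; the pre-exponential factor of 1e13 s⁻¹ is the conventional assumption, not a measured quantity:

```python
import math

R = 8.314        # gas constant, J/(mol K)
nu = 1.0e13      # assumed first-order pre-exponential factor, 1/s
beta = 2.0       # heating rate, K/s
T_p = 400.0      # observed desorption peak temperature, K

# Redhead approximation for first-order desorption:
# E_d = R * T_p * (ln(nu * T_p / beta) - 3.46)
E_d = R * T_p * (math.log(nu * T_p / beta) - 3.46)
print(f"E_d ~ {E_d / 1e3:.0f} kJ/mol")
```

With these inputs the estimate is about 106 kJ/mol; since the result depends only logarithmically on the assumed ν, an order-of-magnitude error in ν shifts E_d by only a few kJ/mol.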
X-ray photoelectron spectroscopy (XPS) - Surface electronic state
It is a surface-sensitive quantitative spectroscopic technique that probes roughly the outermost 50–60 atomic layers (5–10 nm) of a surface. It belongs to the family of photoemission spectroscopies, in which electron population spectra are obtained by irradiating a material with a beam of X-rays. Based on the photoelectric effect, XPS can identify the elements that exist within a material or cover its surface (elemental composition), as well as their chemical state and the overall electronic structure and density of electronic states in the material. The technique can be used for line profiling of the elemental composition across the surface, or for depth profiling when paired with ion-beam etching. It is often applied to study chemical processes in materials in their as-received state or after cleavage, scraping, exposure to heat, reactive gases or solutions, ultraviolet light, or during ion implantation.
Chemical states are inferred from measurement of the kinetic energy and the number of the ejected electrons. XPS requires high vacuum (residual gas pressure of ~1e−6 Pa) or ultra-high vacuum (p < 1e−7 Pa) conditions, although a current area of development is ambient-pressure XPS, in which samples are analyzed at pressures of a few tens of mbar. When laboratory X-ray sources are used, XPS easily detects all elements except hydrogen and helium. The detection limit is in the parts-per-thousand range, but parts per million (ppm) are achievable with long collection times and concentration at the top surface.
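The chemical state is read from the binding energy, obtained from the photoelectric energy balance BE = hν − KE − φ. A sketch assuming an Al Kα source; the measured kinetic energy and the work function are illustrative values:

```python
h_nu = 1486.6   # Al K-alpha photon energy, eV
phi = 4.5       # spectrometer work function, eV (instrument-specific)
KE = 1197.3     # hypothetical measured photoelectron kinetic energy, eV

# Photoelectric energy balance: BE = h*nu - KE - phi
BE = h_nu - KE - phi
print(f"binding energy = {BE:.1f} eV")  # 284.8 eV
```

A binding energy near 284.8 eV is in the region typically assigned to C 1s; shifts of a few tenths of an eV to a few eV from such reference positions are what encode the chemical state.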
X-ray absorption spectroscopy (XAS) - Coordination state
It is used for probing the local environment of matter at atomic level and its electronic structure. The experiments require access to synchrotron radiation facilities for their intense and tunable X-ray beams. XAS data are obtained by tuning the photon energy to a range where core electrons can be excited (0.1-100 keV). The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2, and 3, correspond to the K-, L-, and M-edges, respectively. Excitation of a 1s electron occurs at the K-edge, while excitation of a 2s or 2p electron occurs at an L-edge. There are three main regions found on a spectrum generated by XAS data, which are then thought of as separate spectroscopic techniques.
XAS is a type of absorption spectroscopy from a core initial state with a well-defined symmetry; therefore, the quantum mechanical selection rules select the symmetry of the final states in the continuum, which are usually a mixture of multiple components. The most intense features are due to electric-dipole allowed transitions (i.e. Δℓ = ± 1) to unoccupied final states. For example, the most intense features of a K-edge are due to core transitions from 1s → p-like final states, while the most intense features of the L3-edge are due to 2p → d-like final states.
The X-ray absorption near-edge structure (XANES) is dominated by core transitions to quasi-bound states for photoelectrons with kinetic energies in the range of 10 to 150 eV above the chemical potential, called "shape resonances". In the high-kinetic-energy range of the photoelectron, the scattering cross-section with neighboring atoms is weak, and the absorption spectra are dominated by EXAFS (extended X-ray absorption fine structure), where the scattering of the ejected photoelectron by neighboring atoms can be approximated by single scattering events. In 1985, it was shown that multiple scattering theory can be used to interpret both XANES and EXAFS; therefore, experimental analysis covering both regions is now called XAFS.
Infrared (IR) spectroscopy - Polar vibration
Infrared (IR) spectroscopy is a vibrational spectroscopy that had its beginning in the early 1900s, when William Weber Coblentz demonstrated that chemical functional groups exhibit specific and characteristic IR absorptions. It measures the interaction of infrared radiation with matter and is used to identify and quantify chemical substances or functional groups. An IR spectrophotometer produces infrared spectra in either absorbance or transmittance mode versus frequency, wavenumber, or wavelength; the typical unit of wavenumber in IR spectra is the reciprocal centimeter (cm−1). A Fourier transform infrared (FTIR) spectrophotometer is usually used for fast acquisitions. The infrared portion of the electromagnetic spectrum is usually divided into three regions: the near-, mid- and far-infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14000–4000 cm−1 (0.7–2.5 μm wavelength), can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm), is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1000 μm), has low energy and may be used for rotational spectroscopy and low-frequency vibrations.
Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. when the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated couplings. Under the Born–Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian corresponding to the electronic ground state is approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular geometry, the resonant frequencies are associated with the normal modes of vibration of the molecular electronic ground-state potential energy surface. Thus, they depend on both the nature of the bonds and the masses of the atoms involved. The Schrödinger equation leads to the selection rule for the vibrational quantum number of a system undergoing vibrational changes.
The compression and extension of a bond may be likened to the behavior of a spring, but real molecules are not perfectly elastic. If a bond between atoms is stretched, for instance, there comes a point at which the bond breaks and the molecule dissociates into atoms. Thus, real molecules deviate from perfect harmonic motion, and their vibrational motion is anharmonic. For a vibrational mode in a sample to be "IR active", it must be associated with a change in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment. A molecule can vibrate in many ways, and each way is called a vibrational mode. For molecules with N atoms, geometrically linear molecules have 3N − 5 vibrational modes, whereas nonlinear molecules have 3N − 6. Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.
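The mode count and a harmonic-oscillator estimate of a stretching frequency can be computed directly. A sketch for CO; the force constant used is an approximate literature figure, not a value from this page:

```python
import math

def n_vibrational_modes(n_atoms, linear):
    """3N-5 modes for linear molecules, 3N-6 for nonlinear ones."""
    return 3 * n_atoms - (5 if linear else 6)

print(n_vibrational_modes(2, linear=True))   # diatomic (e.g. CO): 1 mode
print(n_vibrational_modes(3, linear=False))  # bent triatomic (H2O): 3 modes

# Harmonic wavenumber: nu_tilde = (1 / (2 pi c)) * sqrt(k / mu)
amu = 1.66053906660e-27      # atomic mass unit, kg
c_cm = 2.99792458e10         # speed of light, cm/s
k = 1902.0                   # approximate CO force constant, N/m
mu = (12.0 * 15.995) / (12.0 + 15.995) * amu  # reduced mass of 12C16O

nu_tilde = math.sqrt(k / mu) / (2 * math.pi * c_cm)
print(f"~{nu_tilde:.0f} cm^-1")  # harmonic estimate of the CO stretch
```

The harmonic estimate comes out near 2170 cm−1; the experimental CO fundamental lies slightly lower, a direct illustration of the anharmonicity discussed above.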
Raman spectroscopy - Non polar vibration
Raman spectroscopy, named after C. V. Raman, is used to determine the vibrational modes of molecules, although rotational and other low-frequency modes of systems may also be observed. Raman spectroscopy relies upon inelastic scattering of photons, known as Raman scattering. A monochromatic laser source is usually used, e.g. in the visible, near-infrared, near-ultraviolet, or X-ray range. The laser light interacts with molecular vibrations or phonons, resulting in an energy shift of the laser photons. This shift in energy gives information about the vibrational modes and provides a fingerprint by which molecules can be identified. Typically, when a sample is illuminated with a laser beam, the electromagnetic radiation from the illuminated spot is collected with a lens. Elastically scattered radiation at the wavelength of the laser line (Rayleigh scattering) is filtered out, while the rest of the collected light is dispersed onto a detector. Spontaneous Raman scattering is typically very weak; thus, the main historical difficulty in collecting Raman spectra was separating the weak inelastically scattered light from the intense Rayleigh-scattered laser light.
The magnitude of the Raman effect correlates with the polarizability of the electrons in a molecule. It is a form of inelastic light scattering, in which a photon excites the sample. This excitation puts the molecule into a virtual energy state for a short time before the photon is emitted. Inelastic scattering means that the emitted photon has either lower or higher energy than the incident photon. After the scattering event, the sample is in a different rotational or vibrational state. For the total energy of the system to remain constant after the molecule moves to a new rovibronic (rotational–vibrational–electronic) state, the scattered photon shifts to a different energy, and therefore a different frequency. This energy difference is equal to that between the initial and final rovibronic states of the molecule. If the final state is higher in energy than the initial state, the scattered photon is shifted to a lower frequency (lower energy) so that the total energy remains the same. This shift in frequency is called a Stokes shift, or downshift. If the final state is lower in energy, the scattered photon is shifted to a higher frequency, which is called an anti-Stokes shift, or upshift.
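The Stokes/anti-Stokes shift described above is conventionally reported in wavenumbers (cm⁻¹) relative to the laser line. A minimal sketch of that conversion, with illustrative wavelengths (the 532 nm laser and the band position are assumed example values, not from the text):

```python
def raman_shift_cm1(laser_nm: float, scattered_nm: float) -> float:
    """Raman shift in cm^-1 between the laser line and a scattered photon.

    Wavenumber = 1e7 / wavelength_nm. Positive values are Stokes-shifted
    (scattered photon at lower energy, i.e. longer wavelength); negative
    values are anti-Stokes.
    """
    return 1e7 / laser_nm - 1e7 / scattered_nm

# A 532 nm laser with a Stokes band near 561.1 nm: shift of about 975 cm^-1
stokes = raman_shift_cm1(532.0, 561.1)
# A scattered photon at a shorter wavelength is anti-Stokes (negative shift)
anti_stokes = raman_shift_cm1(532.0, 520.0)
```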
For a molecule to exhibit the Raman effect, there must be a change in its electric dipole–electric dipole polarizability with respect to the vibrational coordinate corresponding to the rovibronic state. The intensity of the Raman scattering is proportional to this polarizability change. Therefore, the Raman spectrum (scattering intensity as a function of frequency shift) depends on the rovibronic states of the molecule. The Raman effect is based on the interaction between the electron cloud of a sample and the external electric field of the monochromatic light, which can create an induced dipole moment within the molecule based on its polarizability. Transitions with large Raman intensities often have weak IR intensities, and vice versa. If a bond is strongly polarized, a small change in its length, such as occurs during a vibration, has only a small effect on its polarizability. Vibrations involving polar bonds (e.g. C–O, N–O, O–H) are therefore comparatively weak Raman scatterers. Such polarized bonds, however, carry their electrical charges during the vibrational motion (unless neutralized by symmetry factors), and this results in a larger net dipole moment change during the vibration, producing a strong IR absorption band. Conversely, relatively neutral bonds (e.g. C–C, C–H, C=C) undergo large changes in polarizability during a vibration, while the dipole moment is not similarly affected; vibrations involving predominantly this type of bond are therefore strong Raman scatterers but weak in the IR.
Ultraviolet-visible (UV-Vis) spectroscopy - Electronic transitions
Ultraviolet–visible spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. The only requirement is that the sample absorb in the UV–Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) and transmittance (%T). A UV–Vis spectrophotometer is an analytical instrument that measures the amount of ultraviolet (UV) and visible light that is absorbed by a sample. UV–Vis spectrophotometers work by passing a beam of light through the sample and measuring the amount of light that is absorbed at each wavelength. The absorbance at a given wavelength is proportional to the concentration of the absorbing compound in the sample.
Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to higher-energy molecular orbitals, giving rise to an excited state. For organic chromophores, four types of transitions are considered: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals. Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum.
The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV–Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from reference tables of molar extinction coefficients, or more accurately, determined from a calibration curve.
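The Beer–Lambert relation can be applied directly once the molar extinction coefficient is known; absorbance is commonly obtained from percent transmittance via A = 2 − log₁₀(%T). A minimal sketch, with illustrative numbers (the extinction coefficient and transmittance are assumed example values):

```python
import math

def absorbance_from_pct_T(pct_T: float) -> float:
    """Convert percent transmittance to absorbance: A = 2 - log10(%T)."""
    return 2.0 - math.log10(pct_T)

def concentration(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law, A = epsilon * c * l, solved for concentration c.

    epsilon: molar extinction coefficient (L mol^-1 cm^-1), from reference
    tables or a calibration curve; path_cm: cuvette path length in cm.
    """
    return absorbance / (epsilon * path_cm)

# 10% transmittance corresponds to A = 1.0; with an assumed epsilon of
# 5000 L mol^-1 cm^-1 and a 1 cm cell, c = 1.0 / 5000 = 2e-4 mol/L
A = absorbance_from_pct_T(10.0)
c = concentration(A, epsilon=5000.0)
```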
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or a prism as a monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm); a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm); a xenon arc lamp, which is continuous from 160 to 2000 nm; or, more recently, light-emitting diodes (LEDs) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one- or two-dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.
Electron Paramagnetic Resonance (EPR) spectroscopy - Electron spin
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy concerns materials that exhibit unpaired electrons. The basic concepts of EPR are analogous to those of nuclear magnetic resonance (NMR), but the spins excited are those of the electrons instead of the atomic nuclei. EPR spectroscopy is useful for analyzing metal ions and organic radicals (compounds with unpaired electrons). The technique reveals some structural information but often simply provides a characteristic "fingerprint". The measurement requires a large magnet into which the sample is placed; signals are detected using microwaves. For a given sample, some of the parameters of interest are g-values (analogous to chemical shift), anisotropy (asymmetry), hyperfine coupling constants (analogous to the coupling constant J), and relaxation times.
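The g-value mentioned above is extracted from the standard EPR resonance condition hν = g·μB·B (not stated explicitly in the text). A minimal sketch with illustrative X-band numbers (the 9.5 GHz frequency and 0.339 T field are assumed example values):

```python
H_PLANCK = 6.62607015e-34   # Planck constant, J s
MU_BOHR = 9.2740100783e-24  # Bohr magneton, J/T

def epr_g_value(freq_hz: float, field_tesla: float) -> float:
    """g-factor from the EPR resonance condition h*nu = g * mu_B * B."""
    return H_PLANCK * freq_hz / (MU_BOHR * field_tesla)

# X-band spectrometer (~9.5 GHz): a resonance near 0.339 T gives a
# g-value close to the free-electron value of about 2.0023
g = epr_g_value(9.5e9, 0.339)
```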
Gas chromatography (GC) - Analytics
Chromatography dates to 1903 and the work of Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography. Erika Cremer, together with Fritz Prior, developed what could be considered the first gas chromatograph in 1947; it consisted of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. The invention of gas chromatography, however, is generally attributed to Anthony T. James and Archer J. P. Martin, whose gas chromatograph used partition chromatography as the separating principle rather than adsorption chromatography. The popularity of gas chromatography rose quickly after the development of the flame ionization detector.
Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition, as they are carried in a mobile phase through a stationary phase in the GC column. Typical uses of GC include testing the purity of a particular substance or separating the different components of a mixture. Gas chromatography separates the compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert or unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside a separation column. Today, most GC columns are fused silica capillaries with an inner diameter of 100–320 micrometers and a length of 5–60 meters. The GC column is located inside an oven where the temperature of the gas can be controlled, and the effluent coming off the column is monitored by a suitable detector.
A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature-controlled oven. As the chemicals exit the end of the column, they are detected and identified. Many detectors are available nowadays for GC, e.g. the flame ionization detector (FID), thermal conductivity detector (TCD), alkali flame detector (AFD), catalytic combustion detector (CCD), discharge ionization detector (DID), flame photometric detector (FPD), electron capture detector (ECD), nitrogen–phosphorus detector (NPD), dry electrolytic conductivity detector (DELCD), and vacuum ultraviolet (VUV) detector.
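How well two components eluting at different rates are separated is commonly quantified by the peak resolution R = 2(t₂ − t₁)/(w₁ + w₂), a standard chromatography formula not stated in the text above. A minimal sketch with illustrative retention times and baseline peak widths:

```python
def gc_resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """Resolution of two chromatographic peaks.

    R = 2 * (t2 - t1) / (w1 + w2), with retention times t and baseline
    peak widths w in the same units; R >= 1.5 is conventionally taken
    as baseline separation.
    """
    return 2.0 * (t2 - t1) / (w1 + w2)

# Two peaks eluting at 5.0 and 5.6 min, each with a 0.2 min baseline
# width: R = 2 * 0.6 / 0.4 = 3.0, i.e. well separated
R = gc_resolution(5.0, 5.6, 0.2, 0.2)
```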
Gas chromatography–mass spectrometry (GC–MS) is an analytical method that combines the features of gas chromatography and mass spectrometry to identify chemical species. The first on-line coupling of gas chromatography to a mass spectrometer was reported in the late 1950s. The MS detector can be used to identify molecules in chromatograms by their mass spectrum.