On this page, simple short texts are published to help expand your vocabulary in various fields of science and technology. For convenience of reference, the texts are numbered. The numbering is reverse: the text at the very bottom of the page was published first and has number 1, while newer texts are added at the top. The prefix VBT (vocabulary building text) is added before the number of each text so that it can easily be found with a context search. For example, if you need text No. 12, search for VBT12.
Texts for vocabulary building
© Volodymyr V. Bielikov
All texts on this page were written by Volodymyr Bielikov in 2014, 2015, 2016 and 2017.
VBT42. The Sun
The Sun is the center of our Solar System and the nearest star to the Earth. The average distance between the Sun and the Earth is approximately 150,000,000 km. It is a small star, classified as a yellow dwarf. However, it is a mighty dwarf: it is deemed to be brighter than approximately 85% of the stars in our Galaxy.
The Sun formed around 4.57 billion years ago out of a large molecular cloud as a result of the gravitational collapse of the central part of the cloud. Approximately 99.8% of the mass of the cloud collapsed to form the Sun. The remaining matter of the cloud flattened and formed the protoplanetary disk, from which, eventually, the planets and other celestial bodies of the Solar System formed.
In the process of the collapse, the matter heated up owing to the release of huge amounts of gravitational potential energy. When the temperature of the gases in the center of the collapsing matter reached about 15 million kelvins, reactions of nuclear fusion began. Very soon (in just around half a million years or less) the pressure created by these reactions counterbalanced the gravitational forces; the collapse slowed down and eventually stopped. A new star had formed.
The equatorial radius of the Sun is 109 times as big as that of the Earth. Its mass is about 333,000 times the mass of the Earth. The escape velocity is 617.7 km/s (this is the speed an object has to acquire in order to overcome the gravitational pull of the Sun and fly away from it forever). Compare this with the escape velocity of the Earth, which is 11.2 km/s. The average density of the Sun is only about a quarter of that of the Earth; the density at its center, however, is impressive – 162 g/cm³. For comparison, the mass of one cubic centimeter of gold or uranium is only around 20 grams. The center of the Sun is a huge nuclear fusion reactor.
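As a quick check of the escape-velocity figure, the standard formula can be applied with commonly quoted values of the gravitational constant, the mass of the Sun and its radius (G ≈ 6.674·10⁻¹¹ m³/(kg·s²), M ≈ 1.989·10³⁰ kg, R ≈ 6.96·10⁸ m):

$$v_{esc}=\sqrt{\frac{2GM}{R}}=\sqrt{\frac{2\cdot 6.674\times 10^{-11}\cdot 1.989\times 10^{30}}{6.96\times 10^{8}}}\approx 6.18\times 10^{5}\ \text{m/s}\approx 618\ \text{km/s}.$$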
The Sun is composed primarily of hydrogen (74.9%) and helium (23.8%). All heavier elements, up to iron, account for less than 2% of the mass.
The temperature of the Sun ranges from around 15 million kelvins in the center to about 5,700 kelvins on the surface. Above the surface, there is an atmosphere with several layers, each having its own specific features. The outermost of these layers is the corona. It is composed of overheated plasma, in which the nuclei of hydrogen and helium atoms move independently from the electrons. Its temperature is over 1 million kelvins.
Our Sun constantly emits electromagnetic waves across a wide range of the spectrum – from radio waves through infrared, visible light, ultraviolet and X-rays to gamma rays. It also emits charged particles – electrons, protons and nuclei of atoms, mainly helium (and protons are also nuclei of hydrogen). It also emits neutrons, which are not charged, but whose collisions with other particles can produce new particles, and neutrinos, which rarely interact with other particles at all. A considerable part of this radiation is deadly for all known forms of life. Luckily, the magnetic field of the Earth and its atmosphere protect us from the lethal part of that emission, and we can bask in the warm infrared radiation, enjoy visible light and get a suntan in the soft ultraviolet. Most of the charged particles are blocked by the magnetic field and directed to the polar areas, where they produce spectacular auroras – aurora borealis and aurora australis. Hard ultraviolet, X-rays, gamma rays and a considerable part of the radio waves are blocked by the atmosphere.
On certain occasions, the emission of particles and/or electromagnetic radiation increases manyfold. The best-known events of this kind are solar flares and coronal mass ejections. But this is a different story.
VBT41. Laser
The word “laser” is an acronym for “light amplification by stimulated emission of radiation” and is the name of any device with certain principles of operation and unique properties. Lasers emit coherent and monochromatic light. Coherence means that the emission of light is spatially and temporally coordinated, that is, almost all the emission is concentrated in a narrow beam and occurs in step, with the same phase of the light waves. Monochromatic means that all the emission occurs at a single wavelength. And light here means electromagnetic radiation with a wavelength in the range from far infrared to X-rays. Each laser produces emission of only one particular wavelength.
The fundamental design of lasers may look very simple. The core of a laser, which forms the laser beam, is a substance called the gain medium. The gain medium can be in any state of aggregation – solid, liquid, gas or plasma. Then there is a source of “pump energy”, which “pumps” energy into the gain medium. Then there is a control device, which controls the operation of the source of pump energy. Certain types of lasers also have emission stimulators (also controlled by the control device), but most lasers do not need stimulators: they emit spontaneously after sufficient energy has been pumped into their gain media.
The principle of operation of lasers (extremely simplified) is as follows. The source of pump energy supplies energy in the form of light or other electromagnetic waves, or in the form of an electric field and electric current. This energy is directed into the gain medium.
Atoms of the gain medium have electrons on different orbits. The number of orbits and the number of electrons in each orbit depend on the position of a particular atom in Mendeleev’s Periodic Table. Each orbit has one or more energy levels, which the electrons on it can take. Usually, the electrons on each orbit are at the lowest energy level. However, when external energy is applied to them, they move to higher energy levels of the orbit, or to the highest level. An atom with electrons at a higher energy level of a particular orbit (usually the external orbit) is said to be excited, as is each electron at the higher energy level. These energy levels are of a quantum nature: reaching each energy level requires a particular amount of additional energy. When an excited electron drops from a higher level to a lower or the lowest level, it emits exactly the amount of energy that was needed to excite it to that level. The emission occurs in the form of a quantum of electromagnetic energy, that is, a photon of a particular wavelength determined by the energy of this quantum.
An atom gets excited when it absorbs energy from outside. In the case of a laser, this external energy is supplied by the source of pump energy. When the majority of atoms (or almost all atoms) in the gain medium are excited, the laser is ready to produce its coherent and monochromatic light. Then, with the help of a stimulator or spontaneously, one of the electrons of one of the atoms drops to a lower or the lowest energy level and emits a photon of a particular wavelength. It hits an electron in a nearby atom, and this electron also drops to a lower level and emits a photon of the same wavelength and phase. Now two photons travel inside the gain medium and hit two other electrons, and as a result four photons are traveling. Then four photons hit four electrons, and there are eight photons. And so on. It is an avalanche-like process. As a result, all or almost all excited electrons release the energy they gained, and the medium emits the light beam. The medium is shaped as a tube or a rod, the length of which is much greater than its diameter. For this reason, the probability that the emitted photons will move along the axis of the rod or tube is much higher than the probability of their motion in other directions. This is why the laser beam is narrow even without optical focusing. One of the ends of the tube or rod of the gain medium is blocked with a mirror, and for this reason almost all radiation leaves through the other end, in one direction. There is also some collateral radiation, which is much less powerful than the main beam and is not concentrated.
Remember: this is a very simplified explanation. It gives only a vague idea of how lasers work.
There are pulsed lasers and continuous-wave lasers. Pulsed lasers emit light in very short (nanoseconds to microseconds) pulses at desired intervals, typically from milliseconds to several seconds. Or, sometimes, they emit a single pulse. Continuous-wave lasers emit light continuously.
These two types of lasers produce beams of very different power. Consider a continuous-wave laser and a pulsed laser (emitting a 1-microsecond pulse ten times a second) consuming the same amount of pump energy per unit of time – one joule per second (one watt). Then both emit the same amount of energy (let’s assume the efficiency is 100%), but the power of the beam of the continuous-wave laser is one watt (1 W), while the power of each pulse of the pulsed laser is 100 thousand watts (100 kW). In reality, of course, the efficiency is much less than 100% and depends on the type of laser and its gain medium. Part of the external energy is dissipated as heat, and another part is taken away by the collateral radiation.
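The arithmetic behind the 100 kW figure: ten pulses a second share one joule of energy, and each pulse packs its share into one microsecond.

$$E_{pulse}=\frac{1\ \text{J}}{10}=0.1\ \text{J},\qquad P_{pulse}=\frac{E_{pulse}}{\tau}=\frac{0.1\ \text{J}}{10^{-6}\ \text{s}}=10^{5}\ \text{W}=100\ \text{kW}.$$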
VBT40. Lightning
VBT39. Cosmic rays
What tools do astronomers use for observing stars and galaxies? Everyone knows about telescopes and radio telescopes. Some may also have heard about infrared and ultraviolet telescopes, X-ray and gamma-ray detectors. But would you consider a water tank as a proper tool for observing the Universe?
Imagine 1,600 massive water tanks evenly distributed over an area larger than Luxembourg. This is what a very unusual astronomical tool looks like. It was built on a vast empty plain in western Argentina. This is the Pierre Auger Cosmic Ray Observatory, the largest observatory of this kind in the world.
Radio waves, visible light, infrared and ultraviolet radiation, X-rays and gamma rays are all electromagnetic waves of different frequencies and wavelengths. The longest waves with the lowest frequencies are radio waves; the shortest ones with the highest frequencies are gamma rays.
But cosmic rays are different. They are flows of subatomic particles, like electrons, protons, neutrons, mesons and some others. A small fraction of the cosmic rays are particles of antimatter, mainly positrons. Some of the particles in the cosmic rays are of very high energy, so high that it cannot be achieved with any particle accelerator built by people so far, not even with the Large Hadron Collider. Low-energy particles come from processes in the solar wind of our Sun or that of nearby stars. High-energy particles come from catastrophic events in the Universe, like explosions of supernovas in our Galaxy and other galaxies, and possibly also from dark matter.
Water detectors use the Cherenkov effect for detecting cosmic ray particles. These particles travel at speeds close to, but lower than, the speed of light in a vacuum. However, their speed is higher than the speed of light in water. When such particles enter water, they produce bursts of light, which can be registered by light detectors.
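The threshold follows from the refractive index of water, n ≈ 1.33: a particle produces Cherenkov light only if it moves through water faster than light itself does there.

$$v>\frac{c}{n}\approx\frac{300{,}000\ \text{km/s}}{1.33}\approx 225{,}000\ \text{km/s}\approx 0.75\,c.$$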
Only a fraction of cosmic ray particles, or of the secondary particles produced in collisions of high-energy particles with atoms of the atmosphere, reach the surface of the Earth; the actual flow above the atmosphere is much more intense. A considerable part of cosmic rays is blocked by the atmosphere of our planet or deflected by its magnetic field to the poles. Therefore scientists have tried to put water detectors into space. The largest was a 27-cubic-meter tank attached to the International Space Station.
Cosmic rays cause mutations in the cells of living organisms. Mutations are changes in genetic codes. A mutation can result in the death of a cell, or, if the mutated cell is viable, the mutation will be passed down to generations of cells as a result of cell division. Some of such mutations may cause cancer.
Cosmic rays produce carbon-14 in the atmosphere, a radioactive isotope used by scientists for determining the age of fossils in the range from a hundred years to around 60 thousand years. This technique is known as carbon dating.
VBT38. Shine on, you crazy Diamond (a superpowerful light source)
This Diamond shines 10,000 times as brightly as the Sun. That is, its electromagnetic radiation in all segments of its spectrum is 10 thousand times as intense as that of our Sun when the Sun’s radiation reaches the Earth. Officially it is called Diamond Light Source Ltd; unofficially it is called just Diamond. This is one of the brightest sources of light in our galaxy.
Diamond is a joint venture and a scientific research facility financed partly by the UK government and partly by a private fund. It is located in Oxfordshire, a county in South East England. Its core is a synchrotron, a particle accelerator which accelerates electrons to speeds very close to the speed of light. It has three stages of acceleration.
The first stage is an electron gun, very much like the one in our old CRT screens, but much more powerful. It emits electrons and accelerates them to an energy of 90 keV (kilo-electron-volts). Then a linear accelerator increases their energy to 100 MeV (mega-electron-volts), at which electrons move at 99.99% of the speed of light, and injects them into a booster synchrotron capable of accelerating the electrons to the required energy, up to 3 GeV. The synchrotron ring looks almost like a circle with a circumference of 561.6 m; however, it is not a circle, it is a polygon with 48 sides and 48 corners. In each corner, powerful magnets bend the beam of electrons. When electrons suddenly change the direction of their motion, they emit electromagnetic radiation known as synchrotron light.
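The 99.99% figure can be checked with the relativistic energy relation, comparing 100 MeV with the electron’s rest energy of 0.511 MeV (whether the 100 MeV is total or kinetic energy makes a negligible difference here):

$$\gamma=\frac{E}{m_ec^{2}}\approx\frac{100}{0.511}\approx 196,\qquad \frac{v}{c}=\sqrt{1-\frac{1}{\gamma^{2}}}\approx 1-\frac{1}{2\gamma^{2}}\approx 0.99999.$$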
From some of these corners, the radiation is directed into beamlines. Depending on the energy of the electrons, Diamond can produce high-intensity electromagnetic radiation from far infrared through visible light and ultraviolet to “soft” and then “hard” X-rays.
At the ends of the beamlines the radiation is utilized for different research purposes. Currently Diamond has 22 output terminals for different kinds of experiments.
The radiation generated by Diamond is used for testing parts of jet engines, determining the structure of proteins, determining irregularities in crystalline materials used for manufacturing electronic parts, spectroscopy of unknown materials (including meteorites and Moon rocks), analysis of the structure of nanomaterials, reading ancient manuscripts and scrolls without unrolling them, imaging of internal structures, skeletons and organs of fossilized plants and animals, and for many other purposes.
What scientists get in most experiments are patterns of diffraction of beams of different frequencies on the molecular and atomic structures of substances, or spectra of their absorption of radiation. It takes a lot of refined science and computation to get from the obtained images to conclusions about the crystalline, molecular or atomic structures of the tested substances.
An important issue in operating Diamond is safety. Even visible light of such intensity can be lethal to any living creature, let alone the extremely powerful X-rays. No one is present in the areas where samples are exposed to radiation; each experiment is pre-programmed and carried out automatically. For this reason, no one has ever seen the bright shining of Diamond.
Hordes of scientists visit this facility every year, not only from the UK but also from other countries. Only sufficiently big and rich countries can afford the creation, operation and maintenance of such an expensive piece of scientific equipment.
Diamond was conceived in 1990. In 2001 the design study was finished and construction began. In 2007 it was put into operation with 7 beamlines. From 2007 to 2013, fifteen more beamlines were added. The initial investment in creating Diamond alone was £260 million.
VBT37. Asymmetric cryptography
Asymmetric cryptography (aka public-key cryptography) is based on a special class of functions for which it is difficult to find an inverse function without some additional information used for creating the initial function.
The information about the initial function is called the public key. The information that is necessary for creating the inverse function, but is not needed for using the initial function, is called the private key.
The public key is used for encrypting a message. It is safe to send it via unprotected communication lines. Anyone who gets the public key can encrypt a message, but only the one who has the private key can read the encrypted message.
For instance, Bob wants to send a secret message to Ann. Ann generates two keys - a public key and a private key. She sends the public key to Bob and keeps the private key to herself. Bob encrypts the message to Ann using her public key and sends the encrypted message to Ann. If the coded message gets intercepted by Bob’s mother or Ann’s father, they won’t be able to read it, even if earlier they intercepted the public key sent by Ann.
If Bob wants a reply from Ann, it’s his turn to generate two keys and to send his public key to Ann so that she can encrypt her message and send it to Bob, who will read it with his private key.
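Here is a minimal sketch of how such a key pair can work, using the textbook RSA scheme with deliberately tiny numbers; the primes, the message and the variable names are invented for illustration, and real keys are hundreds of digits long:

```cpp
#include <cstdint>
#include <iostream>

// Modular exponentiation: computes (base^exp) mod m.
uint64_t power_mod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    // Toy key generation: p and q are secret primes.
    // n = p*q together with e forms the public key; d is the private key,
    // chosen so that (e*d) mod ((p-1)*(q-1)) == 1.
    const uint64_t p = 61, q = 53;
    const uint64_t n = p * q;   // 3233, part of the public key
    const uint64_t e = 17;      // public exponent
    const uint64_t d = 2753;    // private exponent (17 * 2753 mod 3120 == 1)

    uint64_t message   = 65;                          // Bob's plaintext, encoded as a small number
    uint64_t encrypted = power_mod(message, e, n);    // Bob uses Ann's public key (e, n)
    uint64_t decrypted = power_mod(encrypted, d, n);  // Ann uses her private key d

    std::cout << "ciphertext: " << encrypted
              << ", decrypted: " << decrypted << '\n'; // decrypted == 65
    return 0;
}
```

Anyone who intercepts n and e can encrypt, but without d (or without factoring n into p and q) they cannot decrypt.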
But what if not only dads and moms are nosing around for the secrets of Bob and Ann? What if real enemies want to destroy their relationship? A real enemy could intercept the public key sent by Ann to Bob and use it for sending some nasty words on behalf of Bob in order to break up their relationship.
Then Bob and Ann should be more careful. When Ann wants to get a coded message from Bob and wants to be sure that it is from Bob, they may use a symmetric key algorithm. But they have to exchange the key privately. Ann can generate the private and public keys and send the public key to Bob together with an uncoded challenge question, for example: “When we had a date last Friday and walked in the park, I asked you to buy me a drink. What is the name of the park? What drink was it?” Bob uses the public key received from Ann for encrypting a symmetric key together with his answers to the challenge questions: “We walked in Hyde Park. I bought you a pint of Fanta.” Then Ann can be sure that she received the symmetric key from Bob rather than from an enemy, and they can use this key for encryption and decryption of their messages during some period of time, until they have to change the key.
These algorithms are widely used in secure internet protocols, remote banking and automatic teller machines (ATMs). A 256-bit private key has approximately 10⁷⁷ variants. If an enemy has a computer capable of going over one hundred million variants in a second, then checking all variants will take 10⁶⁹ seconds, or 3.2⋅10⁶¹ years. One chance in a quadrillion of discovering this key will take, on average, 3.2⋅10⁴⁶ years. Even if you have an enemy who would try this and use very expensive super-powerful computers for doing this, his or her chances are very poor. Anyway, none of the participants of the story will live 3.2⋅10⁴⁶ years. By that time, breaking their codes will be irrelevant.
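The arithmetic behind these figures (one year is roughly 3.15·10⁷ seconds):

$$\frac{10^{77}\ \text{keys}}{10^{8}\ \text{keys/s}}=10^{69}\ \text{s}\approx\frac{10^{69}}{3.15\times 10^{7}}\ \text{years}\approx 3.2\times 10^{61}\ \text{years},\qquad \frac{3.2\times 10^{61}}{10^{15}}\approx 3.2\times 10^{46}\ \text{years}.$$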
Sometimes the message itself is for public use, and therefore, is not secret and should not be ciphered. But it may be important to be sure that the message originates from a particular person and is not a fake. In this case, different variants of electronic signatures are used. The most advanced of them are “digital signatures” based on asymmetric cryptography. Digital signatures are now legally recognized in the USA, Canada, all countries of the European Union and in many other countries.
VBT36. Cryptography
People have used ciphers since the invention of writing in order to prevent their messages from being read by those who should not know their contents. The idea is simple: you develop a key, you apply the key to your message and you send the message to the receiver. The receiver has to have the same key, delivered safely and reliably, in order to read the message by applying the same key in reverse. The advantage is that you have to protect only the delivery of the key rather than the delivery of each separate message. Of course, stealing keys and breaking keys has been an important business since the development of the first ciphers. These activities resulted in the emergence of a branch of science and technology largely based on mathematics and called cryptography. It deals with the development of algorithms for encryption and decryption of messages, as well as methods of breaking secret keys.
The idea of encrypting and decrypting is basically as follows:
Message + Key = Encrypted message.
Encrypted message – Key = Message.
Often more complicated operations and functions than just addition and subtraction are used.
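The “Message + Key = Encrypted message” idea in its simplest form is the XOR cipher, in which the very same operation both encrypts and decrypts. The sketch below is a toy illustration, not a secure cipher; the key and the message are made up:

```cpp
#include <iostream>
#include <string>

// XOR each byte of the text with the corresponding byte of the key,
// repeating the key as many times as needed.
std::string xor_with_key(const std::string& text, const std::string& key) {
    std::string out = text;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<char>(out[i] ^ key[i % key.size()]);
    return out;
}

int main() {
    const std::string key     = "SECRETKEY";        // must be delivered to the receiver safely
    const std::string message = "MEET ME AT NOON";  // the plaintext

    std::string encrypted = xor_with_key(message, key);    // Message "+" Key
    std::string decrypted = xor_with_key(encrypted, key);  // Encrypted message "-" Key

    std::cout << decrypted << '\n';  // prints: MEET ME AT NOON
    return 0;
}
```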
Cryptography also deals with such issues as hiding messages. It is much more difficult to break the key to a secret message when you do not know that the message exists, or when the message does not look coded and important. This branch of cryptography is called steganography.
In most cases, a message remains secret and important for a certain period of time – sometimes several hours, sometimes several years. If it is not possible to break the key during the period while the message remains secret, the cipher is good enough. The property of resisting attempts at breaking a cipher is called robustness. It depends not only on the encryption algorithm and the key, but also on the available technologies for breaking codes. With modern computers, which can go over a million variants of keys in one second, codes that were deemed robust just a couple of decades ago are now unreliable (insufficiently robust). Two hundred years ago a cipher with a billion variants of keys was reliable enough. Now it can be broken in about 8 minutes on average, and in 17 minutes at most. And there is a 1-in-a-thousand chance that it will be broken within the first second. But there is also another aspect to this: is the coded message worth breaking with the use of very expensive supercomputers? This consideration makes a cipher with a billion variants of keys sufficiently robust for many not very important secret messages, if they only need to remain secret for several hours or several days.
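The numbers come from simple division – a billion keys checked at a million keys per second:

$$\frac{10^{9}}{10^{6}\ \text{keys/s}}=1000\ \text{s}\approx 16.7\ \text{min (worst case)},\qquad \frac{1000\ \text{s}}{2}\approx 8.3\ \text{min (on average)},\qquad \frac{10^{6}}{10^{9}}=\frac{1}{1000}.$$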
The longer you use the same key, the greater is the chance that somebody will break it. Therefore, you have to change the keys quite frequently, and you have to provide safe delivery of the keys to everyone who should be able to read your encrypted messages. Quite often, messengers with new keys have to travel through territory where people are hunting for those keys. In modern times, transmitting such keys by wire or radio makes them absolutely vulnerable to eavesdroppers. Therefore, it is strictly prohibited to transmit keys over unguarded electrical communication lines. It is possible, however, to use a special cipher for encrypting the key; then it is safe to send it via unguarded lines. This is done with the use of one of the key exchange algorithms.
Everything described above concerns symmetric-key cryptography. It is still in use because its methods are rather simple. The simplest variants are suitable for manual coding and decoding, which can be used when computers fail for whatever reason. And it is also used in electronic communications. However, in the second half of the 20th century a new era in ciphering dawned.
In 1970 James Ellis, a British cryptographer, developed a concept of a new method of encryption. It was based on the so-called one-way functions described by the British economist and logician William Jevons in 1874. This work led to the development of trapdoor functions, which are functions that can be easily applied to a set of data but are very difficult to reverse, that is, to recover the initial set of data from the result without additional information. Between 1973 and 1985 several coding algorithms based on one-way and trapdoor functions were described by different researchers. Eventually, these efforts led to the appearance of asymmetric cryptography (aka public-key cryptography). It solved many problems and limitations of the traditional methods of cryptography. We shall tell what it is and how it works (though not in all mathematical details) in a separate story – Asymmetric Cryptography.
VBT35. Map projections. Part 2
The simplest idea of a map projection is to put a flat sheet of paper over the territory which you want to draw on the map, perpendicularly to the line from the center of the globe to the center of that territory, and to draw parallel lines from each point of this territory to the sheet. This is good enough for small territories and gives small distortions. It is also possible to use two parallel sheets, one for projecting the eastern hemisphere and the other for the western hemisphere, or one for the northern hemisphere and the other for the southern hemisphere. In this way you get a map of the whole Earth in the form of two circles, each representing one hemisphere. This projection, however, creates big distortions closer to the circumferences of the circles. In many cases this is not important, since we may choose the centers of the projections in such a way that the most important parts will be only moderately distorted. It is possible to make this projection in such a way that distances will be represented in the correct scale either in the center of the projection or along the circumference of a small circle around the center.
It is possible to make a cylinder out of paper, put the globe such that its axis is vertical and wrap the cylinder around it. Then there are two options of projection. Either you can draw horizontal lines from the axis to the point on the surface and then to the paper cylinder. Or you can draw lines from the center of the globe through points on its surface to the paper cylinder. In both options parallels look like horizontal lines and meridians look like vertical lines. It is possible to make distances in the correct scale either along the equator or along two latitudes at the same angle from the equator, one in the north hemisphere, and the other in the south hemisphere. Both options have very strong distortions closer to the poles. In the first case, distances along the meridians are shrunk, distances along the parallels are expanded closer to the pole. You get the poles looking like lines of the same length as the equator, rather than points. In the second option both parallels and meridians are getting stretched closer to the pole, however, the pole itself is at the infinite distance from the equator. For this reason, poles are not shown at all on maps with this projection.
However, a cylinder projection of this kind – with the spacing of the parallels set by a special mathematical rule rather than by simple straight projecting lines – has a remarkable property. Any course of a sea or flying vessel held at a constant bearing (and “bearing” means motion at a certain angle relative to the direction to the North Pole, that is, azimuth) is a straight line on a map with this projection. If the rhumb (the angle between the direction to the North Pole and the direction of motion) is 10° or 285° or whatever, the trace of this vessel is a straight line on the map. A loxodrome is a line drawn by an object moving at a constant rhumb. Therefore, all loxodromes are straight lines on this map. This projection was developed by the Flemish geographer and cartographer Gerardus Mercator in 1569. It is called the Mercator map projection and has been widely used in navigation since its development.
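A minimal sketch of that rule (the standard spherical Mercator formulas), turning longitude and latitude into flat map coordinates; the radius and the sample point are arbitrary illustration values:

```cpp
#include <cmath>
#include <iostream>

// Spherical Mercator projection: a point given by longitude and latitude
// (in radians) is mapped to flat coordinates x and y.
struct XY { double x; double y; };

const double PI = 3.14159265358979323846;

XY mercator(double lon_rad, double lat_rad, double R) {
    XY p;
    p.x = R * lon_rad;                                       // meridians become evenly spaced vertical lines
    p.y = R * std::log(std::tan(PI / 4.0 + lat_rad / 2.0));  // parallels spread out toward the poles;
                                                             // y grows to infinity as latitude approaches 90 degrees
    return p;
}

int main() {
    const double R   = 6371.0;       // Earth's radius in km (a "globe" at full scale, for simplicity)
    const double deg = PI / 180.0;
    XY p = mercator(30.5 * deg, 50.5 * deg, R);  // an arbitrary sample point: 30.5 deg E, 50.5 deg N
    std::cout << "x = " << p.x << " km, y = " << p.y << " km\n";
    return 0;
}
```

It is exactly this logarithmic spacing of the parallels that makes every constant-bearing course a straight line on the map.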
The transverse Mercator map projection is also based on a cylinder, but the axis of the cylinder is perpendicular to the axis of the globe. It can be used for global maps, but mostly it is used for large scale local maps, and the axis of the cylinder is put in such a way that the cylinder touches the areas of interest on the surface of the Earth.
The cone projections are used for mapping territories close to the poles and the pole regions themselves. They have strong distortions closer to the equator. They are not suitable for global maps.
There are also purely mathematical projections, which cannot be obtained by wrapping flat paper in any shape around a globe and using straight lines for projecting points on the globe into points on the map. Some of them are better for large-scale maps representing small territories, others are better for small-scale maps representing large territories. Of those which are used for maps of the entire world, some are good at preserving the visible shapes of continents and countries, others preserve the areas of territories (but distort shapes), or preserve distances but distort shapes and areas.
VBT34. Map projections. Part 1
You cannot draw a picture of the Earth, which is almost spherical, on the plane surface of a map without distortions. When you draw a small part of it, say a square of 50 by 50 km, the distortions will be minor and negligible; but still, the angles of a triangle drawn on that small territory will add up to slightly more than 180 degrees – by a fraction of a degree – while the angles of the same triangle drawn on the map will add up to exactly 180 degrees. This is already a distortion. If you draw a map of a hemisphere or the whole Earth, the distortions become substantial. The angles of a triangle drawn on the surface of a hemisphere may add up to 360 degrees, while on a flat map the angles of the same triangle will add up to 180 degrees only. Therefore, if you still want a map of the Earth drawn on paper, you have to choose a projection – a set of rules for representing points on the curved surface of the Earth on a flat piece of paper.
Of course, you may want a globe. It is similar to the Earth in its shape, and its distortions are minor for the whole Earth. But if you want to see a particular part of the Earth in detail, you’ll need a very big globe. And sometimes you do not need many details; then you need a smaller globe. When you are travelling, you might need a set of globes from 20 cm to 100 m in diameter. But it is not convenient to travel around with that set of globes, especially with the one 100 m in diameter. And it is not very convenient to use such a globe, especially if you are not an experienced mountain climber. Instead, you can have a set of maps with the same range of details, that is, maps of a large scale and of smaller scales, which take up much less space – but come with distortions.
Since a map is a distorted representation of the Earth or its part, you have to understand how different map projections distort the Earth and what you get yourself into by choosing one of them.
But what is a projection? It is a rule according to which every point on the surface of the Earth is represented on the surface of the map. Consider a globe of the desired scale as the source of projections. Put a piece of paper close to it. You can keep it flat, or wrap it around the globe as a cylinder, or make a cone of it, like a cap, and put it on either the North or the South Pole. Or you can use a sheet of rubber instead of a sheet of paper and stretch it around the globe in different manners. Then you project points from the globe onto the piece of paper or rubber, then you unwrap the sheet and get a map on it.
In part two several projections are described.
VBT33. Meteor showers
A meteor shower is a celestial event in which a number of meteors seem to appear from a certain point in the sky, called the radiant. It occurs during a certain period of each year (or, sometimes, once in several years) and lasts from a few days to nearly a month. Meteor showers are observed when the Earth crosses the orbit of a comet which does not exist anymore, or perhaps still exists but is quite old and on its way to complete disintegration.
A comet is an irregularly shaped lump of ice (frozen water) and frozen gases with inclusions of dust and small pieces of rock. Typically comets orbit the Sun in highly elliptical orbits. Their orbital periods may be from several years to over two centuries. When they come closer to the Sun, they get heated up and release gases and water vapor. In this process, dust and small pieces of rock get released as well.
Over hundreds of millions or even billions of years, some of the comets evaporated substantially or completely; however, the dust and pieces of rock which they initially held inside remained in their orbits. Eventually those small particles of matter got dispersed all along the orbit of the dead comet and around it, but their distribution is uneven – the highest concentration of that debris is around the place where the dead comet used to move in its orbit. These particles of dust and rock distributed over the orbit of the comet are called the “meteoroid trail” of the comet. When our planet crosses that trail, we can observe meteor showers. Comets begin to form meteoroid trails long before they melt completely; therefore, crossing the orbits of existing comets may also cause meteor showers.
On the Earth, we observe different intensities of shooting stars, depending on the density of dust and rocks in the place which the Earth crosses in a particular year. In periods when the cloud with the highest concentration of debris (dust and rocks) appears in its orbit at the same place where the Earth crosses it, meteor showers may be very intense – over 1000 meteors per hour – and they are called meteor outbursts or meteor storms. In November 1833, a meteor storm with around 100 thousand meteors per hour was observed.
The intensity of a meteor shower is measured by the Zenithal Hourly Rate (ZHR), which is the number of meteors an observer can see with the naked eye in a clear dark sky when the radiant is in the zenith. In these conditions, the limiting apparent magnitude of observed meteors is 6.5. In less favorable conditions (the radiant is closer to the horizon, the Moon illuminates the night sky, the sky is not completely clear of clouds) the actual observed number of meteors is lower than the ZHR.
There are around 100 confirmed regular meteor showers and over 600 suspected meteor showers (these are not observed every year because of the inclination of their orbits or distribution of dust in them).
Some of the well-known regular meteor showers are named after the constellations in which their radiants are observed. For instance, the Leonids have their radiant in the constellation Leo and are observed during around one week in November. It was the Leonids that caused the extremely intense meteor storm of 1833. The Perseids, with their radiant in the constellation Perseus, are observed from the last days of July to the last decade of August. The Geminids, with their radiant in the constellation Gemini, are observed during 10 days in the first half of December.
Meteor showers are dangerous for satellites. A small piece of rock moving at a speed of up to 30 km/s, if it collides with a satellite, can destroy it and create a lot of space debris in near-Earth space.
VBT32. Computer architectures
A computer architecture is a set of basic principles describing how a computer is built. There are two very distinct computer architectures – the Von Neumann architecture and the Harvard architecture – with many variations of architectures within each of them.
Basically a computer operates as follows. It performs a program temporarily or permanently stored in its memory. The program consists of two parts – the algorithm, or the program code (instructions of what should be done by the computer) and the data structure (what should be changed in the process of performing the algorithm).
In the Von Neumann architecture, instructions and data are stored in the same memory device. In the Harvard architecture the program code and the data are stored in separate memory devices. This seemingly small difference in the structural organization spells immense difference in operation.
The Harvard architecture was proposed as a concept by Howard Aiken, a scientist at Harvard University in Cambridge, Massachusetts, USA, in 1937. In 1939 it was adopted as the concept for an electromechanical computing machine developed by IBM. In 1944 the first such computer (still electromechanical rather than fully electronic) was built by IBM and tested at Harvard University.
The Von Neumann architecture was theoretically described by John Von Neumann in 1945 and was implemented in one of the first fully electronic computers, EDVAC, which was built and put in operation in 1949, also in the USA.
The main difference between these two architectures is this. A computer based on the Harvard architecture can change data, which is stored separately from the program code, but it cannot change the program code. A computer based on the Von Neumann architecture can change the data of a program, but also it can treat the program code as data. Therefore, a program can change another program or can change itself while it is executed. This possibility opens enormous opportunities for programmers. Unfortunately, it also opens opportunities for designers of malware - harmful computer programs.
Currently the Von Neumann architecture is widely used in so-called all-purpose computers – all your desktop and laptop computers, also in your netbooks, tablet PCs and smartphones. The Harvard architecture is mainly used in very specialized computers, which are often called controllers. There are plenty of them in your laptop or smartphone – USB, Ethernet, Bluetooth and Wi-Fi connectivity, as well as many other utility operations are performed by tiny controllers built according to the Harvard architecture. Also controllers are used in different household appliances (washing machines, microwave ovens, air conditioners etc.) and in technological equipment.
VBT31. Computer modeling, or what computers actually do
The answer may seem to be simple: computers are for computing and they do computations. But if you give it a deeper thought, you’ll come to the conclusion that computers model the real world, or a virtual world. The latter may be a scenario of a computer game or just the internal world of the operating system or its subsystems. That’s what we did calculations and computations for before the appearance of computers – for modeling the real world.
Computer models are not made of metal, or bricks and mortar, or wood, or plastic, or any other material. They are abstract models. That is, they model parameters (or attributes) of real or virtual objects and how those parameters change over time. Usually these models are based on mathematical models, which describe how the parameters of real objects change.
Computers as such cannot model anything. They are machines for performing certain rather simple operations like addition, subtraction, multiplication, division, Boolean algebra operations and many auxiliary operations. Typically, a computer can perform fewer than a hundred simple operations (sometimes far fewer), most of which can exist in several modifications. A model is created on the basis of a computer by making the computer execute a computer program.
A computer program consists of two parts, which are useless without each other – the algorithm and the data structure. The algorithm represents a sequence of simple operations, which can be performed by the computer, and accomplishment of which will give the required result. And those operations are executed on the data structure of the program. Each data element represents a parameter of an object of the real or virtual world.
For instance, you want to brew some tea. The algorithm is approximately as follows. You boil some water in a kettle, you put some dried tea leaves in a teapot, you pour boiling water in the teapot, you wait several minutes, then pour tea in your cup. Optionally you can add sugar (or a sweetener) and/or milk. Then you enjoy your tea. Of course, this final operation can be extended (in the case of a human performing this algorithm) – if you really love tea, you can enjoy every previous operation as well.
The data structure represents information about how you can distinguish the kettle and the teapot from anything else, where tea leaves are stored, where you can take water for the kettle, how much of it you pour into the kettle, how you can recognize that the water is boiling, and many other details. This data structure is changing in the process of accomplishing the algorithm. Initially the tea leaves were in a box, then a part of them appears in the teapot, then they are not dry anymore, as you add boiling water into the teapot. The water was not boiling initially, it began to boil in the process of accomplishment of the algorithm. And so on. Many things are changing while you are preparing your tea.
In short, in a computer program, which is a model of the real or a virtual world, the data structure represents the current state of the modelled world (or its part), and the algorithm represents processes in the modelled world. An algorithm executed by a computer changes the data structure according to the changes in the modelled world (or, rather, a small part of it).
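As a playful illustration of this idea – the data structure holds the current state of the modelled world, and the algorithm changes it – here is a tiny sketch of the tea-brewing model described above; all names and values are invented:

```cpp
#include <iostream>

// The data structure: the current state of the modelled (tea-making) world.
struct TeaWorld {
    double waterTempC  = 20.0;   // the water starts at room temperature
    bool   leavesInPot = false;
    bool   teaInCup    = false;
};

// The algorithm: each step changes the state of the modelled world.
void boilWater(TeaWorld& w)      { w.waterTempC = 100.0; }
void putLeavesInPot(TeaWorld& w) { w.leavesInPot = true; }
void pourIntoCup(TeaWorld& w)    { if (w.waterTempC >= 90.0 && w.leavesInPot) w.teaInCup = true; }

int main() {
    TeaWorld world;              // the initial state
    boilWater(world);
    putLeavesInPot(world);
    pourIntoCup(world);
    std::cout << (world.teaInCup ? "Enjoy your tea!" : "Something went wrong") << '\n';
    return 0;
}
```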
VBT30. RGB and CMYK – the most widely used color models
Our eyes distinguish hundreds of colors and millions of hues. If we wanted to reproduce a color image directly, it would not be possible to do it with hundreds of paints and millions of hues. Luckily, having learned how our color vision works, scientists and engineers developed methods of reproducing very large numbers of colors using a very limited number of so-called primary colors. Now we call methods of representing large numbers of colors with primary colors “color models”.
There are two substantially different approaches depending on what medium is used for the color image. One situation is paper or canvas or any similar material. It does not emit light, it reflects light. The medium itself is white or almost white, it reflects a part of the light that falls on it. We can use paints for subtracting (absorbing) light waves of certain frequencies in order to give color to the reflected light. Color models for media reflecting light are subtractive color models.
A totally different case is the color screen, which emits light, or a traditional color film, that transmits light emitted by a powerful lamp. In this case, primary colors add up to produce a required color. Color models for media emitting light are additive color models.
The simplest additive color model is the RGB model. It is used practically in all color displays, color TV sets and traditional color photofilms. RGB stands for red, green and blue – the primary colors of this model. Each pixel of a screen is made of three elements, each of which can emit its own color with different intensity - from zero emission to the brightest emission. When all three elements emit their light at their maximum, we see a white pixel. When only red and green elements of the pixel emit at their maximum, and the blue element does not emit any light, we see a bright yellow pixel. When only red and blue elements of the pixel emit at their maximum, and the green element does not emit any light, we see a bright magenta pixel. When all three emit light at their medium and equal levels, the pixel is grey. When all three do not emit light at all, the pixel is black.
The simplest subtractive model is the CMYK model. The first three letters represent the cyan, magenta and yellow colors, and the letter K means “key”. In color printing, the key plate is the printing plate used for printing the details of the image, and they are printed in black; therefore, the letter K in the name of the model represents black. In a subtractive model a pixel is formed out of three dots printed in cyan, magenta and yellow. The hue of the color is determined by the size of the dots. The smaller the dot, the lighter the color. Cyan and magenta without yellow give blue, yellow and magenta give red, and so on. Three dots of the three colors do not produce black, because each of them reflects light of its own color. As a result, such a pixel looks grey rather than black. For this reason, black is printed over the three colors to give more hues and to provide higher sharpness to the picture. This model is used in printing presses and color printers. The process of converting a color image into the four printing plates of the color model is called “color separation”.
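A minimal sketch of the commonly used naive conversion from an RGB pixel value to CMYK values (real color separation relies on device color profiles and ink characteristics; this simple formula is only a rough approximation):

```cpp
#include <algorithm>
#include <iostream>

// Naive RGB (0..1) to CMYK (0..1) conversion: K takes over the "grey" part,
// and the remaining color is split between cyan, magenta and yellow.
struct CMYK { double c, m, y, k; };

CMYK rgb_to_cmyk(double r, double g, double b) {
    double k = 1.0 - std::max({r, g, b});
    if (k >= 1.0) return {0.0, 0.0, 0.0, 1.0};   // pure black
    return {(1.0 - r - k) / (1.0 - k),
            (1.0 - g - k) / (1.0 - k),
            (1.0 - b - k) / (1.0 - k),
            k};
}

int main() {
    CMYK red = rgb_to_cmyk(1.0, 0.0, 0.0);   // bright red
    std::cout << "C=" << red.c << " M=" << red.m
              << " Y=" << red.y << " K=" << red.k << '\n';  // C=0 M=1 Y=1 K=0
    return 0;
}
```

Note that the result for bright red (magenta plus yellow, no cyan, no black) matches the rule “yellow and magenta give red” described above.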
Color pigments for printing presses and jet printers have to be mixed with or dissolved in liquids. These liquids may have different viscosity, adhesion and surface tension depending on the media, to which they are applied. Generally they are called printing inks or just inks. Laser printers are a different story, they use dry powders rather than inks.
An important issue is the range of brightness between the deepest black and the brightest white. In the case of displays, the range may be rather wide, and it is especially wide in plasma screens. In the case of printing on paper, the range is much narrower, and it depends on the intensity of the external light. Also, the impression we get of a printed color picture depends on the spectral composition of the external light. A picture looks different in sunlight and in the light of an electric bulb, even though our brain makes an adjustment to reduce this difference.
In the case of printing, one has to take into account the nature of the paper used for printing. It can be coated or uncoated. Coated paper can be glossy, or matt, or cast coated. Uncoated paper may be newspaper grade, offset grade and supercalendered. Paper may be of different brightness and different whiteness. Paper may have a hue of its own. All these nuances have to be taken into account by printing technologists in order to achieve maximum performance.
VBT29. Heat pump. That’s how your refrigerators and air conditioners work
A heat pump is a device used for transporting heat from the place, the temperature of which should be lowered, to the place, where heat is dissipated. The first place is usually called the “heat source”, the second place is usually called the “heat sink”. Typically, heat pumps transport heat in the direction opposite to natural heat transfer, that is, from a cooler place to a hotter place. This requires an external input of energy.
Heat pumps are widely used in refrigerators, freezers, air conditioners and HVAC devices (heating, ventilation and air conditioning – devices which can be used both for heating and for cooling rooms).
Most heat pumps use the same principle – evaporation of a liquid in the area of the heat source and condensation of vapor in the area of the heat sink. Both processes occur in special vessels – the evaporator and condenser – connected with two pipes. One pipe transports vapor from the evaporator to the condenser, and the other transports liquid from the condenser to the evaporator. Both the evaporator and condenser are designed as heat exchangers – the area through which they absorb or dissipate heat is increased by their design.
In the pipe transmitting vapor to the condenser there is a compressor. It compresses the vapor and makes it condense and release the energy of condensation. In the pipe transmitting liquid there is a resistive element – an expansion valve or a capillary tube. Its purpose is to reduce pressure in the pipe near the evaporator and in the evaporator itself and to make liquid evaporate and absorb evaporation energy. The cycle of evaporation and condensation of the same substance is called the refrigeration cycle.
The substance (or mix of substances) used in heat pumps is called the refrigerant. In the 20th century the most popular refrigerants used in household heat pumps (refrigerators and air conditioners) were chlorofluorocarbons (CFCs). Sometimes these substances escaped from the heat pumps into the atmosphere. By the end of the century, scientists discovered that CFCs, once they escape into the atmosphere, destroy the ozone layer of our planet. Ozone protects us from excessive ultraviolet radiation of the Sun. For this reason, new refrigerants are being developed. Depending on the desired temperature on the cold side, different refrigerants are used: ammonia, sulphur dioxide, propane and others. They also have issues – ammonia is toxic, and there have been cases when ammonia leaked from big industrial refrigerators and killed people. Propane is highly flammable and can form explosive mixes with air. Perfect refrigerants are yet to be found, if they exist at all.
The compressor of the heat pump is driven by an electric motor (typically) or by any other kind of motor. A special feedback circuit monitors the temperature inside the volume which is being cooled and switches the motor on and off as necessary for maintaining the required temperature.
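A sketch of how such a feedback circuit can behave: a simple on/off controller with hysteresis, so that the compressor does not flicker on and off around a single threshold. The temperature limits and names are invented for illustration:

```cpp
#include <iostream>

// On/off control with hysteresis: the compressor starts when the fridge gets
// too warm and stops only when it has become cold enough.
class FridgeController {
    const double onAboveC  = 6.0;   // switch the compressor on above this temperature
    const double offBelowC = 3.0;   // switch it off below this temperature
    bool compressorOn = false;
public:
    bool update(double measuredTempC) {
        if (measuredTempC > onAboveC)  compressorOn = true;
        if (measuredTempC < offBelowC) compressorOn = false;
        return compressorOn;
    }
};

int main() {
    FridgeController ctrl;
    for (double t : {7.0, 5.0, 4.0, 2.5, 4.0, 6.5})   // simulated temperature readings
        std::cout << t << " degC -> compressor " << (ctrl.update(t) ? "ON" : "OFF") << '\n';
    return 0;
}
```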
The efficiency of modern heat pumps (their coefficient of performance, COP) is 3 to 4 units of heat energy transferred from the evaporator to the condenser per one unit of energy spent on the operation of the heat pump.
It is possible to design a reversible heat pump, in which the condenser can become the evaporator and the evaporator, the condenser. This is used in many modern air conditioners (HVAC devices) – they can either cool down or heat up a room.
It is possible to use the heat of cold winter air (it does contain a lot of heat because its temperature is far above the absolute zero) for heating your abode with the use of heat pumps. Just imagine, if you need, for instance, 10 million joules per hour for heating your home, you can burn gas or use electric heaters and spend 2.8 kWh each hour. Or you can use a heat pump, which consumes from ¼ to ⅓ of the energy it pumps. Then you will get the same 10 MJ per hour, or 2.8 kWh of heat energy, by consuming only 0.93 kWh or even 0.7 kWh. Sounds like magic, but no, it’s just engineering.
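The conversion behind these numbers (1 kWh = 3.6 MJ, and a COP of 3 to 4):

$$10\ \text{MJ}=\frac{10\ \text{MJ}}{3.6\ \text{MJ/kWh}}\approx 2.8\ \text{kWh},\qquad \frac{2.8\ \text{kWh}}{3}\approx 0.93\ \text{kWh},\qquad \frac{2.8\ \text{kWh}}{4}=0.7\ \text{kWh}.$$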
VBT28. Binary numbers and their close relatives
These are the numbers used by computers, and they belong to a positional numeral system with base 2. The number 2 has been chosen as the base for the numeral system of computers for the simple reason that it is easy to design an electronic device with two distinct states, while a device with 10 distinct states is much more complicated. A lamp is on or off – two distinct states. Imagine ten levels of intensity of light emitted by a lamp – you won’t be able to reliably distinguish level 4 from level 6. A transistor may be open (low resistance state) or closed (high resistance state). But if you try to set 10 levels of resistance of a transistor, you’ll see that its resistance depends not only on the input signal, but also on the temperature of the transistor, which in turn depends on the outside temperature, on the heat produced by the transistor itself when it works, and on how effectively this heat is dissipated. Also, parameters of different transistors of the same model slightly differ, so when you have to replace a transistor with a new one, you will have to adjust your equipment to its new parameters.
A capacitor can be charged or discharged (but you can never be sure whether it is charged to 1/10 or 1/5; besides, its charging and discharging curves are nonlinear), a ferromagnetic core can be magnetized in one or the other direction, the current in a wire can be high or low, the voltage at a pin of a microchip can be high or low. These definite and easily identifiable states made scientists choose the binary system for computers.
For this and several other reasons, binary numbers became the language of almost all currently existing computers. Like all other positional numeral systems, this system has a base, which comprises 2 digits – zero and one. There is a position of units, the value of which is one. Any position to the left of the current position has the value of the current position multiplied by the base (which is 2 in this case). Therefore, the number 1 means one, the number 10 means two, the number 100 means four and the number 1000 means eight. The number 11 means, of course, three (2+1), the number 101 means five (4+1), and the number 111 means seven (4+2+1).
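The positional rule written out for one more example:

$$1101_2 = 1\cdot 2^{3} + 1\cdot 2^{2} + 0\cdot 2^{1} + 1\cdot 2^{0} = 8 + 4 + 0 + 1 = 13_{10}.$$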
Using binary numbers people represent in computers not only numbers, but also letters of their alphabets, music and images. If you agree that a 24-bit code 0000 0000 0000 0000 1111 1111 should represent a bright red dot on the screen of a device and other people agree that this number should be shown as a bright red dot and design their devices accordingly, then this code will be shown as a bright red dot on screens of all devices supporting this standard.
Octal numbers belong to the base-8 numeral system. They are not for computers, they are for people. Computer engineers and scientists use them for their convenience. The octal base comprises the Arabic digits from 0 to 7. Each digit of an octal number represents three digits of a binary number. Octal numbers were widely used by computer scientists and engineers when a computer byte was made of 6 bits and could be represented with 2 octal digits. Now these numbers are still in use, but are less popular because a byte now comprises 8 binary digits. In this system, 7 means seven, 20 means sixteen and 100 means sixty-four.
Hexadecimal numbers have also been designed for the convenience of people talking to computers in computer languages. They belong to the base-16 numeral system. This system uses the ten Arabic digits, from 0 to 9, and the Latin letters a, b, c, d, e and f, or their capital equivalents, with the values, respectively, 10, 11, 12, 13, 14 and 15. In this system, the number 20 means thirty-two, the number AD means one hundred and seventy-three, and the number ADD means two thousand seven hundred and eighty-one.
When several numeral systems can be used in the same text, people use subscripts for indicating the base and avoiding misunderstanding. For example, 100₂ means four (binary), 100₁₀ means one hundred (decimal), 100₈ means sixty four (octal) and 100₁₆ means two hundred and fifty six (hexadecimal).
In programming languages, people also have to use numbers with different bases. Different programming languages use different signs for indicating the base. For instance, in C++ and languages with similar syntax, an integer number beginning with a non-zero digit (except for decimal fractions with no integer component) is a decimal number. A number beginning with zero followed by a non-zero digit is an octal number, and a number beginning with 0x or 0X is a hexadecimal number. For example, 200 means two hundred (decimal), 0200 means one hundred and twenty-eight (octal) and 0x200 (or 0X200) means five hundred and twelve (hexadecimal).
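The same three literals, checked in a small C++ program:

```cpp
#include <iostream>

int main() {
    int dec = 200;    // decimal literal
    int oct = 0200;   // octal literal:       2 * 64  = 128
    int hex = 0x200;  // hexadecimal literal: 2 * 256 = 512
    // Since C++14 there are also binary literals, e.g. 0b1101 == 13.

    std::cout << dec << ' ' << oct << ' ' << hex << '\n';  // prints: 200 128 512
    return 0;
}
```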
VBT27. Small celestial bodies. Part two
Meteoroids are much smaller than all kinds of minor planets and comets. Their size may be from tens of meters to a fraction of a millimeter across. Their number exceeds the number of minor planets manyfold. Most of them are not catalogued and are not known to scientists individually; scientists only know where such objects swarm. There are plenty of them in the interplanetary space of our Solar System. Some come from far away, from interstellar space. Some of them fall on our planet. Most of them enter the atmosphere of the Earth at a very high speed, over 10 km/s relative to the Earth. The atmospheric drag at that speed makes them overheat at heights of 100 to 30 km above the Earth, and most of them just evaporate or fall apart into dust. We call them meteors and see them in the night sky as shooting stars. But when they do not burn up in the atmosphere completely and fall on the surface of the Earth, we call them meteorites. The crucial issue is their relative speed, though size and composition are also important. The relative speed may range from 60 km/s to less than 1 km/s, depending on how the motion of the Earth in its orbit combines with the motions of those small celestial bodies in their tricky orbits.
Micrometeoroids are pieces of stone or metal ranging in size from a few millimeters to ten micrometers. Their mass is typically less than one gram. They collide with the Earth at the rate of thousands a day. When they enter the atmosphere of the Earth, they can be seen as shooting stars, but when their relative speed is not very high, they fall on the surface of the Earth as micrometeorites.
Cosmic dust (or space dust) consists of particles ranging in size from ten micrometers down to several nanometers. Sometimes a whole “celestial body” of this kind may consist of just several molecules.
The density of cosmic dust and micrometeoroids is tiny. A place with a couple of thousands of particles of dust per cubic kilometer is a dusty place. These particles cannot be detected individually, but places with a high concentration of dust are seen from big distances as dust clouds. A hundred micrometeoroids per cubic kilometer is already too many. Cosmic dust and micrometeoroids pose a serious threat to space missions and a challenge to designers of space vehicles and space suits for astronauts.
Estimates show that the Earth collects over 20 thousand tons of cosmic matter a year in the form of meteors, meteorites and cosmic dust.
VBT26. Small celestial bodies. Part one
In our Solar System, in addition to the Sun and eight planets (plus Pluto, which used to be called the 9th planet, but then was demoted to the rank of dwarf planets), plus satellites of planets, there are thousands of much smaller objects that are still big enough to take a nearly spherical shape, and innumerable yet smaller objects, like pieces of rock or ice of irregular shape. Some of them are over two thousand kilometers in diameter, others are on the scale of micrometers and nanometers. This story is about some of them.
Minor planets are big pieces of rock. These include dwarf planets, asteroids, trojans, centaurs, Kuiper Belt objects and other trans-Neptunian objects. By 2013 astronomers had catalogued over 620 thousand minor planets, some of which are more than half the diameter of our Moon. All minor planets have their individual orbits around the Sun; however, the eccentricity of their orbits and the inclination of the orbits to the ecliptic plane are much larger than those of planets.
Some of the minor planets, which are almost as large as smaller planets and are big enough to be shaped by their own gravitational field into almost spherical objects, are called dwarf planets. Earlier they were also called planetoids; however, this term was used in a much wider sense and is now not officially used. Many of them exist in the Kuiper Belt – a formation beyond the orbit of Neptune, 30 to 50 astronomical units (AU) away from the Sun. Some are much closer to the Sun; their orbits lie between the orbits of Jupiter and Uranus.
Trojans are pieces of rock that share their orbit around the Sun with much larger objects, like planets and dwarf planets, but do not collide with them, because they move in the so-called Lagrangian points. They are quite small in size and of irregular shape. Mars, Jupiter, Neptune and two Saturnian moons have trojans. In 2011 NASA announced the first discovered trojan of the Earth – a piece of rock around 300 meters across oscillating around Lagrangian point L4, 60° ahead of the Earth on its orbit. Jupiter has many trojans in its Lagrangian points L4 and L5. Mars has 5 known trojans, and Neptune has 9.
Centaurs have strongly elliptical orbits; however, their perihelions are beyond the orbit of Mars. These orbits are unstable – when centaurs pass close to planets, their orbits change, and some of them may come closer to the Earth. Some of them occasionally fall on big planets. The first centaur, 944 Hidalgo, was discovered in 1920. Its perihelion is 1.95 AU, and its aphelion is 9.54 AU. Astronomers estimate that there may be around 44 thousand centaurs. Some of the centaurs have rings of tiny particles and dust similar to those of Saturn. The smallest known object having rings is 10199 Chariklo, a centaur just around 250 km across. It has only two known rings; its closer ring is just 400 km away from its center of mass, and its farther ring is at a distance of 9 km from the closer ring.
Asteroids are pieces of rock of different sizes and shapes. The majority of them orbit the Sun in the Asteroid Belt between the orbits of Mars and Jupiter; however, many of them have elliptical orbits with high eccentricity and a perihelion smaller than 1 astronomical unit, which means they can cross the orbit of the Earth and potentially collide with it. Their size may range from several hundred kilometers to less than a hundred meters. There are three basic types of them – carbon-rich, stony and metallic. 65 million years ago a big metallic asteroid fell on the Earth. It contained a lot of iridium, and this metal is now found all over the Earth in deposits of that period. This asteroid killed the dinosaurs and started a new epoch in the geological and biological history of the Earth.
Comets are mainly made of ice and different frozen gases, but may contain pieces of rock and rocky dust. Their orbits are elliptical, with very high eccentricity. Their orbital periods may range from several years to several hundred years. When they come to their perihelions, the Sun heats up their surface. This causes evaporation of iced water and gases. The solar wind blows these clouds of gases and water vapor (together with tiny particles of dust in them) away. As a result, we see spectacular tails of comets when they come close to the Sun. The tails may be millions of kilometers long. A comet with a tail that can be seen with the naked eye is a rare phenomenon. An average person can see a comet with a tail a couple of times in his or her life. Astronomers with their telescopes can see comets with tails much more often. As of March 2014, there were 5058 known comets. In the Middle Ages the appearance of a comet in the sky was deemed to be a very bad omen. With over 5000 bad omens around, you have to admit that they are not so bad (unless one of them is going to fall on our planet) and are not omens at all.
This is the end of part one. Part two is about much smaller and much more numerous celestial bodies.
VBT25. Numeral system
A numeral system (also number system, numerical notation, number language etc.) is a way of writing numbers with the use of graphic symbols. There are (and have been) different positional and non-positional numeral systems. For writing numbers in a numeral system either letters of the alphabet or special symbols called digits may be used.
Currently we use several positional numeral systems (aka positional notations or place-value notations) for different purposes. The most universally used is the decimal system (or base-10 system).
In a positional notation a limited set of symbols (digits or numerals) is used for writing all numbers, however small or big. The number of digits used in a particular system is called its base (or radix); the set of these digits itself is also sometimes referred to as the base.
In the decimal system there are ten digits – from 0 to 9. These are Arabic digits, and the whole decimal system is often referred to as Arabic. This system is also called Hindu-Arabic, because the first decimal system (though with digits that we would not be able to identify as numerical symbols) was developed in India over 5000 years ago. In the English language, the word "digit" has an additional meaning stemming from Latin – a finger or a thumb. Probably the fact that we have 10 digits on our hands influenced the use of the decimal system.
In a positional notation, each digit of the base has its own value. In the decimal system these values are from 0 to 9. However, the value of a digit in a number is determined not only by the value of the digit, but also by its position in the number.
In a positional system there is a position (or place) in which a digit of the base represents just its own value. This is the position of units, and the value of this position is 1. The value of any digit in this or any other position is the own value of the digit multiplied by the value of the position. For any given position, the position to the left of it has the value of the given position multiplied by the base, and the position to the right of it has the value of the given position divided by the base. This may look complicated, but it's very simple.
For instance, consider this number: 123. In this number, the digit "3" is in the position of units and represents its own value (which is three) multiplied by the value of the position (which is one). The digit 2 is to the left of the position of units, and its value is the value of the position of units multiplied by the base. That is, this is the position of tens, and the digit "2" represents its own value multiplied by ten, that is, 20. The next position to the left is the position of hundreds, and the digit "1" in this position represents one hundred. We are so used to this notation that we do not recognize this simple principle behind it.
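As an illustration, here is a small sketch in C++ of the positional principle: a string of digits is evaluated in a given base by repeatedly multiplying the accumulated value by the base and adding the next digit (it handles only the digits 0 to 9, which is enough for bases up to 10).

#include <iostream>
#include <string>

// Evaluate a string of digits in the given base using the positional principle:
// every step to the left multiplies the value of a position by the base.
long evaluate(const std::string& digits, int base) {
    long value = 0;
    for (char c : digits) {
        value = value * base + (c - '0');  // shift the accumulated value left by one position
    }
    return value;
}

int main() {
    std::cout << evaluate("123", 10) << "\n";  // 123 = 1*100 + 2*10 + 3*1
    std::cout << evaluate("100", 2)  << "\n";  // 4, binary
    std::cout << evaluate("100", 8)  << "\n";  // 64, octal
    return 0;
}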
Fractions in a positional system are represented with the use of a fraction separator. Usually it is a point in English-speaking countries or a comma in other countries. In the decimal system the fraction separator is often called "the decimal point"; the first position to the right of this point has the value of 1/10, the second – 1/100 and so on. Thus it is possible to write any number, however large or small, using only 10 special symbols, which we call digits or numerals, and a fraction separator.
One of the first known positional systems was the sexagesimal (or base-60) system, which was developed and used by the ancient Sumerians around 5 thousand years ago (at about the same time as the decimal system was invented in India) and was then passed to the Babylonians. We still use rudiments of it in measuring time (60 minutes in an hour and 60 seconds in a minute), angles and geographic coordinates. A modification of this system was the duodecimal (or base-12) system. We have traces of it in counting time (12 hours from midnight to midday and 12 hours from midday to midnight). Germanic languages have special words for 11 and 12, like eleven and twelve in English. The Mayan civilization used a base-20 numeral system.
In the computer age we also use binary, octal and hexadecimal systems. But this is a completely different story, though the principles are exactly the same.
There were many non-positional systems, which went out of use because of their inconvenience. We now use only one such system – Roman numerals – mainly for decorative purposes, like on the faces of mechanical watches and clocks and for numbering chapters in novels or episodes of TV series.
VBT24. Microwave oven
A microwave oven is a domestic electric appliance for heating food or defrosting frozen food with the use of electromagnetic radiation.
The heating is achieved by transformation of the energy of very short electromagnetic waves (electromagnetic oscillations of very high frequency) into the heat energy in the mass of food itself rather than by transfer of heat to the food through its surface, as it is done in traditional ovens and stoves. Heating occurs at the depth of 25 to 38 mm from the surface of a heated piece of food, therefore it is faster and more uniform than traditional heating.
The transformation of electromagnetic energy into heat energy occurs on the molecular level. Electromagnetic radiation influences molecules of water (which are electrically polarized because of their structure) and makes them oscillate. This is how electromagnetic energy transforms into mechanical motion of molecules. And mechanical motion of molecules is heat – the hotter a piece of matter is, the faster the motion of molecules in it. Molecules of water bump into other molecules of food and pass a part of their kinetic energy to them, and thus the whole piece of food gets heated. If the piece of food is thick, thicker than 50 to 76 mm, only its outer layers get heated by microwaves, while deeper layers are heated by heat transfer from the outer layers.
Food that does not contain water cannot be heated in a microwave oven. Luckily, we rarely use absolutely dry food, and when we do, we usually do not need to heat it.
A microwave oven uses electric energy from the domestic electric power supply network, typically 220V/50Hz in Europe or 110V/60Hz in the USA. Special electronic circuits convert that energy into high frequency electromagnetic waves. The key part of those circuits is a magnetron, a device with one or several cavities of strictly determined dimensions, an electron gun and strong magnets or electromagnets. The gun injects accelerated electrons into the cavities, where the electrons perform circular motion in the strong magnetic field and produce electromagnetic waves. The frequency of those waves is determined by the dimensions of the cavities and the matching strength of the magnetic field.
Magnetrons are also used in radars, only they are much more powerful than those used in microwave ovens.
The energy efficiency of a typical modern microwave oven is 64%. That is, 64% of energy consumed from the electric network is converted into microwave energy, and the rest is used for operation of electronic devices and gets dissipated as heat in them, especially in the magnetron. With the input power of 1100 W, the microwave power is about 700 W.
You should not put metal objects in an operating microwave oven. Microwaves induce eddy currents in metal objects, and this takes away a lot of microwave energy, heating the object quickly. Besides, objects with pointed elements (like forks and knives) concentrate a lot of energy at the pointed tips, which results in electric arcs around those tips.
The first microwave oven was designed and manufactured in 1946, and it was for industrial use. The first microwave oven for cooking was designed for the first manned mission to the Moon in 1969. It was not used in the flight itself, but it was used in the isolated quarantine box, where the astronauts were kept for 21 days after the flight – there was a fear that they could have contracted some viruses or bacteria on the Moon.
Colloquially a microwave oven is typically called just "a microwave".
VBT23. Numbers – natural, integer, rational, irrational, real
People began to use numbers for very practical reasons. They wanted to know how many sheep or other domestic animals they had, how many children were born in their families, how many bushels of wheat they stored for the winter and other similar things. Development of counting things eventually led to development of mathematics. Now mathematicians distinguish different kinds of numbers. Here we talk about the most commonly used kinds of numbers.
Natural numbers came to us naturally. If you had 5 bushels of wheat you would say, "I have five bushels of wheat". If you did not have even one bushel of wheat, you would say “None” or “I don’t have any”. Natural numbers are a series represented by 1, 2, 3 and other numbers. Now mathematicians extend this series to infinity, though the notion of infinity was developed much later than natural numbers. In some ancient cultures people had natural numbers only from 1 to 3 or 5, and used "many" for any larger quantity. The number 0 appeared much later than the other natural numbers. Now it is included in natural numbers in some branches of mathematics, while in others it is still excluded. In ancient Indian mathematics the number 0 appeared earlier than in other places. For Indians, zero was (and sometimes still is) a magic number related to perfection and nirvana, and it was introduced over 5 thousand years ago.
Integer numbers are natural numbers and their negatives. Negative numbers were invented much later than natural numbers, first in India, and later in Europe. I have 10 sheep, but I owe my neighbor 15 sheep. How many sheep do I own? I own minus 5 sheep (–5 sheep) in total.
Rational numbers are integer numbers and their fractions. One and a half is a rational number. One third is a rational number. In general, any rational number can be represented as a ratio of the following kind: a/b, where a and b are integer numbers, and b is not equal to zero. There are many ways of representing the same rational number. For instance, 4/3, 8/6, 12/9 and 20/15 are ratios representing the same rational number. The number of rational numbers is infinite. Moreover, between any two rational numbers, however close to each other, there is an infinite number of other rational numbers.
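A short illustrative sketch in C++ (using the standard std::gcd function from C++17) reduces such ratios to their lowest terms and shows that 4/3, 8/6, 12/9 and 20/15 are indeed the same rational number.

#include <iostream>
#include <numeric>  // std::gcd (C++17)

// Reduce a ratio a/b to its lowest terms by dividing both parts by their greatest common divisor.
void reduce(int a, int b) {
    int g = std::gcd(a, b);
    std::cout << a << "/" << b << " = " << a / g << "/" << b / g << "\n";
}

int main() {
    reduce(4, 3);    // 4/3
    reduce(8, 6);    // 4/3
    reduce(12, 9);   // 4/3
    reduce(20, 15);  // 4/3
    return 0;
}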
There are also irrational numbers. They cannot be represented exactly as a ratio a/b. Such are the number pi and the number e (the Napierian base, or the base of the natural logarithm). There are also the square and cubic roots of 2, of 3 and of many other numbers. Some of the irrational numbers are also transcendental; they are not roots of any algebraic equation with integer coefficients. The pi and e numbers are transcendental.
Rational and irrational numbers together make up real numbers. Real numbers are often represented or considered as an infinite numeric axis, on which any point represents a real number. One of the points represents zero. Between any two points on this axis, however close, there is an infinite number of other points representing other real numbers.
VBT22. Plasma
Plasma is a word of ancient Greek origin meaning "something formed". This word is used as a scientific term in physics, biology and geology. Also it is used in computing, mass media and by laymen. Here we are speaking about plasma in physics.
Physicists recognize so-called aggregate states (or states of aggregation, or fundamental states) of substances. Since the beginning of science, three aggregate states have been known – solid, liquid and gaseous. In the 20th century one more state was added – plasma, which is an ionized gas whose total electric charge is zero or nearly zero.
In all aggregate states molecules of a substance are moving. The motion of molecules depends on the temperature of a volume of a substance. The hotter the substance is, the faster its molecules move. They move with different speeds, but there is an average speed; and the total kinetic energy of motion of all molecules represents the heat energy of a volume of a substance. Molecules can establish bonds between each other, but those bonds can exist only when the motion of molecules is not fast enough to break them.
In solids almost all molecules are bonded with each other and oscillate within the limits of those bonds without breaking them. If a piece of a solid substance is gradually heated, molecules in it begin to oscillate faster. At a certain temperature their oscillation becomes so strong that they can break the bonds and begin to move independently. This is the liquid state. Molecules still remain close to each other; they bump against each other, and bonds between them get established and broken all the time. However, at any given moment, most molecules are bonded with other molecules, though not as strongly as in solids.
Further heating increases the kinetic energy of molecules to the point at which no stable bonds between molecules can be established, because molecules bump into each other at such high speed that they fly apart before a bond can be made. This is the gaseous state.
All three states depend not only on the temperature, but also on the external pressure. For instance, on Mars, where the average temperature is –60 °C (–76 °F) and the average pressure is 0.6 kilopascals (1/170 of that on the surface of the Earth), if we put water out in a bucket, it will be freezing and boiling at the same time, until it evaporates completely.
If we continue heating a gas, the speed of its molecules will increase to the extent that they will bump into each other really hard, knocking one or two electrons off the external orbits of their atoms. Thus, there will be positively charged ions of gas and negatively charged electrons moving around and bumping against each other. Some electrons will recombine with ions; then the next bump will tear them away again. This is the so-called "cold plasma"; its temperature may range from 800 to several thousand degrees Celsius. Cold plasma is hot enough to emit visible light, or even ultraviolet light.
The hotter the plasma is, the more electrons are torn away from molecules. This plasma can conduct electricity; if an electric potential is applied to it, electrons move to the positive electrode and positive ions move to the negative electrode. Electrons are absorbed by the positive electrode, and the negative electrode supplies electrons to nearby ions. This process maintains electric current in plasma. If plasma is moving in a magnetic field, it interacts with the magnetic field as any conductor moving in it would; ions and electrons get deflected into opposite directions.
Further increase of temperature makes molecules collide at such speed that the number of electrons torn away from molecules makes it impossible to maintain molecular structure, and molecules fall apart. Now the plasma consists of ionized atoms (rather than molecules) and electrons. But still atoms preserve a part of their electrons.
Finally, at certain temperatures, which are different for different gases, all electrons are torn away from atoms; and the plasma consists of atomic nuclei and electrons moving independently. This is the "hot plasma". It exists at the temperatures of tens or hundreds of thousands, or even millions, of degrees.
And, if the hot plasma is heated further, the speed of atomic nuclei becomes so high, that they begin to interact with each other, and some of them merge. This is called nuclear fusion reaction. It produces a defect of mass around 1% of the mass of the initial nuclei, which is a lot of energy, according to the famous Einstein's formula E=mc².
Where can we see plasma? It is all around us. A burning candle, the flame of a gas burner of your kitchen stove, your energy-saving fluorescent lamps, the neon lamps, which are going out of fashion now with the development of light-emitting diodes. We see plasma in every spark of lightning during a thunderstorm. Many of us use plasma displays or TV screens. They are made of small bubbles containing certain gases, which are turned into plasma and give off red, blue or green light. Three such tiny bubbles can produce any color our eyes can recognize. Together these three bubbles form a pixel of our plasma screen.
We can see hot plasma up in the sky, every day when the weather is not overcast, but usually we don't look at it. The corona of our Sun is hot plasma, over a million degrees Celsius hot – much hotter than its surface. Luckily, we do not see the hot plasma too often on the Earth. People can reproduce it in the form of an explosion of a thermonuclear bomb – experiments with such explosions were stopped many years ago. Scientists also try to reproduce hot plasma and use it for a confined and sustainable fusion reaction as a source of energy, without much success so far.
VBT21. Vacuum
Vacuum is space devoid of matter. This is the definition. The term stems from Latin and means “emptiness”. But what does it actually mean?
For a long period of time people believed that “nature does not tolerate emptiness”. The first experiments, which could lead to our current understanding of vacuum, were carried out by the Islamic scientist Al-Farabi in the 9th century. Actually, he tried to prove the statement by Aristotle, who believed that no void occurs naturally, and got a small volume of low-grade vacuum. Neither he nor other contemporary scientists could understand the results and significance of that experiment. Further attempts to confirm or disprove Aristotle’s statement were made over centuries.
In the 17th century experiments conducted by Torricelli and Blaise Pascal showed the possibility of the existence of void (or, as we now know, partially void) spaces. For his discoveries and conclusions Pascal got the following description from his contemporary colleagues who kept to Aristotle’s principle: “Void does exist, but only in Pascal’s head”.
Now we do not define vacuum as space void of matter. We define it as space filled with molecules of gases, or atoms, or elementary particles at a pressure which is substantially lower than the normal atmospheric pressure on the surface of the Earth. A device that we call a “vacuum cleaner” generates pressure as low as 80% of the atmospheric pressure at the end of its “proboscis”. This is not vacuum, but it is a step towards it. Aerospace experts set 100 km as the upper level of the atmosphere of the Earth, but they know that artificial satellites experience atmospheric drag even at 800 km above the Earth. The interplanetary space is also not completely void of matter – it is filled with the solar wind and cosmic rays (both are flows of charged particles). Even the interstellar space is full of atoms and elementary particles. So there is no truly empty space, though not in the sense suggested by Aristotle. There are different grades of vacuum, there is demand for and supply of vacuum, and the higher the quality of vacuum, the more highly it is valued by customers and priced by its manufacturers.
The SI unit of pressure is the pascal (symbol Pa, defined as a pressure of one newton of force per one square meter), but vacuum is often measured in torrs, or as a percentage of the standard atmospheric pressure, or in millimeters of the mercury column. Initially the torr was defined as 1 mm of mercury, but then it was redefined as 1/760th of the standard atmosphere, which made it differ from the millimeter of mercury by a tiny fraction of a percent. Usual barometers are not suitable for measuring deep vacuum because they show 0 when there is actually some residual pressure. Special devices can indicate pressure as low as 10⁻⁶ torr (0.1 mPa). The standard atmospheric pressure is 101,325 Pa.
According to one of the quantum theory models, even complete vacuum is not empty. So-called virtual particles appear and disappear in it all the time like bubbles in a boiling kettle. The more recent discovery of dark energy and dark matter suggests that vacuum has mass. Even though the density of this mass is tiny, in the huge volume of the Universe dark energy and dark matter make up 80% or more of the whole mass of the Universe.
VBT20. Gregorian Calendar
When we say “calendar” we usually mean “Gregorian calendar”. The Gregorian calendar was introduced by Pope Gregory XIII in 1582 and was an improvement of the Julian calendar used by all Christian countries before that date. All Catholic countries and Catholic churches in non-Catholic countries adopted that calendar immediately, following the ruling of Pope Gregory XIII. Other Christian churches (Protestant and Orthodox) adopted this calendar later. Several Orthodox churches still keep to the Julian calendar.
Even though in certain countries and regions local traditional calendars are still in everyday use, the Gregorian calendar is accepted almost everywhere for international use. It is recognized by many international organizations, including the UN and the Universal Postal Union.
The rules of the Gregorian calendar:
1) There are 7 days in a week.
2) There are 12 months in a year.
3) A normal year consists of 365 days.
4) A leap year, which is every year whose number is an integer multiple of four, consists of 366 days. The additional day is added to February, the shortest month of the twelve.
5) A year divisible by 100 without remainder is a normal year, and not a leap year, despite the fact that it is also divisible by 4 without remainder.
6) A year divisible by 400 without remainder is a leap year, despite the fact that it is also divisible by 100.
The rules of the Julian calendar are rules 1 to 4 of the Gregorian calendar. Rules 5 and 6 were introduced in the Gregorian calendar for correcting accumulation of errors in the Julian calendar; this was a 0.002% correction in the length of the year.
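These rules translate directly into code; here is a minimal sketch in C++ of rules 4 to 6. For the Julian calendar only the last line (rule 4) would apply.

#include <iostream>

// Decide whether a year is a leap year under the Gregorian rules 4-6.
bool isLeapYear(int year) {
    if (year % 400 == 0) return true;   // rule 6: divisible by 400 - a leap year
    if (year % 100 == 0) return false;  // rule 5: divisible by 100 but not by 400 - a normal year
    return year % 4 == 0;               // rule 4: divisible by 4 - a leap year
}

int main() {
    std::cout << isLeapYear(2000) << " "    // 1: divisible by 400
              << isLeapYear(1900) << " "    // 0: divisible by 100, not by 400
              << isLeapYear(2016) << "\n";  // 1: divisible by 4
    return 0;
}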
In the Julian calendar, every 400 years the vernal (spring) equinox comes about 3 calendar days earlier than before. For the Catholic Church this was important, because one of the greatest celebrations of this church, Easter, is tied to the vernal equinox. By the time of the introduction of the Gregorian calendar, the vernal equinox was coming 10 days earlier than at the time when the Julian calendar was introduced (around the 11th or 12th of March rather than on the 21st or 22nd of March). This is because the actual year is slightly shorter than 365 and a quarter days. The reform of the calendar in 1582 also included resetting the counting of days. Currently the Julian calendar is behind the Gregorian calendar by 13 days. In the 22nd century it will be 14 days behind.
In historical texts, in order to avoid ambiguity, people use words Old Style (O.S.) to indicate that the date is given according to the Julian calendar and New Style (N.S.) to indicate that the date is given according to the Gregorian calendar. Quite often dual dating is used, and both dates (O.S. and N.S.) are mentioned.
The Gregorian calendar, though very accurate, still accumulates an error of one day in over 10 thousand years because the actual astronomical interval between vernal equinoxes cannot be accurately represented in an integer number of days. One of the suggested corrections is that we could just add that day to June and make the 31st of June an international holiday. However, we may not need this correction because the Earth decelerates its rotation due to the tidal forces induced by the Moon.
Also, many people find it inconvenient that months in the Gregorian calendar comprise different numbers of days, and months begin on different days of the week. Many reforms of the calendar have been proposed in the past few centuries, but so far none has been generally accepted. One of the reforms proposes the following: 13 months, each month consisting of 28 days or 4 weeks, each month beginning on Monday and ending on Sunday, in total 364 days, while the 365th day of the normal year and the 366th day of the leap year do not belong to any of the months and are holidays. This would be very convenient for planning all kinds of activities (economic, educational, military, research, sports, etc.), but many people are scared of the number 13, which is one of the reasons why this calendar has not been accepted.
VBT19. Triangulation
Triangulation is a technique for measuring distances on a plane with the use of trigonometry. If you have points A and B with known locations and a known distance between them, you can determine the location of a point C, even if it is difficult or impossible to get to point C. The line between A and B is your baseline; you measure the angles between the baseline and point C as seen from both A and B, and you apply trigonometry to calculate the distances from both A and B to C and to determine the location of C. Thus you get the location of C even when there is a river or a canyon between C and both A and B.
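Here is a minimal sketch in C++ of this calculation on a plane, with hypothetical numbers: point A is placed at the origin, point B on the x-axis, and the two measured angles are taken between the baseline and the directions to C.

#include <cmath>
#include <iostream>

int main() {
    const double PI = std::acos(-1.0);
    double baseline = 100.0;                 // hypothetical length of A-B, meters
    double alpha = 60.0 * PI / 180.0;        // angle at A between AB and AC, radians
    double beta  = 50.0 * PI / 180.0;        // angle at B between BA and BC, radians
    double gamma = PI - alpha - beta;        // the third angle, at C

    // Law of sines: AC / sin(beta) = AB / sin(gamma)
    double ac = baseline * std::sin(beta) / std::sin(gamma);

    double cx = ac * std::cos(alpha);        // coordinates of C relative to A
    double cy = ac * std::sin(alpha);
    std::cout << "C is at (" << cx << ", " << cy << ") meters from A\n";
    return 0;
}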
When you get distances between A and C, as well as between B and C, you can use either AC or BC as the baseline for calculating the location of points D or E. Eventually you can build a mesh of triangles with known distances, angles and locations of each point in the mesh relative to any other point in it. Thus you can map (theoretically) the whole Earth starting with a single baseline connecting two points with known locations determined by astronomical methods (these astronomical methods are beyond our consideration in this text).
With the use of spreadsheet software (like MS Excel or Google Documents Spreadsheet or anything similar) or other mathematical software, or even an engineering electronic calculator, you can make the calculations very quickly and easily.
In old times, when there were no electronic computers and calculators, making the calculations was much harder, but still possible. Triangulation as a method of mapping a territory was founded in the early 1600s (early sixteen hundreds) with the works of the Dutch scientist Willebrord Snellius. Since those times many corrections have been introduced into triangulation. The main corrections are related to the fact that the angles of a triangle on a plane add up to 180 degrees, while the angles of a triangle on the surface of the Earth add up to slightly more than 180 degrees. Ultimately, we can have a triangle with a total of 360 degrees, if the triangle is big enough. Eventually all necessary corrections were worked out and the curvature of the surface of the Earth was properly taken into account.
Now, with satellite navigation available to every owner of a smartphone or a satnav gadget, triangulation may seem to be old-fashioned and useless. But this is not so. Your satnav receiver needs signals from at least three satellites in order to determine your location. Why three rather than four or two, or maybe five or just one? Because your satnav uses the same old idea of locating a point from reference points with known positions – only it works with measured distances to the satellites rather than with angles (this variant is called trilateration). However, your satnav needs signals from four satellites in order to start navigation. What is the fourth satellite needed for? It is needed to synchronize the receiver's clock with the clocks of the satellites, because all the measured distances are derived from the travel time of the radio signals.
A satellite navigation receiver is a microchip, which integrates a highly sensitive digital receiver, a fast microcomputer and a very sophisticated piece of software for making all necessary calculations.
VBT18. Superconductivity
Different materials conduct electricity differently. There are conductors, that is, materials with low resistivity or specific resistance (high specific conductivity); insulators, which are materials with very high specific resistance; and semiconductors, which are bad conductors and bad insulators, but whose other properties are very useful.
When we want to conduct electric current from a source to a load, we want it conducted with minimal losses. One of the causes of losses is resistance. The loss of power in a wire is its resistance times the current squared (R∙I²). If the resistance of a wire is 0.001 ohm per meter (that would correspond to an aluminum wire with a cross-section area of nearly 30 square millimeters, or about 6 mm in diameter), and we want to transmit an electric current of 10 amperes over a distance of 1000 km, the resistance of the wire will be 1000 ohms, and the loss will be 100,000 watts, or 100 kW⋅h of energy every hour. That is enough for a family for a week or even a month, depending on the country in which the family lives. If the resistance were zero, 168 families could get their weekly or monthly supply of electricity for free (not for the families, but for the energy supplying company, of course).
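The arithmetic of this example can be reproduced with a few lines of C++ (the numbers are those from the paragraph above).

#include <iostream>

int main() {
    double resistancePerMeter = 0.001;          // ohms per meter of wire
    double lengthMeters = 1000.0 * 1000.0;      // 1000 km expressed in meters
    double current = 10.0;                      // amperes

    double resistance = resistancePerMeter * lengthMeters;  // 1000 ohms
    double lossWatts  = resistance * current * current;     // R * I^2 = 100,000 W

    std::cout << "Resistance: " << resistance << " ohms, loss: "
              << lossWatts / 1000.0 << " kW\n";  // 100 kW dissipated in the wire
    return 0;
}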
Superconductivity is a phenomenon of zero electrical resistance. It was discovered by the Dutch physicist Heike Kamerlingh Onnes in 1911. Many metals become superconductive at very low temperatures, below 3 kelvins (around minus 270 degrees Celsius). Such temperatures can be achieved with the use of liquid helium. And it is difficult and expensive to liquefy helium. Some metal oxides are superconductive at up to 30 kelvins.
It is not economically feasible to cool down an electric wire 1000 km long to the superconductivity state and maintain it in this state for years. Effectively, this requires a freezer 1000 km long and with operational temperature of –240 °C or lower. Compare this with –18 °C to –25 °C in the freezer of a household refrigerator.
For years scientists have been trying to find materials becoming superconductive at higher temperatures. Such materials are known as high-temperature superconductors. This class of materials comprises all materials that are superconductive at temperatures above 30 kelvins (above –243 °C – not everyone’s idea of high temperature). The first material of this kind was discovered in 1986. Several such materials are known, most of them are cuprates. They are non-metals, they do not exist naturally, and their manufacturing on an industrial scale is not yet possible.
Another group of materials is called room-temperature superconductors. These are materials that would become superconductive at temperatures above 0 °C. Several reports of finding such materials have been published since the late 1980s; however, none of them has been confirmed. Currently, the highest known temperature of superconductivity is –135 °C, and it is observed in one of the cuprates.
However, there are situations in which superconductivity is used for more than demonstration of this curious phenomenon. In certain situations people need very strong magnetic fields. The magnetic field created by an electromagnet is proportional to the number of coils in its winding and the current in it. In normal circumstances the current is limited by the resistance and the power it dissipates – too strong a current may heat the winding up to the point at which it can melt and burn. If the winding is superconductive, no heating occurs, and the magnetic flux that you can achieve depends only on the electric power you can apply.
All modern particle accelerators use superconductive electromagnets. The electromagnets of the Large Hadron Collider are blocks about 10×10×10 meters, like three-storey blocks of flats. And their coils are superconductive. It takes two months to cool them down to the superconductivity state and more than three months to warm them up when they need maintenance or repair. Their extremely strong magnetic field accelerates protons and keeps them on track as their speed approaches the speed of light.
There is an application of superconductivity which is much closer to us. Either you, or one of your relatives or friends, or their relatives, has had an MRI scan (MRI, of course, stands for Magnetic Resonance Imaging). MRI scanners use powerful magnetic fields for inducing resonance of atomic nuclei in the scanned body. And this is achieved with superconductive electromagnets.
VBT17. Color Vision
Why and how do we distinguish colors and hues? We have to start the explanation with the nature of light. Light is electromagnetic waves of different lengths (also different frequencies). This is a narrow part of the whole spectrum of electromagnetic waves. Our eyes cannot see their whole range. We can only see waves in the range approximately from 400 to 700 nanometers. We see 400 nm waves as violet color and 700 nm waves as red color.
The retina of the human eye contains two kinds of cells – rods and cones. Rods are sensitive to electromagnetic waves in the whole range from 400 to 700 nm. They are responsible for the high sensitivity of vision. They provide us useful information in dim light and in the darkness, when all cats are grey. They do not distinguish colors; they just inform our brain about the intensity of the electromagnetic waves in the visible range of the spectrum.
Cone cells are subdivided into 3 types – S, M and L. The S type responds to waves from 400 to 510 nm, with the maximum response at approximately 440 nm. The M type responds to waves from 420 to 650 nm, with the maximum response at approximately 540 nm. The L type is sensitive in the range from 480 to 700 nm, with the maximum sensitivity at approximately 570 nm. Our brain combines the overlapping responses of the cone cells and recalculates them into our color vision. However, the sensitivity of cones is much lower than that of rods; therefore, in darkness we become color blind – we hardly distinguish any colors, or we see them but do not recognize them correctly. And, when the intensity of light is too low even for the rods, we can't see anything; it’s complete darkness.
There are people who, because of certain genetic variations, do not have some of the cone cells in their retina. Most typically, they do not have the L cells, but it may happen that they do not have one of the other types of cones, or even two types of cones. This is a disability; these people see colors not like other people see them. Extremely rarely, a person may have no cones at all in his or her retina; such people do not see colors at all.
Of all mammals, only humans, several other primates and some marsupials have cones of three types in their retinas. Most other mammals have only types S and M. This includes dogs, wolves and cats. There is a legend that dogs, wolves and cats are colorblind and cannot distinguish colors at all. Actually, they can; they are only partially color blind. They see colors, but not in the same way as most humans do.
Many species of birds, reptiles, amphibians, fish and some invertebrates have 3 or more types of color sensitive cells in their eyes. Many insects cannot see red color, but they can see ultraviolet. There are species with six types of light sensitive cells. And there is a species of shrimps with 12 types of color receptors. Their color vision is much richer and more precise than ours.
Some animals can see light beyond the range that we call visible light. They can see ultraviolet, waves shorter than 400 nm, or infrared, waves longer than 700 nm, or both. Snakes can see warm-blooded animals in complete darkness because warm-blooded animals emit infrared radiation. The same is true about mosquitos – they attack us in complete darkness because they can sense our infrared radiation. Snakes have eyes for the visible spectrum and separate organs for the infrared light.
Our skin is sensitive to the ultraviolet light that we cannot see with our eyes. Being exposed to it, it may get burnt. Or, if we expose it gradually, it gets a beautiful sun tan.
Your digital camera, either a separate device or a part of your smartphone or tablet PC, is sensitive to electromagnetic waves in the range from 400 to 1000 nm. It is color sensitive because of the color filter matrix mounted over the light-sensitive CCD or CMOS matrix. But beyond the visible spectrum, this filter does not produce any effect. Your camera can see when the infrared LED of the remote of your TV emits the IR signal (unless the camera has a filter designed to block light waves longer than 700 nm). Specially designed cameras can see infrared light of much longer waves. They are used for infrared imaging. And there are special devices that can “see” much shorter waves, ultraviolet and even X-rays.
VBT16. Parallax
Parallax is a phenomenon of seeing the same object from different points of view at different angles and different locations relative to other objects. It occurs when the same object is viewed from different points, either simultaneously or at different moments of time.
Parallax is naturally used for obtaining stereopsis – the ability to see objects and their relative location in the 3D space. In the case of binocular vision (that is, vision of the same area of space with two eyes simultaneously), stereopsis is achieved due to parallax between the pictures seen by the left and the right eyes. For instance, the scope of vision of people is about 190 degrees, of which a segment of about 120 degrees can be seen by both eyes.
Typically, predators have a larger area of binocular vision and a smaller general scope of vision (field of view) than non-predators. A larger field of binocular view helps predators to aim at their prey more precisely and more lethally. A larger general field of view helps non-predators to detect predators approaching from almost any direction.
Non-predators having a narrow area of binocular vision obtain stereopsis by using parallax of motion (seeing the same objects from different points in space and time), while they move themselves or just move their heads. For instance, pigeons have a very narrow range of binocular vision. When they sit on the ground or on twigs, they bob their heads up and down and thus get parallax for stereopsis.
The distance at which stereopsis can be obtained by binocular vision depends on the distance between centers of pupils of the eyes and on how accurately the angles of vision of different objects are determined by the brain. Most people distinguish relative location of objects with the binocular vision at distances less than 100 m. But this can be trained; and there are people who preserve stereopsis at distances up to 300 m or even more. At longer distances our brains use a different mechanism based on comparison of known sizes of different objects and their observed angular dimensions.
Binoculars or field glasses have much larger stereoscopic base than human eyes. Depending on the design and on the user, they allow people to obtain stereopsis at the distance of more than 1 km. Stereoscopic telescopes used by the military, for instance, in artillery, have even wider stereoscopic base. At the same time, opera glasses used by spectators in theatres do not increase the depth of stereopsis or even reduce it.
Parallax is used in triangulation – a measurement technique used by topographers for creating maps.
Parallax is used by astronomers for measuring distances to planets. For this purpose they observe planets from two observatories separated by several thousand kilometers and compare the visible locations of the planets against the stars. They also use this technique for measuring distances to nearby stars; however, for this purpose they measure the difference of the angles at which a star is visible from the opposite points of the orbit of the Earth. Thus they obtain a stereoscopic base of 300 million kilometers, but the interval between such observations is half a year. Even at this base, the difference of locations of the closest star is less than 1 arcsecond (angular second); therefore, the measurement of angles must be very accurate.
Astronomers even introduced a unit of length based on parallax – it is called the parsec. This is the distance at which the radius of the Earth's orbit (one astronomical unit) is seen at an angle of 1 arcsecond. This distance is approximately 30.9 trillion km (3.09 times 10 to the 13th km, 3.09E+13 km). One parsec is equal to 3.26 light years. The closest star, Proxima Centauri, is 1.3 parsecs away from us.
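A short sketch in C++ of this conversion, assuming a parallax of about 0.77 arcseconds (roughly that of Proxima Centauri): the distance in parsecs is simply the reciprocal of the parallax in arcseconds.

#include <iostream>

int main() {
    double parallaxArcsec = 0.77;  // assumed annual parallax, arcseconds

    double distanceParsecs    = 1.0 / parallaxArcsec;  // definition of the parsec
    double distanceLightYears = distanceParsecs * 3.26;
    double distanceKm         = distanceParsecs * 3.09e13;

    std::cout << distanceParsecs << " pc = " << distanceLightYears
              << " ly = " << distanceKm << " km\n";  // about 1.3 pc
    return 0;
}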
Parallax is now used in robotic vision for determining distances and relative locations of objects around a robot.
VBT15. The Doppler Shift
The Doppler effect, or the Doppler shift, is the change of the frequency of a signal in the form of a wave emitted or reflected by an object moving relative to an observer.
In simple words, if an object is moving toward us, any waves coming from it are of higher frequency for us than they are for the object. If the same object is moving away from us, the same waves are of lower frequency for us than they are for the moving object. This is applicable to sound waves, radio waves, visible light, infrared, ultraviolet, X-rays and gamma-rays.
For instance, if a motor vehicle is coming toward you at a high speed, all sounds it produces, like motor roaring or honking, have a higher pitch than when the same vehicle is standing near you, and their pitch is higher than how the driver of the vehicle hears them. When the vehicle has passed by you and is speeding away, the same sounds have a lower pitch. Many people hear the difference even when the speed is around or exceeds 50 km/h (14 m/s), and the difference is very distinct when the speed exceeds 100 km/h (28 m/s). Musicians can hear the difference at much lower speeds than the majority of other people.
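For sound from a moving source and a stationary listener, the classical Doppler formula is f_observed = f_source · v_sound / (v_sound ∓ v_source), with the minus sign for an approaching source. A minimal sketch in C++ with hypothetical numbers (a 440 Hz horn on a vehicle doing about 100 km/h):

#include <iostream>

int main() {
    double speedOfSound = 343.0;  // m/s in air at about 20 degrees Celsius
    double sourceFreq   = 440.0;  // Hz, hypothetical horn pitch
    double sourceSpeed  = 28.0;   // m/s, roughly 100 km/h

    double approaching = sourceFreq * speedOfSound / (speedOfSound - sourceSpeed);
    double receding    = sourceFreq * speedOfSound / (speedOfSound + sourceSpeed);

    std::cout << "Approaching: " << approaching << " Hz, receding: "
              << receding << " Hz\n";  // about 479 Hz versus about 407 Hz
    return 0;
}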
In the case of visible light, the Doppler shift of an object moving away from us is called the red shift; that is, the frequency of all the light it radiates is slightly lower and closer to the red end of the spectrum. If the object is moving toward us, the frequency of its radiation is slightly higher and closer to the blue and violet end of the visible spectrum. If the relative speed is very high, over a hundred thousand kilometers per second, the shifts are significant – blue light may look like red, or red light may look like blue. The Doppler shift is used by astronomers for measuring velocities of space objects relative to us. Though the speed of light in vacuum is much higher than the speed of sound in our atmosphere, the speeds of space objects are also much higher than that of a truck passing by us. Besides, astronomers detect these frequency shifts with spectrometers – instruments which are much more sensitive to the change of frequency than the ears of musicians. Scientists know the exact frequencies of radiation of different atoms and molecules, so it is "easy" to measure the Doppler shift of this radiation.
Doppler radars also utilize this effect. They send a radio beam of known frequency and measure the frequency shift of the beam reflected from an object. Such radars are used by air traffic control, military anti-aircraft surveillance and traffic police. Scientists use such radars for measuring the speed of clouds, raindrops, birds flying in the sky, and wild animals running away from predators or chasing their prey. The effect is also used for measuring the velocity of liquids and gases in pipelines. There are many other useful applications of this effect.
VBT14. The Moon Moves Away from Us
The Moon is moving away from the Earth, and the Earth is slowing down its rotation around its axis. These two processes are manifestations of the same mechanism, which transfers the kinetic energy of the rotation of the Earth around its axis into the kinetic energy of the motion of the Moon around the Earth and dissipates part of this energy. And this mechanism operates through the gravitational force and the friction force.
The gravitational attraction of the Moon causes a bulge of oceanic waters, which we call tides. Actually, two bulges. One is on the side of the Moon, and it is called the sublunar tide; the other is on the opposite side of the Earth, and it is called the antipodal tide. But this is only a part of the story. The Earth rotates around its axis much faster than the Moon revolves around the Earth. Through friction in the water and between the water and the seabed, the Earth carries both bulges along with its rotation. As a result, the sublunar tide is slightly ahead of the Moon, and the antipodal tide is slightly behind the Moon. These bulges shift the gravitational force exerted by the Earth on the Moon in such a way that a component of this force appears along the direction of the motion of the Moon on its orbit.
This friction slows down the rotation of the Earth around its axis and accelerates the motion of the Moon around the Earth. The kinetic energy of the rotation of the Earth is partially dissipated in friction by heating oceanic water and is partially transferred into the kinetic energy of the motion of the Moon around the Earth.
Currently the rate of slowing down of the Earth is a fraction of a second (approximately 0.02 s) per millennium, and the Moon is moving away from the Earth by approximately 3.8 cm a year, or by 1 km in 26 thousand years.
According to the most widely recognized hypothesis, the Moon was formed out of a part of the Earth around 4 billion years ago, 0.5 billion years after the formation of the Earth, as a result of a catastrophic impact of a large celestial body, which hit the Earth at a huge speed, tore away a part of its mass and hurled it into an orbit around the Earth. Out of this piece of the Earth the Moon formed.
Initially the Moon was much closer to the Earth than it is now (probably, 10 times closer), and the Earth made a full turn around its axis in about 4 hours. Initially the described mechanism worked much more strongly because of the closeness of the Moon. Eventually, in several billion years, the Moon will fly away into outer space. That will happen before the Earth dissipates the kinetic energy of its rotation, so it will still rotate around its axis, but days and nights will be much longer than they are now. With much longer days and nights, the Earth will be unsuitable for current forms of life, because the difference of temperature between day and night may exceed 50 degrees Celsius, depending on the closeness of a particular area to the ocean or other big masses of water.
VBT13. Aurora Borealis and Aurora Australis.
Aurora Borealis, or the northern lights, is a glow in the night sky caused by luminescence of nitrogen and oxygen in the atmosphere of the Earth, ionized by charged particles of the solar wind, which are deflected by the magnetic field of the Earth to the areas near the northern pole.
Aurora Australis, or the southern lights, is the same effect near the southern pole: the magnetic field of the Earth guides the charged particles of the solar wind toward both polar areas.
In modern English it is recommended not to capitalize the first letters in these terms and to write "aurora borealis" and "aurora australis" with just small letters.
Auroras are visible at night, in the areas close to the poles, where winter nights are long and may last 20 or more hours a day. In latitudes higher than the polar circles, nights may last several months in winter, up to half a year at the poles, and days may last up to half a year there in summer.
With modern electronic equipment it is possible to detect auroras in the daytime, when they are not visible with the naked eye. Scientists established that auroras have similar intensity near the north and the south poles. This intensity increases and decreases simultaneously near both poles, depending on the intensity of the solar wind.
As the magnetic poles do not coincide with the geographical poles, and auroras appear around the magnetic poles, the latitudes at which auroras are visible vary depending on the longitude. For instance, currently aurora borealis can be seen in Canada at latitudes much farther to the south than the latitudes at which it can be seen in Russia.
The intensity of auroras depends on the intensity of the solar wind, which is the flow of charged particles, mainly protons and electrons, emitted by the Sun. This flow is not stable and changes depending on the activity of the Sun. Events such as coronal mass ejections (CME) may increase the intensity of the flow manifold, and when the ejection is directed at the Earth or passes very close to it, the auroras are especially strong and can be seen in territories where usually they are not observed. For instance, in 2012, due to a powerful CME, aurora borealis was observed in Quebec and Ontario. And during an extremely powerful CME directed at the Earth, which happened in 1859 and is now called the Carrington Event, aurora borealis was observed even in areas close to the equator, such as islands in the Caribbean Sea.
Astronomers observe auroras near the magnetic poles of other planets that have a magnetic field. But there is nobody on those planets to admire or worship them or to be scared by them.
The dominating color of the auroras on the Earth is green. Red and blue colors are also present. These colors are determined by quantum levels of excitation of electrons in atoms of nitrogen and oxygen. The lights in the sky may shift every second or may stay unchanged for hours.
VBT12. Delay in Digital Broadcasting
You may be surprised to know that when you watch a live broadcast, either from a remote place or even from your own city or town, you see what's going on there with a delay, which may be up to several seconds, even if you watch your TV at a distance of 1.5 thousand km from the place of the broadcast event, and radio waves pass this distance in 0.005 s (5 milliseconds).
At the speed of light of almost 300 thousand km/s and the longest distance between two points on the Earth of around 20 thousand km, the delay in propagation of radio signals should not exceed 0.07 second to any point on Earth, and should be no more than 1 millisecond per each 300 km of distance. Even in the case of broadcasting via geostationary satellites hovering 36 thousand kilometers above the Earth, the total distance the signal passes will never exceed 280 thousand kilometers, and the delay due to propagation of radio waves over this distance is still less than a second.
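The propagation part of the delay is easy to check: time equals distance divided by the speed of light. A minimal sketch in C++ with the figures used above:

#include <iostream>

int main() {
    double speedOfLight  = 300000.0;  // km/s, rounded
    double surfacePath   = 20000.0;   // km, roughly the longest distance on the Earth
    double satellitePath = 280000.0;  // km, a generous path via a geostationary satellite

    std::cout << "Surface path: "   << surfacePath / speedOfLight   << " s, "
              << "satellite path: " << satellitePath / speedOfLight << " s\n";
    // prints roughly 0.067 s and 0.93 s - both well under a second
    return 0;
}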
The explanation of this much longer delay is that you do not get the signal produced by a video camera directly on your TV screen. This signal is coded, compressed, split into packets and routed (through surface relay stations and, sometimes, via satellites). At each node of the route, packets of the signal are collected, checked for integrity and sent to the next node. Finally the signal comes to your digital receiver (or to the digital receiver of your local broadcaster or relay station, if you still use analog TV) and is decoded. After that, it is shown on the screen of your TV set.
If you watch a broadcast online, via the Internet, the delay may exceed even one minute, taking into account the specific routing procedures of video webcasting protocols. Special protocols for voice and video communications via IP are designed to reduce this delay to no more than 2 seconds, if the connection is fast enough.
Digitizing, coding, packaging, sending, routing, retransmitting, receiving, arranging, un-packaging, decoding, digital-to-analog conversion – all these operations are done by high performance digital processors and other electronic devices. Though they operate fast, each of them introduces its own delay.
VBT11. The Free Fall Acceleration and the Gravity.
We usually assume that the free fall acceleration of any object on the surface of our planet is directed to the center of the Earth and is due to the gravitational force. More advanced people would add that it is directed to the gravity center of the Earth rather than to its geometrical center. Both statements are incorrect.
Well, actually they are correct and accurate enough for most applications. But for certain applications, such as launching space rockets, they are flawed.
First of all, there is no gravitational center of the Earth. There is only the resultant of the gravitational forces exerted on an object on or above the surface of the Earth by the masses of the Earth, which are not distributed homogeneously. In different places this resultant is directed towards different points. One has to take this into account while planning orbits of satellites.
Second, the Earth rotates around its axis. This rotation creates the "centrifugal force", which is actually not a force but a manifestation of inertia. Its magnitude is proportional to the distance from the axis of rotation and to the square of the angular velocity. While the gravitational pull is directed to a point more or less close to the center of the Earth, the centrifugal force is directed outwards, away from the axis of rotation of the Earth and perpendicular to it. The distance from the axis depends on the latitude: the radius of rotation is the largest at the equator and equals zero at the poles. One has to take this into account while launching rockets.
At the equator the gravitational pull and the centrifugal force are more or less collinear, but oppositely directed; therefore the free fall acceleration there is somewhat smaller than if it were created by the gravitational attraction alone. At the poles there is no centrifugal force at all. The deflection of the free fall acceleration from the direction of the pure gravitational pull is the largest at latitudes slightly above 45 degrees (either North or South). Besides, the Earth is not spherical, and its poles are closer to its center than points on the equator. As a result, the free fall acceleration is not collinear with the gravity force almost everywhere, and it is greater by about 0.5% at the poles than at the equator.
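The rotational part of this effect is easy to estimate. The Python sketch below assumes a perfectly spherical Earth, so it shows only the contribution of rotation (roughly 0.3%) and the deflection of the plumb line, not the additional contribution of the flattening.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24             # mass of the Earth, kg
R = 6.371e6              # mean radius of the Earth, m
w = 2 * math.pi / 86164  # angular velocity of the Earth's rotation (one sidereal day), rad/s

g0 = G * M / R**2        # pure gravitational acceleration, about 9.82 m/s^2

for lat_deg in (0, 45, 90):
    lat = math.radians(lat_deg)
    a_c = w**2 * R * math.cos(lat)      # centrifugal acceleration, directed away from the axis
    g_down = g0 - a_c * math.cos(lat)   # component along the local vertical
    g_side = a_c * math.sin(lat)        # horizontal component, directed towards the equator
    g_eff = math.hypot(g_down, g_side)
    tilt = math.degrees(math.atan2(g_side, g_down))
    print(f"latitude {lat_deg:2d} deg: g = {g_eff:.4f} m/s^2, deflected by {tilt:.3f} deg")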
VBT10. A Supervolcano Is Lurking under Yellowstone National Park.
The park is located primarily in Wyoming and extends into Montana and Idaho. The territory of the park is over 898 thousand hectares. A considerable part of this park is a huge caldera, about 55 by 72 km in size.
A caldera is a geological structure which forms in the place of a volcano. Typically, underneath a volcano there is a big chamber of magma. During an eruption, a part of this magma goes up and flows down the sides of the volcano or is hurled into the air and falls far away from it. Gases and ashes are also expelled from the magma chamber. As a result, the pressure in the chamber falls, while the weight of the volcano increases with the additional magma deposited on it. When the difference between the increased weight and the decreased pressure becomes critical, the central part of the volcano collapses into the partially empty magma chamber and forms something resembling a big crater. But geologically, it is not a crater, it is a caldera (sometimes also called a "cauldron" because of its shape).
However, after a caldera forms, the volcano does not necessarily become extinct. It may erupt again later.
There have been at least three known super-eruptions of the Yellowstone super-volcano in the past 2.1 million years; the latest one, approximately 640 thousand years ago, resulted in the formation of the present Yellowstone caldera. There was also a small eruption about 3.5 thousand years ago, but it will hardly remain the last one. Scientists in Yellowstone measure motions of the ground and have found out that it rises every year. This means that magma is accumulating again in the huge magma chamber, and another super-eruption is inevitable.
If such a super-volcano erupts, it will have global consequences. Masses of ash and dust will make our atmosphere less transparent to sunlight, and our planet will start cooling down. The acidity of the oceans will increase. All this will cause changes in the biosphere and result in the massive death of living organisms.
But this is not a question of "IF". This is a question of "WHEN". Scientists believe that the next eruption of this super-volcano may occur several hundred thousand years from now. For now, we are quite safe and do not need to worry about it.
VBT9. Libration Points. Part two.
In Part one, we explained the nature and discovery of these remarkable points. Four of the five Lagrangian points in the system of the Sun and the Earth are used for positioning robotic observatories.
L1, which is located about 1.5 million km from the Earth towards the Sun, is a convenient place for watching the behavior of the Sun, and it is now used for placing robotic observatories that monitor it. These observatories can give early warning of a possible geomagnetic storm in the case of a coronal mass ejection, several hours before the storm hits the Earth.
L2 is located on the same line, on the opposite side of the Earth from the Sun, again about 1.5 million km from the Earth. Seen from this point, the Sun and the Earth always stay in the same direction, so a spacecraft placed near it can turn its back – and its sunshield – to both of them at once. This makes it a convenient point for robotic telescopes observing the infrared radiation of the Universe: strong radiation and heat from the Sun would decrease their accuracy and sensitivity, but near L2 they can be shielded from this influence.
L3, located on the opposite side of the Sun, has not been used for placing satellites so far. In pulp science fiction this is the place where authors put mysterious planets, "duplicates of the Earth", or a large alien spaceship from which aliens visit the Earth. However, a special observation of this point was once conducted from an interplanetary spacecraft, and no objects were found there. Besides, this point becomes unstable when Venus or Mars passes close to it; in these periods, the influence of the gravitational fields of these planets is much stronger than that of the gravitational field of the Earth. No object, small or large, can stay there for a long time unless it is stabilized with rocket thrusters. There are several projects of putting a solar observatory there. It could see spots appear on the far side of the Sun and give a warning around 7 days before these spots turn towards the Earth and begin to influence the geomagnetic field.
L4 and L5 are used for placing solar observatories. Observations of the Sun from these points make it possible to obtain 3D images of the Sun. In particular, such images help scientists determine, several days before a coronal mass ejection can reach the orbit of our planet, whether it is directed towards the Earth or will miss it.
Satellites do not hang motionless at those points. They circle around them in special orbits; therefore, there can be several satellites near each point. For an outside observer it may look rather weird: satellites circling around an empty point in space as if they were attracted by that point, although there is nothing in it to attract them.
VBT8. Libration Points. Part one.
If we place a small object on the surface of the Earth, exactly on the line between the center of the Earth and the center of the Sun, and begin to move it along that line from the Earth towards the Sun, the gravitational pull of the Earth will gradually decrease, and the gravitational pull of the Sun will gradually increase. At a certain point, both gravitational pulls become equal, while being oppositely directed. The resultant of the two gravitational forces at this point is zero, and an object placed there will not fall either to the Sun or to the Earth. Instead, it will move together with the line between the Earth and the Sun as the Earth moves along its orbit.
This is quite an obvious point. But there are four other, less obvious points discovered by Joseph-Louis Lagrange, where the resultant of the gravitational attractions is such that a small object placed there will move together with the Earth on its orbit without motion relative to the Earth. Now we call these five points either Lagrangian points or libration points. The point described above essentially corresponds to L1 (strictly speaking, because the object must also revolve around the Sun together with the Earth, the exact position of L1 is noticeably farther from the Earth than the point where the two pulls simply cancel each other, but the idea is the same). The remaining four were found by solving complicated equations describing the interaction of the gravitational fields of two massive objects, of which one is far more massive than the other, and their influence on a third object, which is much less massive than the smaller of the two (this is the so-called restricted three-body problem).
L1 is about 1.5 million km from the Earth towards the Sun.
L2 is located on the same line, but on the opposite side of the Earth from the Sun, again about 1.5 million km from the Earth.
L3 is located on the opposite side of the Sun. It lies almost exactly on the Earth's orbit, only very slightly (by a few hundred kilometers) closer to the Sun than the point exactly opposite the Earth.
L4 and L5 are the so-called triangular Lagrangian points. They move together with the Earth along its orbit, at the same distance from the Sun as the Earth, 60 degrees ahead of and 60 degrees behind the current position of the Earth; each of them forms an equilateral triangle with the Sun and the Earth.
Similar Lagrangian points exist in the system of the Earth and the Moon and in any system of two big celestial bodies in which one is much more massive than the other and the smaller one orbits the bigger one.
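The figure of about 1.5 million km quoted for L1 and L2 can be reproduced with a well-known approximation: both points lie at a distance of roughly R·(m/(3M))^(1/3) from the Earth, where R is the Sun–Earth distance, m is the mass of the Earth and M is the mass of the Sun. A minimal Python sketch:

# Approximate distance from the Earth to L1 and L2.
M_sun = 1.989e30    # mass of the Sun, kg
m_earth = 5.972e24  # mass of the Earth, kg
R = 149.6e6         # average Sun-Earth distance, km

r = R * (m_earth / (3 * M_sun)) ** (1 / 3)
print(f"L1 and L2 are roughly {r:,.0f} km from the Earth")   # about 1.5 million km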
VBT7. The Barycenter
People usually believe that the Moon revolves around the Earth. But this is not exactly so. The truth is that the gravitational pull of the Moon makes the Earth move too. In fact, both the Earth and the Moon revolve around the mutual center of mass of these two celestial bodies, which is called the barycenter.
The barycenter of these two celestial bodies is located around 1,700 km below the surface of the Earth; this depth is approximately a quarter of the radius of the Earth. For this reason, despite the fact that the Moon is a big celestial body with a mass of 1/81 of the mass of the Earth, the Earth and the Moon are called a planet and a satellite rather than a system of two planets. In a system of two planets, the barycenter is located outside both of them, on the line connecting their centers, between them.
The barycenter of the Sun and the most massive planet of our Solar System, Jupiter, is slightly above the surface of the Sun – at 1.068 solar radii. The barycenters of the Sun and each of the other planets are below the surface of the Sun.
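These figures follow from a one-line formula: the barycenter lies at the distance d·m2/(m1+m2) from the center of the heavier body, where d is the distance between the two bodies. A minimal Python sketch with rounded textbook values:

def barycenter_offset(m1, m2, d):
    """Distance from the center of body 1 (the heavier one) to the barycenter."""
    return d * m2 / (m1 + m2)

# Earth-Moon (masses in kg, distance in km)
r_em = barycenter_offset(5.972e24, 7.342e22, 384_400)
print(f"Earth-Moon barycenter: {r_em:,.0f} km from the Earth's center")
# ~4,700 km from the center, i.e. ~1,700 km below the surface (Earth's radius is ~6,371 km)

# Sun-Jupiter
r_sj = barycenter_offset(1.989e30, 1.898e27, 778_500_000)
print(f"Sun-Jupiter barycenter: {r_sj:,.0f} km from the Sun's center")
# ~742,000 km, slightly above the solar radius of ~696,000 km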
VBT6. The Sea Level
We use the sea level as the reference of "zero height" in measurements of geographical heights and depths. But what does it actually mean?
The sea level is different at different times and places. It is influenced by local variations of gravity, tides, winds, ocean currents and warmer or colder summer weather in the polar areas.
Therefore, for scientific and practical purposes we cannot use just the current sea level. Scientists calculate the average height of the ocean surface. For this, they measure the sea level at average high tides and average low tides at several points of the ocean and average these data over all the measurements. The result is called the MSL – the mean sea level.
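As a toy illustration of this averaging (the numbers below are made up, not real tide-gauge data), a Python sketch might look like this:

# Hypothetical high-tide/low-tide readings, in meters relative to a fixed mark on land.
readings = {
    "point A": [(2.10, -1.90), (2.30, -2.00)],   # (high tide, low tide) pairs
    "point B": [(1.40, -1.20), (1.50, -1.30)],
}
levels = [level for pairs in readings.values() for pair in pairs for level in pair]
msl = sum(levels) / len(levels)
print(f"mean sea level: {msl:+.3f} m relative to the reference mark")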
However, even the averaged sea level is not stable. It has been changing dramatically over geological epochs. Global climate change also influences it, even within a single year, and it shows trends of rising or falling over decades and centuries. In the period of regular observations from the 1880s to the year 2000 the average sea level rose by 20 cm, while over short periods of 3 to 5 years it could decrease or increase by up to 4 centimeters. Besides, as the sea level is measured against a certain reference point on land, a rise or fall of the sea level may also be related to a change in the height of this reference point itself.
Therefore many countries have standards of MSL – the MSL averaged over several years in a certain period and then taken as the standard zero height for a much longer period. For instance, in the UK these standards are called Ordnance Datum (OD). The current OD was introduced in 1921 and is the MSL measured at Newlyn in Cornwall and averaged over the period from 1915 to 1921. For this reason, it is called ODN (Ordnance Datum Newlyn). This is the zero height for all British geographical maps. Prior to 1921, the Ordnance Datum based on the level at the Victoria Dock, Liverpool, was used and was known as ODL.
Standard MSLs of different countries may differ by more than two meters. For most purposes this difference is not very important, but in certain engineering projects, especially international ones, it may be crucial. In such cases, an agreement on which standard MSL should be used is necessary. Besides, in underground construction projects it is not convenient to use negative numbers for all heights or depths. For instance, in the construction of the Channel Tunnel, a special ordnance datum was used on both the British and the French sides of the Channel. It is named the Tunnel Datum and is based on ODN, only its zero is 200 meters lower.
VBT5. Narcissistic numbers
Take this four-digit number: 8208
Raise each digit in it to the power of four (that is, the power of the number of digits in the number):
8 to the 4 → 8⁴ = 4096
2 to the 4 → 2⁴ = 16
0 to the 4 → 0⁴ = 0
8 to the 4 → 8⁴ = 4096
Now add up all these, and you'll get
8208
This number is one of the so-called narcissistic numbers. A number N with k digits is narcissistic if the sum of its digits, each raised to the power of k, equals the number N itself. These numbers are in love with themselves and reproduce themselves.
153 is also a narcissistic number. As this is a three-digit number, we need to raise each digit to the power of three:
1³ = 1
5³ = 125
3³ = 27
The sum is
153
Numbers 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 are also narcissistic, because any of them raised to the power of 1 is itself, and there's nothing to add up, because the number is made of one digit.
There are only 88 such numbers in the decimal system. The largest of them contains 39 digits. These numbers can also be defined in numeral systems other than the decimal system. It has been proven that the number of such numbers is finite in any numeral system. There are only two such numbers in the binary system – 0 and 1.
Narcissistic numbers are also known as plus perfect numbers, pluperfect digital invariants and Armstrong numbers.
This knowledge has no practical use. Mathematical phenomena and properties of this kind are studied in recreational mathematics, a branch of mathematics dealing with things that are funny or entertaining, or can be used in education to help students get interested in mathematics, but are otherwise useless.
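If you want to play with these numbers yourself, here is a small Python sketch that checks the property and finds all narcissistic numbers below one million:

def is_narcissistic(n: int, base: int = 10) -> bool:
    # Collect the digits of n in the given base.
    digits = []
    m = n
    while True:
        digits.append(m % base)
        m //= base
        if m == 0:
            break
    k = len(digits)
    # Sum of the digits, each raised to the power of the number of digits.
    return sum(d ** k for d in digits) == n

print(is_narcissistic(8208))   # True
print(is_narcissistic(153))    # True
print([n for n in range(1_000_000) if is_narcissistic(n)])
# 0 ... 9, 153, 370, 371, 407, 1634, 8208, 9474, 54748, 92727, 93084, 548834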
VBT4. Transistors and CMOS
Almost every modern microprocessor or microcontroller is based on the CMOS technology. CMOS stands for "complementary metal-oxide-semiconductor". This means that the basic element of these circuits is a pair of MOSFET transistors. A FET is a transistor based on the field effect. It is a piece of semiconductor with either p- or n-conductivity (hole or electron conductivity) with four terminals – source, drain, gate and body. Typically the body terminal is either not used or connected to the source. In a MOSFET (metal-oxide-semiconductor field-effect transistor) the gate is isolated from the body of the transistor (the semiconductor) with a thin layer of oxide.
Transistors in digital devices operate in the so-called switch mode. The transistor is either off (maximum resistance, practically no current flows through it) or on (minimum resistance, it conducts current). In other words, it is either in the cutoff state or in the fully conducting state, and the periods of transition between these two states are very short. When the transistor is on and current flows through it, it dissipates electric energy. When it is off, it does not (or almost does not) consume energy.
In a complementary pair, one of the transistors is a p-MOSFET and the other is an n-MOSFET. In any steady state, one of them is on and the other is off, therefore the energy consumption of the pair is minimal. The pair consumes energy only during the transition, when the transistor that was on is turning off and the transistor that was off is turning on. This is a very energy-efficient combination. However, in modern microprocessors these pairs switch their state several times in a nanosecond, and there are hundreds of millions, or even billions, of such pairs in a typical microprocessor, so the total energy consumption is still significant. Engineers invest a lot of effort in reducing the energy consumption in order to prolong the battery life of mobile devices.
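The energy consumption of such circuits is usually estimated with the classic formula for dynamic CMOS power, P ≈ a·C·V²·f per gate, where a is the fraction of clock cycles in which the gate actually switches. The Python sketch below uses purely illustrative numbers; none of them describes any real chip.

# Rough, hypothetical estimate of dynamic CMOS power: P = a * C * V^2 * f per gate.
a = 0.1          # activity factor (assumed)
C = 1e-15        # switched capacitance per gate, farads (assumed, 1 fF)
V = 1.0          # supply voltage, volts (assumed)
f = 3e9          # clock frequency, hertz (assumed, 3 GHz)
gates = 100e6    # number of gates (assumed)

power = a * C * V**2 * f * gates
print(f"estimated dynamic power: {power:.1f} W")   # about 30 W for these assumptions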
VBT3. The Tallest Mountains in the Solar System
The tallest planetary mountain of our Solar System is a volcano on Mars. Scientists named it Olympus Mons. Its height is 21.9 km above the average level of the surrounding territories, it is about 600 km wide, and it covers an area of over 295 thousand square kilometers (approximately the size of Arizona, or 21% larger than the United Kingdom). The average slope of the sides of Olympus Mons is about 5 degrees – if you got there, you would probably not notice that you were on the side of a very big mountain. At the top of Olympus Mons the Martian atmosphere is so thin that stars are always visible there, regardless of whether it is Martian night or Martian day, just like in outer space.
Volcanoes of such height are not possible on the Earth because of plate tectonics. Plates of the crust move and eventually break the channel between a volcano and the mantle, through which lava comes to the surface, and the volcano becomes extinct.
On Mars there is no plate tectonics, and volcanoes can remain active for hundreds of millions of years, gradually getting bigger and taller with every eruption.
For forty years Olympus Mons remained the tallest known mountain in the Solar System. In 2011 an even taller mountain was discovered, but it is not on a planet: it is on the asteroid Vesta, where it forms the central peak of a huge impact crater and rises about 22 km. Therefore, Olympus Mons is now the second tallest mountain in the Solar System, but it remains the tallest planetary mountain.
Other tall mountains on Mars are Ascraeus Mons (14.9 km high) and Elysium Mons (12.6 km high). Both are volcanoes. And there is Arsia Mons, a volcano whose central part collapsed and formed a caldera. It looks like a very wide crater, though it is not a crater. The tallest point of the rim of this caldera is 11.7 km high.
VBT2. Non-Spherical Earth
Many people know that our planet, the Earth, is not spherical. It has mountains and canyons; it has Mount Everest with its peak 8,848 meters above sea level and the Mariana Trench with a maximum known depth of 10,911 meters below sea level. In comparison with the average radius of the Earth (6,371,000 meters), those ups and downs are as small as invisible defects on a polished snooker ball.
The points closest to the center of the Earth are not in the deepest mines and not even at the bottom of the Mariana Trench. They are the South and the North Poles. The average equatorial radius of the Earth is 6,378.1 km, while the polar radius is 6,356.8 km, over 21 km shorter.
As a result, the intensity of the gravitational field differs from point to point on the surface of the Earth, as well as in near-Earth space. The difference between the equator and the poles is around 0.3%. Scientists and engineers designing orbits for artificial satellites have to take this and other irregularities of the gravitational field of our planet into account. However, this is only a small part of what they have to consider in order to choose an orbit.
VBT1. Electronvolt: a Unit of Mass and Energy
2+2+4=900,
2+4+4=900.
Sick arithmetic, you may think. But this is quite normal in Particle Physics. However, something is missing in those equalities.
In that weird science there is one unit for measuring both mass and energy – the electron volt, also spelled electronvolt. Its symbol is eV. It is not an SI unit; it is defined as the energy gained (or lost) by an electron passing across an electric potential difference of one volt. Larger units are MeV (megaelectronvolt), GeV (gigaelectronvolt), TeV (teraelectronvolt) and so on.
The possibility of using the same unit for both physical quantities arises from Einstein's famous equation E=mc²: the energy of an object at rest equals the mass of the object times the speed of light squared. If we agree that the speed of light in vacuum equals 1, and all other speeds are less than 1, then, according to that formula, energy is equal to mass. Particle physicists see all the time that mass and energy are just two manifestations of one more profound thing – matter. The electron volt is a tiny unit: approximately 1.6·10⁻¹⁹ joule (one point six times ten to the minus nineteen joules), or 1.78·10⁻³⁶ kg (one point seven eight times ten to the minus thirty-six kg).
The proton is made of 3 quarks: two up quarks and one down quark. The mass of the up quark is 2.3 MeV, and the mass of the down quark is 4.8 MeV. Two up quarks and one down quark would make 9.4 MeV. However, the mass of the proton is 938.3 MeV, almost 100 times as much as the total mass of the three quarks making it. The remaining 99% of its mass is actually the energy of the motion of the quarks and of the field that binds them together inside the proton.
Similarly, the neutron is made of one up quark and two down quarks. The total mass of three quarks is 11.9 MeV, while the mass of the neutron is 939.6 MeV.
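The "sick arithmetic" from the beginning of this text can now be written out with the real rounded numbers. A small Python sketch:

# Rounded quark and nucleon masses in MeV, as quoted above.
m_up, m_down = 2.3, 4.8
m_proton, m_neutron = 938.3, 939.6

quarks_in_proton = 2 * m_up + 1 * m_down    # 9.4 MeV
quarks_in_neutron = 1 * m_up + 2 * m_down   # 11.9 MeV

print(f"proton:  {quarks_in_proton:.1f} MeV of quark masses vs {m_proton} MeV total "
      f"({100 * quarks_in_proton / m_proton:.0f}% of the mass is in the quarks themselves)")
print(f"neutron: {quarks_in_neutron:.1f} MeV of quark masses vs {m_neutron} MeV total")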
Up quarks are positively charged, and the charge of each is 2/3 of the absolute value of that of an electron, while down quarks are negatively charged, and the charge of each is 1/3 of that of an electron. The resultant charge of the proton is positive and equal to the absolute value of that of the electron, and the resultant charge of the neutron is zero.