IoT

Geometry of wireless and mobile networks


The classic model in the geometry of wireless networks consists in assuming that the locations of the communicating nodes form a uniform Poisson point process in the plane with some density lambda > 0. The animated figure on the left shows an example of a Poisson map with increasing density. But the Poisson process may not be appropriate to describe natural or artificial spatial distributions of nodes. The second animated figure shows a fractal repartition of nodes (the fractal dimension is dF = 2 log(1/2)/log((1-0.05)/2) = 1.862... < 2). The fractal repartition is particularly appropriate to describe human concentration in fractal architectures such as flats, buildings, blocks, cities, regions, etc. When dF tends to 2, the point process tends to the uniform Poisson process.

The figure on the right shows a "hyperfractal" repartition, which models the presence of users on streets and expresses the extreme variation of densities across the streets of a town. The term hyperfractal was coined by Dalia Popescu and myself, and most of the results presented around this concept were discovered by Dalia Popescu, Bernard Mans and me. Hyperfractal means that the repartition has an apparent dimension greater than the dimension of the embedding plane (namely 2). In the example below it is dF = log(4/(1-0.15))/log(2) = 2.244... > 2. As with the classic fractal set, when dF tends to 2 the hyperfractal distribution tends to the uniform Poisson. But one must be aware that hyperfractals do not exist in the form of sets, since sets can only have a fractal dimension smaller than the dimension of the embedding vector space. In our model, however, the hyperfractal shot is not a set but a measure on the plane. A general property of measures is that they are indifferent to the dimension of the vector space (a space of dimension 1,000 can be mapped into a space of dimension 2 and still be measurable), and thus a measure on a plane can show a self-similarity corresponding to a vector space of larger dimension.

In the figures the processes are represented in the unit square, but they can be extended over the infinite plane. The uniform Poisson point process extended toward infinity keeps the same uniform density, while the extension to infinity of the fractal map leads to a limit density of zero, even if the density on the unit square is nonzero. This fact has long been known and was popularized under the name of the "dark night paradox". The hyperfractal map can also be extended to infinity but, contrary to the fractal map, its density will tend to infinity.
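As a concrete illustration, here is a minimal Python sketch of one way to sample a hyperfractal map of the kind shown in the figure. The construction (recursive squares, with probability p of landing on the central "street" cross at each level, otherwise recursing into one of the four sub-squares) is my reading of the model, and the function name and parameters are illustrative; each sub-square then carries mass (1-p)/4 at scale 1/2, which reproduces the apparent dimension log(4/(1-p))/log(2) quoted above.

```python
import random

def hyperfractal_point(p=0.15, depth=30, rng=None):
    """Sample one point of a hyperfractal measure on the unit square.
    At each level, with probability p the point lands on the central
    cross (the 'streets') of the current square; otherwise the
    construction recurses into one of the four sub-squares."""
    rng = rng or random.Random()
    x0 = y0 = 0.0
    size = 1.0
    for _ in range(depth):
        half = size / 2
        if rng.random() < p:
            u = rng.random() * size
            if rng.random() < 0.5:
                return (x0 + u, y0 + half)    # on the horizontal street
            return (x0 + half, y0 + u)        # on the vertical street
        # Recurse into one of the four sub-squares, chosen uniformly.
        if rng.random() < 0.5:
            x0 += half
        if rng.random() < 0.5:
            y0 += half
        size = half
    return (x0 + rng.random() * size, y0 + rng.random() * size)

# Draw a cloud of points; plotting them reveals the street pattern.
points = [hyperfractal_point(rng=random.Random(i)) for i in range(2000)]
```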

These new point process models have properties that go beyond pure curiosity, in particular for MIMO networks and MANET networks. MIMO (for Multiple Input Multiple Output) technologies are at the foundation of modern wireless networks: they consist in receiving simultaneous data flows from multiple sources on a single antenna, and the challenge is to extract as much information as possible from the cumulated signals. In the figure below, z1, z2, z3, ..., zi, ... indicate the positions of the simultaneous emitters on the plane (assumed infinite), transmitting toward a single antenna at some arbitrary position. The quantity s(zi) is the intensity of the signal received by the antenna from the i-th transmitter.


If we assume the flows are independent, i.e. each flow acts as noise for each of the other flows, Shannon's theorem tells us that the average cumulated quantity of information, or capacity C, follows the rather complicated formula below. Each term is the quantity of information received from transmitter i. The signal-to-noise ratio of this transmission is simply the ratio of s(zi) to the sum of all the other signals (to simplify, we ignore other sources of noise such as thermal noise).
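Written out from the description above (each flow treating all the others as noise), the formula reads:

```latex
\[
  C \;=\; \sum_{i} \log_2\!\left(1 + \frac{s(z_i)}{\sum_{j \neq i} s(z_j)}\right),
\]
```

each term being the information rate contributed by transmitter i.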

If the map is considered infinite and the density of transmitters is nonzero, the sum of the signals will be infinite. The condition for these sums to converge (so as not to get infinite noise, and thus zero capacity) depends on many factors, and in particular on the propagation conditions. We suppose that the signal experiences an attenuation factor a and a random fading. In the uniform Poisson model the condition of convergence of all these sums is a > 2. Surprisingly, the average value of the capacity is independent of the transmitter density and has the value displayed below.

This formula is interesting because it is short, and it has an interesting generalisation. If instead of a network in the plane we consider a network in 3D (adding altitude), as illustrated in the figure below, or embedded in any vector space of dimension D, we get the general formula displayed on the right. We see that only two parameters are at play: the geometry, via the dimension D, and the physics, via the attenuation factor. Surprisingly, the fading parameters play no role. The factor log 2 is just a consequence of expressing the capacity in bits.
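The density-independence can be checked numerically. Below is a minimal Monte Carlo sketch (the function name and parameter choices are mine): it drops a Poisson number of transmitters in a D-dimensional ball around the antenna, applies a power-law attenuation r^(-a) with exponential fading, and sums the per-flow Shannon rates; the estimate should stay essentially flat when the density changes.

```python
import math
import numpy as np

def mean_capacity(lam, D=2, a=4.0, radius=20.0, trials=200, seed=0):
    """Monte Carlo estimate of the cumulated capacity at an antenna at
    the origin of a Poisson map of density `lam` in dimension D, with
    power-law attenuation r**(-a) and i.i.d. exponential fading."""
    rng = np.random.default_rng(seed)
    ball_volume = math.pi ** (D / 2) / math.gamma(D / 2 + 1) * radius ** D
    caps = []
    for _ in range(trials):
        n = rng.poisson(lam * ball_volume)
        if n < 2:
            continue
        # Distance of a uniform point in a D-ball: radius * U**(1/D).
        r = radius * rng.random(n) ** (1.0 / D)
        s = rng.exponential(size=n) * r ** (-a)   # received signal powers
        interference = s.sum() - s                # the other flows act as noise
        caps.append(np.log2(1.0 + s / np.maximum(interference, 1e-300)).sum())
    return float(np.mean(caps))

# The estimate barely moves as the density varies, as claimed above.
for lam in (0.5, 2.0, 8.0):
    print(lam, round(mean_capacity(lam), 2))
```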

For the MIMO capacity in the infinite fractal map, the result is no longer invariant with the density (taken in a square set containing the receiving antenna). We assume that the antenna is located at one of the points of the fractal map. The capacity then has the expression given below, where p(.) is a periodic function, of mean zero, of the logarithm of the density. In practice the periodic term has a very small amplitude and does not show up even in extensive simulations. Thus in practice the capacity has the same value as in the formula above, but with D replaced by dF. The expression is interesting from a practical point of view because it means that the capacity can be much higher in a fractal environment than in a uniform Poisson map. These astonishing identities, coming from a rather complicated formula, are in fact simply derived from the mean energy field theorem displayed on the right below, where S(.) is the energy received at the antenna, considered as a random function of the density.

Regarding the capacity in the hyperfractal map, there is not much distinction from the uniform Poisson case, except when we introduce the canyon effect. With millimeter waves, the radio signal has a low power of penetration through concrete walls. Therefore mobile users relying on this technology in the streets will see their signal transversally confined within the streets and blocked by the buildings bordering them. We call this the canyon effect. With this effect the geometry of the network is mostly one-dimensional, since a street is roughly one-dimensional at the scale of the city, and the MIMO capacity takes the value given by the formula above with D=1.

More interesting is the routing capacity when the communication is multi-hop instead of relying on a single central antenna (see the left figure below). The transport capacity is the sum of the information rates generated at the sources that can be delivered between each source-destination pair. This quantity does not include the packet retransmissions by the relays, because they do not add information. The famous paper of Gupta and Kumar proved the unexpected result that the capacity grows as the square root of the density when the latter increases. Thus the capacity increases even when the source nodes keep the same nominal modulation/information rate. When the Poisson map is embedded in a vector space of dimension D, the transport capacity increases as the density to the power 1-1/D, which grows faster when D increases. But it is rather difficult to imagine a practical map of dimension larger than 3 (thus the density to the power 2/3). In any case the capacity cannot increase faster than linearly, because it is bounded by the sum of all capacities when all nodes emit at nominal rate.
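In symbols, omitting constants (and the logarithmic corrections that appear in some variants of the result), the transport capacity T scales with the density as:

```latex
\[
  T(\lambda) \;\propto\; \lambda^{\,1-\frac{1}{D}},
  \qquad\text{e.g. } T(\lambda) \propto \sqrt{\lambda}\ \text{ for } D=2,
  \qquad T(\lambda) \propto \lambda^{2/3}\ \text{ for } D=3.
\]
```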

In the hyperfractal maps the canyon effect makes it difficult for a line of retransmission to escape a street, unless a mobile user or car stands at the intersection of two streets, but this would incur significant delays (several seconds, for example). In order to accelerate the transmission we assume that some intersections are equipped with ground relay stations, for example mounted on traffic lights. Since the relay density naturally increases with the density of the streets crossing at an intersection, the relay distribution follows a hyperfractal distribution of dimension dr, which is an adaptable parameter. The figure below (left) shows a map with a hyperfractal distribution of mobile users of dimension dF = 2.244... and a relay distribution of dimension dr = 3; the relays are represented by circle symbols. The formula below right gives the transport capacity of the hyperfractal map as a function of the mobile and relay hyperfractal dimensions. In the limit when dr tends to 2 (essentially a uniform Poisson map), the transport capacity is equivalent to that of a uniform Poisson map of dimension D = 1 + dF. In other words, with respect to this parameter, a city is at least of dimension 3!
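As a consistency check, plugging D = 1 + dF into the Poisson scaling above gives the limiting behaviour:

```latex
\[
  d_r \to 2:\qquad T(\lambda) \;\propto\; \lambda^{\,1-\frac{1}{1+d_F}},
\]
```

which, with dF > 2, indeed grows faster than the planar square-root law.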

In the figure below we show a map of Minneapolis displaying the main street densities. After calculation, the hyperfractal dimension is 2.9. To be hyperfractal, the densities do not need to be powers of 2 and the map does not need to follow a quaternary street pattern. Cities with more convoluted street networks may also have a hyperfractal dimension; in fact, most cities do.

Blockchain with green mining

The rise of IoT will lead to the emergence of digital twins of complex systems. The digital twin can be the twin of a building, with a virtual map indicating the community maintenance and usage, the meeting room occupancy, the parking lots, etc. This can be extended to a city, with all the information about work areas, air quality, road conditions and traffic, etc. This can be imagined at the level of a region or a country by including weather conditions, pollution levels, and the state of plantations and forests. The next step would be to imagine a digital twin of the planet; some imagine a full virtual copy. Already Facebook is paving the way to the so-called "metaverse". These digital copies will be highly distributed complex systems whose maintenance would require a strong and reactive blockchain for the three following ledgers:

  1. the data ledger, which will update all the information or metadata related to the twinned object;

  2. the processing ledger, which will update all the processing programs (storage, retrieval, machine learning and AI suites);

  3. the network ledger, which will update the protocols and algorithms needed to run and keep usable the communication network across the physical area from which the world copy is fed.


The challenge here is that the number of contributors to the ledgers is potentially huge. Considering the data ledger alone, it is expected to receive flows of data coming from several hundred million sensors. For the forests alone one would expect to monitor at least 3,000 billion trees. The updates are expected to occur at very high frequency. Therefore a blockchain of the kind that runs Bitcoin would simply be unsustainable. For a population of some thousands of miners, the quantity of energy needed to moderate and secure Bitcoin is of the order of the power consumption of a country like Greece or Ireland, because of the burden of the proof of work. The moderation of Bitcoin consists in the requirement of finding a number (the so-called "nonce") which makes the hash value of a block start with D consecutive zeroes. The aim is to moderate the flow of mining to one block every 10 minutes. The difficulty D is an adaptable parameter, adjusted every two weeks (every 2,016 blocks); its current value is 56. The average number of hash values to be generated before getting an appropriate value is 2^D, which means around 80 quadrillion CPU cycles. It implies that the cost of mining a block is this number multiplied by the number of competing miners, each running expensive CPUs at full speed. Some miners manage huge CPU farms in places where energy is cheaper. After data centers, Bitcoin mining is the second largest source of energy consumption in IT, well ahead of the deep neural processing power used for AI training. The figure below shows the evolution of the difficulty (in fact the value 2^D in CPU cycles) together with the current Bitcoin price.
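For illustration, here is a toy proof-of-work loop in Python (a minimal sketch, not Bitcoin's actual block format): it tries nonces until the SHA-256 hash of the block starts with a given number of zero bits, which takes about 2^difficulty attempts on average.

```python
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(block: bytes, difficulty: int) -> int:
    """Search for a nonce that makes the block hash start with
    `difficulty` zero bits: about 2**difficulty trials on average."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

# Toy run with a small difficulty (Bitcoin's is currently around 56).
print(mine(os.urandom(32), difficulty=16))
```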


The alternative we propose is based on green leader election. The green leader election is like a reversed leader election. The goal of a leader election is to randomly select one winner among a large, unknown population of competitors (say n competitors) over a distributed network. In our blockchain the competitors are all the blocks in the waiting queue, and the winner is the block which will actually be the next to be mined. In general the protocol consists in fixing a probability p and letting the n competitors collide while trying to access the blockchain. After a collision, the competitors randomly select among themselves, each surviving with probability p, those who will compete in the next call. The process proceeds in calls, with a random selection of probability p at each call, until a reasonably small number of blocks can access the blockchain and be mined. The average number of calls k is around log(n)/log(1/p), and the average number of block transmissions per mined block is equivalent to n/(1-p). For p = 1/2 and n = 10^6, this would be unsustainable. Furthermore, the pressure on the network would make the latter quickly collapse.
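A minimal simulation of this classic scheme (the function name is mine) makes the cost visible: the number of calls stays logarithmic, but the transmissions pile up to about n/(1-p).

```python
import random

def classic_election(n: int, p: float = 0.5, seed: int = 0):
    """Classic collision-based leader election: after each colliding
    call, every surviving competitor stays in with probability p.
    Returns (number of calls, total block transmissions)."""
    rng = random.Random(seed)
    calls = transmissions = 0
    survivors = n
    while survivors > 1:
        calls += 1
        transmissions += survivors       # every survivor retransmits
        survivors = sum(rng.random() < p for _ in range(survivors))
    return calls, transmissions

# For n = 10**6 and p = 1/2: about log2(n) ~ 20 calls, but roughly
# n/(1-p) = 2 * 10**6 transmissions, which is the unsustainable part.
print(classic_election(10**6))
```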

The reverse leader election assumes an upper bound N on the number of competitors, say N = 2^32 or 2^64 (in the latter case it would be close to a mole of competitors, fitting the needs of a metaverse the size of a galaxy), and fixes k, say 16 or 32, which will be the upper bound on the number of calls. By the identity p^k = 1/N, we determine p = N^(-1/k). The protocol is as follows. For each block, the competitor determines whether the block would survive k calls, k-1 calls, etc., down to the initial call of the classic leader election, that is, with the sequence of probabilities p^k, p^(k-1), ..., 1. But instead of proceeding from the first call to the k-th call, the green election starts from the k-th call and proceeds in reverse order until blocks are actually received in the blockchain. The average number of eventually mined blocks is upper-bounded by 1/p (namely 4 or 16, depending on the protocol parameters). Thus the cost in transmissions is small, since a reverse election resolves into a sequence of empty calls terminating with a call of small multiplicity.
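The parameter arithmetic behind "4 or 16" works out as follows:

```latex
\[
  p = N^{-1/k}:\qquad
  N = 2^{32},\; k = 16 \;\Rightarrow\; p = 2^{-2} = \tfrac14,\quad 1/p = 4;
  \qquad
  N = 2^{64},\; k = 16 \;\Rightarrow\; p = 2^{-4} = \tfrac1{16},\quad 1/p = 16.
\]
```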

In fact the green mining protocol does not rely on random trials, since these would not be verifiable. Instead, the random selection is made via the hash value of the block: if the hash value is smaller than 2^256 p^l for l <= k, then the block is selectable at call l. It is not possible to tamper with the hash value since there is no nonce field; the only way to change the hash value of a block is to change the update data inside it. The details of the implementation give the option of explicit calls via empty blocks, or implicit calls by time synchronisation. The figure below shows the empty-block option.
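Here is a sketch of the hash-based selection rule in Python (the constants and function name are illustrative): the block's SHA-256 value, read as a 256-bit integer, determines the highest call at which the block is selectable, with no nonce to tamper with.

```python
import hashlib

K = 16                 # upper bound on the number of calls
N = 2 ** 32            # upper bound on the number of competitors
P = N ** (-1.0 / K)    # survival probability per call, p = N^(-1/k)

def selection_level(block: bytes, k: int = K, p: float = P) -> int:
    """Highest call index l (0..k) at which the block is selectable,
    i.e. the largest l with hash(block) < 2^256 * p^l. The hash plays
    the role of the random trial, so the outcome is verifiable by all."""
    h = int.from_bytes(hashlib.sha256(block).digest(), "big")
    level = 0
    while level < k and h < 2 ** 256 * p ** (level + 1):
        level += 1
    return level

# The green election scans calls from k down to 0 and stops at the
# first call where some waiting block is selectable.
blocks = [f"update-{i}".encode() for i in range(1000)]
print(max(selection_level(b) for b in blocks))
```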



The figure below shows the theoretical evaluation of the average number of mined blocks per election as a function of n. We assume N = 2^32 and k = 16. Green is for explicit calls, red for implicit calls.

The figure above gives the statistics of the simulations of the number of mined blocks versus the number of competitors. For each value of n there are 1,000 simulations. Black points were observed more than 64 times, red between 16 and 64 times, blue between 4 and 16 times, green at least once.