Synchronization is not a requirement introduced by the newest generation of mobile networks; earlier generations needed it too. For example, time division-synchronous code division multiple access (TD-SCDMA), a technology more than a decade old, required frame synchronization among base stations (BSs) as a fundamental feature to minimize interference and thereby optimize overall traffic capacity. Radio frame synchronization ensured that the uplink and downlink transmission directions were positioned, at least for adjacent cells, in the same time intervals, preventing interference between adjacent downlink and/or uplink slots.
The frequency and time synchronization requirements of the 5G or Next Generation (NG) RAN are critical, and extreme accuracy is needed. In this blog we will discuss why it is so important to 'engineer' frequency and time synchronization within the 5G network.
Types of duplexing
Two duplexing methods, or spectrum usage techniques, dominate mobile and fixed wireless broadband: FDD, which stands for Frequency Division Duplex, and TDD, which stands for Time Division Duplex.
FDD
FDD uses a paired spectrum with different frequencies for transmit (uplink) and receive (downlink), allowing both to operate simultaneously. A sufficient guard band separates the uplink and downlink channels so they do not interfere with one another, enabling clear, uninterrupted data transmission.
TDD
TDD uses a single frequency band for both transmitting and receiving data. A TDD system shares the same band but assigns alternating time slots to transmit and receive operations. Time slots can be dynamically allocated and variable in length based on network needs. A guard period is needed to ensure that UL and DL transmissions do not overlap or interfere with each other.
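To make the guard period concrete, here is a minimal sketch of how its lower bound can be estimated. It assumes a simplified model in which the guard period must at least cover the round-trip propagation delay to the cell edge plus the transceiver switching time; real deployments budget additional margin.

```python
# Simplified estimate of the minimum TDD guard period.
# Assumption: the guard period must cover the round-trip propagation
# delay to the cell edge plus the hardware switching time.
C = 299_792_458  # speed of light in m/s

def min_guard_period_us(cell_radius_m: float, switch_time_us: float = 0.0) -> float:
    """Return the minimum guard period in microseconds."""
    round_trip_s = 2 * cell_radius_m / C
    return round_trip_s * 1e6 + switch_time_us

# A 10 km macro cell needs roughly 67 us of guard time
# before the direction of transmission can be reversed.
print(round(min_guard_period_us(10_000), 1))
```

The key point is that the guard period scales with cell size, which is one reason TDD slot structures are engineered per deployment rather than fixed.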
Frequency and Time/Phase Synchronization for 5G RAN
What would happen if the 5G RAN were not synchronized in frequency or in phase/time? The answer is simple: the operator's network would not work properly, and might not work at all. Frequency and phase/time synchronization is needed to ensure:
Minimized guard frequencies, which maximizes spectral efficiency and ultimately the data rates on the air interface.
An optimized user experience, including smooth network handovers; dropped calls fall significantly when synchronization is good.
Support for bandwidth-boosting technologies, for example Carrier Aggregation and MIMO/CoMP.
Support for user applications that rely on highly accurate timing, such as location-based services.
The specific requirements for RAN synchronization and timing depend entirely on the radio technology deployed and the spectrum used. If Frequency Division Duplex (FDD) spectrum is used, relatively loose frequency synchronization (50 parts per billion, or 50 ppb) is sufficient. And because the frequency requirement is not stringent, FDD-based networks can easily survive lengthy synchronization outages. Time Division Duplex (TDD) spectrum, on the other hand, such as LTE-TDD or LTE-A, requires much tighter time and phase synchronization to guard against interference between the uplink and downlink channels.
In addition to the spectrum requirements, different RAN methods and technologies rely on time synchronization of varying accuracy, as listed in Table 1. Two terms are commonly used in synchronization: absolute (network-wide, measured with respect to UTC) and relative (where a common clock is shared by a cluster of radios). When the time synchronization network is designed, a mix of absolute and relative time/phase accuracy of ~1.5 μs or less is engineered to support network services and the different RAN features.
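The ~1.5 μs network-wide budget is normally decomposed into per-element contributions and checked against the total. The sketch below illustrates that bookkeeping; the individual numbers are hypothetical examples chosen for illustration, not values taken from any standard.

```python
# Illustrative decomposition of a +/-1.5 us absolute time error budget.
# All per-element values below are hypothetical examples.
budget_ns = 1500

contributions_ns = {
    "PRTC (Class A)": 100,                 # reference clock accuracy to UTC
    "PTP network (per-hop noise x hops)": 10 * 8,  # e.g. 8 hops at 10 ns each
    "DU boundary clock": 10,
    "RU internal error": 35,
    "link asymmetry allowance": 200,
}

total_ns = sum(contributions_ns.values())
print(total_ns, "ns used,", budget_ns - total_ns, "ns margin")
```

If the sum exceeds the budget, something in the chain (reference clock class, hop count, or asymmetry compensation) has to improve; this is the essence of 'engineering' the synchronization network.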
Table 1
Now that the time error budget is defined, at least at a high level, there needs to be a mechanism to carry 'timing' across the IP-based 5G network. The most popular method is the IEEE 1588-2008 Precision Time Protocol (PTP v2). IEEE 1588 Precision Time Protocol (PTP) is a packet-based, two-way communications protocol specifically designed to synchronize distributed clocks to sub-microsecond resolution, typically over an Ethernet or IP-based network (source: Microchip FTS).
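PTP's two-way exchange produces four timestamps: t1 (Sync sent by the master), t2 (Sync received by the slave), t3 (Delay_Req sent by the slave), and t4 (Delay_Req received by the master). From these, the slave computes its clock offset and the mean path delay, assuming a symmetric path. A minimal sketch of that computation:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Compute the slave clock offset and mean path delay (same units
    as the timestamps) from the four PTP timestamps, assuming a
    symmetric forward/reverse path.
    t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example (timestamps in ns): the slave clock is 50 ns ahead
# and the one-way path delay is 500 ns.
off, dly = ptp_offset_and_delay(t1=0, t2=550, t3=1000, t4=1450)
print(off, dly)  # 50.0 500.0
```

Note that any asymmetry between the forward and reverse paths translates directly into offset error, which is why link asymmetry must be controlled or compensated in the timing network.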
ITU-T standard G.8272 defines two classes of Primary Reference Time Clock (PRTC):
PRTC Class A - a PRTC using a single-band GNSS receiver. This type of clock is accurate to within 100 ns of UTC.
PRTC Class B - a PRTC using a multi-band GNSS receiver. This type of clock is accurate to within 40 ns of UTC.
Many argue that the above levels of accuracy can also be achieved easily with GPS/GNSS antennas at the DU or the radio unit itself. There is no doubt about that, but given the critical nature of the services telecommunications operators provide, sole reliance on GPS/GNSS makes for a poorly designed synchronization network. The reason is that the number of points of failure equals the number of deployed GPS/GNSS antenna sites: if thousands of GPS/GNSS antennas are installed in a network, there are thousands of points of failure too, and the failure of several such deployments per month adds significantly to operational costs. In addition, a GPS/GNSS outage means a poor user experience and degraded or no service, because the DU or a small-form-factor RU is not equipped with a good oscillator and therefore has poor holdover. Moreover, the DU is a function and does not necessarily need to host the PTP Grandmaster function; it might instead be collocated with a Clock Function.
Per the O-RAN specification, Fronthaul protocol stack is divided into 3 parts: C/U-Plane, S-Plane and M-Plane. We will focus on the S-Plane or Sync Plane.
Figure 1
Coming back to 'GNSS at every DU' as the solution: the major challenge with GNSS is finding an optimal installation location. Multipath effects are location-dependent and hard to measure. The impacts of a non-ideal antenna location include reflections, multipath, and both intentional and unintentional jamming of the signal.
Figure 2
GNSS has more cons than pros...
GNSS installation and signal reception in dense urban environments involve a lot of CAPEX.
No centralized management.
No protection, which leads to longer restoration times and more outages.
Indoor installation can be problematic and does not guarantee the required accuracy.
Antenna/cable vulnerabilities and a lack of redundancy.
GNSS jamming and spoofing also need to be considered.
Significant OPEX: managing dense clusters can be extremely challenging.
A substantial stock of spares must be maintained.
Let us get into the detailed structure and requirements of 5G synchronization...
From Table 1 above, we noted that the absolute time error must be within +/-1.5 μs (1.5 microseconds), while the relative time error must be within 260 ns (for FR1) or 130 ns (for FR2). What do we actually mean by 'absolute' and 'relative'? Absolute time error means that the RU must stay within +/-1.5 μs of UTC (Coordinated Universal Time); UTC is the reference point for all time measurements. Relative time error is the time offset measured between RUs connected to a common source of time (a Common Clock, or CC for short). In other words, if a DU takes its time reference from a PTP Grandmaster or GNSS and hosts 100 RUs, each RU must stay within 260 ns of every other RU when operating in FR1, and within 130 ns when operating in FR2.
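The relative TE check described above can be sketched as a worst-case pairwise comparison among the RUs under one common clock. The offsets below are hypothetical measurements (in ns, relative to the common clock), used only to illustrate the check:

```python
# Verify the relative time error among RUs sharing a common clock.
# The offsets are hypothetical per-RU measurements in ns.
from itertools import combinations

def max_relative_te(ru_offsets_ns):
    """Largest pairwise time offset between any two RUs."""
    return max(abs(a - b) for a, b in combinations(ru_offsets_ns, 2))

offsets = [12, -40, 55, -70, 30]   # each RU's offset from the common clock
rel_te = max_relative_te(offsets)
print(rel_te, "ns")                # worst pair is 55 vs -70 -> 125 ns
print("FR1 OK:", rel_te <= 260, "FR2 OK:", rel_te <= 130)
```

Note that two RUs can each be well within the absolute budget while still violating the relative limit against each other, which is why the two errors are budgeted separately.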
We can understand 'absolute' and 'relative' time error from Figure 3 below.
Figure 3
To meet the stringent fronthaul timing requirements of 130 ns and 260 ns, it is recommended to place the timing source near the Radio Units. Where exactly to place it is the operator's choice, but the optimum point is the DU itself. By doing so, the relative TE (130 ns or 260 ns) at the RU can be confidently maintained, and the RU will always remain within the 1.5 μs absolute TE limit.
Another architecture might place the timing source at the CU, but extra care is needed when planning this type of timing architecture. In this case the DU must be equipped with a Boundary Clock that takes the PTP input and sends PTP output to the RU, and that boundary clock (BC) must be at least a Class C boundary clock.
Table 2 below lists the different classes of Boundary Clock:
Table 2
The need for a Class C Boundary Clock at the DU arises because the same Boundary Clock not only acts as the Common Clock for all the RUs taking PTP input from it, but also adds to the absolute time error. Moreover, an accurate Boundary Clock helps the network scale in the future.
Summary:
If the fronthaul requirement is 260 ns relative TE and the fronthaul noise is less than 160 ns, a PRTC Class A (+/-100 ns from UTC) should suffice. But if the fronthaul TE requirement is 130 ns and the fronthaul noise exceeds 30 ns, a PRTC Class B (+/-40 ns from UTC) should be used.
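The summary rule above can be written as a small budget check: the chosen PRTC accuracy plus the measured fronthaul noise must fit within the relative TE target. This is a simplification of how the full budget is apportioned, but it captures the selection logic:

```python
# Sketch of the PRTC class selection rule: PRTC accuracy plus
# fronthaul noise must fit within the TE target at the RU.
PRTC_ACCURACY_NS = {"A": 100, "B": 40}  # ITU-T G.8272 class accuracies

def pick_prtc_class(target_te_ns: float, fronthaul_noise_ns: float):
    """Return the least demanding PRTC class that fits the budget, or None."""
    for cls in ("A", "B"):  # try Class A first: single-band, lower cost
        if PRTC_ACCURACY_NS[cls] + fronthaul_noise_ns <= target_te_ns:
            return cls
    return None

print(pick_prtc_class(260, 150))  # A  (100 + 150 <= 260)
print(pick_prtc_class(130, 60))   # B  (40 + 60 <= 130)
```

If even Class B cannot fit the budget, the fronthaul noise itself has to be reduced, for example by shortening the PTP chain or upgrading to better boundary clocks.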
The timing source can be placed at the CU in a well-engineered network (with a Class C BC at every hop). However, it is highly recommended to place the timing source as close as possible to the Radio Unit, ideally at the DU itself.