R&D AI/ML for 5G/6G

Artificial intelligence (AI) has emerged as a powerful technology that improves system performance and enables new features in 5G and beyond. Standardization, which defines functionality and interfaces, is essential for driving the industry alignment required to deliver the mass adoption of AI in 5G-Advanced and 6G. We have entered the era of AI, brought about by three converging forces: the availability of big data, the invention of deep learning algorithms, and high-performance computing. The current application of AI in 5G is mainly based on proprietary implementations in different domains, such as the radio access network (RAN), core network, operations support system (OSS), business support system (BSS), and cloud infrastructure. Taking AI adoption in 5G and beyond to the next level requires industry alignment through global standardization efforts, and a variety of AI-related activities are indeed taking place in many standardization bodies.

  • 3GPP NG-RAN architecture. The basic building block of the NG-RAN is an NG-RAN node, for example, a 5G node B (gNB) providing NR access. The gNBs are connected to the 5G core (5GC) network through the NG interface. They can also be interconnected through the Xn interface. The logical architecture (Figure 1) accommodates diverse deployment options (e.g., centralized, distributed, and monolithic) in a transparent manner. A gNB can be split into a central unit (CU) and one or more distributed units (DUs). A gNB-CU and a gNB-DU are connected through the F1 interface. The gNB-CU hosts the radio resource control (RRC) protocol, service data adaptation protocol (SDAP), and packet data convergence protocol (PDCP) of the gNB. The gNB-DU hosts the radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the gNB. The gNB-CU manages user equipment (UE) context and instructs the gNB-DU on the radio resource allocation for the UE; the gNB-DU is responsible for the scheduling of the radio resources. A gNB-CU can be further split into a control plane part (gNB-CU-CP) and one or more user plane parts (gNB-CU-UP). The gNB-CU-CP hosts the RRC and the control plane part of the PDCP of the gNB-CU. The gNB-CU-UP hosts the SDAP and the user plane part of the PDCP of the gNB-CU.
  • O-RAN Open RAN architecture. With the vision of disaggregating the NG-RAN toward an open, virtualized, interoperable, and AI-driven architecture, the O-RAN Alliance augmented the CU/DU-split framework with a set of open interfaces. These include enhanced 3GPP interfaces as well as additional interfaces standardized by O-RAN, interconnecting logical/physical O-RAN nodes. Figure 2 shows the high-level O-RAN-defined architecture with the key nodes and interfaces enabling disaggregation of the NG-RAN. Within the O-RAN infrastructure, RAN domain management services are provided by the service management and orchestration (SMO) framework. It contains a non-real-time (non-RT) RAN intelligent controller (RIC) function and interfaces with other O-RAN network functions through the A1, O1, open fronthaul management-plane, and O2 interfaces. Key network functions orchestrated by the SMO include the near-real-time (near-RT) RIC, O-RAN central unit control plane/user plane (O-CU-CP/UP), distributed unit (O-DU), and radio unit (O-RU). Note that the gNB-DU in the NG-RAN manifests as two separate nodes in the O-RAN architecture: the O-DU and the O-RU, interconnected by the O-RAN-defined open fronthaul interface. The decoupling follows the 7-2x lower-layer split, which aggregates RLC, MAC, and the majority of the PHY functionality (High-PHY) within the O-DU, while allocating the remaining PHY functionality (Low-PHY), radio frequency (RF) processing, and the transmission-reception points to the O-RU. O-RAN network functions (such as the near-RT RIC, O-CU-CP/UP, and O-DU) are hosted by the O-Cloud platform. Virtualization of the NG-RAN with O-Cloud makes network deployment options more versatile and facilitates remote lifecycle management. A minimal data-structure sketch of these splits follows this list.
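To make the functional splits above easier to follow, the sketch below models which protocol layers each node hosts as simple Python data structures. The class and field names are our own shorthand for this illustration; they are not defined by the 3GPP or O-RAN specifications.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the 3GPP CU/DU split and the O-RAN 7-2x lower-layer split.
# Names are our own; they do not correspond to any 3GPP/O-RAN data model or API.

@dataclass
class GnbCuCp:
    protocols: List[str] = field(default_factory=lambda: ["RRC", "PDCP-C"])

@dataclass
class GnbCuUp:
    protocols: List[str] = field(default_factory=lambda: ["SDAP", "PDCP-U"])

@dataclass
class GnbDu:
    # In O-RAN the DU keeps RLC/MAC/High-PHY; Low-PHY and RF move to the O-RU.
    protocols: List[str] = field(default_factory=lambda: ["RLC", "MAC", "High-PHY"])

@dataclass
class ORu:
    protocols: List[str] = field(default_factory=lambda: ["Low-PHY", "RF"])

@dataclass
class Gnb:
    cu_cp: GnbCuCp
    cu_ups: List[GnbCuUp]   # one gNB-CU-CP may control several gNB-CU-UPs
    dus: List[GnbDu]        # connected to the CU over F1
    rus: List[ORu]          # connected to the DU over open fronthaul (7-2x split)

gnb = Gnb(GnbCuCp(), [GnbCuUp()], [GnbDu()], [ORu()])
print([du.protocols for du in gnb.dus])   # [['RLC', 'MAC', 'High-PHY']]
```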


Cellular systems have evolved incrementally, and old and new network equipment co-exists for a certain period. Likewise, 4G equipment will continue to roll out, adopt new features, and evolve toward 5G, while 5G systems are deployed progressively. The transition to 5G may take longer than the transition to 4G because many different features must be included. While 5G systems are now being deployed, research groups in cellular communications and networking have started investigating beyond-5G systems and conceptualizing 6G. 6G will make wireless communications and networks more intelligent, with higher requirements than 5G, and meeting those requirements calls for a game-changing approach: 6G systems will redefine communications and networks around the required services. In order to support new requirements and services, new enabling technologies are required, and artificial intelligence (AI) and machine learning (ML) will be among the key technologies for 6G. They are now mature technologies that have significantly improved many other research fields, and they pervade our day-to-day life. For example, we use mobile phone apps such as maps and navigation, facial recognition, text autocorrection, and search recommendation, while chatbots, social media, and Internet banking are also widely used; all of these are based on AI and ML technologies. Many experts and business leaders therefore expect AI and ML to dramatically improve the efficiency of our workplaces as well as create new applications and services. AI will play a critical role in wireless communications and networks and change how we design and manage 6G. We expect AI to make the design and management of communications and networks smarter and safer. The key question is not whether, but when and how, to implement AI in 6G communication systems.
We have an age-old question about intelligence: what is intelligence? There is no consensus on its definition because it can be defined in many different ways, but most definitions include key words such as understanding, self-consciousness, learning, reasoning, creativity, critical thinking, and problem-solving. In general, intelligence can be defined as the ability to perceive information and retain it as knowledge in order to adapt to an environment. More simply, it is the ability to understand, learn from experience, and apply knowledge to adapt to an environment. In terms of science and engineering, the key words are understanding, learning, and making decisions: intelligence can be defined as understanding an environment, learning from experience, and making decisions to achieve one's goals. The term artificial intelligence was coined by John McCarthy in 1955, and he organized the famous Dartmouth conference in 1956.
The history of AI arguably begins in Greek mythology. The concept of AI appeared in literature, and actual devices with some degree of intelligence were demonstrated; we can call this AI pre-history. In Greek myth, Hephaestus, the god of smiths, manufactured a giant bronze automaton, Talos, which embodied the concept of AI and became the guardian of the island of Crete. In the fourth century B.C., Aristotle invented syllogistic logic, a mechanical form of reasoning, and thereby laid foundations of modern science, including AI research. He held that the matter of all material substances is distinct from their form; this approach gives us a basic notion of computation and data abstraction in computer science and became part of the intellectual heritage of AI. In the thirteenth century, the Spanish theologian Ramon Llull invented combinatorial logical machines for discovering non-mathematical truths, and the Arab inventor Al-Jazari created the first programmable humanoid automaton, driven by water flow. In the sixteenth century, many clockmakers created mechanical animals and other automata, and Rabbi Judah Loew ben Bezalel of Prague was said to have created the Golem, an anthropomorphic being. In the seventeenth century, the French philosopher, mathematician, and scientist Rene Descartes held that animals are complex physical machines without subjective experience, and cuckoo clocks less complex ones; he believed that thoughts and minds are properties of immaterial souls and that only humans, having such souls, have subjective experience. The French mathematician Blaise Pascal invented the first mechanical calculator. The English philosopher Thomas Hobbes published Leviathan, presenting a mechanical and combinatorial theory of thinking in which, as he wrote, reasoning is nothing but reckoning. The German mathematician Gottfried Leibniz improved Pascal's machine, building the Stepped Reckoner to perform multiplication and division; he also thought that reasoning could be reduced to mechanical calculation and tried to devise a universal calculus of formal reasoning. In the eighteenth century, plenty of mechanical systems were invented. The chess-playing automaton known as the Mechanical Turk was constructed in 1770 and astounded the world, but it was later revealed to be a fake containing a hidden human chess master operating the machine. In the nineteenth century, Joseph Marie Jacquard invented the Jacquard loom (1801), controlled by punched cards. The English writer Mary Shelley published the science fiction novel Frankenstein; or, The Modern Prometheus in 1818, in which the young scientist Victor Frankenstein creates a hideous monster assembled from old body parts, chemicals, and electricity. The English mathematicians Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines. The English mathematician George Boole published The Laws of Thought, containing Boolean algebra, in 1854; Boolean algebra laid the foundations of the digital world. In the twentieth century, the Spanish engineer Leonardo Torres y Quevedo built the first chess-playing machine, El Ajedrecista, in 1912, using electromagnets under the board; it was a true automaton that played chess without human intervention and is regarded as the first computer game. The British mathematicians Bertrand Russell and Alfred North Whitehead completed the publication of Principia Mathematica in 1913, presenting a formal logic of mathematics covering set theory, cardinal numbers, ordinal numbers, and real numbers.
The Czech writer Karel Capek introduced the word robot in his 1921 play R.U.R. (Rossum's Universal Robots), in which artificial people called robots are manufactured in a factory. In 1927, the German science fiction film Metropolis was released; it describes a futuristic urban dystopia and became the first robot film. In 1942, the American writer Isaac Asimov published the short story Runaround, in which he formulated the three laws of robotics describing how a robot must act: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. In 1943, Warren McCulloch and Walter Pitts published the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" in the Bulletin of Mathematical Biophysics; it described networks of idealized, simplified artificial neurons and inspired later work on neural networks. In 1950, Alan Turing published the paper "Computing Machinery and Intelligence," describing the Turing test. In it, he proposed to consider the question "Can machines think?" and to replace it with another question, closely related but expressed in relatively unambiguous words. To determine whether machines can think, he devised the imitation game, involving three players: an AI respondent, a human respondent, and an interrogator. The interrogator stays in a room apart from the AI respondent and the human respondent and can communicate with them only through written notes. The object of the game is whether the interrogator can determine, by asking questions, which of the others is the AI respondent and which is the human respondent. The role of the AI respondent is to trick the interrogator into a wrong decision, and the role of the human respondent is to help the interrogator make the right decision. In 1951, Marvin Minsky designed the Stochastic Neural Analog Reinforcement Calculator (SNARC), the first artificial neural network machine, built from vacuum tubes and modelling synapses. From 1952 to 1972, AI enjoyed its golden age: the field of modern AI research was created, and work from academic areas as diverse as mathematics, psychology, neuroscience, economics, linguistics, and computer science began to create AI machines. In 1952, Arthur Samuel wrote a checkers-playing program, the world's first self-learning program, which eventually became skilled enough to challenge strong human players; he later coined the term machine learning as "the field of study that gives computers the ability to learn without being explicitly programmed." In 1955, the term artificial intelligence was coined, and the Dartmouth conference was organized in July and August 1956; many regard it as the official birthdate of AI as a research field. In 1958, John McCarthy developed the Lisp language, which became the most popular programming language for AI research. In 1961, the first industrial robot, Unimate, started working on a General Motors assembly line in New Jersey, USA. It was invented by George Devol, based on his patent [8], which describes the machine as relating to "the automatic operation of machinery, particularly to automatically operable materials handling apparatus, and to automatic control apparatus suitable for such machinery." He founded the first industrial robot company, Unimation.
The film 2001: A Space Odyssey, featuring the sentient computer HAL 9000, was released in 1968. In 1964, Daniel Bobrow completed his Ph.D. dissertation, Natural Language Input for a Computer Problem Solving System, and developed the natural language understanding program STUDENT; in the dissertation he showed that a computer could understand natural language well enough to solve algebra word problems. In 1965, Joseph Weizenbaum built ELIZA, an interactive program that carries on a dialogue in English on any topic; it demonstrated communication between human and machine and was very attractive because people felt they were talking to a human-like computer. In 1966, work began on Shakey, the first general-purpose mobile robot able to reason about its own actions. In 1969, Arthur Bryson and Yu-Chi Ho proposed backpropagation as a multi-stage dynamic system optimization method; it is now widely used in machine learning and deep learning. In 1972, the world's first full-scale anthropomorphic robot, WABOT-1, was developed at Waseda University in Tokyo, Japan. It consisted of limb-control, vision, and conversation systems, and it was able to communicate with a human in Japanese as well as walk, grip, and transport objects. In 1978, the R1 program (also called XCON, for eXpert CONfigurer), a production-rule-based system, was developed by John P. McDermott at Carnegie Mellon University to assist in the ordering of DEC's VAX computer systems. From 1980 to 1987, expert systems, which emulate human decision-making and solve complex problems with if-then rules, were widely adopted around the world, and AI research experienced its second boom. In 1981, IBM produced its first personal computer, the Model 5150. In 1986, an automated Mercedes-Benz van equipped with cameras and sensors was built at Bundeswehr University in Munich, Germany, and achieved a speed of 63 km/h. In the 1990s, the concept of an intelligent agent that perceives its environment and acts to maximize its success became widely accepted. In 1997, IBM's chess-playing computer Deep Blue won against the world chess champion. In 2000, Cynthia Breazeal of MIT developed Kismet, a robot that can recognize and simulate human emotions, and Honda's humanoid robot ASIMO was developed for practical applications, achieving human-like walking and delivering a tray to customers in a restaurant. In 2004, the first DARPA Grand Challenge competition was held in the Mojave Desert, USA; the goal was to drive a 240 km route, but none of the automated vehicles finished it. From 2010, AI research and industry boomed again thanks to mobile broadband, strong computing power, and massive data. In 2009, Google started developing automated vehicles. In 2013, IBM's question-answering system Watson was put to commercial use to support lung cancer treatment decisions at Memorial Sloan Kettering Cancer Center in New York, USA. In 2016, Google DeepMind's AlphaGo won the game of Go against world champion Lee Sedol.
As this review of the definition, classification, and history of AI suggests, AI will impact our day-to-day life and many research areas and industries. It will be a main driver of emerging ICT technologies such as 5G/6G.
AI techniques are evolving rapidly and are widely adopted in many different areas because they provide many opportunities, such as problem solving, decision making, and performance improvement.
Deep learning is a sub-field of machine learning inspired by artificial neural networks (ANNs). Deep learning can be regarded as the use of ANNs, and the two terms are often used interchangeably. ANNs use artificial neurons (nodes) and synapses (connections), and they are built on the same idea as how our brains work: the input, output, node, and interconnection of an artificial neuron correspond to the dendrites, axon, cell nucleus, and synapse of a biological neuron, respectively. The nodes receive data, perform a calculation on them, and pass the new data to other nodes via the connections. The connections carry weights and biases that affect the calculation at the next nodes. Deep learning uses a neural network with multiple layers of nodes between input and output; the network becomes deeper as the number of hidden layers increases. In deep learning, the term learning means finding optimal weights from data, and the term deep refers to the multiple hidden layers in the network used to extract high-level features from the inputs.
  1. Deep learning is built on the same idea as how our brains work. The input, output, node, and interconnection of the artificial neuron are matched with the dendrites, axon, cell nucleus, and synapse of the biological neuron, respectively.
  2. The process of deep learning can be summarized as follows: (1) prepare training data, (2) forward the data through the neural network to obtain predictions, (3) compute the error (loss, cost, or empirical risk) between the true values and the predictions, and (4) update the weights of the connections. A minimal sketch of this loop follows the list.
  3. The similarities and differences between an artificial neural network and a human brain can be summarized as follows: (1) given the same inputs, ANNs produce the same outputs, but a human brain may not; (2) ANNs learn from training, while a human brain can also learn by recalling information; (3) ANNs never forget once fully trained, but a human brain can forget; (4) the data processing of ANNs can be monitored by a control unit, while a human brain has no separate control unit.
  4. Classical machine learning pipelines are composed of manual feature extraction followed by classification, and the manual feature extraction step is tedious and costly. Deep learning instead performs end-to-end learning that includes feature extraction.
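A minimal sketch of the four-step loop in item 2, using NumPy and a single hidden layer. The toy data, layer sizes, and learning rate are arbitrary choices made only so the example runs; they are not tied to any system described in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                  # (1) training data: 64 samples, 4 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary targets

W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.1

for step in range(200):
    # (2) forward pass through the network for prediction
    h = np.maximum(0, X @ W1 + b1)            # hidden layer with ReLU
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    # (3) error (loss) between true values and predictions
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # (4) backpropagate and update the weights of the connections
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T * (h > 0)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1, b1, W2, b2 = W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2

print(f"final loss: {loss:.3f}")
```

Here "learning" is exactly the search for weights that reduce the loss, and adding more hidden layers between input and output is what makes the network "deep."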

Convolutional neural networks (CNNs) take the spatial structure of their inputs into account by using the convolution operation, which enables us to use far fewer parameters. This allows us to train on large datasets much more efficiently, even if performance is slightly degraded.
  1. CNNs take the spatial structure of inputs into account by using the convolution operation and enable us to use far fewer parameters.
  2. CNNs arrange neurons in three dimensions: width, height, and depth. Each layer transforms a three-dimensional input volume into a three-dimensional output volume of neuron activations. The width and height correspond to the two-dimensional image and the depth to the colour channels, such as red, green, and blue (RGB). This is very helpful for reducing complexity.
  3. A CNN is composed of four different kinds of layers: convolution, activation (ReLU), pooling, and fully connected. The convolutional layer plays the central role in CNNs; it consists of a set of learnable filters sliding over the input. The activation layer lets the network learn more complex decision functions and helps reduce overfitting. The pooling layer, also known as a sub-sampling layer, downsamples the output of a convolutional layer and gradually reduces the spatial size of the representation. The fully connected layer is analogous to a conventional neural network: its neurons are fully connected to all activations in the previous layer.
  4. The common CNN pipeline is: input layer → convolution + ReLU layer → pooling layer → ... → convolution + ReLU layer → pooling layer → fully connected layer → softmax layer → output layer. A minimal layer-stack sketch of this pipeline follows the list.
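The pipeline in item 4 can be written down directly as a layer stack. The sketch below assumes PyTorch is available; the channel counts, kernel sizes, and the ten-class output are arbitrary illustrative choices rather than values from the text.

```python
import torch
import torch.nn as nn

# Input -> (Conv + ReLU -> Pool) x 2 -> Fully connected -> Softmax -> Output
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution over RGB input
    nn.ReLU(),                                   # activation
    nn.MaxPool2d(2),                             # pooling (sub-sampling)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # fully connected layer
    nn.Softmax(dim=1),                           # class probabilities
)

x = torch.randn(1, 3, 32, 32)   # one 32x32 image; depth = RGB channels
print(model(x).shape)           # torch.Size([1, 10])
```

The two pooling stages shrink the 32x32 spatial grid to 8x8 while the filters are shared across all positions, which is exactly why a CNN needs far fewer parameters than a fully connected network on the same image.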


The basic idea of recurrent neural networks (RNNs) is similar to that of CNNs: both use parameter sharing. CNNs share parameters across the spatial dimension of pixel image data, while RNNs share parameters across the temporal dimension of speech or text data. Because the parameters among hidden states are shared, gradients can be obtained by backpropagating through the unrolled sequence. However, with long time sequences, as the effective depth grows, RNNs struggle with long-term dependencies; this is the exploding and vanishing gradient problem. To address it, long short-term memory (LSTM) networks, an extension of RNNs, are widely adopted. In addition, the computation of CNNs is fixed because the input size is fixed, whereas RNNs perform a variable number of computations, and their hidden layers and outputs depend on the previous states of the hidden layers. The main advantages of RNNs can be summarized as follows: (1) variable input length, (2) model complexity that does not depend on the input size, and (3) shared computation across time. The disadvantages of RNNs are (1) slow computation and (2) the vanishing gradient problem.
  1. Recurrent neural networks (RNNs) are designed to recognize the sequential characteristics of data and use sequential or time-series data to solve common temporal problems in speech recognition and natural language processing.
  2. The input and output of an RNN can have different structures depending on the input-output pairing: one-to-many, many-to-one, or many-to-many. One-to-many RNNs are used for generative models, such as generating text or music or drawing an image. Many-to-one RNNs are used for binary classification, such as a yes/no judgement of the grammatical correctness of a sentence. Many-to-many RNNs can be used for image or text translation; if we want to translate an English sentence into another language, the number of words in the two languages may not be equal.
  3. Training an RNN is straightforward: we can apply the backpropagation algorithm to the unrolled RNN, which is called backpropagation through time (BPTT). The training data for RNNs is an ordered sequence.
  4. The basic concept of LSTMs is to regulate the information flow using internal gate mechanisms. LSTMs suppress vanishing gradients through this gating mechanism and deliver strong performance.
  5. An LSTM decides whether information should be kept or forgotten and controls the gradient values at each time step. This allows the neural network to obtain the desired behaviour from the error gradient by updating the learning process at each time step.

RNNs are developed to recognize the sequential characteristics of data and use sequential or time-series data to solve common temporal problems in speech recognition, natural language processing, and other areas. Feedforward neural networks cannot take time dependencies into account and cannot deal with variable-length inputs. For example, suppose a feedforward neural network classifies three-word sentences such as "I love you". If we then receive a four-word sentence such as "I love you too.", it is not easy to classify, even though it is only a slight modification. One approach is to drop one word from the sentence to match the input size, but that might lose the meaning, so it is not suitable for this application. RNNs allow us to process arbitrary sequences of inputs. They have dynamic hidden states that can store long-term information, and this internal state gives the network an internal memory and lets it exploit dynamic behaviour. RNNs are very powerful for sequential patterns because they combine key properties: temporal and accumulative processing, distributed hidden states, and nonlinear dynamics. The input and output of an RNN can have different structures, such as one-to-many, many-to-one, and many-to-many. One-to-many RNNs are used for generative models, such as generating text or music or drawing an image. Many-to-one RNNs are used for binary classification, such as a yes/no judgement of the grammatical correctness of a sentence. Many-to-many RNNs can be used for image or text translation; if we want to translate an English sentence into another language, the number of words in the two languages may not be equal. When we have an input vector x = [x1, x2, . . . , xn] and must receive one element at each of n time steps instead of receiving all n inputs at once, RNNs are very useful because they read the input sequentially and process it in sequential order. A minimal many-to-one sketch follows.
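The variable-length, many-to-one case described above can be made concrete with a small sketch. It assumes PyTorch is available; the toy vocabulary, embedding and hidden sizes, and the two padded example sentences are invented for illustration only.

```python
import torch
import torch.nn as nn

# An LSTM reads a sentence token by token; the hidden state at the last real
# token feeds a binary classifier (e.g., grammatical yes/no).
vocab = {"<pad>": 0, "I": 1, "love": 2, "you": 3, "too": 4}
embed = nn.Embedding(len(vocab), 8, padding_idx=0)
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
classifier = nn.Linear(16, 2)

# Variable-length inputs: "I love you" (3 tokens) and "I love you too" (4 tokens),
# padded to a common length so they can share one batch.
batch = torch.tensor([[1, 2, 3, 0],
                      [1, 2, 3, 4]])
lengths = torch.tensor([3, 4])

out, (h_n, c_n) = lstm(embed(batch))          # out: (batch, time, hidden)
last = out[torch.arange(2), lengths - 1]      # hidden state at each true last step
print(classifier(last).shape)                 # torch.Size([2, 2])
```

The same LSTM weights are applied at every time step, which is the parameter sharing across time discussed above, and the sentence length no longer has to be fixed in advance.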

AI-enabled 6G wireless communications and networks

AI and ML algorithms can be applied to solve optimization problems, improve 6G performance, and develop new services. Traditional cellular communications and networks are designed and deployed with pre-defined system configurations, tuned in an iterative trial-and-error manner for each scenario, and their parameter values or thresholds can be adjusted by network operators. This design approach has worked well so far. However, 6G systems will consider more complicated communication and network scenarios and support many different use cases, such as automated vehicles, factory automation, and telemedicine, and the traditional approach may not work well for supporting multiple 6G requirements. 5G NR systems already contain many new features, such as network slicing, in order to support multiple services flexibly and to scale up for future applications. In 6G systems, the data-driven approach of AI algorithms will complement heuristic parameter settings and thresholds, and AI-enabled communications and networks will help accommodate new features. As AI algorithms, through achievements such as deep learning, have significantly advanced many research fields such as image recognition, adopting AI algorithms in wireless systems has become a key driver of 6G. Many research groups in wireless communications and networking have studied the use of AI algorithms in areas such as handover mechanisms, network traffic classification, elephant flow detection, network traffic prediction, and network fault detection. Practical implementation of AI algorithms is being considered for 6G systems, where it will in particular improve system design and optimization. The optimization of wireless communication and network systems is entangled with many parameters, such as complexity, cost, energy, latency, and throughput; the resulting problem is non-deterministic polynomial-time (NP) hard and is typically a multi-objective optimization problem subject to complex constraints. AI and ML algorithms can be applied to solve such optimization problems and contribute to improving 6G performance as well as developing new services. AI and ML algorithms are useful for classification, clustering, regression, dimension reduction, and decision-making, and many components of 6G systems map onto these tasks. For example, resource allocation and scheduling is a kind of classification and clustering problem, channel estimation is a regression problem (a minimal regression sketch follows), Viterbi decoding is based on dynamic programming, and network traffic management is closely related to sequential decision-making. Thus, AI and ML algorithms will be key 6G enablers. In this section, we discuss in which domains AI and ML algorithms will be helpful for 6G systems and also which research challenges they face.
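As one concrete illustration of the regression framing mentioned above, the sketch below casts single-tap channel estimation from known pilots as a least-squares regression. The pilot length, modulation, and noise level are arbitrary choices made for the example, not parameters of any particular 6G design.

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal() + 1j * rng.normal()             # unknown flat-fading channel coefficient
pilots = np.exp(1j * np.pi / 2 * rng.integers(0, 4, size=32))  # known QPSK-like pilots
noise = 0.1 * (rng.normal(size=32) + 1j * rng.normal(size=32))
y = h * pilots + noise                           # received pilot observations

# Regression view: find h_hat minimizing ||y - h * pilots||^2 (least squares)
h_hat = np.vdot(pilots, y) / np.vdot(pilots, pilots)
print(abs(h - h_hat))                            # small estimation error
```

A learned estimator (for example, a small neural network trained on pilot/channel pairs) solves the same regression problem in a data-driven way, which is the sense in which channel estimation is amenable to ML.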
Overview of AI/ML in 3GPP.  The initial foray of AI and machine learning into 3GPP began within the realm of network automation. In Release 15, the introduction of the Network Data Analytics Function (NWDAF) marked the advent of AI/ML, providing network slice analysis capabilities. Its scope subsequently expanded to data collection and exposure within the 5G core in Rel-16, and to UE application data collection in Rel-17. In Rel-17, machine learning was further extended to the Next-Generation Radio Access Network (NG-RAN): the technical report TR 37.817 outlines the principles governing RAN intelligence empowered by ML, delving into the functional framework and exploring use cases and solutions for ML-enabled RAN. The initial set of use cases encompasses network energy saving, load balancing, and mobility management. Finally, in Rel-18, AI/ML spread to the new radio air interface [17]. This is a remarkable enhancement to be brought by 5G-Advanced [18]. More specifically, in the Rel-18 study, 3GPP is looking at ML solutions that require interactions between the network (NW) and the UE. 3GPP categorizes those solutions into one-sided models and two-sided models; a one-sided model means that inference of the ML solution happens only at the UE or only at the NW, referred to as UE-sided and NW-sided models, respectively. The study considers three representative use cases: beam management, CSI enhancements, and positioning. In 3GPP, the exploration of AI/ML use cases within the air interface domain aims at establishing a generalizable ML framework. This framework plays a pivotal role in enabling ML applications across the air interface and ensures its versatility for accommodating additional use cases in forthcoming releases, including the evolution to 6G. As we move into Rel-19, we anticipate the comprehensive incorporation of AI/ML use cases within the air interface, the radio access network (RAN), and the overall system architecture. In Rel-20, the study of 6G technology will take center stage, with AI and ML integrated into the system's core. Advanced techniques such as distributed learning, working in harmony with AI deeply embedded in both network infrastructure and user devices, will significantly enhance overall performance and user experience.
Trustworthy AI.  As ML continues to proliferate throughout mobile networks, the issue of trustworthiness becomes increasingly important. Trustworthiness advocates a set of fundamental principles that should be inherently woven into the system's design:
  • Explainable Machine Learning.  This aspect pertains to the capacity of ML models to provide human-readable explanations for their decisions and predictions, facilitating transparency and understanding. 
  • Fair Machine Learning.  Fairness in machine learning entails the identification and rectification of biases within ML models, ensuring equitable and unbiased outcomes.
  • Robust Machine Learning.  Robustness in machine learning addresses the ability to manage different types of errors, corruptions, and shifts in underlying data distributions autonomously, enhancing the reliability and adaptability of ML models. 
These features are applicable across the entire ML process, spanning four key aspects:
  • Data processing. This phase involves data preparation for training, testing, and inference.
  • Training. The training of ML models with data to establish their predictive capabilities. 
  • Testing. The evaluation of ML models to gauge their performance and accuracy.
  • Inference. The practical application of trained models to make predictions or decisions in real-world scenarios.
Although ML trustworthiness is a burgeoning topic within 3GPP, the foundational design principles inspired by this paradigm have already been incorporated in 5G-Advanced. Notably, TR 28.908 underscores the necessity of managing AI/ML trustworthiness during the ML training, testing, and inference phases, with a specific emphasis on ensuring that ML solutions are explainable, fair, and robust. Furthermore, given the varying degrees of risk associated with ML's impact on different use cases, the trustworthiness requirements for ML may vary accordingly. Consequently, the mechanisms related to trustworthiness must be individually configured and monitored for each specific use case to maintain the desired levels of reliability and integrity.
An AI/ML evaluation framework for 5G media services is defined in 3GPP SA4 TR 26.847, and the 5G-MAG project provides an implementation of the framework:
  • The purpose is to establish an evaluation framework and use it for the evaluation of scenarios collected for the 3GPP FS_AI4Media study. This includes the collection of scenarios based on the identified use cases and the definition of a scenario template for describing scenarios for the evaluation.
  • The evaluation framework documents common testbed architectures and anchors, metrics (e.g., AI/ML task metrics and feasibility/performance metrics), and specific details (such as test configuration and constraints) for each scenario evaluation. A hypothetical scenario-template sketch follows this list.
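To illustrate what a scenario description in such a framework might look like, the sketch below shows a hypothetical scenario template as a Python dictionary. The field names and values are invented for illustration and are not taken from TR 26.847 or the 5G-MAG implementation.

```python
# Hypothetical scenario template in the spirit of the evaluation framework above.
scenario = {
    "name": "object-detection-split-inference",
    "use_case": "AI/ML model split between UE and network",
    "testbed": {
        "anchor": "device-only inference",           # baseline to compare against
        "configuration": "split point after backbone",
    },
    "metrics": {
        "task": ["mAP"],                              # AI/ML task metric
        "feasibility": ["intermediate data size (MB)", "end-to-end latency (ms)"],
    },
    "constraints": {"uplink_bitrate_mbps": 100, "device_memory_mb": 512},
}
print(scenario["metrics"]["feasibility"])
```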