AERPAW is a batch-mode facility. Experimenters develop experiments in a virtual environment, and submit them for execution on the physical testbed once development is complete. AERPAW Operations personnel (Ops) then execute these submitted experiments in the physical testbed environment, and collect the output of the experiments as designed by the Experimenters; this output is then available for Experimenters to view and analyze back in the virtual environment.
This is not an arbitrarily decided constraint, but a considered architectural choice. In operating a facility with programmable radios and programmable air vehicles, we are obligated to make, and uphold, certain guarantees to the FCC and FAA. However, we also want to allow Experimenters the ability to program those radios and air vehicles, ideally without needing to become fully conversant with the details of FCC and FAA regulations, obtain exemptions, or develop expertise in techniques for ensuring compliance.
Batch-mode operation allows us to interpose critical filters and monitors into the Experiment code execution flow, which enables us to guarantee safe and compliant operation. It is one of the most valuable features of the AERPAW platform that we assume this guarantee ourselves, rather than passing on the responsibility for compliant operation (and liability for non-compliance) to you, the Experimenter.
The rest of this User Manual provides guidance and instructions for using the platform facilities. Details of the relevant FCC and FAA regulations, for those interested, may be found later in this introductory section, in the section entitled "1.6) FCC and FAA regulations".
The diagram below shows a high-level view of the main entities that interact in the AERPAW facility. (The diagram may be easier to view by expanding your browser to the largest possible width.)
Fig. 1.3.1 AERPAW High-Level Entity Relationship Diagram (Experimenter/Ops view)
All the facilities of AERPAW are connected with each other by the AERPAW Secure Backplane (not explicitly shown in the above diagram), a dynamically reconfigurable programmable network that isolates AERPAW from the rest of the world at Layer-1, by means of a dedicated fiber plant. Remote resources connect to this backplane using VPN access, integrating them into one or more of the virtual Layer-2 networks in the AERPAW Backplane. Experimenters use public Internet access to interact with the AERPAW Portal website, and VPN access to interact with their specific experiments. The AERPAW Backplane allows AERPAW Operations personnel to manage the entire facility, using multiple virtual Layer-2 networks isolated by VLANs.
The AERPAW Backplane also isolates the different experiments from each other, and from the AERPAW management planes mentioned above. Each Experiment is provided with two distinct Layer-2 VLAN networks, briefly described below in the "Experiments" section of this page.
The physical resources of the AERPAW platform (indicated by the rightmost pane, "Platform Resources", in the above diagram in Fig. 1.3.1) are distributed over the Centennial Campus, and the Lake Wheeler Road Agricultural Field Laboratory (LWRFL, or LWL) campus, of NC State University at the end of Phase-2. In the future, more resources are expected to be added at both these campuses, and additional resources may be added outside the NC State campus (such as in the Town of Cary or the City of Raleigh), and integrated logically with existing resources, for seamless deployability in support of AERPAW experiments.
(The AVNs, AFRNs, and APRNs that this diagram refers to are described briefly in the rest of this page, and in more detail later in this User Manual.)
Note 1: The "Platform Control Software Tools" and "Portal Website" shown in the middle pane of Fig. 1.3.1 are software components, that physically run on the "Servers" shown in the "Virtual Resources" on the rightmost pane.
Note 2: The AERPAW APRNs, UAVs, and UGVs, during testbed operation, are physically on the grounds or in the airspace above the LWL area, but are, obviously, not connected to the fiber plant of the Secure Backplane. They connect back securely to the AERPAW Secure Backplane through the public Internet, by mediation of custom AERPAW control and networking software.
The diagram below shows the logical map of the physical resources of AERPAW as at the end of Phase-2. (The diagram may be easier to view by expanding your browser to the largest possible width.)
Fig. 1.3.2 Logical Connectivity of AERPAW Physical Resources, Phase-2 end
Most of the diagram is self-explanatory. To understand the physical location of the AFRNs shown in the above diagram, please refer to Section 1.1, "Deployment Areas and Map", of this User Manual. The DCS is located in a distributed fashion, in various buildings on the Centennial Campus. The gold-colored lines represent the fiber plant dedicated to the AERPAW Secure Backplane, and the gold-colored switches represent the switches installed and operated by NC State Communication Technologies exclusively to realize the AERPAW Secure Backplane.
The AERPAW platform will have three experimentation modes:
Development: The development environment involves virtual experiments over AERPAW's cloud resources. Experimenters have live access to this environment for their experiments, and are able to develop their experiments here. In this environment, each ARN maps to an AERPAW Virtual Node (AVN). Experimenters develop code on each AVN that they desire should run on the corresponding ARN. A custom-designed emulation of the radio environment and vehicle mobility is integrated into this environment, to enable the software being developed by the Experimenter to run without disruption. However, at this time, the emulation is not physically faithful to any particular radio or aerial environment; no meaningful data about any physical aspect of AERPAW's radio or aerial environments can be extracted from executing in this environment.
Internally, we distinguish a variant of this environment called the Emulation environment, which is identical to the above but does not allow live access by the Experimenter; it is an instance of the Development environment that may optionally be used by AERPAW Operations personnel to validate or safety-check an experiment in a virtual environment.
Sandbox: The Sandbox environment involves real radio hardware in an indoor environment. While the AERPAW Radio Nodes in this environment are real hardware, consisting of real USRPs and UAVs/UGVs, and are indistinguishable from field-installed AERPAW Radio Nodes, they are not installed in the field, but rather in labs with controlled surroundings. The radio signal transmissions from the USRPs are limited, and the UAVs/UGVs are restricted from actual motion. Experimenters have live access to the Sandbox environment for their own experiments, but this access needs to be scheduled. Original planning called for the Sandbox to be available to Experimenters at the end of Phase-2; unfortunately, this has turned out to be impossible to achieve due to circumstances beyond our control. The AERPAW Sandbox will be made generally available at the end of Phase-3; until then, the Sandbox mode is unavailable to Experimenters.
Testbed: In this mode, the experiments are executed over actual AERPAW fixed and portable radio nodes in the field. As mentioned previously, AERPAW is a batch-processing facility; Experimenters do not have live login access to the Testbed environment. The experiment code developed by the Experimenter in the Development mode is executed by AERPAW Operations personnel in the Testbed mode, after the Experimenter submits the experiment for Testbed execution.
Experiments execute in one of several environments as described above. Each experiment is considered a distributed program executing on a specific set of AERPAW Radio Nodes (ARNs) in the Testbed environment, as selected by the Experimenters. Each ARN represents some (i) local (edge) compute resources, (ii) radio resources, such as Software Defined Radios (SDRs), and (iii) optionally, vehicle resources, such as a programmable Unmanned Aerial/Ground Vehicle (UAV/UGV). ARNs are briefly described further below, and in more detail later in this User Manual. Experimenters have the opportunity to code this program, on a node-by-node basis, including SDR control code, vehicle control code, test traffic generation and capture code, and any other arbitrary experiment logic the Experimenter might wish to code.
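As one illustration of such node-by-node experiment logic, the sketch below shows a toy test-traffic generator and matching capture loop of the kind an Experimenter might package with an experiment. The addresses, port, packet rate, and durations are hypothetical placeholders, not AERPAW-assigned values.

```python
# Hypothetical per-node test-traffic scripts (illustrative only).
import socket
import time

PEER_ADDR = ("10.0.0.2", 6000)   # assumed experiment-network address of the receiving node
PACKET_SIZE = 1024               # bytes per probe packet
SEND_INTERVAL_S = 0.01           # roughly 100 packets per second

def generate_traffic(duration_s=30):
    """Run on the sending node: emit fixed-size UDP probe packets."""
    payload = b"x" * PACKET_SIZE
    sent = 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        deadline = time.time() + duration_s
        while time.time() < deadline:
            sock.sendto(payload, PEER_ADDR)
            sent += 1
            time.sleep(SEND_INTERVAL_S)
    print(f"sent {sent} packets")

def capture_traffic(listen_port=6000, duration_s=30):
    """Run on the receiving node: count probe packets as a crude link test."""
    received = 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", listen_port))
        sock.settimeout(1.0)
        deadline = time.time() + duration_s
        while time.time() < deadline:
            try:
                sock.recv(PACKET_SIZE)
                received += 1
            except socket.timeout:
                pass
    print(f"received {received} packets")
```

In a real experiment, logic of this kind would typically also log timestamps or other measurements to files, which Ops collect as the experiment output described earlier.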
In AERPAW, ARNs are the only locations where Experimenter code runs. An Experimenter planning to represent some real-world scenario of interest as an AERPAW Experiment must first map all the compute locations of interest in that scenario to one or more ARNs in AERPAW. For example, to represent a real-world scenario with one cellular base station and one mobile User Equipment (UE) device, the Experimenter must design an AERPAW Experiment with one ARN running a software implementation of a base station, and another ARN running a software implementation of a UE.
When an Experiment is instantiated in the Development Environment (described above), the AERPAW Backplane creates two distinct dedicated Layer-2 networks for it, isolated from other Experiments and from AERPAW facility management.
The first connects all the AVNs in the experiment - this is the Experimenter's Management (XM) network, which is intended to allow Experimenter-authored code to coordinate the operation of Experiment logic across AVNs. This is often part of experiment design, to allow the Experimenter to predictably experiment with scenarios that arise rarely or by chance in the real-world scenario. For example, the Experimenter may be interested in studying the case where the base station suffers a temporarily high processing delay just as a UE controlling a UAV has moved to a certain altitude. To ensure repeatable occurrence of this exact scenario, which in the real world arises by chance and does not involve any communication between the UE and the base station, the Experimenter would use a coordination message sent by their experiment logic on the UE when the target altitude is reached, using the XM plane.
The XM plane also allows live login access to each AVN during Development. This ability does not persist in Testbed Mode, but the XM plane itself is re-created on the ARNs, so any such coordination messages generated by the Experiment logic in various AVNs would be successfully exchanged.
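A minimal sketch of such a coordination message is shown below, assuming hypothetical XM-plane addresses and a hypothetical port; the actual XM addresses are assigned when the experiment is instantiated. Because the XM network is re-created on the ARNs, the same exchange works unchanged in both the Development and Testbed environments.

```python
# Runs as part of the UE node's experiment logic.  The address and port are
# hypothetical placeholders, not AERPAW-assigned values.
import json
import socket

BASESTATION_XM_ADDR = ("192.168.110.2", 50000)  # assumed XM address of the base-station node
TARGET_ALTITUDE_M = 60.0

def maybe_notify_altitude_reached(current_altitude_m):
    """Send a one-shot coordination message over the XM plane."""
    if current_altitude_m < TARGET_ALTITUDE_M:
        return
    msg = json.dumps({"event": "altitude_reached", "altitude_m": current_altitude_m})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), BASESTATION_XM_ADDR)

# On the base-station node, a matching listener bound to the same port would
# trigger the artificial processing-delay condition when this message arrives.
```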
The second network connects only those AVNs of the Experiment that represent fixed (not portable) nodes. Such nodes, in the target real-world scenario, may well be connected through a wired network, to allow them to exchange coordination messages that are part of the real-world scenario. For example, in an Experiment that includes multiple ARNs representing multiple cellular base stations, these base stations are connected by a core network. In the AERPAW Experiment representing this scenario, the corresponding AVNs would be connected by an Experimenter's Data (XD) network; this network would persist into the Testbed environment, connecting the corresponding subset of ARNs. At this time, each fixed node is provided with a single XD interface, and these interfaces are all connected in a flat Layer-2 network.
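Experiment code can tell its XM and XD interfaces apart by their addresses, as in the sketch below. The subnets shown are hypothetical placeholders; the actual addressing should be taken from the experiment's documentation rather than hard-coded. The sketch assumes the third-party psutil package is available on the node.

```python
# Classify a node's network interfaces by (hypothetical) XM/XD subnets.
import ipaddress
import socket

import psutil  # third-party package, assumed installed on the node

XM_SUBNET = ipaddress.ip_network("192.168.110.0/24")  # placeholder XM addressing
XD_SUBNET = ipaddress.ip_network("192.168.120.0/24")  # placeholder XD addressing

def classify_interfaces():
    """Return {interface_name: 'XM' | 'XD' | 'other'} for this node."""
    result = {}
    for name, addrs in psutil.net_if_addrs().items():
        label = "other"
        for addr in addrs:
            if addr.family != socket.AF_INET:
                continue
            ip = ipaddress.ip_address(addr.address)
            if ip in XM_SUBNET:
                label = "XM"
            elif ip in XD_SUBNET:
                label = "XD"
        result[name] = label
    return result

if __name__ == "__main__":
    for name, label in sorted(classify_interfaces().items()):
        print(f"{name}: {label}")
```

On a portable node, no interface would be classified as XD, since only fixed nodes receive an XD interface.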
Future Enhancements: During Phase-3 and beyond, we also expect to start supporting (i) AERPAW Cloud Nodes (ACNs), which are pure compute nodes, essentially ARNs without any radios or vehicles (suitable for representing back-end clouds in real-world scenarios), and (ii) integration of commercial third-party radio equipment, such as Keysight R/F sensors, Ericsson 4G/5G base stations, Facebook Terragraph radios, Fortem radars, and LoRa sensors and base stations, into Experimenters' own experiments. (AERPAW has already integrated most of these into the platform; however, they are not at this time accessible to Experimenters with general access to the facility. In the future, such access will become possible.)
There are two types of physical nodes in AERPAW:
AERPAW portable radio nodes (APRN)
AERPAW fixed radio nodes (AFRN)
The two are intended to be functionally very similar or identical, except that the APRN uses a much smaller form factor, so that it is suitable for powering from a battery and for being physically carried on a vehicle. Accordingly, the APRN has much less compute power, storage, and memory, and fewer (and less powerful) SDRs.
Information on what specific equipment is available in each type of radio node can be found in Section 1.4) Equipment Information. The diagram on the right summarizes the individual components of the AFRN and APRN, and the relations among them. The APRN includes an experimental virtual machine (E-VM) that is programmed by the Experimenter, and contains the radio software and vehicle software that define the experiment. The APRN is connected to a USRP B205mini that is controlled by the radio software. The vehicle software, on the other hand, controls the autopilot on the UAV, managing the trajectory of the drone.
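As a rough illustration of what vehicle software does, the sketch below reads an altitude report from an autopilot over MAVLink. It uses the open-source pymavlink library and a hypothetical local telemetry endpoint; AERPAW provides its own vehicle-control software and interfaces, and this sketch is not that API.

```python
# Illustrative autopilot telemetry read over MAVLink (not the AERPAW
# vehicle-control API).  The endpoint below is a hypothetical placeholder.
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")  # assumed local telemetry stream
conn.wait_heartbeat()  # block until the autopilot is heard from

msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
print("relative altitude (m):", msg.relative_alt / 1000.0)  # field is reported in millimetres
```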
The AFRN architecture is similar to the APRN. The difference is that there is no vehicle software or autopilot in an AFRN. There are also multiple USRPs connected to an AFRN; for Phase-1, only one USRP will be accessible from an AFRN.
Both the APRN and AFRN include a control virtual machine (C-VM), which is accessible only to platform operators. The C-VM interacts with a separate USRP at both the APRN and AFRN, not accessible to Experimenters, which is dedicated to spectrum compliance monitoring. If the experiment transmits in bands in which AERPAW is not authorized to transmit, or the AERPAW vehicle moves beyond the pre-defined safety geofence, the C-VM (or the AERPAW operator) can stop the AERPAW experiment.
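To give a sense of the kind of check involved, the sketch below tests whether a reported position lies within a simple circular geofence. The centre point, radius, and the check itself are illustrative assumptions only; the actual AERPAW geofence definitions and C-VM enforcement logic are operated by AERPAW Ops and are not exposed to Experimenters.

```python
# Illustrative circular geofence check (not the actual AERPAW C-VM logic).
import math

GEOFENCE_CENTER = (35.728, -78.696)  # (lat, lon) -- placeholder centre point
GEOFENCE_RADIUS_M = 300.0            # placeholder radius in metres

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def inside_geofence(position):
    """True if the (lat, lon) position is within the allowed region."""
    return haversine_m(position, GEOFENCE_CENTER) <= GEOFENCE_RADIUS_M

if __name__ == "__main__":
    # In a real monitor, the position would come from live vehicle telemetry.
    print(inside_geofence((35.7282, -78.6965)))
```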
AFRN Architecture
APRN Architecture