System Analysis Framework

We are creating a set of tools that will empower the next generation of decision makers to create a system architecture, analyze it, and make informed decisions quickly and effectively.

Tools List

Definition Phase

Architecture Creation Tool (ACT)

This tool defines the entities and processes in the system of interest and all related systems, in effect the entire world outside of the lab. All other tools are downstream of the ACT, and their specific capabilities depend on the ACT product adopted for analysis.

Currently the ACT consists of MagicDraw, software that facilitates the creation of UML and SysML diagrams, in which we manually change parameters to create architectures.

In the future we envision a "virtual tool bag" where the user can enter a virtual environment, reach into the bag and pull out the desired entities for the architecture, and place them into that virtual environment. From there the user could configure the parameters of the components, such as the number of copies of a satellite in orbit, the field of view of a sensor, or the position of a specific object of interest, all still in the virtual reality environment.
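The parameters described above could be captured in a simple architecture definition. A minimal sketch in Python, where the class and field names (`Sensor`, `Satellite`, `Architecture`) are illustrative assumptions, not the ACT's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    name: str
    field_of_view_deg: float  # angular field of view of the sensor

@dataclass
class Satellite:
    name: str
    copies_in_orbit: int      # number of identical satellites in the constellation
    sensors: list = field(default_factory=list)

@dataclass
class Architecture:
    name: str
    satellites: list = field(default_factory=list)
    objects_of_interest: dict = field(default_factory=dict)  # name -> (lat, lon)

# Build a small example architecture, much as a user might in the virtual tool bag.
imager = Sensor(name="wide-imager", field_of_view_deg=30.0)
sat = Satellite(name="obs-sat", copies_in_orbit=4, sensors=[imager])
arch = Architecture(
    name="baseline",
    satellites=[sat],
    objects_of_interest={"ship-1": (21.3, -157.9)},
)
```

A virtual-reality front end would simply be another way of constructing and editing such a definition.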

Scenario Creation Tool (SCT)

This tool defines everything that will occur in the operational phase: times, events, locations, unique entities, and all other specifics of the scenario. Imagine a storyboard for a movie or short film: the SCT defines events, times, and places much as a storyboard describes the different scenes in a movie. The architecture created in the ACT is run through the SCT and tested against every scenario that is created.

Currently, the SCT is manually operated, with researchers inputting desired parameters and analyzing the results of the scenarios one at a time.

In the future, we see the SCT being completely automated, generating scenarios from knowledge of the architecture, the environment it is in, and the contingencies that could occur. Combined with individual researcher input, this will create hundreds of relevant and unique scenarios against which to test the architecture.
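One simple way to automate scenario generation is to enumerate combinations of known events, times, and locations. A minimal sketch under that assumption; the building-block lists and the scenario schema are invented for illustration:

```python
from itertools import product

# Building blocks a generator might derive from the architecture and environment.
events = ["sensor outage", "target appears", "weather degrades imaging"]
times = ["T+0h", "T+6h", "T+12h"]
locations = ["Pacific AOR", "Atlantic AOR"]

def generate_scenarios(events, times, locations):
    """Enumerate every (event, time, location) combination as a scenario dict."""
    return [
        {"event": e, "time": t, "location": loc}
        for e, t, loc in product(events, times, locations)
    ]

scenarios = generate_scenarios(events, times, locations)
# 3 events x 3 times x 2 locations = 18 unique scenarios
```

Researcher-supplied scenarios could then be appended to, or used to prune, the generated set.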

Operational Phase

Collaborative Command System (CCS)

Integrating all of these tools and the data they create is a difficult task, and that is the CCS's responsibility. It will be the backbone of the whole system: the user-facing interface with all other tools embedded into it. This lets every user view and access all tools and data the system generates, and connects operators to one another to facilitate collaboration.

Currently the CCS is under development. It will allow multiple users to access one system of interest and enable the easy creation of architectures, analysis of data, and selection of the best architecture as a solution.

As the CCS is developed, we see it becoming a virtual- or augmented-reality environment that multiple users can enter together for research. We hope to incorporate Cyverse as a data management and collaboration tool.

Integrated Sensor Viewer (ISV)

This tool is responsible for displaying all of the data that the system's sensors collect, regardless of data type. This data could be images, videos, thermal analysis, efficiency ratings, altitude, or anything else. The ISV takes in the data to be displayed and formats it for the easiest and most effective viewing.
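Formatting per data type can be expressed as a dispatch table that routes each feed to a type-appropriate renderer. A minimal sketch; the data types, field names, and renderings are illustrative assumptions, not the ISV's actual formats:

```python
def format_for_display(feed):
    """Route a sensor data record to a type-appropriate text rendering."""
    formatters = {
        "image": lambda d: f"[image {d['width']}x{d['height']} px]",
        "thermal": lambda d: f"thermal: {d['celsius']:.1f} degC",
        "altitude": lambda d: f"altitude: {d['meters']:,} m",
    }
    # Fall back to a raw dump for types the viewer does not yet understand.
    render = formatters.get(feed["type"], lambda d: f"raw: {d}")
    return render(feed["data"])

feeds = [
    {"type": "image", "data": {"width": 1920, "height": 1080}},
    {"type": "altitude", "data": {"meters": 550000}},
]
lines = [format_for_display(f) for f in feeds]
```

New data types can then be supported by registering one more formatter rather than changing the viewer itself.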

Currently the ISV can take in multiple data feeds from different sources and display them on the user's screen. It also allows manual input of other data to display, but it cannot yet search for data and display it on its own.

In the future we would like the ISV to automatically search for data, pull it in, and display it to users. We would also like the ISV to use image manipulation to combine multiple single images taken from different angles into a 3D model.

Sensor Tasking Tool (STT)

After an architecture is created, the system will contain many sensors that need to be controlled. That is the STT's responsibility: to format and send requests to all sensors to change their position or orientation.

Currently the STT is not automated; tasking is done by hand. A user formats and sends requests to change a sensor's orientation or position, and the data from those sensors is recorded and sent to the ISV for viewing.

In the future we envision the STT as a fully automated system, where a user inputs the sensor to be changed and the task for that sensor, and the STT formats and sends the request to the sensor.
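The format-and-send step could look like building a structured message from the user's inputs. A minimal sketch, assuming a hypothetical JSON request schema; the field names and task vocabulary are invented for illustration:

```python
import json

def build_tasking_request(sensor_id, task, parameters):
    """Format a sensor tasking request as a JSON message (schema is illustrative)."""
    return json.dumps({
        "sensor_id": sensor_id,
        "task": task,            # e.g. "point", "slew", "reposition"
        "parameters": parameters,
    })

request = build_tasking_request(
    sensor_id="sat-4/wide-imager",
    task="point",
    parameters={"azimuth_deg": 135.0, "elevation_deg": 42.5},
)
```

An automated STT would then hand `request` to whatever transport the sensor actually listens on.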

Relevant Aspects of the World Emulator (RAWE)

The RAWE is responsible for emulating everything that exists outside of the ADSL. It is the visualization of our architectures and scenarios: once they have been defined and created, they are modeled using a variety of programs, including AGI's Systems Tool Kit, Unity, and others.

Currently we are working on enabling the RAWE to automatically read our architecture and scenario files and then create the simulation. Our main focus right now is defining the language in which we describe architectures and scenarios.

Analysis Phase

Response Surface Analysis Tool (RSAT)

The RSAT is what will guide the user to a final decision. As architectures are run through scenarios, each receives an evaluation score that represents whether the architecture passes or fails in that scenario. Based on these results, each architecture has an associated response surface, a 3D graph of these scores. It is the RSAT's job to go through all of these response surfaces, analyze them, and suggest the best architecture to the user.
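At its core, the suggestion step reduces to aggregating each architecture's scenario scores and ranking the results. A minimal sketch, assuming scores in [0, 1] and a simple mean as the aggregate; the architecture names and score values are invented, and a real RSAT would analyze the full response surface rather than a single summary statistic:

```python
# Each architecture's evaluation scores across the same set of scenarios.
scores = {
    "baseline":  [0.9, 0.4, 0.7, 0.8],
    "augmented": [0.8, 0.9, 0.6, 0.9],
    "minimal":   [0.5, 0.3, 0.9, 0.4],
}

def suggest_best(scores):
    """Rank architectures by mean scenario score and return the top one."""
    means = {name: sum(s) / len(s) for name, s in scores.items()}
    best = max(means, key=means.get)
    return best, means[best]

best, mean_score = suggest_best(scores)
```

Swapping the mean for a worst-case or weighted aggregate changes which trade-offs the suggestion favors, which is exactly the kind of analysis the RSAT is meant to automate.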

Currently this selection is manual: a researcher reads through the response surfaces and chooses the best architecture by hand.

In the future we envision a fully automated system that uses machine learning and artificial intelligence to record these results, categorize and analyze all response surfaces, and ultimately present users with the best suggested architecture for their problem.