AI/ML USAF
Case Study
Week 2 & 3 - Product Setup & Review: (4)
Establish Test Environment:
Customer engineering staff will provision and configure an AWS test environment to Quadrant specifications for the evaluation and provide login credentials to Quadrant. Quadrant will review the AWS configuration and confirm the testing environment is ready for product installation.
Product Setup & Configuration:
Once the testing environment is established, tested, and validated, Quadrant will implement the platform to be evaluated. The testing environment and the implementation of the various platforms to be tested will be performed in their entirety, building from an established baseline environment.
Customer Product Demonstration and Evaluation Assistance:
The customer will provide Quadrant access to a customer test environment and training on product features and usage. The customer will also provide a resource to assist with the product evaluation and the competitive product review.
Competitive Analysis Criteria Assessment & Documentation:
Armed with the testing environment, implemented platforms, established reporting and evidence collection capabilities, and established criteria for testing, Quadrant will begin the technical competitive analysis. This process executes all of the planning: the conditions, processes, policies, and practices used to evaluate and analyze performance and competitive capabilities.
Week 4 - Technical Competitive Analysis Presentation: (2)
Develop Findings Presentation:
Synthesize data from market research, Quadrant product testing, and other market discussions to finalize a competitive comparison matrix and develop a PowerPoint presentation.
Present Findings Presentation:
Quadrant consultants and market experts will present the PowerPoint deck, provide additional commentary on competitive dynamics, and provide recommendations for Customer corporate and product strategy. The Quadrant project team will deliver these presentations over a videoconference, or, preferably, in-person.
Roadmap -- AI -- by Microsoft.
BLUF: The AI Strategy Roadmap: Navigating the Stages of AI Value Creation -- URL: The AI Strategy Roadmap
An Application Architect plays a crucial role in designing, developing, and implementing software applications.
R&R:
Designing Software Architecture: They create the blueprint (OV-1 to SV-5a, from Reference to Logical Architecture, to the Physical Twin) for software applications, ensuring that the architecture meets both technical and operational requirements.
Developing Technical Specifications: They outline the technical details and specifications (SV-5, AV-1, AV-2) needed for the development process.
Collaborating with Teams: They work closely with development and operations teams to ensure the application is built according to the architectural plan.
Managing the Development Lifecycle: They oversee the entire development process, from initial design to deployment.
What is AI?
BLUF: AI is software that imitates human behaviors and capabilities. Data is king in AI.
Use Case: Non-Kinetic (NK) Enemy Target App for the USAF (363rd ISR Wing) and the intelligence community (CIA, NSA, NASIC, Navy, Army, and NATO).
Approach:
>> START << Logical framework -- DODAF artifact (OV-1, The "SIPOC")...
Plan > Strategy (VMGO) > Roadmap (Milestones) > CSF (Risk) > (M&M) > (LL)
Framework/Process Models:
Management Methodology -- PRINCE v2, Agile/Loop (Report at each SIPOC step), and DevSecOps.
Logical framework -- DODAF artifacts (OV-1, SV-4, AV-1, AV-2), MACH Arch.
Into the Physical Twin: Built by Kessel Run (a USAF Agency) in Boston.
MACH Architecture (Microservices-based, API-first, Cloud-native, and Headless) -- Microservices (Independent code deployed), API (Inboard/Outbound), Cloud (Azure tools), Headless (Non-Server Reach-back).
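The MACH idea above can be sketched in a few lines: a "headless" microservice has no user interface of its own and is reached only through an API (JSON in, JSON out). This is an illustrative stand-in; the names (`TARGETS`, `handle_request`) are hypothetical and not from any Azure SDK.

```python
import json

# Illustrative "API-first, headless" sketch: the service exposes data only
# through a JSON request/response interface, with no built-in UI.
TARGETS = {"T-001": {"type": "non-kinetic", "status": "tracked"}}

def handle_request(raw_request: str) -> str:
    """API-first entry point: accepts a JSON request, returns a JSON response."""
    request = json.loads(raw_request)
    target = TARGETS.get(request.get("target_id"))
    if target is None:
        return json.dumps({"error": "unknown target"})
    return json.dumps({"target_id": request["target_id"], **target})

print(handle_request('{"target_id": "T-001"}'))
```

Because each such service deploys independently (the "Microservices" in MACH), the frontend or another service calls it over the API without sharing code.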
Security: MS Entra ID (MFA, Conditional Access, SSO).
Low Code/No Code: Azure tools and Python.
STEPS:
Step 1: Methodology Plan-Build-Loops:
Agile Methodology -- Instead of building it right the first time (ref: Amazon), we created the minimum viable logical-architecture-to-physical product (aka v1.0) that is usable and daily operational, meeting deadlines, prioritization (what is important), and milestones efficiently.
M.A.C.H Architecture (Microservices, API-first, Cloud-native, & Headless: no server)
No Code-Low-Code (& AI Agents).
Azure AI Agent Service: a fully managed service that empowers developers to securely build, deploy, and scale high-quality, extensible AI agents. The service lets you create AI agents using powerful models such as GPT-4o, coded in Python via the Azure AI Foundry SDK.
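The core agent pattern can be sketched locally: a runtime receives a request and dispatches it to a registered tool. This is a minimal stand-in, not the Azure AI Foundry SDK (real code would authenticate and use a deployed model); the tool names here are invented for illustration.

```python
# Local sketch of the agent pattern: the runtime picks a registered tool
# for each request, the way a model-driven agent would choose an action.
def lookup_target(name: str) -> str:
    # Placeholder "tool": real agents would query an intelligence data store.
    return f"dossier for {name}"

def summarize(text: str) -> str:
    # Placeholder "tool": truncate in place of a real LLM summary.
    return text[:20]

TOOLS = {"lookup": lookup_target, "summarize": summarize}

def run_agent(tool_name: str, argument: str) -> str:
    """Dispatch a request to the matching tool, as an agent runtime would."""
    tool = TOOLS.get(tool_name)
    if tool is None:
        return "no tool available"
    return tool(argument)

print(run_agent("lookup", "site-alpha"))
```

In the real service, the model itself (rather than a hard-coded key) decides which tool to invoke from the user's natural-language request.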
Tool: Azure AI solutions (see below)
Step 2: Used Azure AI Tools.
Images (Vision) - Interprets the world visually through cameras, video, and image processing. Using voice, it can describe in detail what the AI sees, from images to documents. -- Study.
-- Tool-1: Azure AI Vision via Azure Vision Studio develops computer vision solutions to do:
Image Analysis: capabilities for analyzing images and video and extracting descriptions, tags, objects, and text.
Face Recognition: capabilities that enable you to build face detection and facial recognition solutions.
Optical Character Recognition (OCR): capabilities for extracting printed or handwritten text from images, enabling access to a digital version of the scanned text.
-- DOES (6): (1) Image Classification: identifies whether an image shows, e.g., a Taxi, a Car, or a Truck with a gun. (2) Object Detection: identifies the location of different classes of objects -- where the Bus is, where the Car is, where the People are. (3) Semantic Segmentation: identifies objects by color category -- Red=Bus or Hostile, Blue=Car or Non-Hostile, Green=Cyclist or People, White=Friendly. (4) Image Analysis: extracts descriptive information from an image. Ex: a satellite next to a warehouse surrounded by a fence in the desert, or a person with a dog on a street. (5) Face Detection, Analysis, and Recognition: uses facial geometry analysis to recognize individuals based on their facial features. (6) Optical Character Recognition (OCR): detects and reads text in images and documents, like road signs, storefronts, and buildings, in addition to websites, forms, and documents.
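Two of the capabilities above, semantic segmentation's class-to-color mapping and object detection's class-plus-bounding-box output, can be shown with hard-coded stand-in predictions; a real Azure AI Vision call returns structures of this shape from an actual image.

```python
# (3) Semantic segmentation sketch: map each detected class to the display
# color scheme described in the notes (Red=hostile, Blue=non-hostile, ...).
SEGMENT_COLORS = {"bus": "red", "car": "blue", "cyclist": "green", "friendly": "white"}

def color_for(detected_class: str) -> str:
    """Return the overlay color for a class; unknown classes fall back to gray."""
    return SEGMENT_COLORS.get(detected_class, "gray")

# (2) Object detection sketch: each detection pairs a class with a bounding
# box (left, top, right, bottom) -- values here are invented for illustration.
detections = [
    {"class": "car", "box": (40, 60, 120, 140)},
    {"class": "bus", "box": (200, 50, 380, 220)},
]

for d in detections:
    print(d["class"], color_for(d["class"]), d["box"])
```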
-- Tool-2: The MS Seeing AI app uses the device camera to identify people and objects, then audibly describes those objects and renders them as text for analysis.
Data & Document Intelligence (see Knowledge Mining) - Deals with managing, processing, and using high volumes of data found in forms and documents. -- VALUE: To create software that can automate processing for contracts, health documents, financial forms, etc. -- Study.
Data Extract -- Tool: Azure Data Factory (ADF) or Azure AI Search: Does image processing, document intelligence, and natural language processing (NLP): Extracts-Transfers-Loads (ETL) large data from various domain servers. It indexes previously unsearchable documents and extracts and surfaces insights from large amounts of data quickly.
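The Extract-Transform-Load flow named above can be sketched over in-memory records; this is an illustrative pipeline, not Azure Data Factory itself, and the field names are invented.

```python
# Minimal ETL sketch: extract usable records, transform them into an
# indexed form, and load them into a searchable store.
raw_records = [
    {"doc_id": 1, "text": "  Signal intercept at 0400Z  "},
    {"doc_id": 2, "text": ""},  # empty document, dropped at extract
]

def extract(records):
    """Extract: keep only records that actually contain text."""
    return [r for r in records if r["text"].strip()]

def transform(records):
    """Transform: normalize text and add a simple token field for search."""
    return [
        {"doc_id": r["doc_id"], "text": r["text"].strip(),
         "tokens": r["text"].strip().lower().split()}
        for r in records
    ]

def load(records, index):
    """Load: write transformed records into the target index."""
    for r in records:
        index[r["doc_id"]] = r
    return index

index = load(transform(extract(raw_records)), {})
print(index)
```

The token field is what makes "previously unsearchable" text queryable, which is the value Azure AI Search adds at scale.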
Classifications/Categorizations -- Tool: Azure AI Document Intelligence: To manage data collection (extracts data) from scanned documents, and "Hierarchical Document Classification (HDC)".
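Hierarchical document classification can be sketched as routing a document first to a top-level category and then to a subcategory. The keyword rules below are hypothetical; Azure AI Document Intelligence would use trained models rather than hand-written rules.

```python
# Hedged HDC sketch: category -> subcategory -> trigger keywords.
HIERARCHY = {
    "intelligence": {"sigint": ["intercept", "signal"],
                     "humint": ["source", "debrief"]},
    "logistics": {"supply": ["fuel", "parts"]},
}

def classify(text: str):
    """Return (category, subcategory) for the first matching keyword set."""
    words = set(text.lower().split())
    for category, subcats in HIERARCHY.items():
        for subcat, keywords in subcats.items():
            if words & set(keywords):
                return category, subcat
    return "unclassified", None

print(classify("signal intercept logged at 0400Z"))
```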
Doc Format -- Tool: Azure Functions: Formatting the analytical text (e.g., bolding titles, adjusting margins, inserting images) is a post-processing workflow task.
Analytical Writing -- Tool: Azure OpenAI Service: Uses GPT's large language models (LLMs) to write "Analytical" writings / narrative.
Generative AI - Creates original content from credible sources into a USAF narrative in a variety of formats, including natural language (analytical writings and spoken language), image processing, code, audio, etc. -- Study.
-- Tool: Azure ML enables you to streamline prompt engineering projects, build language model-based applications, and automate AI workflows.
Machine Learning (ML) - "Teach" the computer model to make predictions and draw conclusions from data (based on credible & non-credible content). -- Study.
Case Study: USAF AI/ML Non-Kinetic Enemy Target Application. -- Process (5): (1) Collect Data. A team of scientists identifies TS/SCI-level servers across the IC Network to collect data on enemy samples. (2) Categorize/Label. The team labels the enemy samples and categorizes them. (3) Process the Data. The categorized/labeled data is processed using an algorithm that finds relationships between the features of the samples and the labeled data. (4) Model(s). The results of the algorithm are encapsulated in a model. (5) Automation & Maturity. When new data are found, the model can identify the correct data to categorize/label.
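The five-step process above can be sketched as a toy nearest-centroid classifier: labeled samples go in, an algorithm relates features to labels, the result is a model, and the model then labels new data. The feature vectors and labels below are invented for illustration.

```python
# Steps 1-2: collected, labeled samples (feature vector -> label).
samples = [((1.0, 0.2), "hostile"), ((0.9, 0.1), "hostile"),
           ((0.1, 0.9), "benign"), ((0.2, 1.0), "benign")]

def train(samples):
    """Step 3: process the data -- compute the mean feature vector per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # Step 4: the per-label centroids ARE the model.
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(model, features):
    """Step 5: label new data by distance to the nearest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = train(samples)
print(predict(model, (0.95, 0.15)))  # nearest the "hostile" centroid
```

Azure ML's Automated ML performs the same train-then-predict cycle with real algorithms and feature engineering chosen automatically.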
-- Tool: Azure ML Studio, whose features (4) include: (1) Automated ML: enables non-experts to quickly create an effective machine learning model from data. (2) Azure ML Designer: a graphical interface enabling no-code development of ML solutions. (3) Data metric visualization: analyze and optimize your experiments with visualization. (4) Notebooks: write and run your code on managed Jupyter Notebook servers that are directly integrated into Azure ML Studio.
Natural Language Processing (NLP) - Deals with creating software that understands written and spoken language. -- Study.
-- DOES (4): (1) Analyze and interpret text in documents, email messages, and other sources. (2) Interpret spoken language, and synthesize speech responses. (3) Automatically translate spoken or written phrases between languages. (4) Interpret commands and determine appropriate actions.
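Capability (4) above, interpreting a command and determining the appropriate action, can be sketched with trivial keyword matching standing in for a trained language model; the intents and phrases are invented for illustration.

```python
# Hedged NLP sketch: map a natural-language command to an intent/action.
INTENTS = {
    "translate": ["translate", "in french", "in spanish"],
    "summarize": ["summarize", "summary", "brief"],
    "transcribe": ["transcribe", "speech to text"],
}

def interpret(command: str) -> str:
    """Return the first intent whose trigger phrase appears in the command."""
    text = command.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return "unknown"

print(interpret("Summarize the intercepted message"))
```

A conversational language model in Azure AI Language does this probabilistically, also extracting entities (what to summarize) rather than only the intent.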
-- Tools (3): Azure OpenAI Service (not available at this time), plus: (1) Azure Cognitive Services. (2) Azure AI Language: To understand and analyze text (into analytical writing), train conversational language models that can understand spoken or text-based commands, and build intelligent apps. (3) Azure AI Speech: Speech recognition and synthesis, real-time translations, conversation transcriptions, and more.
No Code/Low Code --
Tool: Azure Logic Apps orchestrates the workflow (using no code) to call the Custom Vision model and resize the output image for the final package.
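The orchestration pattern Logic Apps provides, running steps in order and passing each step's output to the next, can be sketched as a plain function chain. The "model call" and resize steps below are illustrative stand-ins, not real Custom Vision or image APIs.

```python
# Workflow-orchestration sketch: each step takes the working item and
# returns it updated, like connectors in a Logic Apps flow.
def call_vision_model(image):
    image["label"] = "vehicle"  # placeholder for a Custom Vision prediction
    return image

def resize(image, max_side=256):
    """Scale the (width, height) so the longest side equals max_side."""
    w, h = image["size"]
    scale = max_side / max(w, h)
    image["size"] = (round(w * scale), round(h * scale))
    return image

def run_workflow(image, steps):
    for step in steps:
        image = step(image)
    return image

result = run_workflow({"size": (1024, 512)}, [call_vision_model, resize])
print(result)
```

The no-code value of Logic Apps is that the `steps` list is assembled visually from prebuilt connectors instead of being written in code.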