We prefer participants from across multiple disciplines who are willing to work together to contribute to the advancement of AI by welcoming SE to build a science of interdependence for autonomous human-machine teams. Papers should be single-column MS Word or LaTeX documents (if LaTeX, please submit a PDF) in the form of an outline (of a thesis), an extended abstract (1-2 pages), or a manuscript of up to 8 pages in length. Please use APA-style citations in the text and references at the end. We plan to publish revised and extended conference papers as chapters of about 10,000 words each in a book after the Symposium. Please send symposium papers to the organizers listed above between November 1, 2019 and January 15, 2020.
Introduction. The disruptive nature of AI:
Presently, the U.S. is facing formidable threats from China and Russia. In response to these threats, the Director of the Defense Intelligence Agency (Ashley, 2019) and the DNI stated:
China ... [is] acquiring technology by any means available. Domestic [Chinese] laws forced foreign partners of Chinese-based joint ventures to release their technology in exchange for entry into China’s lucrative market, and China has used other means to secure needed technology and expertise. The result ... is a PLA on the verge of fielding some of the most modern weapon systems in the world. ... China is building a robust, lethal force with capabilities spanning the air, maritime, space and information domains which will enable China to impose its will in the region. (p. V) ... From China’s leader, Xi Jinping, to his 19th Party Congress (p. 17) “We must do more to safeguard China’s sovereignty, security, and development interests, and staunchly oppose all attempts to split China or undermine its ethnic unity and social harmony and stability.”
To address these and other competitive threats, Artificial Intelligence (AI), especially machine learning (ML), which we discuss next along with fusion, is a major factor. The U.S. Department of Defense (DoD), industry, commerce, education, and medicine, among many other fields, are seeking to use AI to gain a comparative advantage for their systems. From the perspective of DoD (2019),
AI is rapidly changing a wide range of businesses and industries. It is also poised to change the character of the future battlefield and the pace of threats we must face.
Simultaneously, the DoD recognizes the disruptive nature of AI (Oh et al., 2019). To mitigate this disruption while taking advantage of the ready-made solutions AI already offers to commerce, the current thinking appears to be to first use AI in areas that are less threatening to military planners, the public, and potential users; e.g., back-office administration; finance (e.g., Airbus is using AI to cut its financial costs by increasing efficiency, reducing errors, and freeing up humans for more strategic tasks such as planning, analysis and audits; in Maurer, 2019); data collection and management; basic personnel matters; virtual assistants for basic skills training (i.e., Military Occupational Specialties, or MOSs); personal medical monitoring (e.g., drug compliance, weight reduction, sleep cycles); military maintenance; and simple logistics (e.g., ordering, tracking, maintaining supplies).
Second, when the DoD and other fields address the more disruptive aspects of AI, such as autonomy and autonomous human-machine teams, many more social changes and impacts will arise, including the adverse threats posed by the use of AI (e.g., Horowitz, 2018); the unexplainable models applied anywhere, but especially in mission-critical loops or other critical applications (e.g., the so-called “black box” problem, the inability to discern exactly what machines are doing when they’re teaching themselves novel skills; in Kuang, 2017); and the "consequences of failure in autonomous and semi-autonomous weapon systems that could lead to unintended engagements” (DoD, 2019).
Machine Learning (ML) and Fusion: Machine learning has already had an extraordinary economic impact worldwide, estimated in the trillions of dollars, with even more economic and social impact to come (Brynjolfsson & Mitchell, 2017). The basic idea behind traditional ML methods is that a computer algorithm is trained, either on data collected in the field that represents previous experience (e.g., self-driving cars) or on a curated data set, to the extent that the algorithm can produce an outcome when presented with a novel situation (Raz et al., 2019).
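As a minimal sketch of this train-then-generalize idea (not of the specific fusion methods discussed here), the following code fits a standard classifier on labeled examples of "previous experience" and then asks it for a prediction on a previously unseen input; the data, feature names, and model choice are hypothetical.

```python
# Minimal sketch of the train-then-generalize idea behind traditional ML.
# The data, feature names, and model choice are illustrative only.
from sklearn.linear_model import LogisticRegression

# Toy "previous experience": each row is [obstacle_distance_m, closing_speed_mps],
# and the label is 1 if braking was the correct action, else 0.
X_train = [
    [50.0, 2.0],
    [10.0, 8.0],
    [40.0, 1.0],
    [5.0, 6.0],
]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # learn a decision rule from experience

# A novel situation never seen during training:
novel_situation = [[12.0, 7.5]]
print(model.predict(novel_situation))  # e.g., [1] -> the learned rule says "brake"
```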
Autonomy is dramatically changing the design and operational contexts in which future information fusion (IF) systems are evolving. There are many factors that influence or define these new contexts, among them: movement to cloud-based environments involving possibly many semi-autonomous functional agents (e.g., the Internet of Things, or IoT; in Lawless et al., 2019c); the employment of a wide range of processing technologies and methods spread across agents and teams; an exceptional breadth of types and modalities of available data; and diverse and asynchronous communication patterns among independent and distributed agents and teams. These factors describe the contexts of Complex Adaptive Systems (CAS), “systems in which a perfect understanding of the individual parts does not automatically convey a perfect understanding of the whole system's behavior” (in Raz et al., 2019).
Managing these disruptions requires addressing the need for speedy decisions; a systems approach; the commonality of interdependence in systems and social science; social science, including trust; the science of human-human teams (HHT); and human-machine teams (HMT). We discuss these topics in turn.
Justifying speedy decisions:
Now is the time when decisions may need to be made faster than humans can process (Horowitz, 2019), as with the military development of hypersonic weapons by competitor nations (e.g., China, in Wong, 2018); the push for quicker command, control and communication upgrades for nuclear weapons (NC-3, in DoD, 2018); and the common use of AI in public conveyances like self-driving cars, trucks, ships or subways.
Many systems approaching operational status use AI with humans “in-the-loop,” in which a human can override decisions made by human-machine or machine-machine teams in combat, such as the Navy’s new Ghost Fleet (LaGrone, 2019); the Army’s autonomous self-driving combat convoy (Langford, 2018); and the Marine Corps’ remote ordnance disposal by human-machine teams (CRS, 2018).
Even more dramatic changes will occur with human “on-the-loop” decisions, in which decisions must be made and acted upon faster than humans can process the incoming information. Among the new weapon systems, these decisions may be made by a human-machine team composed of an F-35 teaming with the Air Force's aggressive, dispensable “attritable” drones flying in a wing or offensive position (Insinna, 2019); moreover, hypersonic weapons are forcing humans into roles as passive bystanders until a decision and its accompanying action have been completed. From an article in the New York Times Magazine (Smith, 2019),
One of the two main hypersonic prototypes now under development in the United States is meant to fly at speeds between Mach 15 and Mach 20 ... when fired by the U.S. submarines or bombers stationed at Guam, they could in theory hit China’s important inland missile bases ... in less than 15 minutes ...
These speeds, however, would make ballistic missile interceptors (e.g., the ship-based Aegis, ground-based THAAD, and Patriot systems) ineffective against a hypersonic attack on the United States. If launched by China or Russia against the U.S. (Smith, 2019), these missiles
would zoom along in the defensive void, maneuvering unpredictably, and then, in just a few final seconds of blindingly fast, mile-per-second flight, dive and strike a target such as an aircraft carrier from an altitude of 100,000 feet.
Human “on-the-loop” observations of autonomous machines making self-directed decisions carry significant risks. On the positive side, since most accidents are caused by human error (Lawless et al., 2017), self-directed machines may save more lives. But an editorial in the New York Times (Editors, 2019) expressed the public’s concerns that AI systems can be hacked, suffer data breaches, and lose control to adversaries. The Editors quoted the UN Secretary General, Antonio Guterres, that “machines with the power and discretion to take lives without human involvement ... should be prohibited by international law.” The editorial recommended that “humans never completely surrender life and decision choices in combat to machines.”
Whether or not a treaty to manage threats from the use of “on-the-loop” decisions is enacted, the violations of existing treaties by nuclear states (e.g., NATO’s judgment about suspected Russian treaty violations; in Gramer & Seligman, 2018) suggest the need to understand the science of autonomy for “on-the-loop” decisions and to counter the systems that use them.
Further, the warning by the Editors of the New York Times is similar to the warnings that arose during the early years of atomic science. Those threats were managed while still allowing scientists to make numerous discoveries, leading to the extraordinary gifts to humanity that have followed, crowned by the Higgs (the so-called “God”) particle and quantum computing. The science of autonomy must likewise be managed to balance its threats while allowing scientists to make what we hope are similar advances in the social sphere, ranging from systems engineering and social science to international affairs.
Systems Engineering (SE):
SE is also concerned about whether AI and ML will replace humans in the decision loop (Howell, 2019). Systems engineers prefer that humans and machines coexist and that machines be used to augment human intelligence; but if decisions by machines overtake human decision-making, as is happening with “on-the-loop” decisions, humans should at least audit the machine decisions afterwards (viz., see the Uber car fatality case below). SE also raises a series of other concerns and questions.
In addition to the public’s concerns about AI expressed by the Editors of the New York Times, the application of AI/ML raises several concerns and questions for SE. One concern is whether or not to use a modular approach to build models (Rhodes, 2019). Systems engineers note that safety is an emergent property of a system (Howell, 2019). When a team “emerges,” the whole has become more than the sum of its parts (Raz et al., 2019); in contrast, emergence can also create “a whole significantly less than the sum of its parts” (e.g., Mead, 2019). But if SE using AI/ML is to be transformed through model-centric engineering (Blackburn, 2019), how is that to be accomplished for autonomous teams? Systems often do not stand alone; in those cases where systems are a network of networks, how shall systems engineers assure that the “pieces work together to achieve the objectives of the whole” (Thomas, 2019)? From retired General Stanley McChrystal’s book, Team of Teams: “We needed to enable a team operating in an interdependent environment to understand the butterfly-effect ramifications of their work and make them aware of the other teams with whom they would have to cooperate” (in Long, 2019). Continuing with emphasis added by Long (2019), in the attempt by the Canadian Armed Forces to build a shared Communication and Information Systems (CIS) capability with networked teams and teams of teams in its systems of organizations,
Systems must be specifically designed to enable resilient organizations, with the designer and community fully aware of the trade-offs that must be made to functionality, security, and cost. However, the benefits of creating shared consciousness, lowering the cost of participation, and emulating familiar human communication patterns are significant (Long's emphasis).
Among further concerns, metrics for autonomous AI systems, along with formal verification and validation (V&V), certification, and risk assessments of these systems at the design, operational, and maintenance stages, will be imperative for engineers (Lemnios, 2019; Richards, 2019). Is there a metric to assess the risk from collaboration, and if so, can it be calculated (Grogan, 2019)? The risk from not deploying AI systems should also be addressed (DeLaurentis, 2019); while an excellent suggestion, how can this concern be addressed?1 Measured in performance versus expectations, when will these risks preclude humans from joining teams with machines, and what effect will machine redundancy have in autonomous systems (Barton, 2019)? Because data are dumb, how will the operational requirements and architectures be tested and evaluated for these systems over their lifecycle (Dare, 2019; Freeman, 2019)?
Boundaries and deception: AI can be used to defend against outsiders, or used with deception to exploit vulnerabilities in targeted networks (Yampolskiy, 2017). A team’s system boundaries must be protected (Lawless, 2017a). Protecting a team’s networks is also a concern. In contrast, deception functions by not standing out (i.e., fitting in structurally; in Lawless, 2017b). Deception can be used to compromise a network. From the Wall Street Journal (Volz & Youssef, 2019), the Department of Homeland Security’s top cybersecurity official, Chris Krebs, issued a statement warning that Iran’s malicious cyber activities were on the rise. “What might start as an account compromise ... can quickly become a situation where you’ve lost your whole network.”
Caution: In the search for the optimization of an interdependent system, tradeoffs occur (Long, 2019); however, an AI system optimized to be effective and efficient should not trade off its resilience to disruption. For example, the most optimal AI solution might ignore resilience, and the most resilient AI solution might ignore optimization; the preferred solution should optimize effectiveness and efficiency while simultaneously assuring the most resilient solution possible. If an interaction with one part of a system has no effect on its other parts, a designer can ignore the system and focus on its independent components (Bode & Holstein, 2015); but if the resilience of a system is affected by its optimization, a tradeoff is involved, and these interdependent factors must be considered at the same time. The same reasoning should apply to the radically new system of autonomous Human-Machine Teams (discussed later) in their design, operation and management, including their financial, social and environmental impacts.
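As a minimal sketch of this caution (our construction, not a method from the works cited above), treating resilience as an explicit constraint rather than ignoring it changes which design an optimizer selects; the candidate designs, scores, and threshold below are hypothetical.

```python
# Hypothetical design candidates scored on efficiency and on resilience to disruption.
# Optimizing efficiency alone ignores resilience; the constrained version does not.
candidates = {
    "design_A": {"efficiency": 0.95, "resilience": 0.40},
    "design_B": {"efficiency": 0.85, "resilience": 0.80},
    "design_C": {"efficiency": 0.70, "resilience": 0.90},
}

RESILIENCE_FLOOR = 0.75  # assumed minimum acceptable resilience (illustrative)

# Optimizing efficiency alone picks the brittle design:
best_unconstrained = max(candidates, key=lambda d: candidates[d]["efficiency"])

# Optimizing efficiency subject to a resilience constraint picks a different design:
feasible = {d: v for d, v in candidates.items() if v["resilience"] >= RESILIENCE_FLOOR}
best_constrained = max(feasible, key=lambda d: feasible[d]["efficiency"])

print(best_unconstrained)  # design_A: efficient but brittle
print(best_constrained)    # design_B: efficient enough and resilient
```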
Common ground: AI, interdependence and SE:
Systems engineers know about interdependence from a system’s perspective. They claim to know little about human teams, a gap they hope to close by working with social scientists and by studying their own SE teams and organizations (DeLaurentis, 2019). Their own teams and organizations, however, are systems of social interdependence.
Systems Engineering addresses the interactions of systems that are too complex for an analysis of their independent parts without taking the system as a whole into account across its life cycle. System complexity arising from the “interdependencies between ... constituent systems” can produce unexpected effects (Walden et al., 2015, p. 10), making the management of systemic interdependence critical to a system’s success. For example, the numerous subsystems of a complex system like the International Space Station (ISS) interact interdependently (i.e., interdependence affected how the ISS modules were assembled into an integrated whole, how module upgrades affected each other, how interfaces between ISS modules were determined to be effective, how the overall configuration of the modules was constructed, how modules were modeled, etc.; in Stockman et al., 2010). From the ISS, we can see that in SE, interdependence transmits the interactions of subsystems. The study of interdependence in systems is not a new idea. For example, Llinas (2014, pp. 1, 6) issued a
call for action among the fusion, cognitive, decision-making, and computer-science communities to muster a cooperative initiative to examine and develop [the] ... metrics involved in measuring and evaluating process interdependencies ... [otherwise, the design of] modern decision support systems ... will remain disconnected and suboptimal going forward.
Similarly, in the social sciences, interdependence is the means of transmitting social effects (Lawless, 2019), such as the construction of a shared context between two humans and, we propose, for human-machine teams (HMT). Interdependence, then, is the phenomenon that not only links Systems Engineering, AI and other disciplines (e.g., social science, law, philosophy, etc.), but also, if it can be mastered, will provide a means to assist AI and SE in the development of a science of interdependence for human-machine teams.
The application of interdependence in a system to analyze an accident: In 2018, an Uber self-driving car struck and killed a pedestrian. From the investigation report (NTSB, 2018), the machine saw the pedestrian 6 seconds before striking her and selected emergency braking 1.3 seconds before impact, but the emergency brakes had been disconnected by Uber engineers to give the car a smoother ride. The human operator saw the victim 1 second before impact and hit her brakes 1 second after impact. Of the conclusions to be drawn, first, the Uber car performed faster than the human and as designed; but, second, the Uber car was a poor team player by not updating the context it should have shared with its human operator (Lawless et al., 2019b).
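To make the reported timeline concrete, the sketch below converts the NTSB time marks into rough distances from the pedestrian; the vehicle speed used here is an assumed, purely illustrative value, not a figure taken from the report.

```python
# Convert the NTSB-reported time marks into rough distances from the pedestrian.
# ASSUMPTION: a vehicle speed of ~17 m/s (about 38 mph) is illustrative only and
# is not taken from the NTSB report.
SPEED_MPS = 17.0

events = {
    "machine detects pedestrian": 6.0,        # seconds before impact (NTSB, 2018)
    "machine selects emergency braking": 1.3,
    "operator sees pedestrian": 1.0,
    "operator brakes": -1.0,                  # one second *after* impact
}

for event, t in events.items():
    print(f"{event:36s} t = {t:+.1f} s, distance ~ {SPEED_MPS * t:6.1f} m")
```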
Trust as part of the accident analysis: When machines will be qualified to be trusted remains an important question. As we pointed out in a bet forthcoming in AI Magazine (Lawless et al., 2019a), despite the complexity and costs of validating these systems, according to a New York Times investigation (Wakabayashi, 2018) of the pedestrian's death in 2018 caused by the Uber self-driving car, Waymo self-driving cars:
went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March [2018, when the accident happened], Uber was struggling to meet its target of 13 miles per “intervention” in Arizona ...
It must be kept in mind, however, that as incompletely and poorly trained as the Uber car, its engineers, and its operator were, the Uber car still responded to the situation as it had been designed, and its response was faster than that of its human operator.
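Taken at face value, the figures in the quote above imply a gap of more than two orders of magnitude between the two programs; the quick calculation below is ours, not the newspaper's.

```python
# Rough comparison of the disengagement figures quoted above (Wakabayashi, 2018).
waymo_miles_per_intervention = 5600   # "nearly 5,600 miles"
uber_target_miles_per_intervention = 13

ratio = waymo_miles_per_intervention / uber_target_miles_per_intervention
print(f"Waymo's quoted rate is roughly {ratio:.0f}x Uber's Arizona target")  # ~431x
```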
Social science:
The National Academies of Sciences (NAS, 2019) Decadal Survey of the Social and Behavioral Sciences finds that the social sciences want to be included in research using computational social science for human and AI agents in teams. In their thinking, social scientists are concerned about the ethical and privacy issues raised by the large digital databases being collected. For systems of social networks, they recommended further study on
how information can be transmitted effectively ... [from] change in social networks ... network structure of online communities, the types of actors in those communities ...
In addition, social scientists want more research to counter social cyberattacks, research on emotion, and, for our purposes (see below in Bisbey et al., 2019 for similar issues with research on human teams),
... how to assemble and divide tasks among teams of humans and AI agents and measure performance in such teams. ...
More importantly, while social scientists want to be included in the AI/ML revolution, they have had setbacks in their own disciplines with the reproducibility of experiments (e.g., Nosek, 2015; also, Harris, 2018). For our purposes, unexpectedly, research has indicated that the poorest-performing teams of scientists were interdisciplinary teams (Cummings, 2015).2 In addition, however, Cummings added that the best scientist teams maximized interdependence. Based on Cummings and our research (e.g., Lawless, 2019), we conclude that for interdisciplinary teams to function optimally, their team members must also be operating under maximum interdependence (Lawless, 2017a). By extension, for a team to maximize interdependence, its size must be the minimum size needed to solve the targeted problem (Lawless, 2017a), contradicting the Academy’s two assertions that "more hands make light work" (Cooke & Hilton, 2015, Ch. 1, p. 13) and that the optimal size of a scientific team is an open problem (p. 33).
The advent of human-machine teams has elevated the need to determine context computationally, yet social science has offered little guidance for the design or operation of such teams, or for preventing accidents (see the Uber self-driving car accident described above that killed a pedestrian in 2018), let alone the means to construct a computational context (Lawless et al., 2019b). Recognizing their plight, social scientists argue, and we agree, that their science is the repository of an extraordinary amount of statistical and qualitative experience in determining and evaluating contexts for humans and human teams (NAS, 2019). Nonetheless, this situation leaves engineers to seek a quantitative path on their own. Instead, we foresee an integrated path as the better course going forward (Lawless, 2019).
Trust and machine autonomy: In the rapid decision-making milieux where trust between machine and human members of a team becomes a factor (Beling, 2019), to build trust, each member of a human-machine team must be able to exchange information about its status with its teammates while also being able to keep that information private (Lawless et al., 2019b). Given that humans cause most accidents (Lawless et al., 2017), trust can also be important outside of the team, as when a human operator threatens the passengers being transported. This happened with the crash of Germanwings Flight 9525 in the Alps in March 2015, killing all 150 aboard at the hands of its copilot, who committed suicide (BEA, 2016); with the engineer on the train in the Northeast Corridor in the United States who allowed his train, rounding a curve, to speed above the track's limits (NTSB, 2016); and with the ship’s captain on the bridge of the McCain when the destroyer turned out of control in a high-traffic zone (NTSB, 2019). In these and numerous other cases, it is possible with current technology and AI to authorize a plane, train, other public or military vehicle, or Navy ship, as part of a human-machine team, to take control from its human operator (the bet that a machine will be authorized to take control from a dysfunctional human operator, in Lawless et al., 2019a).
The science of human teams:
From our review of the science of human teams, we note that Proctor and Vu (2019) conclude that the best forecasts improve with competition (Mellers & Tetlock, 2019). They also conclude that teams are formed by “extrinsic factors, intrinsic factors, or a combination of both.” Extrinsic motivation is often generated from the collective consensus of many stakeholders (the public, researchers, and sponsoring agencies) that there is an urgent problem that needs to be solved. But they asserted that solutions require “a multidisciplinary team that is large in scope ... [with] the resources required to carry out the research ... to appropriate subject-matter experts, community organizations and other stakeholders ... [and] within an organization, administrative support for forming, coordinating, and motivating multidisciplinary teams ...”
Salas and his colleagues (Bisbey et al., 2019) conclude that “Teamwork allows a group of individuals to function effectively as a unit by using a set of interrelated knowledge, skills and attitudes (KSAs; p. 279). [On the other hand] ... poor teamwork can have devastating results ... plane crashes, ... friendly fire, ... surgical implications ... When the stakes are high, survival largely depends on effective teamwork.” One of the first successes with human teams was crew resource management (CRM), prompted not by “human error” but by crew phenomena outside of individual crew-member competencies, such as the poor communication on United Flight 173 that led its captain to disregard the aircraft's fuel state; CRM required the crew to solve its problems as a team (p. 280). Another success for team science occurred in the attempts to understand the shoot-down of an Iranian commercial airliner by the USS Vincennes in 1988, leading to the study of stress in decision-making. Subsequently, new research followed a significant number of unrelated human errors: President Clinton’s Institute of Medicine (IOM) review of medical errors in hospitals; the coordination errors in the BP Deepwater Horizon oil spill in 2010; Hurricane Katrina in 2005; and the NASA space shuttle accidents, Columbia in 2003 and Challenger in 1986. Based on this new research, human team scientists separated task-work from teamwork: task-work deals with skills in a domain (e.g., flying a plane), while teamwork skills deal with team effectiveness across contexts (e.g., how to communicate with others; p. 282).
Human-machine teams:
A précis of our research on mathematical models of interdependence and future directions follows. From our hypothesis that the best teams maximize interdependence to communicate information via constructive and destructive interference, we have established that the optimum size of teams and organizations occurs when they are freely able to choose to minimize redundant team members (Lawless, 2017a). We replicated the finding about redundancy and freedom in making choices, adding that redundancy in over-sized teams is associated with corruption (Lawless, 2017b), and that the decision-making of teams and organizations in interdependent states under the pressure of competition implies tradeoffs that require intelligence to navigate around the obstacles that would otherwise preclude a team from reaching its goal, such as producing patents (Lawless, 2019). Our findings on redundancy contradict network scientists (Centola & Macy, 2007, p. 716) and the Academy (Cooke & Hilton, 2015, Chapter 1, p. 13). We also found that interdependence identified in tracking polls interferes adversely with predictions based on those polls (Lawless, 2017a,b); e.g., in 2016, Tetlock and Gardner’s superforecasters failed first by predicting that Brexit would not occur and then by predicting that Trump would not be elected President.
In a recent article (Lawless, 2019), we found evidence that intelligence, measured by levels of education, is significantly associated with the production of patents; however, in earlier research from 2001 reviewed in the same article, we reported that education specific to air-combat maneuvering was unrelated to the performance of fighter pilots engaged in air-to-air combat, indicating that intelligence and physical skills tap orthogonal phenomena. This offers a new model of mathematics and thermodynamics for teams that also accounts for the failure of complementarity to be established; viz., the best teams are composed of agents in orthogonal roles, measured by von Neumann subadditivity, whereas agents in the worst teams are in roles measured by Shannon information (e.g., the conflict between CBS and Viacom during 2016-18). Finally, orthogonality figures into our proposed next study of fundamental decision processes and emotion in a model of a social harmonic oscillator, where we hypothesize that the best teams operate in a ground state while underperforming teams operate in excited states (Lawless, 2019).
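For reference, the standard information-theoretic quantities invoked above can be written as follows; this is only a statement of the textbook definitions and inequalities, not of our team model itself.

```latex
% Von Neumann entropy of a joint state \rho_{AB} and its subadditivity:
% equality holds iff \rho_{AB} = \rho_A \otimes \rho_B (no correlation between A and B).
\[
  S(\rho) = -\mathrm{Tr}(\rho \log \rho), \qquad
  S(\rho_{AB}) \le S(\rho_A) + S(\rho_B).
\]
% By contrast, classical Shannon information is additive for independent agents:
% H(X,Y) = H(X) + H(Y) when X and Y are independent, and in general
\[
  H(X,Y) = H(X) + H(Y) - I(X;Y), \qquad I(X;Y) \ge 0 .
\]
```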
Summary:
Interdependence is the common ingredient that motivates Systems Engineering, AI and the science of human-machine teamwork. If AI scientists, systems engineers and others are to contribute to the development of autonomy for human-machine teams, the threats that autonomy poses to the world must be managed so as to permit the advances that may accrue across the social, systems, ethical, political, international and other landscapes for the benefit of humanity.
References:
Ashley, Jr., Robert P., Lieutenant General, U.S. Army, Director (2019), China, Military Power: Modernizing a force to fight and win, Defense Intelligence Agency, from https://www.dia.mil/Portals/27/Documents/News/Military%20Power%20Publications/China_Military_Power_FINAL_5MB_20190103.pdf
Barton, T. (2019, 4/17), Sea Hunter/AI, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
BEA (2016, 3/13), Accident to the Airbus A320-211, registered D-AIPX and operated by Germanwings, flight GWI18G, on 03/24/15 at Prads-Haute-Bléone, BEA2015-0125
Beling, P. (2019, 4/16), A systems theoretic framework for the AI LifeCycle, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Bisbey, T.M., Reyes, D.L., Traylor, A.M. & Salas, E. (2019), Teams of psychologists helping teams: The evolution of the science of team training, American Psychologist, 74(3): 278-289.
Blackburn, M. (2019, 4/16), Transforming SE through model centric engineering, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Bode, H.W. & Holstein, W.K. (2015), Systems engineering, Encyclopedia Britannica, from https://www.britannica.com/topic/systems-engineering
Brynjolfsson, E. & Mitchell, T. (2017), What can machine learning do? Workplace implications. Profound changes are coming, but roles for humans remain,” Science, 358: 1530–34.
Centola, D. & Macy, M. (2007), Complex Contagions and the Weakness of Long Ties, American Journal of Sociology, 113(3): 702–34.
Cooke, N.J. & Hilton, M.L. (Eds.) (2015), Enhancing the Effectiveness of Team Science. Authors: Committee on the Science of Team Science; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; National Research Council. Washington (DC): National Academies Press
CRS (2018, 11/20), "U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress, Congressional Research Service, p. 9, R45392, Version 3, from https://fas.org/sgp/crs/weapons/R45392.pdf
Cummings, J. (2015). Team Science Successes and Challenges. National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science (June 2), Bethesda MD (https://www.ohsu.edu/xd/education/schools/school-of-medicine/departments/clinical-departments/ radiation-medicine/upload/12-_cummings_talk.pdf).
DeLaurentis, D. (2019, 4/17), Breakout session, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
DoD (2018, 2), NUCLEAR POSTURE REVIEW, OFFICE OF THE SECRETARY OF DEFENSE, https://www.defense.gov/News/SpecialReports/2018NuclearPostureReview.aspx
DoD (2019, 2/12), SUMMARY OF THE 2018 DEPARTMENT OF DEFENSE ARTIFICIAL INTELLIGENCE STRATEGY: Harnessing AI to Advance Our Security and Prosperity, from https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
Editors (2019), “Ready for weapons with free will?” New York Times, from https://www.nytimes.com/2019/06/26/opinion/weapons-artificial-intelligence.html
Freeman, L. (2019, 4/17), AI as a change agent for test and evaluation, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Gardner, G. (2019, 3/6), “Uber Won't Face Charges In Fatal Arizona Crash, But Prosecutor Urges Further Probe,” Forbes, from https://www.forbes.com/sites/greggardner/2019/03/06/uber-wont-face-charges-in-fatal-arizona-crash-but-prosecutor-urges-further-probe/#6820859f475a
Gramer, R. & Seligman, L. (2018, 12/4), “Trump and NATO Show Rare Unity in Confronting Russia’s Arms Treaty Violation. NATO backs U.S. assertion that Moscow is violating a key Cold War-era arms treaty,” Foreign Policy, from https://foreignpolicy.com/2018/12/04/trump-and-nato-show-rare-unity-in-confronting-russia-arms-treaty-violation-inf/
Grogan, P. (2019, 4/17), Game-theoretic risk assessment for distributed systems, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Harris, R. (2018, 8/27), “In Psychology And Other Social Sciences, Many Studies Fail The Reproducibility Test,” National Public Radio, from https://www.npr.org/sections/health-shots/2018/08/27/642218377/in-psychology-and-other-social-sciences-many-studies-fail-the-reproducibility-te
Horowitz, B. (2019, 4/16), Introduction of the life cycle-ready AI concept, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Horowitz, M.C. (2018, 4/23), "The promise and peril of military applications of artificial intelligence," The Bulletin of the Atomic Scientists, from https://thebulletin.org/2018/04/the-promise-and-peril-of-military-applications-of-artificial-intelligence/
Howell, C. (2019, 4/16), Lifecycle implications for dependable AI, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Insinna, V. (2019, 6/21), “Lockheed hypes F-35′s upgrade plan as interest in ‘sixth-gen’ fighters grows,” Defense News, from https://www.defensenews.com/digital-show-dailies/paris-air-show/2019/06/21/lockheed-hypes-f-35s-upgrade-plan-as-interest-in-sixth-gen-fighters-grows/
Kuang, C. (2017, 11/21), “Can A.I. Be Taught to Explain Itself? As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it”, New York Times, from https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html
LaGrone, S. (2019, 3/13), ”Navy Wants 10-Ship Unmanned ‘Ghost Fleet’ to Supplement Manned Force,” U.S. Naval Institute, from https://news.usni.org/2019/03/13/navy-wants-ten-ship-3b-unmanned-experimental-ghost-fleet
Langford, J. (2018, 7/30), "Lockheed wins Army contract for self-driving military convoy systems,” Washington Examiner, from https://www.washingtonexaminer.com/business/lockheed-wins-army-contract-for-self-driving-military-convoy-systems
Lawless, W.F. Mittu, R., Sofge, D., & Russell, S., Eds. (2017), Autonomy and Artificial Intelligence: A threat or savior? New York: Springer.
Lawless, W.F. (2017a), The entangled nature of interdependence. Bistability, irreproducibility and uncertainty, Journal of Mathematical Psychology, 78: 51-64.
Lawless, W.F. (2017b), The physics of teams: Interdependence, measurable entropy and computational emotion, Frontiers of physics. 5:30. doi: 10.3389/fphy.2017.00030
Lawless, W.F. (2019, forthcoming), Interdependence for human-machine teams, Foundations of Science.
Lawless, W.F. (Pro Bet), Mittu, Ranjeev (Con) & Sofge, Donald (Referee) (2019a, forthcoming), AI Bookie Bet: How likely is it that an AI-based system will self-authorize taking control from a human operator? AI Magazine.
Lawless, W.F., Mittu, R., Sofge, D.A. & Hiatt, L. (2019b), Introduction to the Special Issue, “Artificial intelligence (AI), autonomy and human-machine teams: Interdependence, context and explainable AI,” AI Magazine.
Lawless, W.F., Mittu, R., Sofge, D., Moskowitz, I.S. & Russell, S. (Eds.) (2019c), Artificial Intelligence for the Internet of Everything. Elsevier.
Lemnios, Z. (2019, 4/17), IBM research, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Llinas, J. (2014). Reexamining Information Fusion–Decision Making Inter-dependencies, Presented at the IEEE CogSIMA conference, San Antonio, TX.
Long, J. (2019), “National Defence and the Canadian Armed Forces: Enabling Organizational Resilience through Communication and Information Systems Design,” Canadian Military Journal, 19(2): 15, from http://www.journal.forces.gc.ca/Vol19/No2/page15-eng.asp
Maurer, M. (2019, 8/19), "Airbus Harnessing AI in Bid to Save Millions on Finance Tasks. The aircraft maker’s Americas unit is digitizing the approval of expense reports and payment of invoices,” Wall Street Journal, from https://www.wsj.com/articles/airbus-harnessing-ai-in-bid-to-save-millions-on-finance-tasks-11566207002
Mead, W.R. (2019, 6/4), “Trump’s Case Against Europe. The president sees Brussels as too weak, too liberal, and anti-American on trade,” Wall Street Journal, from https://www.wsj.com/articles/trumps-case-against-europe-11559602940
NAS (2019), A Decadal Survey of the Social and Behavioral Sciences: A Research Agenda for Advancing Intelligence Analysis, National Academies of Sciences.
Nosek, B. (corresponding author), Open Science Collaboration (2015), Estimating the reproducibility of psychological science, Science, 349(6251): 943; supplementary: 4716-1 to 4716-9. (See also National Academies of Sciences, Engineering, and Medicine (2019), Reproducibility and Replicability in Science, Washington, DC: The National Academies Press, https://doi.org/10.17226/25303.)
NTSB (2016, 5/17), Derailment of Amtrak Passenger Train 188, National Transportation Safety Board (NTSB), NTSB Number: RAR-16-02, from https://www.ntsb.gov/Investigations/AccidentReports/Pages/RAR1602.aspx
NTSB (2018, 5/24), "Preliminary Report Released for Crash Involving Pedestrian, Uber Technologies, Inc., Test Vehicle,” National Transportation Safety Board, from https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx
NTSB (2019, 8/5), “Insufficient Training, Inadequate Bridge Operating Procedures, Lack of Operational Oversight Led to Fatal Ship Collision,” NTSB: Collision between US Navy Destroyer John S McCain and Tanker Alnic MC, Singapore Strait, 5 Miles Northeast of Horsburgh Lighthouse [accident occurred on] August 21, 2017, Marine Accident Report, NTSB/MAR-19/01 PB2019-100970, from https://www.ntsb.gov/investigations/AccidentReports/Reports/MAR1901.pdf
Oh, P., Spahr, T., Chase, C. & Abadie, A. (2019, 6/19), “INCORPORATING ARTIFICIAL INTELLIGENCE: LESSONS FROM THE PRIVATE SECTOR,” War Room, United States Army War College, from https://warroom.armywarcollege.edu/articles/incorporating-artificial-intelligence-private-sector/
Proctor, R.W. & Vu, K.P.L. (2019), How Psychologists help solve real-world problems in multidisciplinary research teams: Introduction to the Special Issue, American Psychologist, 74(3): 271-277.
Raz, Ali K., Llinas, James, Mittu, Ranjeev & Lawless, W. (2019), Engineering for Emergence in Information Fusion Systems: A Review of Some Challenges, Fusion 2019, Ottawa, Canada | July 2-5, 2019.
Rhodes, D. (2019, 4/16), Interactive model-centric engineering (IMCSE), SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Richards, R. (2019, 4/17), Program manager at DARPA, invited talk, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Smith, R.J. (2019, 6/23), “Scary fast: How hypersonic missiles—which travel at more than 15 times the speed of sound—are touching off a new global arms race that threatens to change the nature of warfare,” New York Times Magazine, pp. 42-48; also see https://www.nytimes.com/2019/06/19/magazine/hypersonic-missiles.html
Stockman, B., Boyle, J. & Bacon, J. (2010), International Space Station Systems Engineering Case Study, Air Force Center for Systems Engineering, Air Force Institute of Technology, from https://spacese.spacegrant.org/uploads/images/ISS/ISS%20SE%20Case%20Study.pdf
Thomas, J. (2019, 4/17), INCOSE discussion, SERC Workshop: Model Centric Engineering, Georgetown University, Washington, DC, April 16 & 17, 2019.
Volz, D. & Youssef, N. (2019, 6/23), “U.S. Launched Cyberattacks on Iran. The cyberstrikes on Thursday targeted computer systems used to control missile and rocket launches,” Wall Street Journal, from https://www.wsj.com/articles/u-s-launched-cyberattacks-on-iran-11561263454
Wakabayashi, D. (2018, 3/23), “Uber’s Self-Driving Cars Were Struggling Before Arizona Crash,” New York Times, from https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html
Walden, D.D., Roedler, G.J., Forsberg, K.J., Hamelin, R.D. & Shortell, T.M. (Eds.) (2015), Systems Engineering Handbook: A guide for system life cycle processes and activities (4th Edition), prepared by the International Council on Systems Engineering (INCOSE-TP-2003-002-04). Hoboken, NJ: John Wiley & Sons.
Wong, K. (2018, 8/10), “China claims successful test of hypersonic waverider,” Jane’s 360, from https://www.janes.com/article/82295/china-claims-successful-test-of-hypersonic-waverider
Yampolskiy, R.V. (2017, 5/8), “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, from https://hbr.org/2017/05/ai-is-the-future-of-cybersecurity-for-better-and-for-worse
Endnotes:
1 One possibility is to use global metrics. In the case of the Uber car accident discussed below that killed a pedestrian, the industry’s first pedestrian fatality, the company’s self-driving division did not suffer until the accident; since then, Uber and the rest of the self-driving industry have been significantly slowed by the fatality (Gardner, 2019).
2 Cummings studied about 500 teams of scientists in the National Science Foundation’s database.