Applications

Research from a CSET survey reveals that AI professionals are more willing to work with the U.S. military than is commonly perceived.

The U.S.-China split in space

Axios | November 17, 2020

Axios' article on the U.S.-China split in space highlighted reports by CSET's Matthew Daniels and Lorand Laskai, written for a project of the Johns Hopkins Applied Research Laboratory.

CSET survey research shows that employees in the AI sector are either supportive of or neutral about working on Department of Defense projects. FedScoop highlighted this research in the article below.

“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”

Catherine Aiken, Rebecca Kagan, and Michael Page | November 2020

Is there a rift between the U.S. tech sector and the Department of Defense? To better understand this relationship, CSET surveyed U.S. AI industry professionals about their views toward working on DOD-funded AI projects. The authors find that these professionals hold a broad range of opinions about working with DOD. Among the key findings: Most AI professionals are positive or neutral about working on DOD-funded AI projects, and willingness to work with DOD increases for projects with humanitarian applications.

Foretell was CSET's crowd forecasting pilot project focused on technology and security policy. It connected historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. In January 2022, Foretell became part of a larger forecasting program to support U.S. government policy decisions called INFER, which is run by the Applied Research Laboratory for Intelligence and Security at the University of Maryland and Cultivate Labs.

During this live event, CSET Research Fellow Dr. Margarita Konaev discussed U.S. military investments in autonomy and artificial intelligence. She provided recommendations for the Defense Department to effectively develop and field AI in the future.

CSET Research Fellow Margarita Konaev assessed the importance of building trust in AI systems for the DOD's successful use of AI on the battlefield. See FedScoop's interview with her and a readout of her research below.

National security leaders view AI as a priority technology for defending the United States. This two-part analysis is intended to help policymakers better understand the scope and implications of U.S. military investment in autonomy and AI. It focuses on the range of autonomous and AI-enabled technologies the Pentagon is developing, the military capabilities these applications promise to deliver, and the impact that such advances could have on key strategic issues.

This brief examines how the Pentagon’s investments in autonomy and AI may affect its military capabilities and strategic interests. It proposes that DOD invest in improving its understanding of trust in human-machine teams and leverage existing AI technologies to enhance military readiness and endurance. In the long term, investments in reliable, trustworthy, and resilient AI systems are critical for ensuring sustained military, technological, and strategic advantages.

The Pentagon has a wide range of research and development programs using autonomy and AI in unmanned vehicles and systems, information processing, decision support, targeting functions, and other areas. This policy brief delves into the details of DOD’s science and technology program to assess trends in funding, key areas of focus, and gaps in investment that could stymie the development and fielding of AI systems in operational settings.