Husanjot Chahal was a Research Analyst at Georgetown University’s Center for Security and Emerging Technology (CSET). Prior to CSET, she worked in the World Bank’s Corporate Security division, and at Prevalent, Inc., a cybersecurity risk management firm in Washington, D.C. She also worked in New Delhi-based research organizations including the Indian Ministry of Defence’s Institute for Defence Studies and Analyses (IDSA), and the Institute of Peace and Conflict Studies (IPCS), examining security issues in South Asia.
Husan holds a Bachelor’s degree in Political Science from Lady Shri Ram College at the University of Delhi, a Master’s degree in International Security and Terrorism from the University of Nottingham, and a Master’s degree in Security Studies from Georgetown University. While studying at Georgetown she was the Walsh School of Foreign Service’s Junior Centennial Fellow and a researcher at the Berkley Center for Religion, Peace, and World Affairs. She is a recipient of Georgetown University’s Global Citizen Award (2019) and the Director’s Citizenship Award (2018).
In a report for the Observer Research Foundation, Research Analyst Husan Chahal writes about the ethics of artificial intelligence, arguing that the multitude of AI ethics efforts across such a diverse group of stakeholders reflects the need for guidance in AI development.
CSET’s Country Activity Tracker (CAT) presents data on countries’ artificial intelligence ecosystems, offering an overview of domestic capabilities as well as insights into global competitiveness and collaboration. It presents metrics on AI research, patents, and investment-related activities for AI overall and for its various subfields.
Through the Quad forum, the United States, Australia, Japan and India have committed to pursuing an open, accessible and secure technology ecosystem and offering a democratic alternative to China’s techno-authoritarian model. This report assesses artificial intelligence collaboration across the Quad and finds that while Australia, Japan and India each have close AI-related research and investment ties to both the United States and China, they collaborate far less with one another.
Conventional wisdom suggests that cutting-edge artificial intelligence is dependent on large volumes of data. An overemphasis on “big data” ignores the existence—and underestimates the potential—of several AI approaches that do not require massive labeled datasets. This issue brief is a primer on “small data” approaches to AI. It presents exploratory findings on the current and projected progress in scientific research across these approaches, which country leads, and the major sources of funding for this research.
As multinational collaboration on emerging technologies takes center stage, U.S. allies and partners must overcome the technological, bureaucratic, and political barriers to working together. This report assesses the challenges to multinational collaboration and explains how joint projects centered on artificial intelligence applications for military logistics and sustainment offer a viable path forward.
With its massive information technology workforce, thriving research community and a growing technology ecosystem, India has a significant stake in the development of artificial intelligence globally. Drawing from a variety of original CSET datasets, the authors evaluate India’s potential for AI by examining its progress across five categories of indicators pertinent to AI development: talent, research, patents, companies and investments, and compute.
CSET Research Fellow Margarita Konaev and Research Analyst Husanjot Chahal discuss research gaps on trust in human-machine teaming and how to build trustworthy AI for military systems and missions.
As the U.S. military integrates artificial intelligence into its systems and missions, there are outstanding questions about the role of trust in human-machine teams. This report examines the drivers and effects of such trust, assesses the risks from too much or too little trust in intelligent technologies, reviews efforts to build trustworthy AI systems, and offers future directions for research on trust relevant to the U.S. military.
National security leaders view AI as a priority technology for defending the United States. This two-part analysis is intended to help policymakers better understand the scope and implications of U.S. military investment in autonomy and AI. It focuses on the range of autonomous and AI-enabled technologies the Pentagon is developing, the military capabilities these applications promise to deliver, and the impact that such advances could have on key strategic issues.
This brief examines how the Pentagon’s investments in autonomy and AI may affect its military capabilities and strategic interests. It proposes that DOD invest in improving its understanding of trust in human-machine teams and leverage existing AI technologies to enhance military readiness and endurance. In the long term, investments in reliable, trustworthy, and resilient AI systems are critical for ensuring sustained military, technological, and strategic advantages.
The Pentagon has a wide range of research and development programs using autonomy and AI in unmanned vehicles and systems, information processing, decision support, targeting functions, and other areas. This policy brief delves into the details of DOD’s science and technology program to assess trends in funding, key areas of focus, and gaps in investment that could stymie the development and fielding of AI systems in operational settings.
Today’s research and development investments will set the course for artificial intelligence in national security in the coming years. This Executive Summary presents key findings and recommendations from CSET’s two-part analysis of U.S. military investments in autonomy and AI, including our assessment of DOD’s research priorities, trends and gaps, as well as ways to ensure U.S. military leadership in AI in the short and the long term.
Both China and the United States seek to develop military applications enabled by artificial intelligence. This issue brief reviews the obstacles to assessing data competitiveness and provides metrics for measuring data advantage.
The United States must collaborate with its allies and partners to shape the trajectory of artificial intelligence, promoting liberal democratic values and protecting against efforts to wield AI for authoritarian ends.