Our researchers have been sought out by the media for their expertise on the latest developments in emerging technology and national security. This month, they weighed in on U.S. artificial intelligence workforce policy, AI safety and reliability, data fusion in Chinese surveillance programs, and much more.
BBC
Research Analyst Emily Weinstein made an appearance across the pond in a segment on the BBC program “The World Tonight” to discuss China’s investment in science and technology development and its position as a global competitor in emerging technology. She said Chinese President Xi Jinping “has really doubled down on his efforts to boost not only specific S&T sectors, but has also focused on the underlying issues that China has been facing in achieving its science and technology goals.”
Emerging Tech Brew
A new CSET report says the United States lacks a dedicated, clearly defined AI education and workforce policy. Emerging Tech Brew reached out to CSET Research Fellow Diana Gehlhaus, the report’s lead author, to learn more about AI education gaps and the report’s policy recommendations. “This really does get into the fact of: What do you want to be when you grow up? Nobody says data scientist, right? Nobody says, ‘I want to be an AI researcher’…We want to make sure that everybody has an opportunity, not just the people who fit in the four-year-college mold,” Gehlhaus said. To improve U.S. AI workforce policy, the authors recommend that policymakers increase the number of AI PhDs, sustain and diversify pipelines into AI roles, and introduce AI education into K-12 curricula.
“The United States can’t lead in AI talent without education and workforce policies that target growing and cultivating the U.S. AI workforce,” Gehlhaus wrote in an op-ed for The Hill timed to the report’s release. “The U.S. needs dedicated policies that consider the entire range of technical and nontechnical talent needed to design, develop, and deploy safe and trustworthy AI capabilities. The United States also needs policies that prepare future workers to compete and succeed in a world characterized by widespread AI adoption.”
Breaking Defense
CSET’s September webinar featuring Andrew Lohn, Katerina Sedova and Micah Musser raised alarms about how AI technology, specifically the natural language processing system GPT-3, can generate credible disinformation. Breaking Defense covered the event and highlighted their report “Truth, Lies, and Automation.” The authors have “grown pretty concerned because… these language models have become very, very capable, and it’s difficult for humans to tell what’s written by a human and what’s written by a machine.”
Brookings Tech Stream
In an opinion piece for Brookings Tech Stream, Research Analyst Dahlia Peterson unpacked China’s use of data fusion in surveillance programs and its policy implications. “Increased attention to Chinese data fusion practices—and its supporting companies—would allow U.S. policy to target China’s surveillance state at a core level, rather than only facial and voice recognition elements that feed into fusion architectures,” Peterson wrote.
Air Force Magazine
Increased reliance on AI raises the risk of adversarial attacks through hacking, according to Air Force Magazine, which cited Andrew Lohn’s machine learning primer “Hacking AI.” The paper illustrates how hackers can compromise AI systems, since “machine learning vulnerabilities often cannot be patched the way traditional software can, leaving enduring holes for attackers to exploit.” To build resilient AI systems, Lohn argues that “policymakers should pursue approaches for providing increased robustness, including the use of redundant components and ensuring opportunities for human oversight and intervention when possible.”
University World News
Ongoing tensions between the United States and China over the U.S. Justice Department’s China Initiative have raised questions about the viability of U.S.-China academic collaboration. University World News spoke with Emily Weinstein, who said China is no longer seeking to place its students only at premier universities. “[China’s] strategy is much broader,” she said. “I think they understand there are reasons to go to technical colleges too. University of California, Davis came up in one of the China Initiative cases recently. UC Davis is not a school you think of as a place you can go to steal technology.” She also discussed the need for a public-private clearinghouse to conduct investigations on Chinese scholars seeking to come to the United States and American scholars seeking to do research in China.
EE Times
CSET Director of Strategy Helen Toner joined the EE Times podcast “On the Air” to discuss what safe and reliable AI should look like, drawing from her co-authored report “AI Accidents: An Emerging Threat.” In the interview, Toner made the case for technical standards and agreed-upon safety guidelines as artificial intelligence continues to develop. She noted, “We need time and discussion and debate at a very broad level, with engagement of many, many stakeholders around, if we are going to automate some task — and there are many, many tasks we could consider automating — if we are going to automate that, what does acceptable performance look like?”
Spotlight on CSET Experts: Diana Gehlhaus
Diana Gehlhaus is a Research Fellow focused on tech and talent, including domestic talent pipelines in AI; workforce development and education policy; youth career and educational decision making; trends in employer hiring, recruiting, and retention; military and federal civilian talent management; and technology and telecommunications policy.
Her latest publications include U.S. AI Workforce: Policy Recommendations, The DOD’s Hidden Artificial Intelligence Workforce, and AI Education in China and the United States. Her research has been featured in The Hill, Emerging Tech Brew, Forbes, Fortune, The Washington Post, and many more.
Interested in speaking with Diana or our other experts? Contact External Affairs Specialist Adrienne Thompson at firstname.lastname@example.org