We’re hiring! If you’re interested in joining our team, check out the positions in the “Job Openings” section below or consult our careers page.
Princeton Researchers Sound the Alarm on ML & Research Reproducibility: The widespread adoption of machine learning–based research methods has created a “reproducibility crisis” across a number of research fields, according to two computer science researchers. In their preprint paper, Princeton Professor Arvind Narayanan and PhD student Sayash Kapoor reviewed papers across 18 fields, including political science, medicine, satellite imaging, and computer security, and identified issues in more than 300 of them. Machine learning has become increasingly popular among military planners, investors and law enforcement because of its potential to predict future outcomes from past results, but Narayanan and Kapoor warn that common issues stemming from the way training data is collected, sampled and processed can produce models that are significantly overconfident in their predictions. Because these “data leakage” issues can be difficult (if not impossible) to identify with current peer-review practices, Narayanan and Kapoor warn that the problems are likely to persist unless changes are made. To help researchers and reviewers identify machine learning–related issues, they proposed adopting “model info sheets” (inspired by and similar to model cards) that researchers could fill out and publish alongside their papers.
NIST Publishes a New Draft of Its AI Risk Management Framework: Last week, the National Institute of Standards and Technology published the second draft of its AI Risk Management Framework, a document meant to help AI developers anticipate and manage the risks unique to AI systems. In the draft, NIST clarifies that the AI RMF is not meant to serve as a checklist — it is intended to be consulted voluntarily by organizations throughout their AI products’ lifecycles, from early-stage planning through deployment. Alongside the framework, NIST also released an early version of the accompanying AI Risk Management Framework Playbook, which includes specific actions organizations can take to ensure the trustworthiness of their systems. The playbook is only partially complete — a NIST official told FedScoop that the rest of the playbook will be completed after stakeholder feedback is received. Both the draft and the playbook are open for public comment until September 29, 2022. NIST will then host a third and final workshop on October 18–19, and aims to officially publish the AI RMF in January 2023.
In Translation: CSET’s translations of significant foreign language documents on AI
PRC AI Training White Paper: China Artificial Intelligence Talent Training Report. This white paper, co-authored by China’s prestigious Zhejiang University and Chinese tech company Baidu, examines China’s system for AI training in depth. Pointing to the United States’ dominance in AI talent, the authors urge closer and more rational cooperation between Chinese universities and tech companies in training AI talent to improve China’s self-sufficiency in AI. The authors also call for China to establish an international standards body for AI talent training.
If you have a foreign-language document related to security and emerging technologies that you’d like translated into English, CSET may be able to help! Click here for details.
We’re hiring! Please apply or share the roles below with candidates in your network:
Visual Communications Specialist: The Visual Communications Specialist will support the work of the External Affairs Team to raise the profile of CSET’s research through a variety of outreach activities such as graphics, multimedia, and publications. Apply by September 16.
Research Fellow — AI Applications: This Research Fellow will focus on helping decision makers evaluate and translate new and emerging technologies, particularly in the field of AI, into novel capabilities by separating real trends and strategic opportunities from technological hope and hype. Rolling application — Apply today.
Senior Fellow: This Senior Fellow will provide mentorship and intellectual leadership to CSET researchers; shape analysis that is aligned with our research priorities; and facilitate engagements with government, military, academic, and industry leaders. Rolling application — Apply today.
Georgetown Student Opportunities: We are currently seeking applications for Emerging Technology Observatory Research Assistants (JobID: 29149), an External Affairs Assistant (JobID: 26749), and an Editorial Assistant (JobID: 27389). Please note that applications must be submitted via the SEO website. Rolling application — Apply today.
Please bookmark our careers page to stay up to date on all active job postings. You can also subscribe to receive job announcements by updating your subscription preferences in the footer of this email.