CSET's Ngor Luong testified before the U.S.-China Economic and Security Review Commission where she discussed Chinese investments in military applications of AI.
CSET's Jack Corrigan testified before the U.S.-China Economic and Security Review Commission where he discussed security threats posed by Chinese information and communications technology systems.
As the U.S. government considers banning Chinese genomics companies under the Biosecure Act, a broader question arises: how should the United States and other market economies deal with China's national champions? This blog post provides an overview of BGI and of how China's industrial policy shapes technology development.
As the U.S. government tightens its controls on China's semiconductor ecosystem, a new dimension is increasingly worrying Congress: the open-source chip architecture known as RISC-V (pronounced "risk-five"). This blog post provides an introduction to the RISC-V architecture and explains what policymakers can do to address concerns about this open architecture.
On January 17, 2024, CSET researchers submitted a response to proposed rules from the Bureau of Industry and Security at the U.S. Department of Commerce. In the submission, CSET recommends, among other things, that Commerce not implement controls on U.S. companies providing Infrastructure-as-a-Service (IaaS) to Chinese entities.
In an article published by Bloomberg discussing the influence of Big Tech companies on the artificial intelligence (AI) startup ecosystem, CSET's Ngor Luong provided her expert insights.
In an article by Forbes discussing recent economic trends in the United States following the COVID-19 recession, CSET's Matthias Oschinski provided his expert insights.
This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.
In an op-ed published in The Bulletin, CSET's Owen J. Daniels discusses the Biden administration's executive order on responsible AI use, emphasizing the importance of clear signals in AI policymaking.