In an op-ed article published in Lawfare, CSET’s Lauren Kahn discusses the increasing integration of Artificial Intelligence (AI) in military operations globally and the need for effective governance to avoid potential mishaps and escalation.
In an article published by The Messenger, Luke Koslosky provided his expert insights into the evolving landscape of artificial intelligence (AI) education and job opportunities.
In an article published by The Hill that discusses the growing concerns and risks associated with artificial intelligence (AI), CSET's Dewey Murdick provided his expert insights.
While recent progress in artificial intelligence (AI) has relied primarily on increasing the size and scale of the models and computing budgets for training, we ask if those trends will continue. Financial incentives are against scaling, and there can be diminishing returns to further investment. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and inventive use of existing models than on simply increasing investment in compute resources.
In an article published by Bloomberg that discusses the influence of Big Tech companies on the artificial intelligence (AI) startup ecosystem, CSET's Ngor Luong provided her expert insights.
In an article by Forbes discussing the recent economic trends in the United States following the Covid-19 recession, CSET's Matthias Oschinski provided his expert insights.
This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.
In a WIRED article discussing issues with Microsoft's AI chatbot providing misinformation, conspiracies, and outdated information in response to political queries, CSET's Josh A. Goldstein provided his expert insights.
In an op-ed published in The Bulletin, CSET’s Owen J. Daniels discusses the Biden administration's executive order on responsible AI use, emphasizing the importance of clear signals in AI policymaking.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.