Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.
Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trend is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI cannot rely exclusively on applying ever more computing power to drive further progress.
It is common for observers to compare machine intelligence with individual human intelligence, but this tendency can narrow and distort understanding. Rather, this paper suggests that machines, bureaucracies and markets can usefully be regarded as a set of artificial intelligences that have been invented to complement the limited abilities of individual human minds to discern patterns in large amounts of data. This approach opens an array of possibilities for insight and future investigation.
Progress in artificial intelligence has led to growing concern about the capabilities of AI-powered surveillance systems. This data brief uses bibliometric analysis to chart recent trends in visual surveillance research — what share of overall computer vision research it comprises, which countries are leading the way, and how things have varied over time.
By combining a versatile and frequently updated bibliometrics tool — the CSET Map of Science — with more hands-on analyses of technical developments, this brief outlines a methodology for measuring the publication growth of AI-related topics, where that growth is occurring, what organizations and individuals are involved, and when technical improvements in performance occur.
This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.
Drawing from their report "Small Data's Big AI Potential," CSET's Helen Toner and Husanjot Chahal discuss why smaller data approaches to AI can be helpful and how this approach can be applied within Europe.