The Financial Times interviewed CSET’s Heather Frase for an article about a red team recently organized by OpenAI: a group of 50 academics and experts hired to test and mitigate the risks of GPT-4.
As a red team member, Frase tested GPT-4’s potential to aid criminal activity and found that the risks associated with the technology would grow with widespread use. She emphasized the importance of operational testing, noting that “things behave differently once they’re actually in use in the real environment.” Frase also advocated for a public ledger to report incidents involving large language models, similar to existing reporting systems for cybersecurity or consumer fraud.
Read the full article in The Financial Times.