The big picture: Fears about AI’s dark side — from privacy violations and the spread of misinformation to losing control of consumer data — recently prompted the White House to issue a preliminary “AI Bill of Rights,” encouraging technologists to build safeguards into their products.
- While Google published its AI principles in 2018 and other tech companies have done the same, there's little to no government regulation.
- Although investors have been pulling back on AI startups recently, Google’s deep pockets could give it more time to develop projects that aren’t immediate moneymakers.
Yes, but: Google executives sounded multiple notes of caution as they showed off their wares.
- AI “can have immense social benefits” and “unleash all this creativity,” said Marian Croak, head of Google Research’s center of expertise on responsible AI.
- “But because it has such a broad impact on people, the risk involved can also be very huge. And if we don’t get that right … it can be very destructive.”
Threat level: A recent report from Georgetown's Center for Security and Emerging Technology examined how text-generating AI could "be used to turbocharge disinformation campaigns."
- And as Axios’ Scott Rosenberg has written, society is only just beginning to grapple with the legal and ethical questions raised by AI’s new capacity to generate text and images.