A “presumption of causality” that non-compliance by an AI developer or provider led to harm. Under current liability rules, claimants must demonstrate a direct causal link between non-compliance and a specific harm. The complexity of AI systems can make drawing a clear, direct line prohibitively difficult. The updated directive would place the burden of proof on the defendant to show that their non-compliance was not the cause of the harm.
A “right of access to evidence” that grants claimants the right to access “necessary and proportionate” information about high-risk AI systems (as designated by the proposed AI Act) that may have caused harm.
The White House Releases a “Blueprint for an AI Bill of Rights”: On Tuesday, the Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” — a non-binding framework that lays out five principles meant to help guide AI development, deployment and use, and to shape public policy. Those principles are:
“You should be protected from unsafe or ineffective systems” (Safe and Effective Systems).
“You should not face discrimination by algorithms and systems should be used and designed in an equitable way” (Algorithmic Discrimination Protections).
“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used” (Data Privacy).
“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you” (Notice and Explanation).
“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter” (Human Alternatives, Consideration, and Fallback).
Accompanying the blueprint is a “technical companion” that offers specific steps policymakers, employees, consumers, AI developers and others can take to ensure the five principles are observed. But, as the document acknowledges, the blueprint is a non-binding white paper and does not affect any existing policies, their interpretation or their implementation. When OSTP officials announced plans to develop a “bill of rights for an AI-powered world” last year, they said enforcement options could include restrictions on federal and contractor use of non-compliant technologies and other “laws and regulations to fill gaps.” Whether the White House plans to pursue those options is unclear, but affixing “Blueprint” to the “AI Bill of Rights” seems to indicate a narrowing of ambition from the original proposal.
White House Reportedly Plans More Expansive Chinese Tech Restrictions: The New York Times reports that the Biden administration is planning new restrictions meant to further limit China’s access to technologies used in AI development, including high-end computing hardware. According to the report, the White House plans to use a variety of measures to target China’s AI and semiconductor industry. Notably, this includes employing the foreign direct product rule — a powerful and far-reaching regulation that gives the Commerce Department the ability to limit sales of items using U.S.-origin technology, even if produced abroad — to restrict a number of Chinese companies’ and research labs’ access to high-end computing power. As we’ve covered in recent months, the Biden administration has been ramping up its efforts to hinder China’s AI and semiconductor industries — it has already blocked certain exports of U.S.-designed high-end chips and taken steps to cut off access to chipmaking tools. The new rules will reportedly codify some of the early restrictions and expand others (potentially covering tools needed for memory production). While exact details are not yet available, observers say the restrictions likely represent “the U.S. government’s most significant effort to date.”
IARPA Wants to Use AI to Identify (and Protect) Anonymous Authors: The Intelligence Advanced Research Projects Activity (IARPA) has launched a program to develop AI tools capable of determining the authorship of written products. The Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program aims to create tools that can identify authorship based on unique stylistic features — so-called “linguistic fingerprints” — modify written products to protect an author’s identity, and explain to observers why it flagged text as attributable and why it made specific revisions. If successful, the program could presumably help U.S. intelligence agencies on multiple fronts, enabling them to identify malign anonymous authors while keeping their own authors safe. IARPA awarded contracts to six lead organizations, bringing together researchers from industry, non-profit and academic backgrounds. IARPA expects the program to last a total of 42 months.
In Translation: CSET’s translations of significant foreign language documents on AI