To A.I. or Not to A.I.


Dear Commons Community,

Frank Pasquale, a law professor at Brooklyn Law School, and Gianclaudio Malgieri, a law professor at the EDHEC Augmented Law Institute in France, have a guest essay in today’s New York Times raising alarms about artificial intelligence. Entitled “If You Don’t Trust A.I. Yet, You’re Not Wrong,” the essay considers the dangers and ethics of unbridled A.I. development. Here is an excerpt:

“Americans have good reason to be skeptical of artificial intelligence. Tesla crashes have dented the dream of self-driving cars. Mysterious algorithms predict job applicants’ performance based on little more than video interviews. Similar technologies may soon be headed to the classroom, as administrators use “learning analytics platforms” to scrutinize students’ written work and emotional states. Financial technology companies are using social media and other sensitive data to set interest rates and repayment terms.

Even in areas where A.I. seems to be an unqualified good, like machine learning to better spot melanoma, researchers are worried that current data sets do not adequately represent all patients’ racial backgrounds.

U.S. authorities are starting to respond. Massachusetts passed a nuanced law this spring limiting the use of facial recognition in criminal investigations. Other jurisdictions have taken a stronger stance, prohibiting the use of such technology entirely or requiring consent before biometric data is collected. But the rise of A.I. requires a more coordinated nationwide response, guided by first principles that clearly identify the threats that substandard or unproven A.I. poses. The United States can learn from the European Union’s proposed A.I. regulation.

In April, the European Union released a new proposal for a systematic regulation of artificial intelligence. If enacted, it will change the terms of the debate by forbidding some forms of A.I., regardless of their ostensible benefits. Some forms of manipulative advertising will be banned, as will real-time indiscriminate facial recognition by public authorities for law enforcement purposes.

The list of prohibited A.I. uses is not comprehensive enough — for example, many forms of nonconsensual A.I.-driven emotion recognition, mental health diagnoses, ethnicity attribution and lie detection should also be banned. But the broader principle — that some uses of technology are simply too harmful to be permitted — should drive global debates on A.I. regulation.

The proposed regulation also deems a wide variety of A.I. high risk, acknowledging that A.I. presents two types of problems. First, there is the danger of malfunctioning A.I. harming people or things — a threat to physical safety. Under the proposed E.U. regulation, standardization bodies with long experience in technical fields are mandated to synthesize best practices for companies — which will then need to comply with those practices or justify why they have chosen an alternative approach.

Second, there is a risk of discrimination or lack of fair process in sensitive areas of evaluation, including education, employment, social assistance and credit scoring. This is a risk to fundamental rights, amply demonstrated in the United States in works like Cathy O’Neil’s “Weapons of Math Destruction” and Ruha Benjamin’s “Race After Technology.” Here, the E.U. is insisting on formal documentation from companies to demonstrate fair and nondiscriminatory practices. National supervisory authorities in each member state can impose hefty fines if businesses fail to comply.”

Pasquale and Malgieri raise important issues, but a central point of their essay is that the European Union is examining A.I. concerns more aggressively than the United States. I would like to see more debate about A.I. at the national political level, but I don’t see it happening anytime soon.

Tony
