Oren Etzioni: How to Regulate Artificial Intelligence?

Dear Commons Community,

Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, has an op-ed in today’s New York Times in which he proposes three rules for regulating artificial intelligence:

“First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford.

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an A.I. Barbie. Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.

My three A.I. rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Whether or not you agree with Mr. Musk’s view about A.I.’s rate of progress and its ultimate impact on humanity (I don’t), it is clear that A.I. is coming. Society needs to get ready.”

I agree with Etzioni’s recommendations; however, they need to be expanded to address employment issues. It is very likely that in the not-too-distant future, say fifteen years, we will see widespread displacement of workers because of A.I. applications. This displacement will go beyond the assembly-line robotics that have already taken over many blue-collar jobs. How do we regulate the transformation of labor, especially in the white-collar and professional sectors? What careers and positions do we retrain these workers for?

Tony