In A.I. Race, Microsoft and Google Choose Speed Over Caution!

Microsoft vs. Google: ChatGPT Triggers AI War Between Tech Giants

Dear Commons Community,

The New York Times has a feature article this morning examining the race among big tech companies to develop artificial intelligence applications regardless of whether they generate misinformation and dangerous content.  Here is an excerpt.

In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with an event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry’s next big thing — generative A.I., the powerful new technology that fuels those chatbots.

That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.

The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an “absolutely fatal error in this moment to worry about things that can be fixed later.”

When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first,” he wrote. “Sometimes the difference is measured in weeks.”

Last week, tension between the industry’s worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple’s co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented “profound risks to society and humanity.”

Regulators are already threatening to intervene. The European Union proposed legislation to regulate A.I., and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of A.I.

“Tech companies have a responsibility to make sure their products are safe before making them public,” he said at the White House. When asked if A.I. was dangerous, he said: “It remains to be seen. Could be.”

The issues being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that prematurely releasing A.I. could be embarrassing. Seven years ago, for example, Microsoft quickly pulled a chatbot called Tay after users nudged it to generate racist responses.

Researchers say Microsoft and Google are taking risks by releasing technology that even its developers don’t entirely understand. But the companies said that they had limited the scope of the initial release of their new chatbots, and that they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.

A.I. is here, and we need to understand and control its deployment.  As this article indicates, major tech companies like Microsoft and Google may be too concerned with winning control of the market to do so.

Tony