
What seven AI companies say they’ll do to safeguard their tech

Facebook's Meta logo sign is seen at the company headquarters in Menlo Park, Calif.
Seven AI companies announced commitments to share information to improve risk mitigation with governments, civil society and academics — and report vulnerabilities as they emerge — in addition to testing their systems more rigorously.
(Tony Avelar / Associated Press)

President Biden said the United States must guard against threats from artificial intelligence as he detailed new company safeguards and promised additional government actions on the emerging technology.

“These commitments are real and are concrete. They’re going to help the industry fulfill its fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values,” Biden said Friday.

Executives from Amazon.com, Alphabet, Meta Platforms, Microsoft, OpenAI, Anthropic and Inflection AI — all of which committed to adopting transparency and security measures — joined Biden at the White House for the announcement.



Biden said the company measures are only the first step and pledged to take executive actions while working with Congress to enact rules and regulations governing AI. “We must be clear-eyed and vigilant about the threats,” he said.

The companies have agreed to put new artificial intelligence systems through internal and external tests before their release and have outside teams scrutinize them for security flaws, discriminatory tendencies or risks to consumer privacy, health information or safety. They have also promised to share information with governments, civil society and academia and to report vulnerabilities.

“These commitments, which companies will implement immediately, underscore three fundamental principles: safety, security and trust,” Biden said.



Friday’s guidelines are the result of months of behind-the-scenes lobbying. Biden and Vice President Kamala Harris met in May with many of the executives present at Friday’s event, warning them that industry was responsible for ensuring its products are safe.

Biden’s aides say artificial intelligence has been a top priority for the president, who frequently brings up the topic in meetings with advisors. He has also directed Cabinet secretaries to examine how the technology might intersect with their agencies.

The package of safeguards formalizes and expands some measures already underway at major AI developers, and the commitments are voluntary. The guidelines do not require approval from outside groups before companies can release AI technologies, and companies are required only to report, not eliminate, risks such as possible inappropriate use or bias.



“It’s a moving target,” White House Chief of Staff Jeff Zients said in an interview. “We not only have to execute and implement on these commitments, but we’ve got to figure out the next round of commitments as the technologies change.”

Zients and other administration officials have said it will be difficult to keep pace with emerging technologies without legislation from Congress that imposes stricter rules and includes dedicated funding for regulators.

“They’re going to require some new laws, regulations and oversight,” Biden said Friday.

Before Friday’s event, AI companies said the steps would better manage the risks from a technology that is rapidly evolving and that has seen public interest explode in recent months.

Nick Clegg, president of global affairs at Meta, said in a statement that the voluntary commitments are an “important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”

Microsoft President Brad Smith said Friday’s commitments “help ensure the promise of AI stays ahead of its risks.” He said Microsoft supports other measures to track the most powerful AI models, including a licensing regime, “know-your-customer” requirements and a national registry of high-risk systems.


Kent Walker, Google’s president of global affairs, put Friday’s commitments in the context of other international efforts by the Group of Seven and Organization for Economic Cooperation and Development to “maximize AI’s benefits and minimize its risks.” He said that AI is already used in many of Google’s most popular products such as Search, Maps and Translate, and that the company designs its systems to be “responsible from the start.”


The White House said it consulted the governments of 20 other countries before Friday’s announcement. But the pace of oversight is already lagging behind AI developments.

The European Union’s AI Act is far ahead of anything passed by the U.S. Congress, but European leaders have recognized that companies will need to make voluntary commitments to safeguard their technology before the law takes effect.
