Saturday, December 14, 2024

Meta, Google and AI firms agree on security measures at Biden meeting

Seven leading AI companies in the United States have agreed to voluntary safeguards in the development of the technology, the White House announced Friday, pledging to manage the risks of new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formalized their commitment to new standards for safety, security and trust in a meeting with President Biden at the White House on Friday afternoon.

In brief remarks from the Roosevelt Room at the White House, Mr. Biden said, “We need to be clear and aware of the threats emerging technologies pose to our democracy and our values.”

“It is a serious responsibility; we have to get it right,” he said, adding, “And there’s enormous, enormous potential upside.”

The announcement comes as companies race to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But those technological leaps have fueled fears about the spread of misinformation and prompted stark warnings of a “danger of extinction” as artificial intelligence becomes more sophisticated and humanlike.

The voluntary protections are only an initial, temporary step as governments in Washington and around the world seek to put legal and regulatory frameworks in place for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks so consumers can identify AI-generated material.

But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with rapidly evolving technology.

The White House did not provide details of a forthcoming presidential executive order intended to tackle a thornier issue: limiting the ability of China and other competitors to acquire new artificial intelligence programs, or the components used to develop them.


The order is expected to include new restrictions on advanced semiconductors and on exports of large language models. Those are difficult to secure; much of the software can fit, compressed, on a thumb drive.

An executive order would likely spark more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises do not hinder AI companies’ projects or the development of their technologies. And as voluntary commitments, they are not enforced by government regulators.

Nick Clegg, head of global affairs at Facebook’s parent company Meta, said in a statement: “We are delighted to be making these voluntary commitments alongside others in the sector. They are an important first step in ensuring that responsible guardrails are established for AI, and they create a model for other governments to follow.”

As part of the safeguards, the companies agreed to security testing of their products by independent experts; research on bias and privacy concerns; sharing information about risks with governments and other organizations; developing tools to address societal challenges such as climate change; and transparency measures to identify AI-generated material.

In a statement announcing the deals, the Biden administration said companies must ensure that “innovation does not come at the expense of Americans’ rights and security.”

“Companies developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement.

Brad Smith, the president of Microsoft and one of the executives who attended the White House meeting, said his company endorsed the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” Mr. Smith said.


Anna Makanju, OpenAI’s vice president of global affairs, described the commitments as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”

For the companies, the standards outlined Friday serve two purposes: an attempt to forestall, or shape, legislative and regulatory moves through self-policing, and a signal that they are approaching the new technology thoughtfully and proactively.

But the rules they agreed to are largely a lowest common denominator, and each company can interpret them differently. For example, the companies committed to strict cybersecurity measures around the data used to build the language models on which generative AI programs are developed. But there is no detail about what that means, and the companies would have an interest in protecting their intellectual property anyway.

Even the most careful companies are vulnerable. Microsoft, one of the companies attending Mr. Biden’s White House event, scrambled last week to counter a Chinese government-organized hack of the private emails of U.S. officials who deal with China. It now appears that China stole, or somehow obtained, a Microsoft “private key” used to authenticate emails, one of the company’s most closely guarded pieces of code.

Given such risks, the agreement is unlikely to slow efforts to legislate and impose restrictions on the emerging technology.

Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, said more must be done to protect society from the dangers artificial intelligence poses.

“The voluntary commitments announced today are unenforceable, which is why it is imperative that Congress and the White House work together to immediately enact legislation that requires transparency, privacy protections, and comprehensive research into the broad range of risks posed by AI,” Mr. Barrett said in a statement.


European regulators are poised to adopt AI laws later this year, which has prompted many of the companies to encourage U.S. regulation. Several lawmakers have introduced bills that would require licensing for AI companies to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress remain far from agreement on the rules.

Lawmakers are grappling with how to address the rise of AI technology, with some focused on the risks to consumers and others deeply concerned about falling behind rivals, particularly China, in the race to dominate the field.

This week, the House Committee on Competition with China sent bipartisan letters to US-based venture capital firms demanding they account for their investments in Chinese AI and semiconductor companies. For months, various House and Senate panels have been questioning the AI industry’s most influential entrepreneurs and critics to determine what kinds of legal protections and incentives Congress should explore.

Many of those witnesses, including OpenAI’s Sam Altman, have urged lawmakers to regulate the AI industry, citing the potential for the new technology to cause undue harm. But that regulation is moving slowly in Congress, where many lawmakers are still struggling to grasp exactly what AI technology is.

In an effort to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York, began a series of sessions this summer to hear from government officials and experts about the benefits and risks of artificial intelligence in many fields.

Karoun Demirjian contributed reporting from Washington.
