AI companies, including OpenAI, Alphabet (GOOGL.O), and Meta Platforms (META.O), have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content. President Joe Biden announced the commitments on Friday, calling them a positive step but emphasizing that there is still more work to be done together.
During a White House event, Biden addressed concerns about potentially disruptive uses of artificial intelligence, emphasizing the need to stay vigilant about the threats emerging technologies pose to U.S. democracy. The companies involved in these commitments, which also include Anthropic, Inflection, Amazon.com (AMZN.O), and Microsoft (MSFT.O), pledged to thoroughly test systems before release, share information on risk reduction, and invest in cybersecurity.
This move is seen as a victory for the Biden administration’s efforts to regulate AI technology, which has attracted significant investment and consumer popularity. Microsoft expressed its support for the president’s leadership in bringing the tech industry together to make AI safer, more secure, and more beneficial for the public.
The popularity of generative AI, which produces new text, images, and other content from training data, has led lawmakers worldwide to consider ways to mitigate the risks to national security and the economy. The U.S. is lagging behind the EU in addressing AI regulation. In June, EU lawmakers agreed on draft rules that would require systems like ChatGPT to disclose AI-generated content, distinguish deep-fake images from real ones, and implement safeguards against illegal content.
U.S. Senate Majority Leader Chuck Schumer called for comprehensive legislation to advance and ensure safeguards on artificial intelligence. Congress is currently considering a bill that would require political ads to disclose the use of AI in creating imagery or other content. Biden, who met with executives from the seven companies at the White House, is working on developing an executive order and bipartisan legislation on AI technology.
As part of their commitments, the seven companies have pledged to develop a system to watermark all AI-generated content, including text, images, audio, and video. The watermark is intended to make it easier for users to identify deep-fake content and to curb the spread of misinformation or malicious content. How the watermark will remain evident when content is shared is still unclear.
The companies also committed to protecting users’ privacy as AI technology develops and ensuring that the technology is free of bias and discrimination against vulnerable groups. They also expressed their intention to develop AI solutions for scientific problems such as medical research and climate change mitigation.
In conclusion, AI companies have made voluntary commitments to implement measures to make AI technology safer. These commitments are seen as a positive step towards regulating AI and addressing potential risks. The companies involved have pledged to test systems thoroughly, share risk reduction information, invest in cybersecurity, and develop a watermarking system for AI-generated content. The Biden administration is actively working on executive orders and legislation related to AI technology.