Seven large AI companies have committed to mitigating the risks of this technology



After a White House reporter asked the tech executives invited there whether they were real or the product of artificial intelligence (AI), as if they were replicants out of Blade Runner, the American president burst into the room. "I am the artificial intelligence," he quipped by way of greeting.

Jokes aside, and leaving open the possibility that ultra-Republicans take it literally and launch an investigation to clarify whether Biden is a mutant, the frantic development of artificial intelligence is raising suspicion even among the developers of this technology, whose limits are unknown. More and more people are asking when the day will come that artificial intelligence is smarter than humanity itself.

The companies accept external reviews and will let users know when an image, video, or text was generated by AI

Brad Smith (Microsoft), Nick Clegg (Meta), Kent Walker (Google), Greg Brockman (OpenAI, creator of the popular ChatGPT), Adam Selipsky (Amazon Web Services), Dario Amodei (Anthropic), and Mustafa Suleyman (Inflection AI) made their voluntary commitments to mitigate the risks of this emerging technology before the President of the United States on Friday.

In part, it is a way to ease the fear, shared by many, that the technology's initial rollout was reckless.

The industry-leading companies have pledged to allow independent security experts to test their systems before they are released to the public, and to share data about the safety of those systems with the government and academia.

The companies also promised to develop mechanisms to alert users when an image, video, or text has been created by artificial intelligence, a technique known as digital watermarking.

If they fail to meet these commitments, the companies face potential penalties from the Federal Trade Commission (FTC): non-compliance would be classified as a deceptive practice in breach of consumer protection law. Some of these companies have already started developing warning labels.

"These commitments are real and tangible," Biden said in his speech, adding that they will help the companies fulfill their basic obligation to citizens: to provide safe and reliable technology for the benefit of society.

The president noted that this is only a first step and that new laws and regulations will be needed in this area.

He insisted that companies should prioritize the safety of their systems by putting them to the test, managing "risks to our national security and ensuring best practices as well as privacy." He also noted that the companies agree that AI can help society tackle major challenges, from cancer to climate change, education, and jobs.

"We're going to see more technological change in the next 10 years, or even less, than we've seen in the last 50 years," predicted Biden, a man already in his eighties. "That was a great revelation for me, honestly," he added.

He claimed that “artificial intelligence will change the lives of people around the world and this group gathered here will be critical to developing this innovation responsibly.”
