Can a new accord signed by 20 tech giants stop artificial intelligence from misleading the public?

Artificial intelligence is transforming the world, but it is unsettling not to know where it will take us. On February 16, 2024, the technology industry was stirred once again by a breakthrough in the field: following ChatGPT, the chatbot that ushered in a new wave of AI, OpenAI unveiled its first text-to-video model, Sora, which can generate up to 60 seconds of video directly from a text prompt. The results are realistic enough to blur the line between reality and fiction, once again sparking global debate.

On the same day, at the Munich Security Conference (MSC) in Germany, 20 companies in the AI industry signed a voluntary accord, pledging to combat deceptive AI-generated content designed to mislead voters in elections around the world.

According to the joint statement, the signatories include the leading players in the field: developers of AI models such as OpenAI, Anthropic, and Google, as well as the social media platforms that could amplify such content, including TikTok, X, and Meta, the parent company of Facebook and Instagram.

Notably absent was Midjourney, another popular AI image-generation company. As of publication, the San Francisco-based startup had not responded to media requests for comment.

List of companies that have signed the accord so far.

2024 is an unprecedented election year, with more than 4 billion people in over 40 countries eligible to vote. According to the consulting firm Anchor Change, at least 83 elections are expected worldwide this year, the most in at least 24 years.

In recent weeks, voters in Bangladesh, Pakistan, Indonesia, and elsewhere have already gone to the polls, and the United States, Russia, Ukraine, the European Parliament, India, and others will hold elections in the months ahead.

To prevent deepfakes created with new AI technology from being used for political ends, the 20 companies made eight specific commitments, including: collaborating on open-source tools to detect deceptive AI-generated content that could affect elections; working to detect the circulation of such content online; and promoting education campaigns to raise public awareness and media literacy, building cross-industry resilience against such material.

However, critics have faulted the accord for the vagueness of its commitments and its lack of binding enforcement.

Rachel Orey, senior associate director at the Bipartisan Policy Center in the United States, believes the agreement is not as strong as it might appear.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, said the agreement does not go far enough. She argued that to avert potential harm, AI companies should "hold back" their technology until substantial and adequate safeguards are in place, and said she is particularly concerned about the newly released text-to-video models.

Microsoft, OpenAI's partner, countered that the most fundamental challenge in technological progress is not the technology itself but human nature. Microsoft President Brad Smith said in a statement that, taken together, the commitments in the new accord improve the industry's ability to detect and respond, making it harder for bad actors to use legitimate tools to create misleading content.

Former Facebook data scientist Jeff Allen told the Associated Press that the accord is a "positive step," but said he still hopes social media companies will take further action against misinformation, such as building content recommendation systems that do not prioritize user engagement.

Vera Jourová, Vice President of the European Commission, noted at the Munich Security Conference that although the accord is not comprehensive, it contains "very impactful and positive elements." She also urged politicians to take responsibility and not to use AI tools deceptively.

AI interference in elections has already begun to appear.

In the United States, during last month's New Hampshire primary, some voters received robocalls that used AI to mimic the voice of President Joe Biden, urging them not to vote. The incident prompted investigations by regulators and law enforcement, and lawmakers in several US states are drafting bills to regulate AI-generated political content.

In Europe, days before Slovakia's parliamentary election in 2023, AI-generated audio impersonating a candidate discussing plans to raise beer prices and rig the vote circulated on social media, prompting government scrutiny.

Politicians have also begun experimenting with the technology to engage voters. Last week, Pakistan held its National Assembly election, and both former prime ministers Imran Khan and Nawaz Sharif declared victory. Khan, who is currently in prison, used AI to create a "victory speech" video announcing the win, drawing attention from around the world. Since his imprisonment, Khan has repeatedly used AI-generated messages to communicate with voters.

"We expect that in 2024, AI systems will be put to surprising uses that their developers never anticipated," said Anthropic, one of OpenAI's main competitors, noting that the adoption of new AI technology is full of surprises and unintended effects.
