Sam Altman Request To Regulate AI - What Is The Biggest Fear Of AI Founders?

"My worst fears are that we cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."
(Image Description: OpenAI CEO Sam Altman discussing his “biggest fears” regarding AI at Senate Judiciary subcommittee hearing on May 16, 2023)

Those were the words of Sam Altman, the CEO of OpenAI, when he spoke before the US Congress on May 16, 2023. His company developed the controversial AI application ChatGPT, which has changed the landscape of many facets of life. Academic writing services in the UK, in particular, have been greatly affected by it. Students and academics now rely heavily on ChatGPT to craft their day-to-day assignments rather than actively working on their own academic projects. It erodes students' potential, and if this continues for a few more years, we risk losing those skills altogether and handing everything over to machines for management and administration, which is downright scary!

Sam Altman's Biggest Fear:

Altman has been very vocal on multiple platforms, including Twitter, about the dangerous aspects of AI and its harmful impact on human lives. You can read the Twitter thread in which he expresses his concerns about the anticipated development of "potentially scary" AI tools. Companies are working out how to integrate AI into people's everyday lives, and they are succeeding. Does that sit right with Artificial Intelligence ethics experts? No, because some experts have already declared this era the age of a "dystopic present".

Speaking before the US Congress, Sam said that government regulation of Artificial Intelligence would be a good idea, given the potential risks AI poses to society. ChatGPT is the fastest-growing AI application of all time, and even the CEO of the company that built it is worried about its consequences for the future of the human race. He told the committee of senators as much:
"My worst fears are that we… the technology industry, cause significant harm to the world. I think that could happen in a lot of different ways."

The Reaction of Mr Blumenthal to an AI Voice Clone:

Democratic Senator Richard Blumenthal was startled to hear his voice reproduced by an AI voice clone trained on his speeches. The session proceedings opened with a recorded speech that sounded like Blumenthal's voice but was not really his. Struck by the quality of the result, he remarked:

"What if it had provided an endorsement of Ukraine surrendering or Vladimir Putin's leadership?"


Imagine how dangerous this could all become if these tools fall into the wrong hands or are used for the wrong reasons. Such AI tools can easily be turned to unethical, illegal and nefarious ends by anyone so inclined. Needless to say, the results of such activities would be catastrophic.

Blumenthal believed that AI companies should be required to test their applications and systems before launch, and to disclose all the risks associated with using these tools before their release. He was particularly concerned about how AI systems would destabilise jobs in the country, something they are already beginning to do.

How Can the US Government Possibly Regulate AI?

Sam proposed that the government keep checks on AI to regulate its activities. He suggested:
"I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that."

He suggested that the government form a US agency, or a global body, to issue operating licenses to the most powerful AI systems and applications, and to revoke those licenses when the need arises.
"I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards," he said while answering a question from Sen. John Kennedy, R-La.

According to him, this would ensure that these AI tools comply with safety standards. Otherwise, they risk being used for malicious purposes more often than with good intent. Examples of negative uses include students using chatbots like ChatGPT to cheat on their academic assignments, misleading people, violating copyright protections, displacing jobs, spreading misinformation and much more.

Sam Speaking About the Dangers of AI:

According to a UBS study, ChatGPT is one of the fastest-growing user applications in history. Released only a few months earlier, it quickly hit 100 million monthly active users, reaching a milestone in about two months that took TikTok nine months and Instagram almost three years. ChatGPT has been a game changer in this regard, and the development of this AI tool has been remarkable.

Although Sam acknowledges and celebrates the global success of his company's AI application, he is also open to discussing the dangers associated with AI tools. The potential harm AI could do to daily human life unsettles him, and he says the thought of impending danger keeps him up at night. In an interview with ABC News, he said:

"I'm particularly worried that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks."

Albert Barkley

Hello, my name is Albert Barkley. I work as an education consultant with a UK-based firm, having completed my PhD. I like to write about social, tech and education trends.
