OpenAI CEO Sam Altman reveals what he thinks is ‘scary’ about AI

OpenAI CEO Sam Altman outlined examples of ‘scary AI’ to Fox News Digital after he served as a witness for a Senate subcommittee hearing on potential regulations on artificial intelligence.

‘Sure,’ Altman said when asked by Fox News Digital to provide an example of ‘scary AI.’ ‘An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary.’

‘These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important.’

Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on Tuesday morning to speak with lawmakers about how to best regulate the technology. Altman’s OpenAI is the artificial intelligence lab behind ChatGPT, which was released last year.

ChatGPT is a chatbot that is able to mimic human conversation when given prompts by human users. People around the globe rushed to use the chatbot after its November release, giving ChatGPT the fastest-growing user base, with 100 million monthly active users in January. Its release quickly set off a race among other Silicon Valley companies to build comparable or more powerful systems.

As ChatGPT and other artificial intelligence platforms have proliferated, critics, along with some fellow tech leaders and experts, have sounded the alarm on potential threats posed by artificial intelligence, including bias, misinformation and even the destruction of society.

Altman said during the Senate hearing that his greatest fear as OpenAI develops artificial intelligence is that it causes major harmful disruptions for people. 

‘My worst fears are that we cause significant — we, the field, the technology industry — cause significant harm to the world,’ Altman said. ‘I think that could happen in a lot of different ways. It’s why we started the company.’

‘I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,’ he added. ‘We want to work with the government to prevent that from happening.’

Altman said during the hearing that he welcomes government regulations and working with U.S. leaders on how to craft such rules.

‘As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,’ he said. ‘But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind. And this means that U.S. leadership is critical.’

‘We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,’ Altman added. 

Fox News Digital’s Pete Kasperowicz contributed to this article.
