Protect public from AI risks, White House tells tech giants

At a meeting of prominent tech leaders at the White House on Thursday, Vice President Kamala Harris reminded attendees that they have an “ethical, moral, and legal responsibility to ensure the safety and security” of the new wave of generative AI tools that have gained huge attention in recent months.

The meeting is part of a wider effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on important AI issues, the White House said.

Harris and other officials told the leaders of Google, Microsoft, Anthropic, and OpenAI — the company behind the ChatGPT chatbot — that the tech giants must comply with existing laws to protect the American people from misuse of the new wave of AI products. New regulations for generative AI are expected to come into force before too long, but the level to which they restrict the technology will depend to some extent on how the companies deploy their AI technologies going forward.

Also on Thursday, the White House shared a document outlining new measures designed to promote responsible AI innovation. Action includes $140 million in funding for seven new National AI Research Institutes, bringing the total number of such institutes to 25 across the U.S.

Advanced chatbots like ChatGPT and Google’s Bard respond to text prompts and are capable of responding in a very human-like way. They can already perform a wide range of tasks very impressively, such as writing presentations and stories, summarizing information, and even writing computer code.

But with tech firms racing to put their chatbot technology front and center by integrating it into existing online tools, there are fears over the long-term implications of the technology for wider society, such as how it will impact the workplace or lead to new types of criminal activity. There are even concerns about how the technology, if it’s allowed to develop unchecked, could be a threat to humanity itself.

OpenAI chief Sam Altman said in March that he’s a “little bit scared” of the potential effects of AI, while a recent letter published by AI experts and others in the tech industry called for a six-month pause in generative-AI development to allow time for the creation of shared safety protocols.

And just this week, Geoffrey Hinton, the man widely considered the “godfather of AI” for his pioneering work in the field, quit his post at Google so that he could speak more freely about his concerns regarding the technology. The 75-year-old engineer said that as tech firms are releasing their AI tools for public use without being fully aware of their potential, it’s “hard to see how you can prevent the bad actors from using it for bad things.”

Even more alarmingly, in a recent CBS interview in which he was asked about the likelihood of AI “wiping out humanity,” Hinton responded: “That’s not inconceivable.”

But it should also be noted that most of those voicing concerns believe that, if handled responsibly, the technology could bring great benefits to many parts of society, including health care, where it could lead to better outcomes for patients.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…
OpenAI cracks down on ChatGPT scammers
OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use.

The company has released a report detailing trends it has observed among bad actors using its platform as it becomes more popular. OpenAI said it has removed dozens of accounts suspected of using ChatGPT in unauthorized ways, ranging from "debugging code to generating content for publication on various distribution platforms."

With 400 million users, OpenAI maintains lead in competitive AI landscape
Competition in the AI industry remains tough, and OpenAI has shown that it is not taking the coming challenges lightly. The generative AI brand announced Thursday that it serves 400 million weekly active users as of February, a 33% increase in less than three months.

OpenAI chief operating officer Brad Lightcap confirmed the latest user statistics to CNBC, indicating that the figures had not been previously reported. The number is up sharply from the 300 million weekly users confirmed in December.

xAI’s Grok-3 is impressive, but it needs to do a lot more to convince me
Elon Musk-led xAI has announced its latest AI model, Grok-3, via a livestream. From the get-go, it was evident that the company wants to quickly fill all the practical gaps that can make its chatbot more approachable to the average user, rather than just selling rhetoric about wokeness and understanding the universe.

The company will be releasing two versions of its latest AI model: Grok-3 and Grok-3 mini. The latter is trained for low-compute scenarios, while the former will offer the full set of Grok-3 perks, such as DeepSearch, Think, and Big Brain.
What’s all the fuss about
