Study says AI hype is hindering genuine research on artificial intelligence

Monitor showing the 2025 AAAI study on AI.
AAAI / Digital Trends

A new study from the AAAI (Association for the Advancement of Artificial Intelligence), with hundreds of contributing AI researchers, was published this month, and its main takeaway is this: our current approach to AI is unlikely to lead us to artificial general intelligence (AGI).

AI has been a buzzword for a good couple of years now, but artificial intelligence as a field of research has existed for many decades. Alan Turing’s famous “Computing Machinery and Intelligence” paper and the Turing test we still talk about today, for example, were published in 1950. 

The AI everyone talks about today was born from these decades of research, but it’s also diverging from them. Rather than being a purely scientific pursuit, artificial intelligence now has a deviating branch that you could call “commercial AI.”

Efforts in commercial AI are led by big tech monopolies like Microsoft, Google, Meta, Apple, and Amazon, and their primary goal is to create AI products. That needn’t be a problem, but at the moment, it seems it might be.

First, because most people didn’t follow AI research until a couple of years ago, almost everything the average person knows about AI comes from these companies rather than from the scientific community. The study covers this topic in its “AI Perception vs. Reality” chapter: 79% of the scientists surveyed believe that the current public perception of AI capabilities doesn’t match the reality of AI research and development.

In other words, what the general public thinks AI can do doesn’t match what scientists think AI can do. The reason for this is as simple as it is unfortunate: when a big tech representative makes a statement about AI, it’s not a scientific opinion — it’s product marketing. They want to hype up the tech behind their new products and make sure everyone feels the need to jump on this bandwagon.

When Sam Altman or Mark Zuckerberg say software engineering jobs will be replaced by AI, for example, it’s because they want to influence engineers to learn AI skills and influence tech companies to invest in pricey enterprise plans. Until they start replacing their own engineers (and benefit from it), however, I personally wouldn’t listen to a word they say on the topic.

It’s not just public perception that commercial AI is influencing, however. Study participants believe that the “AI hype” being manufactured by big tech is hurting research efforts. For example, 74% agree that the direction of AI research is being driven by the hype, likely because research that aligns with commercial AI goals is easier to fund, and 12% believe that theoretical AI research is suffering as a result.

So, how much of a problem is this? Even if big tech companies are influencing the kind of research we do, you’d think the extremely large sums of money they’re pumping into the field should have a positive impact overall. However, diversity is key when it comes to research — we need to pursue all kinds of different paths to have a chance at finding the best one.

But big tech is only really focusing on one thing at the moment — large language models. This extremely specific type of AI model is what powers just about all of the latest AI products, and figures like Sam Altman believe that scaling these models further and further (i.e. giving them more data, more training time, and more compute power) will eventually give us artificial general intelligence.

This belief, dubbed the scaling hypothesis, says that the more power we feed an AI, the more its cognitive abilities will increase and the more its error rates will decrease. Some interpretations also say that new cognitive abilities will unexpectedly emerge. So, even though LLMs aren’t great at planning and thinking through problems right now, these abilities should emerge at some point.
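The scaling hypothesis echoes the empirical “scaling laws” literature (e.g. Kaplan et al., 2020), which found that a model’s loss tends to fall as a power law in training compute. As a minimal sketch (the constants and exponent here are invented purely for illustration, not taken from any real model), a power-law curve shows why each extra order of magnitude of compute buys progressively smaller absolute gains:

```python
# Hypothetical power-law loss curve: loss = a * compute**(-alpha).
# The values of a and alpha are made up for illustration only.

def loss(compute, a=10.0, alpha=0.05):
    """Model loss as a power law in training compute (FLOPs)."""
    return a * compute ** (-alpha)

prev = None
for c in [1e21, 1e22, 1e23, 1e24]:  # each step is 10x more compute
    current = loss(c)
    if prev is not None:
        # Every 10x of compute shrinks loss by the same *factor*,
        # so the absolute improvement keeps getting smaller.
        print(f"compute={c:.0e}  loss={current:.4f}  gain={prev - current:.4f}")
    prev = current
```

Under these toy numbers, each tenfold increase in compute multiplies the loss by a constant 10^-0.05 ≈ 0.89, so the curve never flatly stops improving, but the returns on ever-larger training runs diminish steadily. That diminishing-returns shape is roughly what “hitting a scaling wall” refers to.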

there is no wall

— Sam Altman (@sama) November 14, 2024

In the past few months, however, the scaling hypothesis has come under significant fire. Some scientists believe scaling LLMs will never lead to AGI, and they believe that all of the extra power we’re feeding new models is no longer producing results. Instead, we’ve hit a “scaling wall” or “scaling limit” where large amounts of extra compute power and data are only producing small improvements in new models. Most of the scientists who participated in the AAAI study are on this side of the argument:

The majority of respondents (76%) assert that “scaling up current AI approaches” to yield AGI is “unlikely” or “very unlikely” to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.

Current large language models can produce very relevant and useful responses when things go well, but they rely on mathematical principles to do so. Many scientists believe we will need new algorithms that use reasoning, logic, and real-world knowledge to reach a solution if we want to progress closer to the goal of AGI. Here’s one spicy quote on LLMs and AGI from a 2022 paper by Jacob Browning and Yann LeCun:

A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.

However, there’s no real way to know who is right here — not yet. For one thing, the definition of AGI isn’t set in stone, and not everyone is aiming for the same thing. Some people believe that AGI should produce human-like responses through human-like methods: it should observe the world around it and work through problems in a similar way to us. Others believe AGI should focus more on correct responses than human-like ones, and that the methods it uses shouldn’t matter.

In a lot of ways, however, it doesn’t really matter which version of AGI you’re interested in or whether you’re for or against the scaling hypothesis — we still need to diversify our research efforts. If we only focus on scaling LLMs, we’ll have to start over from scratch if it doesn’t work out, and we could fail to discover new methods that are more effective or efficient. Many of the scientists in this study fear that commercial AI and the hype surrounding it will slow down real progress — but all we can do is hope that their concerns are addressed and that both branches of AI research can learn to coexist and progress together. Well, you can also hope that the AI bubble bursts and all of the AI-powered tech products disappear into irrelevance, if you prefer.

Willow Roberts
Willow Roberts has been a Computing Writer at Digital Trends for a year and has been writing for about a decade. She has a…