
What is artificial intelligence? Here’s everything you need to know


Crazy singularities, robot rebellions, falling in love with computers: Artificial intelligence conjures up a multitude of wild what-ifs. But in the real world, A.I. involves machine learning, deep learning, and many other programmable capabilities that we’re just beginning to explore. Let’s put the fantasy stuff on hold (at least for now) and talk about real-world A.I. Here’s what it is, how it works, and where it’s going.


What is artificial intelligence?

A.I. seeks to process and respond to data much like a human would. That may seem overly broad, but it needs to be: Developers are baking human-like smarts into a wide variety of applications. Generally, A.I. falls into three categories, though we should note there is still some disagreement over the exact definitions, much less over whether all of them are truly possible.

  • Narrow: Narrow A.I. (sometimes called “weak A.I.”) is where most of humankind’s work so far has been. As its name suggests, it is focused on executing a single task, and interactions with a narrow A.I. are limited. Examples include checking weather reports, controlling smart home devices, or answering general questions pulled from a central database (Wikipedia, etc.). Several narrow A.I.s can be strung together to offer a more comprehensive service: Alexa, Google Assistant, Siri, and Cortana are great examples, as are current forms of the autonomous car. Narrow A.I. can’t think for itself, which is why you’ll sometimes get a nonsensical answer back: it lacks the ability to understand context.
  • General: General A.I. (or “strong A.I.”) is where we’re headed. Here, A.I. gains the ability to understand context and make judgments based on it. Over time, it learns from experience, makes decisions even under uncertainty or with no prior data, uses reason, and can be creative. Intellectually, these computers would operate much like the human brain. So far we haven’t been able to build one, although many researchers believe we might sometime this century.
  • Super: In the far distant future, A.I. may become intellectually superior to humans in every way. A.I. robots would be able to think for themselves, attain consciousness, and operate without any human involvement, perhaps at the direction of another A.I. That sounds like the Skynet-style dystopia, complete with the end of humanity, that some warn about, but it could also be the dawn of an era of innovation that makes previous advancements look pedestrian.

A.I. can also be classified by how it operates, which is particularly important when considering how complex an A.I. system is and what it ultimately costs. If a company is creating an A.I. solution, the first question must be, “Will it learn through training, or will it rely on inference?”

  • Training: These A.I.s are designed to learn and improve over time, adjusting their data sets and parts of their processes to become more efficient. General and super A.I. platforms will be able to do this; narrow A.I. typically does not, since the processing power required is so great that it becomes quite expensive.
  • Inference: Most narrow A.I.s are designed to look at data and draw conclusions in careful, predefined steps, a much cheaper and less computationally demanding approach. For example, to answer the question “What was the score of yesterday’s games?” an A.I. might reason, “To answer this question, I must find yesterday’s game scores by searching a list of reliable sports datasets, compare that data to the favorite teams listed in settings, and report the scores back in audio.” That’s helpful to the end user, but if the response isn’t exactly what the user was looking for, the A.I. has little ability to adapt on its own over time; a human must get involved to make its responses more relevant. The sketch after this list contrasts the two approaches.
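
To make that distinction concrete, here is a minimal, hypothetical Python sketch. The data, thresholds, and function names are all made up for illustration and don’t reflect any real assistant’s internals; the first function adjusts a parameter from labeled examples (training), while the second follows fixed, human-written steps and never changes on its own (inference only).

```python
# Toy contrast between the two approaches described above.
# Everything here (data, thresholds, names) is hypothetical.

# --- Training: the system adjusts its own parameter from examples ----------
def train_spam_score(examples, epochs=100, lr=0.1):
    """Learn a single weight for a 'suspicious word count' feature
    from labeled (word_count, is_spam) examples."""
    weight = 0.0
    for _ in range(epochs):
        for word_count, is_spam in examples:
            prediction = 1.0 if weight * word_count > 0.5 else 0.0
            error = is_spam - prediction
            weight += lr * error * word_count  # nudge the weight toward fewer mistakes
    return weight

# --- Inference only: the system follows fixed, human-written steps ---------
def yesterdays_scores(favorite_teams, sports_results):
    """Answer 'What was the score of yesterday's games?' with hard-coded steps:
    look up results, filter by the user's favorite teams, format a reply."""
    relevant = {team: score for team, score in sports_results.items()
                if team in favorite_teams}
    if not relevant:
        return "I couldn't find scores for your favorite teams."
    return "; ".join(f"{team} scored {score}" for team, score in relevant.items())

if __name__ == "__main__":
    # The trained component improves as it sees more labeled data...
    weight = train_spam_score([(1, 0), (2, 0), (8, 1), (12, 1)])
    print(f"learned weight: {weight:.2f}")

    # ...while the rule-based component behaves the same way every time it runs.
    # Only a human edit can change its behavior.
    print(yesterdays_scores({"Eagles"}, {"Eagles": "24-17", "Jets": "10-31"}))
```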

As we noted earlier, these definitions are only meant as a general guide (this Medium article is a great discussion of what we’ve just covered), and others may describe the categories slightly differently. But there are examples of current A.I. that are worth discussing.

Current forms of A.I.

C2Sense’s tiny artificial nose sensor. Jan Schnorr/C2Sense

Voice assistants: Siri, Cortana, Alexa, and other voice assistants are growing more common, becoming the “face” of modern A.I. A growing subset here is chatbots, which manage messaging on websites and carry on online conversations.

Translation: This isn’t just about translating language. It’s also about translating objects, pictures, and sounds into data that can then be used in various algorithms.

Predictive systems: These A.I.s look at statistical data and form valuable conclusions for governments, investors, doctors, meteorologists, and people in nearly every other field where statistics and event prediction prove valuable.

Marketing: These A.I.s analyze buyers and their behavior, then choose tactics, products, and deals that best fit said behavior. There is a lot of crossover between these behind-the-scenes tools and voice assistants at the moment.

Research: Research A.I.s like Iris search through complex documents and studies for specific information, typically at higher speeds than Google’s search engine.

Awareness: These A.I.s watch for and report unusual events when humans can’t keep an eye on them. One of the most complex examples of this is theft detection, which flags unusual behavior. A more exciting example, however, is self-driving cars, which use A.I. systems to scan for dangers and choose the appropriate course of action.

Editing software: These basic A.I.s look at pictures or text and locate ways that they could be improved.

Where A.I. is headed

Neural networking expert Charles J. Simon recently opined on our pages about where he thinks A.I. is headed, and we recommend you read the piece. While we won’t cut and paste the entire article here, we’ll point you to one specific section:

Most people look at the limitations of today’s A.I. systems as evidence that AGI [general A.I.] is a long way off.  We beg to differ. A.I. has most of AGI’s needed pieces already in play, they just don’t work together very well — yet.

This is a key point. As we’ve noted, A.I. is getting better, at least perceptually, because developers are stringing together several narrow A.I. platforms. But the platforms don’t talk with each other. For example, while Alexa might now be able to start your car, it can’t use the current weather conditions to adjust your car’s heater or air conditioning or start the defroster so you’re ready to go as soon as you get in. Simon argues that we may already have the computational and developmental capability to change that and not know it yet, or that we will within the next decade. A rough sketch of what that kind of coordination could look like follows below.
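
To illustrate that idea, here is a deliberately simplified, hypothetical Python sketch. It doesn’t reflect Alexa’s actual capabilities or any real automotive API; the services and their data are stand-ins, and the point is simply that one narrow service’s output is fed straight into another’s input.

```python
# Hypothetical "glue" between two narrow A.I. services: a weather lookup and a
# car climate controller. The data is hard-coded stand-in data, not a real API.

def weather_service(city):
    """Narrow A.I. #1: report current conditions (stubbed with fixed data)."""
    return {"Philadelphia": {"temp_f": 28, "frost": True}}.get(
        city, {"temp_f": 70, "frost": False})

def car_climate_service(temp_f, frost):
    """Narrow A.I. #2: decide what the car should do before the driver gets in."""
    actions = []
    if temp_f < 40:
        actions.append("turn on heater")
    elif temp_f > 85:
        actions.append("turn on air conditioning")
    if frost:
        actions.append("start defroster")
    return actions

def prepare_car(city):
    """The coordination today's assistants mostly lack: pass one narrow
    service's output directly into another's input."""
    conditions = weather_service(city)
    return car_climate_service(conditions["temp_f"], conditions["frost"])

if __name__ == "__main__":
    print(prepare_car("Philadelphia"))  # ['turn on heater', 'start defroster']
```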

Companies are spending massive amounts of money on A.I. right now, and as long as they’re willing to spend the billions (if not eventually trillions) needed to advance the technology, things are going to move quickly. But there are all kinds of roadblocks in the way, whether a recessionary economy, computational challenges, or moral and philosophical hurdles, so the road to a real-world Skynet might be a long one.

Is A.I. dangerous?


While we keep coming back to the obvious Skynet references, it’s time for a bit of a reality check. Right now, A.I.s are long strings of programmed responses and collections of data, and they don’t have the ability to make truly independent decisions. That being the case, malice is definitely off the table for the time being. But that’s not to say human error can’t make them dangerous.

For example, if a predictive A.I. tells a team that storms will spawn on the East Coast next week, the team can send resources and warnings there in preparation. But if storms actually appear in the Gulf of Mexico and hit the coast there, that prediction was inaccurate and may have endangered lives. No one would think the A.I. is somehow personally to blame for this; instead, they would look at the various data inputs and algorithm adjustments. Like other types of software, A.I.s remain complex tools for people to use.

At least for now, A.I. is, for the most part, harmless and if anything helpful to the world at large. But that could change in the distant future, and at that time we’ll need to have a serious discussion on just how much of our lives we’re willing to turn over to machines.

Ed Oswald