Good at StarCraft? DARPA wants to train military robots with your brain waves

Douglas Levere, University at Buffalo

The 1984 movie The Last Starfighter tells the story of a teenager whose calling in life seems to be nothing more than to play arcade games. Fortunately, he’s spectacularly good at it. The game he’s best at is a video game called, as the movie’s title would have it, Starfighter. In it, the player must defend their homestead, The Frontier, from the perils of Xur and the Ko-Dan Armada by way of a series of wireframe laser battles.

But there’s a twist. It turns out that Starfighter isn’t simply a game; it’s actually a kind of test. The war with Xur and the Ko-Dan Armada is real, and the arcade game — with its demands on rapid-fire reaction times on the part of players — is a stealth recruiting tool, intended to seek out the best of the best to become genuine starfighters.

More than 35 years after The Last Starfighter hit theaters, engineers from the University at Buffalo's Artificial Intelligence Institute in New York have received funding from DARPA, the U.S. Defense Advanced Research Projects Agency, to carry out research that’s… well, let’s just say that it’s extremely similar. They have built a real-time strategy game, currently unnamed, that’s reminiscent of existing games like StarCraft or Stellaris in style. In this game, players must use resources to build units and defeat enemies, manipulating large numbers of agents on-screen to complete their mission objectives.

But this isn’t any ordinary gaming experience. When people play the University at Buffalo’s new strategy game, they first have to agree to be hooked up to electroencephalogram (EEG) technology so that the game’s designers can record their brain activity. As they play, their eye movements are also tracked by way of special ultra-high-speed cameras to see exactly how they respond to what they’re doing. This information, which can be teased out using machine learning, will then be used to develop new algorithms that can help train large numbers of future robots. In particular, the hope is that these insights into complex decision-making can improve coordination between large teams of autonomous air and ground robots. You know, should the game be brought to life.

Patrik Stollarz/Stringer/Getty Images

For anyone who grew up on movies like The Last Starfighter, this will seem strangely familiar. Although there’s a twist here, too. In The Last Starfighter (and other sci-fi stories which tread similar ground, such as Orson Scott Card’s Ender’s Game and Ernest Cline’s Armada), the goal is to train humans to have the kind of lightning-fast reflexes that would normally be found in a machine. In this case, it’s different. The purpose of the University at Buffalo’s new gaming project isn’t to make players more machine-like.

Just the opposite, in fact. It’s all about trying to make machines that think more like humans.

Training tomorrow’s swarms today

“We’re trying to recruit [participants] who have strong gaming experience,” Souma Chowdhury, assistant professor of mechanical and aerospace engineering in the School of Engineering and Applied Sciences, told Digital Trends.

Chowdhury is one of the lead investigators on the project. He pauses and gives a nervous chuckle, the slightest hint of an apology creeping into his voice. “I myself do not have gaming experience,” he said. “I’m not a computer gamer at all. But many of our students are into games like crazy.”

“We’re trying to recruit [participants] who have strong gaming experience.”

Chowdhury’s own area of interest is swarm intelligence, a branch of computer science dating back to the late 1980s. Swarm intelligence is all about the collective behavior of decentralized, self-organized systems, both virtual and robotic. “It’s a real hot topic,” he said. “It’s becoming known that there are a lot of different applications which could be done by not using a single $1 million robot, but rather a large swarm of simpler, cheaper robots. These could be ground-based, air-based, or a combination of those two approaches.”

Some researchers in swarm robotics try to create swarms that can carry out complex procedures by hand-crafting the actions of every agent involved, the way you might coach each member of a dance troupe so they can master a complex routine. Put them all together and you’ll get something that looks like emergent collaboration, although it’s actually a collection of individuals doing their own thing. The idea behind using modern machine learning is that it could give robot swarms the ability to function more autonomously as a meaningful collective.
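To get a feel for how simple per-agent rules can look like collective behavior, here’s a minimal, illustrative sketch (not the Buffalo team’s code): each agent follows only two local rules, yet the swarm appears to move together.

```python
import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, neighbors):
        # Rule 1: drift toward the average position of nearby agents (cohesion).
        if neighbors:
            cx = sum(n.x for n in neighbors) / len(neighbors)
            cy = sum(n.y for n in neighbors) / len(neighbors)
            self.x += 0.1 * (cx - self.x)
            self.y += 0.1 * (cy - self.y)
        # Rule 2: add a small random wander of the agent's own.
        self.x += random.uniform(-0.5, 0.5)
        self.y += random.uniform(-0.5, 0.5)

def neighbors_of(agent, swarm, radius=5.0):
    # An agent only "sees" other agents within a fixed radius.
    return [a for a in swarm if a is not agent
            and (a.x - agent.x) ** 2 + (a.y - agent.y) ** 2 < radius ** 2]

swarm = [Agent(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
for _ in range(50):
    for a in swarm:
        a.step(neighbors_of(a, swarm))
```

No agent here has any global plan; the apparent coordination is purely a byproduct of local rules, which is exactly the gap an overarching, learned strategy would fill.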

Douglas Levere, University at Buffalo

But that’s easier said than done. Training one robot to do something already takes a significant amount of time and data. Training a swarm, potentially with varying abilities, to complete tasks in complex, uncertain environments is a whole lot trickier. It means running tens of thousands of simulations, making the process extremely time-consuming and expensive. The idea driving this new project is that watching humans play the game will make it easier for machines to learn.

“Imagine walking into a classroom where there’s no teacher, and saying ‘let’s learn algebra,’” Chowdhury said. “You can learn just using exercises and textbooks. But it’s going to take a lot more time. If you have a teacher you can follow it’ll make it faster. In this case, we want to see how humans play this game and then use that to significantly speed up the A.I. in learning the behavior. Before it would be necessary to run 10,000 simulations to learn. Now we only need to run perhaps 1,000 simulations and augment this with data from humans.”
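The idea Chowdhury describes can be sketched roughly as follows. This is a hypothetical toy, not the project’s actual pipeline: human gameplay transitions seed the learner’s training data, so far fewer self-play simulations are needed.

```python
import random

def build_training_batches(human_demos, run_simulation, n_sims, batch_size=32):
    # Start the training buffer from recorded human gameplay transitions...
    buffer = list(human_demos)
    # ...then top it up with a (much reduced) number of simulated episodes.
    for _ in range(n_sims):
        buffer.extend(run_simulation())
    random.shuffle(buffer)
    return [buffer[i:i + batch_size] for i in range(0, len(buffer), batch_size)]

# Toy usage: 100 human transitions plus 10 simulated episodes of 5 steps each.
demos = [("state", "action")] * 100
batches = build_training_batches(demos, lambda: [("s", "a")] * 5, n_sims=10)
```

The numbers are invented, but the shape matches the quote: human data substitutes for the bulk of the 10,000 simulations that would otherwise be required.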

The researchers believe that, by observing the type of tactical or strategic decisions humans take when they play a strategy game, it will be possible to work out which features and events motivate these actions.

Teaching the machines

“The project is ongoing, at a pretty aggressive pace,” Chowdhury said. “We are around the halfway mark.”

At present, they’ve yet to start the data-gathering phase of the project, although Chowdhury has a good idea of the format that it will take. The plan is to carry out experiments with around 25 participants. Each participant will play between six and seven games with different randomized settings and levels of complexity. Unlike games such as StarCraft, which can last for hours, in this case each game will last only between five and ten minutes. That will be sufficient to measure decision-making strategies and to extract the features of interest using algorithms and scripts developed by the team.

“Humans can come up with very unique strategies that an A.I. might not ever learn.”

“At this point, it is difficult to comment on the amount or size of data that will be eventually collected,” Chowdhury said. However, the aim is reportedly to eventually scale up to 250 aerial and ground robots, working in highly complex situations. One example might be dealing with sudden loss of visibility due to smoke. The team plans to develop algorithms, modeled on human behavior, that will allow them to adapt to challenges such as this.

“Humans can come up with very unique strategies that an A.I. might not ever learn,” he continued. “A lot of the hype we see in A.I. is in applications with relatively deterministic environments. But in terms of contextual reasoning in a real environment to get stuff done? That’s still at a nascent stage.”

Humans make the strategies

In Daniel Kahneman’s 2011 book Thinking, Fast and Slow, the Nobel-winning economist and psychologist describes two different modes of thought. The first system is fast and instinctive, the kind of thing we might call intuition. That might be locating the source of a specific sound, completing the phrase “war and…” or, yes, blasting Ko-Dan ships out of the air (or lack thereof) in Starfighter. The second system is slower, more deliberate, more logical. It’s centered on conscious thinking — which in this case might very well refer to forming strategies.

Chowdhury doesn’t cite Kahneman’s work when he discusses the project. But it’s hard not to be reminded of it. As he points out, machines are already capable of an impressive number of autonomous features. A $10,000 drone possesses some impressive smarts when it comes to navigating between locations. The same is true of agents in a strategy game. Units are often governed by low-level rules which allow them to react to their surroundings. That could mean attacking or defending if they are confronted by an enemy. It might also mean being able to maintain formations as they move around the map. But in both cases what’s missing is the overarching strategy needed to execute tasks.
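Those low-level reactive rules might look something like this sketch. The rule names and thresholds are invented for illustration; the point is that each decision is purely local, with no strategy behind it.

```python
def unit_action(unit_hp, enemy_in_range, distance_from_formation):
    """Pick a reactive action for one game unit based only on local state."""
    if enemy_in_range:
        # React locally to an enemy: fight when healthy, fall back when damaged.
        return "attack" if unit_hp > 30 else "retreat"
    if distance_from_formation > 2.0:
        # Keep formation while moving around the map.
        return "regroup"
    return "advance"
```

A unit running rules like these can handle any single encounter, yet nothing in the function decides where the army should be or why, which is the layer the project wants to learn from humans.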

“You don’t need a human to do low level control, controlling each agent,” Chowdhury said. “That’s not what we’re interested in. They’re not controlling every single robot and where they’re going. The human role is more that of a supervisor or a tactician. A good analogy would be that, in a disaster response environment, you have a supervisor. They might have a team of 100 rescuers working under them. There’s a hierarchy, but the supervisor does not tell each of those team members exactly what they should do. The rescuers make a lot of independent decisions, but the supervisor creates the overall tactics. That’s what we want to build.”
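The supervisor analogy maps naturally onto a two-layer control sketch. Everything below is hypothetical naming, but it shows the division of labor Chowdhury describes: the supervisor assigns objectives, and each agent works out its own execution.

```python
def supervisor_assign(agents, objectives):
    # The supervisor sets the overall tactic: which objective each agent covers.
    assignments = {}
    for i, agent in enumerate(agents):
        assignments[agent] = objectives[i % len(objectives)]
    return assignments

def agent_execute(agent, objective):
    # Each agent independently decides how to reach its assigned objective.
    return f"{agent}: navigating to {objective} autonomously"

agents = [f"rescuer-{i}" for i in range(4)]
plan = supervisor_assign(agents, ["zone-A", "zone-B"])
actions = [agent_execute(a, o) for a, o in plan.items()]
```

The supervisor never issues per-step commands; it only distributes goals, mirroring the disaster-response hierarchy in the quote above.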

If Chowdhury and his team get their way, the robot swarms of tomorrow will be a whole lot smarter. And they’ll have gamers to thank for it.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…