The funny formula: Why machine-generated humor is the holy grail of A.I.

In “The Outrageous Okona,” the fourth episode of the second season of Star Trek: The Next Generation, the Enterprise’s resident android Data attempts to learn the one skill he has previously been unable to master: humor. Visiting the ship’s holodeck, Data takes lessons from a holographic comedian to try to understand the business of making funny.

While the worlds of Star Trek and the real world can be far apart at times, this plotline rings true for machine intelligence here on Earth. Put simply, getting an A.I. to understand humor and then to generate its own jokes turns out to be extraordinarily tough.

How tough? Forget Go, Jeopardy!, chess, and any number of other impressive demos: According to some experts, building an artificial intelligence on the level of a top comedian may be the true measure of machine intelligence.

And, while we’re not there yet, it’s safe to say that we may be getting a whole lot closer.

Witscript cracks the code

Joe Toplyn is someone who doesn’t shy away from challenges. Toplyn, an engineer by training (albeit one with a long gap in actually practicing it), carved out a successful career for himself as a TV writer. A four-time Emmy winner, he’s been a head writer for the likes of David Letterman and Jay Leno. Several years ago, Toplyn became interested in whether there is an algorithm (i.e., a process or set of rules that can be followed) that would help write genuinely funny jokes.

“People think it’s magic,” he told Digital Trends. “Some comedy writers or comedians, I think, try to portray what they do as performing magic. Well, it is like magic in the sense that a magic trick is constructed and designed, and there’s a way that it works that fools you into thinking that the magician has supernatural powers. But there’s really a logic to it.”

Comedian Joe Toplyn with Jay Leno

This belief in a steely logic to joke-telling — honed while Toplyn was teaching his “magic” to aspiring comedians — ultimately led him to try building an A.I. able to generate off-the-cuff quips that fit into regular conversations. The result, called Witscript, is an innovative A.I. system that creates improvised jokes. A chatbot that uses Witscript to ad-lib jokes could, Toplyn said, help create likable artificial companions to help solve the “huge problem” of human loneliness. Think of it like PARO the robot seal, but with punch lines.

“It’s context-relevant,” Toplyn said of Witscript, which was recently presented at the 12th International Conference on Computational Creativity (ICCC 2021). “This sets it apart from other joke-generating systems that generate self-contained jokes that aren’t easy to integrate into a conversation. When you’re talking with a witty friend, chances are that their jokes will be integrated into a conversation in response to something you’ve said. It’s much less likely that your friend will just start telling a stand-alone joke like, ‘A man walks into a bar with a duck on his head …’”

The funny formula

This spontaneous quality comes from the joke-writing algorithms Toplyn himself developed.

“Basically, the way the basic joke-writing algorithm works is this: It starts by selecting a topic for the joke, which could be a sentence that somebody says to you or the topic of a news story,” he said. “The next step is to select what I call two ‘topic handles,’ the words or phrases in the topic that are the most responsible for capturing the audience’s attention. The third step is to generate associations of the two topic handles. Associations are what the audience is likely to think of when they think about a particular subject. The fourth step is to create a punch line, which links an association of one of the two topic handles to an association of the other in a surprising way. The last step is to generate an angle between the topic and the punch line: A sentence or phrase that connects the topic to the punch line in a natural-sounding way.”
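To make those five steps concrete, here is a minimal Python sketch of the pipeline Toplyn describes. Everything in it (the handle-picking heuristic, the toy association table, the word-blending punch line) is a hypothetical placeholder invented for illustration, not Witscript’s actual implementation.

```python
# A minimal, illustrative sketch of the five-step joke pipeline described above.
# Every helper here is a hypothetical placeholder invented for this example;
# none of it is Witscript's actual implementation.

STOPWORDS = {"the", "a", "an", "of", "on", "in", "to", "is", "its", "my"}

# Toy association "knowledge"; a real system would draw on broad world knowledge.
ASSOCIATIONS = {
    "anniversary": ["jubilee", "cake", "party"],
    "blue": ["sad", "jazz", "sky"],
}

def select_topic_handles(topic):
    """Step 2: pick the two words most responsible for grabbing attention.
    (Crude heuristic: the two longest non-stopwords.)"""
    words = [w.strip(".,!?") for w in topic.split() if w.lower() not in STOPWORDS]
    words.sort(key=len, reverse=True)
    return words[0], words[1]

def associations_of(handle):
    """Step 3: what an audience is likely to think of for a given handle."""
    return ASSOCIATIONS.get(handle.lower(), [handle.lower()])

def make_punch_line(assoc_a, assoc_b):
    """Step 4: link an association of one handle to an association of the other
    in a surprising way (here, a naive word blend)."""
    return assoc_a[: len(assoc_a) // 2] + assoc_b[len(assoc_b) // 2:]

def generate_joke(topic):
    handle_a, handle_b = select_topic_handles(topic)          # step 2
    punch = make_punch_line(associations_of(handle_a)[0],     # steps 3 and 4
                            associations_of(handle_b)[0])
    angle = "Sounds like the makings of a"                    # step 5: the angle
    return f"{angle} {punch}."

print(generate_joke("The Blue Man Group is celebrating its 25th anniversary"))
```

Run as written, this produces a nonsense word blend rather than an actual joke; the point is the shape of the pipeline, not the quality of the output.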

If all these handles and angles sound like hard work, the proof is ultimately in the pudding. Using 13 input topics, Witscript generated a series of jokes, which Toplyn then pitted against his own efforts. For a review board, he outsourced the judging to Amazon Mechanical Turk workers, who graded each freshly minted joke on a scale of one (not a joke) to four (a very good joke). One of Witscript’s best efforts earned a 2.87 rating (“That’s pretty close to being a joke,” Toplyn said) against the 2.80 scored by Toplyn’s own joke, the student edging out the master. The Witscript joke? Riffing on a line about the 25th anniversary of the Blue Man Group performance art company, it quipped: “Welcome to the Bluebilee.”

While Witscript is perhaps not quite ready to displace Dave Chappelle, Toplyn believes it proves that humor can, to a degree, be automated, even if there’s still a long way to go. “As machines get better at executing those algorithms, the jokes they generate will get better,” he said.

However, he also struck a note of caution. “To generate [truly] sophisticated jokes the way an expert human comedy writer can, machines will need the common-sense knowledge and common-sense reasoning ability of a typical human.”

An A.I. comedy pioneer

This, as it turns out, may be the crux of the matter. Humor might seem frivolous, but for those who work in the fields of language, comedy, and artificial intelligence, it’s anything but.

“We use humor in a lot of different ways,” Kim Binsted, a professor in the Information and Computer Sciences Department at the University of Hawaii, told Digital Trends. “We use it to establish social rapport. We use it to define in-groups and out-groups. We use it to introduce ideas that we might not be willing to express seriously. Obviously, there’s nonlinguistic humor, but [linguistic humor] falls into a category of language use that is really powerful. It isn’t just a stand-up on stage who uses it to get a few laughs. It’s something that we use all the time [within our society.]”

When it comes to computational humor, Binsted is a pioneer. In the 1990s, she created one of the first (quite possibly the very first) A.I. systems designed to generate jokes. Developed with Professor Graeme Ritchie, Binsted’s JAPE (Joke Analysis and Production Engine) was a joke-generating bot that could create question-and-answer puns. An example might be: “Q) What do you call a strange market?” “A) A bizarre bazaar.”
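For a sense of how mechanical that early style of pun generation can be, here is a toy Python sketch in the spirit of JAPE’s question-and-answer puns. The word pairs and template are invented for illustration; JAPE’s real lexicon and pun schemas were considerably more elaborate.

```python
# A toy riddle generator in the spirit of JAPE-style question/answer puns.
# The word pairs and template below are invented for illustration only.

# Each entry: (word_a, word_b, meaning_of_a, meaning_of_b), where the two words
# sound alike and the meanings describe them.
SOUND_ALIKE_PAIRS = [
    ("bizarre", "bazaar", "strange", "market"),
    ("bored", "board", "dull", "plank"),
]

def pun_riddles():
    """Yield (question, answer) riddles built from sound-alike word pairs."""
    for word_a, word_b, meaning_a, meaning_b in SOUND_ALIKE_PAIRS:
        question = f"Q) What do you call a {meaning_a} {meaning_b}?"
        answer = f"A) A {word_a} {word_b}."
        yield question, answer

for question, answer in pun_riddles():
    print(question)
    print(answer)
```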

“It was great because it meant I could pick all the low-hanging fruit before anyone else,” she said modestly. “Which is pretty much what I did with puns.”

An A.I.-complete problem

Since then, Binsted has developed various other computational humor bots — including one able to dream up variations on “Yo mama” jokes. While Binsted’s work has since evolved to look at long-duration human space exploration, she still views joke-telling A.I. as a sort of holy grail for machine intelligence.

“It’s not one of these things like chess, where when A.I. was starting out, people said, ‘Well, if a computer can ever really play chess, then we will know it’s fully intelligent,’” she opined. “Obviously, that’s not the case. But I do think humor is one of those things where [a computer capable of] fluent humor is going to have to be genuinely intelligent in other ways as well.”

This is why joke-telling is such an interesting challenge for machines. It’s not because making an A.I. crack wise is as useful to humanity as, say, using machine intelligence to solve cancer. But it is an enormous signifier of advanced intelligence because, in order to be truly funny, an A.I. needs to understand a whole lot about the world.

“Humor depends on many different human skills, such as world knowledge, linguistic abilities, reasoning, [and more],” Thomas Winters, a computer science Ph.D. student researching artificial intelligence and computational humor, told Digital Trends. “Even if a machine has access to that kind of information and skills, it still has to have insight into the difficulty of the joke itself. In order for something to be funny, a joke also has to be not too easy and not too hard for a human to understand. A machine generating jokes should not use too obscure knowledge, nor too obvious knowledge with predictable punch lines. This is why computational humor is usually seen as an A.I.-complete problem. [It means] we need to have A.I. that has functionally similar components [to] a human brain to solve computational humor, due to its dependency on all these skills of the human brain.”

Think of it like a Turing Test with a laugh track. Coming soon to a superintelligence near you. Hopefully.
