Funny Computers

I wrote this for CFA Institute and first published it on the Enterprising Investor.

Have you tried the new thing at Starbucks? It’s a quick slap across the face for $7.

I’m not much of a fan of that chain, but for a brief moment I was looking forward to starting my morning there. I know The Onion is well known for its contributions to fake news, but something about this particular product launch just feels real. It occupies a space beyond funny: discomfort as a service.

In past Weekend Reads columns, we’ve talked about projections for the slow march of artificial intelligence (AI) and its manifold implications. But I’ve never really zoomed in on one of my favorite areas of research: the quest to make authentically funny jokes.

Humor is among the greatest mysteries of language. It is situational, delicate, and instinctive. As researchers work to create bots that understand us and interact with us in depth, humor is a natural aspiration. Once it’s possible to create a bot that is “in on the joke,” the thinking goes, a lot of other tasks and objectives become approachable.

It’s been slow going. There is an unwritten rule in civil society that you do not attempt to explain jokes. Generating them algorithmically, of course, requires doing exactly that, and then some: a computer needs either a routine to follow or a way to devise one for itself. And that’s before considering how widespread depression is among stand-up comedians. One imagines there is no better way to create an intelligence that hates itself than to bestow upon it the “gift” of humor.

Still Trying

I have a handful of interesting papers to bring to your attention, cataloging a few different approaches. But before you have too much fun, remember that the idea here is to educate as much as to entertain. We could talk about automating more mundane things, but thinking about jokes is just far more amusing.

A recent paper by Justine T. Kao, Roger Levy, and Noah D. Goodman focuses on ways to predict the incongruity — and by extension, the humor — of words in context. Consider a magician getting so angry that he pulls his hare out. A human holds two incompatible meanings at once and probably chuckles. Without instruction, a computer merely flags a misspelling. The paper not only lays out an approach to classifying sentences as funny or unfunny, but also suggests a means of helping the computer understand why something is funny.

Understanding is one thing, but telling a joke is hard work. Can learning approaches with big data create reliable humor? Sasa Petrovic and David Matthews of the University of Edinburgh gave it a shot, and focused on jokes of the form “I like my X like I like my Y, Z.” A successful example of this formula: “I like my coffee like I like my war, cold.” One of the approaches they put forward is capable of generating funny jokes about 16% of the time. This compares unfavorably with their sample of human jokes — 33% funny — but perhaps there is something in the water in Edinburgh. One-in-three hilarity is significantly better than our own limited experience.
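To make the template-filling idea concrete, here is a minimal sketch in Python. The hand-written `ATTRIBUTES` table and the `generate_joke` helper are my own illustrative inventions, not the authors’ method; the actual paper mines attribute associations from large-scale corpus statistics and scores candidates far more carefully.

```python
import random

# A minimal sketch of filling the "I like my X like I like my Y, Z" template.
# The attribute table below is invented for illustration; the real system
# derives noun-attribute associations from corpus co-occurrence counts.
ATTRIBUTES = {
    "coffee": {"cold", "strong", "dark", "bitter"},
    "war": {"cold", "long", "brutal"},
    "tea": {"strong", "dark", "sweet"},
    "men": {"strong", "rich"},
}

def generate_joke(x, rng):
    """Pick a Y that shares an attribute Z with X, then fill the template."""
    if x not in ATTRIBUTES:
        return None
    # Candidate (Y, Z) pairs: any other noun sharing an attribute with X.
    # Sorted so the choice is reproducible for a given random seed.
    candidates = sorted(
        (y, z)
        for y, attrs in ATTRIBUTES.items()
        if y != x
        for z in ATTRIBUTES[x] & attrs
    )
    if not candidates:
        return None
    y, z = rng.choice(candidates)
    return f"I like my {x} like I like my {y}, {z}."

print(generate_joke("coffee", random.Random(0)))
```

The surprise, and hence the humor, comes from pairing an X and Y that are rarely discussed together while still sharing an attribute; a toy table like this one mostly produces the unfunny 84%.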

A 2015 paper with too many authors to name examined something that may be a little closer to home for many readers: The New Yorker Cartoon Caption Contest. The results are perhaps predictable to readers of that magazine: negative captions were judged the funniest. More evidence that there is something bitter about my home city.

I am reaching the limit of my ability to discuss this intelligently or amusingly, but I can’t help suggesting that the curious reader open Liane Gabora, Samantha Thomson, and Kirsty Kitto’s “A Layperson’s Introduction to the Quantum Approach to Humor.”

And Now for Something Completely Different

Back to Fun