Given the wealth of research devoted to AI and deep learning, the field was probably bound to produce some odd or questionably useful projects. For example, MIT’s Media Lab recently developed a few horror-based AIs — “psychopath AI” Norman (as in Bates) and horror-writing Shelley (as in Mary) — trained on gruesome content and horror stories. While these projects could be seen as jokes, they prove an important point: an AI is only as competent or useful as its dataset.
Last month, Stanford researchers Abel Peirson and Meltem Tolunay published a paper introducing a deep learning-based “novel meme generation system.” For those unfamiliar, memes are a popular means of remixing content and have grown into something of an online subculture. Essentially, meme creators take a humorous image that begs for a humorous caption, and overlay that caption on the image.

A typical meme riffing on Dos Equis' "The Most Interesting Man In the World."
The paper — titled “Dank Learning” as a further homage to meme culture — describes how the authors fed the system over 400,000 images with accompanying captions. After the data was processed with a Python script, the model attempted to create its own memes. Interestingly, the team classified the AI-generated memes into several groups, including a newly captioned group of images the AI had previously seen, and a group of unseen images that the computer attempted to caption based on seen ones.
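To get a feel for the kind of preprocessing a caption-generating model needs, here is a minimal sketch of one plausible step: pairing captions with a token vocabulary so the text can be fed to a model as integer sequences. The function names and the example captions are illustrative assumptions, not taken from the paper itself.

```python
from collections import Counter

def build_vocab(captions, min_count=1):
    """Map each word appearing at least min_count times to an integer id."""
    counts = Counter(word for cap in captions for word in cap.lower().split())
    vocab = {"<pad>": 0, "<start>": 1, "<end>": 2, "<unk>": 3}
    for word, n in counts.items():
        if n >= min_count:
            vocab.setdefault(word, len(vocab))
    return vocab

def encode_caption(caption, vocab):
    """Turn a caption into token ids, bracketed by start/end markers."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in caption.lower().split()]
    return [vocab["<start>"]] + ids + [vocab["<end>"]]

captions = ["i don't always test my code",
            "but when i do i do it in production"]
vocab = build_vocab(captions)
encoded = encode_caption(captions[1], vocab)
print(encoded)
```

A real system would pair each encoded caption with image features from a pretrained vision network, but the tokenization step above is the part a plain Python script typically handles.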
The authors then showed a team of humans the AI’s output and had the team rate it on a numerical “hilarity” scale. According to the paper, the human reviewers found the images funny enough that they might have mistaken them for human-generated memes. The authors claim their research was successful because they showed that “[generated memes] cannot be easily distinguished from naturally produced ones, if at all, using human evaluations.” But comparing seen images to unseen images, as in the image below, it’s clear that maybe two of the eight AI memes are actually funny, and in my estimation none of the unseen memes make much sense at all. Humor is subjective, of course, so your personal mileage may vary.
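The evaluation the paper describes boils down to comparing rating distributions for the two groups of memes. A toy version of that comparison might look like the sketch below; the rating values here are invented for illustration and are not the paper’s data.

```python
from statistics import mean

# Hypothetical "hilarity" ratings on a numeric scale -- illustrative only,
# not the actual ratings collected in the paper.
human_scores = [3, 4, 2, 5, 3, 4, 3]      # memes made by people
generated_scores = [3, 3, 2, 4, 3, 4, 2]  # memes made by the model

diff = mean(human_scores) - mean(generated_scores)
print(f"human mean={mean(human_scores):.2f}, "
      f"generated mean={mean(generated_scores):.2f}, diff={diff:.2f}")
```

If the mean difference is small relative to the spread of the ratings, reviewers effectively cannot tell the two groups apart, which is the shape of the claim the authors make.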

Like MIT’s horror-inspired bots, it’s difficult to tell whether “Dank Learning” is a serious endeavor or a late-night, substance-fueled dorm-room “project.” While a paper aimed at training AIs to produce memes is bound to seem farcical, in the wide-open field of AI any findings could be useful. AI attempts to model or imitate human behavior, and humor is a significant and often useful part of that behavior.
The paper leaves the door open for future research, so here’s hoping that Dank Learning 2.0 teaches itself better jokes.