Does Not Compute: Why Machines Need a Practical Sense of Humor


Photo: Alessio Ferretti / Unsplash


  • Language models lie at the root of many AI technologies, from the predictive-text functions of messaging apps to chatbots and automated storytelling.
  • Machines are proficient at puns – even if true mastery remains elusive – but they stumble over irony.
  • Just as a good friend might advise against drunk dialing, a machine with enough emotional intelligence to understand irony can advise against spiteful tweeting.

Last summer, during the short break between the first and second lockdowns, my wife and I slipped into the cinema to see the new Christopher Nolan film, “Tenet.” Like “Memento” on steroids, it promised a brain-wracking time-travel yarn in which shadowy figures use cod science to rewind through time. The next day, I summarized our experience in an email to another Nolan fan:

“By the way, we watched ‘Tenet’ yesterday. Our brains are still bent. But we plan to see it again last week so that we will understand it at some point.”

Gmail helpfully underlined my choice of “last” in blue, to politely signal that I might want to reconsider it. For how could a future-oriented plan be set in the past, unless I could travel into the past too? Although Gmail does not understand my intent, it shows an impressive grasp of everyday language by recognizing my mismatch of tense and time. To do so, it uses powerful machine-learning techniques to train its language model (LM) to mimic the natural rhythms of language.

LMs sit at the root of many AI technologies, from the predictive-text functions of messaging apps to chatbots, speech recognition, machine translation, and automated storytelling. They can be used to fill the gaps in Cloze tests, to weigh the linguistic acceptability of different formulations of the same idea, or to generate the most likely completions of a given prompt. LMs are all about form, not content, but when the surface forms are this expressive, the deeper content is implicitly modeled too. So it is entirely possible for Gmail to write large chunks of your emails for you – certainly the most ritualized or predictable parts – while you occasionally type in the specific details that the model itself cannot predict.
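To make this kind of scoring concrete, here is a minimal sketch using the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in – an assumption for illustration only, not the model Gmail actually uses:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Load a small, publicly available LM (a stand-in for Gmail's own model).
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_log_prob(text: str) -> float:
        """Total log-probability the LM assigns to a piece of text."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # The model's loss is the mean negative log-likelihood
            # per predicted token, so we negate and rescale it.
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.size(1) - 1)

    for phrase in ("We plan to see it again next week.",
                   "We plan to see it again last week."):
        print(f"{sentence_log_prob(phrase):8.2f}  {phrase}")
    # A well-trained LM should score the "next week" version higher,
    # flagging "last week" as the kind of unlikely sequence Gmail underlines.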

LMs make real a fear expressed by George Orwell in a famous 1946 polemical essay on language. Orwell worried that English was so clogged with stale clichés, inviting idioms, and dying metaphors that writers must struggle against the pull of convention to say anything fresh and alive. As he put it, ready-made phrases “will construct your sentences for you – even think your thoughts for you.”

He would no doubt be appalled at how Gmail’s Smart Compose function contributes to the IKEAfication of language, and he would side with those who argue that some LMs are little more than “stochastic parrots.” These models may be a rather elaborate marriage of statistics and William S. Burroughs’ cut-up method, but remember that the cut-up method was ultimately used to shatter stereotypes and leap off the well-worn avenues of language, not to rush headlong down them.

And so it is with LMs: a machine can use an LM to enforce or to undermine linguistic orthodoxy, making predictable choices here and highly improbable ones there. An LM can be used to spot unlikely sequences, like my “see it again last week,” and to propose normative corrections such as “next week.” But it can also work in the opposite direction, finding convincing ways to make the stale and conventional seem fresh again.

Consider wordplay, the earliest form of verbal humor that children – and childlike machines – learn to master. Suppose you are upbeat about your COVID-19 booster and describe the rollout as “a jab well done.” An LM will assign this wording a low probability and its phonetic neighbor, “a job well done,” a much higher one, just as Gmail predicts “next” rather than “last” as the most likely word after “we plan to see it again.”

But your pun is more likely to be recognized as a deliberate stab at humor because it combines phonetic similarity (jab/job) with statistical divergence (the two formulations have very different probabilities of occurrence). If a machine can also relate the word “jab” to the broader context of vaccination using other statistical models – the same models would recognize a “jab” pun in a boxing context – it can confidently conclude that your choice of words is both deliberate and meaningful: you meant to say “jab,” which works at face value and as a stand-in for “job.”
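As a rough sketch of this two-signal test – illustrative only, and reusing the sentence_log_prob helper from the earlier snippet – a detector might combine the two measures like this, with string similarity standing in for true phonetic similarity:

    import difflib

    def pun_score(observed: str, neighbor: str,
                  obs_word: str, nbr_word: str) -> float:
        """Flag a possible pun: the observed phrase is unlikely under the LM,
        yet a sound-alike neighbor is far more likely."""
        # Crude stand-in for phonetic similarity; a fuller system would
        # compare phoneme sequences (e.g., via the CMU Pronouncing Dictionary).
        similarity = difflib.SequenceMatcher(None, obs_word, nbr_word).ratio()
        # Statistical divergence, via sentence_log_prob() from the sketch above.
        divergence = sentence_log_prob(neighbor) - sentence_log_prob(observed)
        return similarity * max(divergence, 0.0)

    score = pun_score("A jab well done.", "A job well done.", "jab", "job")
    # A high score suggests a deliberate substitution: "jab" echoes "job"
    # phonetically while being far less probable in this idiomatic frame.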

A machine can also run this process in reverse, swapping a word in a familiar setting, such as a popular idiom, for a word that sounds similar but has a much lower probability (according to our LM) of appearing in that setting, while resting on a solid statistical footing in the larger context. The key to any pun is recoverability: our substitutions must be recognized for what they are and be easily undone.
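A generator built on the same score might look like the following sketch, where the idiom and the list of sound-alike candidates are hypothetical inputs for illustration:

    def best_pun(idiom: str, target: str, candidates: list[str]) -> str:
        """Swap `target` for the sound-alike whose substitution scores
        highest: similar enough to be recoverable, yet clearly less
        probable in place. A fuller system would also check that the
        candidate fits the broader topical context (vaccination, boxing)."""
        scored = [
            (pun_score(idiom.replace(target, cand), idiom, cand, target),
             idiom.replace(target, cand))
            for cand in candidates
        ]
        return max(scored)[1]

    # In a vaccination context, "jab" should beat unrelated sound-alikes:
    print(best_pun("A job well done.", "job", ["jab", "jog", "jet"]))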

So how do we get from a machine’s sense of the pun to a machine’s sense of, shall we say, irony? Machines are competent at the former, even if true mastery remains elusive, but they stumble over the latter. A little reflection, however, reveals the kinship between these two forms of humor. An irony’s echo of its target may be conceptual rather than phonetic, but an ironic echo must be just as recoverable.

When I swapped “next” for “last” in the context of a time-travel film, I relied on the semantic relationship between the two words and on the fact that “last” mirrors the film’s governing premise. Irony operates at a higher level – of words, knowledge, and the world – where both the rewards and the risks are greater, but the basic mechanisms are the same. It is the types and sources of knowledge that differ, so an ironic machine needs broader and more expensive training to master its raw materials.

The obvious next question: what good is a machine’s sense of irony to us, the machine’s users? The strongest case can be made for the automatic detection of irony, and of sarcasm too, since these dramatically affect how a user’s sentiment is perceived – in online reviews, on social media, in emails, and in our face-to-face interactions with machines. To do its job properly, a machine must understand our intent, but irony can do to sentiment analysis what a magnet does to a compass, making it difficult to find true north. If a machine is to draw actionable insights from an online product review, for instance, it needs to know whether a positive sentiment is sincere or ironic.
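The compass problem is easy to demonstrate with an off-the-shelf classifier. The snippet below uses the transformers sentiment pipeline with its default model, and the review text is invented for illustration:

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # default off-the-shelf model
    review = "Oh great, the battery died after an hour. Just what I wanted."
    print(classifier(review))
    # A surface-level model tends to latch onto "great" and "just what I
    # wanted" and call this POSITIVE, missing the ironic negative intent.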

In an email setting, a machine’s sense of irony can gauge whether an incongruity is deliberate or accidental and, in the latter case, prompt the machine to suggest a fix. But its real value lies not in removing that little blue line, but in how it lets a machine help and advise its users. Perhaps the context is not clear enough to support the irony and needs a little more seasoning to bring out the humor? Even if the irony is properly anchored, is it perhaps unsuited to its addressee, who may be someone who never jokes in their own emails, or a rather large mailing list where the risk of misunderstanding or accidental offense runs high?

Just as a good friend might advise against drunk driving or drunk dialing, a machine with enough emotional intelligence to grasp and use irony can advise against angry emails and snarky tweets. Just think of the careers ruined by smart-aleck tweets sent at 3 a.m. that turn rancid in the light of day. A tap on the shoulder or an enforced pause might have saved the day. Saving us from our own impulses may still be the best and most practical reason to give machines a sense of humor.

A sense of irony – of how a witty inversion of a jaded nostrum can soften the sting of implicit criticism – would give real weight to such interventions and make machines a joy to deal with, even when we are the target. To get there, we must move their sense of humor out of the playground of puns and into the realm of ideas, where the unearned wisdom of convention can be turned on its head.

As that playground metaphor suggests, the path from wordplay to the higher forms of humor is a well-rounded education in all the things that make us human. There is a good reason lonely hearts seek partners with a GSOH (a good sense of humor) in their dating profiles. Jokes are fun, but what matters most to us humans is what they are built on: an understanding of others, a willingness to laugh at ourselves, and a fluency with the norms and conventions that pass for rules.

Tony Veale is an associate professor of computer science at University College Dublin, with a focus on computational creativity. He is co-author of “Twitterbots: Making Machines That Make Meaning” and author of “Your Wit Is My Command: Building AIs with a Sense of Humor.”

This article was first published by the MIT Press Reader and is republished here with permission.
