AI is boring and stupid and maybe that's OK

Sometimes boring and stupid is still really useful.

We’re now just a bit over two years into the AI revolution. The chorus of hype and promises — both empty and real — remains deafening. The capabilities of AI increase every day. Yet right now, the most impressive thing about AI may be how quickly we’ve trained our primitive monkey brains to recognize how incredibly boring it is.

In months, really, we’ve retrained our brains. We've gone from being utterly dazzled by the magic of AI to seeing the machine behind the curtain, pulling the levers. Not long ago we watched incredulously as AI conjured credible art out of thin air, held lengthy conversations with us on the fly, and generated new lyrics for Taylor Swift songs anytime we liked. Then, after a moment of magic, we started to shrug.

We learned in an astonishingly short time to recognize AI-generated content for what it is: Boring. Trite. Been there, done that, got the AI-generated T-shirt.

Inspiring, isn't it?

AI as a capability may be fascinating, but its output is nearly always dull and plodding: recognizable as the work of a machine, marked by predictability and sameness. The AI-generated art proliferating across blog posts and the pages of LinkedIn Thought Leaders has already become a cliché. When we see someone pass off AI-generated text as their own, we know.

We know.

We’ve learned to see it, and just as we can’t help but see the green screen effect in an old sci-fi movie, we’ll only get better at spotting it, until eventually it's all we can see.

Boring by design

AI is built to be boring. That’s literally how it works: it predicts the most likely next word or pixel, based on the gazillion words and pixels fed to it. The problem is you: you’ve likely seen some version of those words and pixels before. Not every permutation, but enough to recognize the lack of originality.
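
To make that concrete, here's a deliberately crude sketch of the idea in Python. It's a toy, not how a real model works internally: it just counts which word most often follows each word in a tiny corpus, then always emits the likeliest next one.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the gazillion words a real model is fed.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word, and how often.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily emit the single most likely next word, over and over."""
    words = [start]
    for _ in range(length):
        followers = next_word_counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat sat on the"
```

The rut it falls into is the point: pick the safest continuation every time and you get fluent, grammatical, endlessly repetitive text.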

And so it goes with the most visible use cases for AI today: we see AI-generated art and reject its processed sameness as quickly as we recoil from its six-fingered uncanny valley.

We gloss over AI-generated text the same way we’ve learned to gloss over text that’s clearly over-optimized for SEO. Whether written by machines or for machines, our human brains want text written for humans, quirks and non-sequiturs and all. We want humor and humanity and something that makes us think in a way we haven’t thought before.

Creativity doesn’t exist without unpredictability. Good doesn’t exist without humanity. Not yet, at least.

So, in a remarkably short time we’ve trained ourselves to see the machines working, and we recognize their work is largely... boring.

And yet, this is not a problem for AI. Or for people. Because a lot of what we do every day really is boring. Repetitive. Predictable. The sort of thing the machines literally trained on.

Boring can be useful

The world is full of things that exist to fulfill some requirement, not to bring us joy. Sometimes we just need to get something done in order to get to something more important to us.

We write an awful lot of boilerplate and scaffolding on our way to something else, and then we write a lot of other things just to check a box. Boilerplate code. Boilerplate legal documents. Boilerplate grants and guidance and RFPs and SOWs and a whole alphabet soup of other documents and filings and records and reports.

We don’t always need these things to be good; we just need them to be done. This is where AI can really shine. It’s great at creating all that boilerplate and scaffolding, and then it can just as easily turn around and consume it all and spit out a nice, concise summary.

If we really overachieve, we can build a future where mountains of content aren't written or read by humans much at all. And maybe that’s not a bad thing. I mean, a lot of this stuff was never all that human-centered or human-readable to begin with.

The worrisome part is that nobody reads this stuff now, so who's going to read it and catch errors in the future? Any editor (or coder) will tell you: it’s easy to gloss over errors when something is already written.

So what if errors sneak in? AI can catch those, too.

Sure. Maybe. But will they catch them before real people are impacted? I don't think so. We know real people are impacted already, starting with the people who created the works AI is regurgitating. But as more and more AI-generated text creeps into more places – especially highly regulated places – what will it bring along for the ride?

This is especially worrisome in my field – government technology – where what's written determines which people receive needed benefits. Which Veterans get services. Which mothers are able to feed their children today. Which small businesses will survive thanks to a government loan. Literally, an entire world hinges on fine print and how it's read and interpreted. This is the kind of text AI excels at generating and we are terrible at reading.

My very smart friend Mark Headd recently wrote up some pretty smart use cases for AI and LLMs in government. Bill Hunt, another very smart friend also working in government, has a pretty strong opinion that today’s AI models aren’t ready. I think it’s possible that they’re both right.

There's an air of inevitability to generative AI – a vibrating sense of a genie in the air, refusing to go back into the bottle, sending a siren call both to those eager to summon it and to those fearful of what comes next.

What comes next is happening now, and a lot of it is happening in highly regulated and highly sensitive spaces. Government systems are using AI to detect fraud and waste, to help agencies with regulatory compliance and, undoubtedly, a lot of other things. With a lot of questions about what happens when there are false positives or false negatives.

AI-generated content may be boring and trite, but that doesn't mean people aren't finding it incredibly useful, today, right now. And many of the most touted, most exciting use cases remain largely on the horizon.

We can't wait to reach that horizon before we start figuring out how to handle editorial control, quality checking, rules compliance and a host of other factors for anything AI-generated.

We’re going to have to find a way to ensure that AI is producing something like what human intelligence would have produced and that it is not introducing a bunch of errors. Getting AI to generate boilerplate is the easy part. Figuring out how we ensure that it’s correct will be the bear we have to wrestle.

This is not the first time we've had this problem.

The assembly line

True fact: The precursor to modern assembly lines – arguably the first assembly lines – could be found in 19th century meatpacking plants. There you'd find overhead trolleys moving heavy carcasses from worker to worker for processing. At the time "processing" generally meant large knives and bone saws, so conditions were dire. Defects and accidents were largely ignored – even though they could be deadly.

When automotive assembly lines in the early 20th century spurred a manufacturing revolution across industries, defects and errors still abounded, especially when plant owners made the machines run faster to drive greater productivity. They got the idea from English textile companies, who had done the same for years with their looms – much to the horror of the Luddites, who really just wanted a living wage and respect for workers. But we were talking about errors....

It wasn't until well into the 20th century that manufacturers really began to master rigorous multi-stage, multi-faceted quality control. Even with huge advancements in the field, quality in many industries has remained spotty and sporadic.

And that was when inspectors and testers only had to move at the speed of the line, which was limited at each step by what one person could do. Today, with AI, we've automated the creation of nonsense at mass scale. Our traditional quality control checks can't match that pace and won't scale.

We already know that AI-generated code lowers quality. We've discovered that it increases racial bias in hiring. Microsoft's AI reportedly generates violent sexual images, while Google had to stop its AI from generating images of humans after tripping over itself trying to correct for AI stereotypes and bias. Across the industry, people are calling out failures to limit bias, misinformation and controversial content. The hazard of mimicking human content is that humans say some pretty messed up stuff.

So given both the mediocrity of generated text and the high likelihood of damaging errors in generated content, how do we ensure safety and validity in highly regulated services where compliance, correctness and contractual obligations are on the line? Who's going to check this stuff?

I don’t think we can trust humans to be the answer. Even if they want to ensure accuracy and have a strong vested interest, I don't think they'll be able to keep up with fact-checking and error-checking all the content AI can generate.

So of course the answer is to have the AI check itself. Just like grizzled veteran editors taught newcomers to read stories from bottom to top to break the brain's flow and catch errors, we'll think up creative ways to make AI better at catching its own mistakes. But of course each layer of AI cleverness creates another layer of AI to check. It's a hall of mirrors. A matryoshka. A hall of mirrors endlessly reflecting matryoshkas.
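
What that self-checking loop looks like in code is something like the minimal sketch below. Everything in it is hypothetical: generate_draft and critique are stand-ins for calls to a language model, stubbed out just enough that the loop actually runs.

```python
def generate_draft(prompt: str) -> str:
    # Hypothetical stand-in for a call to a language model.
    return f"[draft answering: {prompt}]"

def critique(text: str) -> list[str]:
    # Hypothetical stand-in for a second model pass that flags problems.
    # An empty list means "no problems found" by a checker that is itself
    # a model, which is exactly the hall-of-mirrors problem.
    return []

def self_checked(prompt: str, max_rounds: int = 3) -> str:
    # Draft, critique, revise, repeat until the checker is satisfied.
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        problems = critique(draft)
        if not problems:
            return draft
        draft = generate_draft(
            f"{prompt}\n\nRevise this draft to fix: {problems}\n\n{draft}"
        )
    return draft  # and who checks the checker?

print(self_checked("Summarize the new benefits eligibility rules."))
```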

For now, we find ourselves hacking through an ever-deepening forest of generated content, hoping we can spot the bear lurking in the forest before it mauls us. We're not good at this, and we're not going to be good at this. That forest of AI-generated content is boring as hell. And yet, somehow, we will find a path through it. It will be OK.

Unless we use up all the electricity in the world first. In the meantime, here's a picture of Joe Biden and Donald Trump enjoying some quality time together.

Reality is the only word in the English language that should always be used in quotes.