At the start of Atlas of AI (2021), Kate Crawford retells the story of Clever Hans, a horse who captured the public's interest at the start of the 20th century for his remarkable feats of intellect. Hans was trained by his owner, a former math teacher, to tap out answers to questions. Hans proved to awestruck crowds that he could add, subtract, multiply, divide, tell time, and even read! By 1904, Hans had the New York Times crowing: “Berlin’s Wonderful Horse; He Can Do Almost Everything but Talk.”

A century before Hans, crowds were wowed by “the Mechanical Turk,” a life-sized model of a bearded man dressed in robes and a turban and seated at a chess board. The miraculous machine toured Europe and America, checkmating challengers like Napoleon Bonaparte and Benjamin Franklin.

Today, we gawk at the extraordinary creative powers of a chatbot from San Francisco, a phenomenon just as unnerving as a reasoning German horse or a mechanical chess master. But when people ask me about the future of legal technology, I most often respond by returning to stories about the past, like these. Not because the history of technology is an endless repetition (that’s usually an excuse people offer to avoid thinking), but because history is full of insights into how the potential and meaning of certain events get lost in the effort to explain or understand them too quickly.

The past is prologue to the future of legal technology.

The Mechanical Turk was a lie — an illusion. There was a chess master hunched up underneath the gears, controlling the machine’s movements.

Clever Hans was a bit more complicated. It seems no one involved was trying to deceive anyone. But skeptics found that Hans was only clever when his questioners knew the answers themselves. When the ones asking questions didn’t know the answers, Hans went astray.

Further study showed “the questioner’s posture, breathing, and facial expression would subtly change around the moment Hans reached the right answer.” Hans wasn’t reading German — he was reading human body language. 

At the time, the public concluded that Hans was a fraud, and he was forgotten, much like the mechanical chess player.

Is the current iteration of artificial intelligence also just an illusion — a spectacle for newspaper headlines that will fade once the novelty wears off?

Intelligence as a metaphor.

When speaking of the latest technologies, we use the language of minds: we talk of artificial ‘intelligence’, machine ‘learning’, and ‘neural’ networks. We use the word ‘memory’ to describe digital data storage. The ultimate fantasy this language points toward is the android: the moment when the thing transforms into a being with a fully formed, self-aware mind. Like us, but better.

Of course, this fantasy is also a nightmare. If AI is like us but better, then we are, by definition, redundant.

In the face of these dreams and anxieties, some scholars warn our language is skewed at the root. Our minds do not function like data processors, neurons aren’t circuits, and our memory isn’t stored in bits and bytes. We’re in danger of mistaking the metaphor for the thing itself.

Recently, the linguist and philosopher Noam Chomsky, with whom I first corresponded as a high school student in Texas, penned a New York Times op-ed pointing out that human minds and chatbots perform almost opposite functions. The bots hoover up massive amounts of information and distill it into plausible responses; our minds take in limited amounts of information and create broader explanations. If we call both processes by the same name, we miss the crucial difference.

Or there’s the critique of philosopher Hubert Dreyfus, who noted early on that we were equating “intelligence” with symbolic manipulation. Our minds are bound up with things that AI software doesn’t have, like a body, a childhood, and cultural practices.

AI is more than just parlor tricks.

It’s not for me to litigate the nature of consciousness, but if skeptics like Chomsky and Dreyfus are right, AI won’t ever attain minds like ours. Like Hans and the Mechanical Turk, the programs won’t actually be intelligent; they will instead maintain a plausible appearance of intelligence.

But if we dismiss artificial intelligence as clever algorithmic parlor tricks, we are missing the point, just as our predecessors did with the Mechanical Turk and Clever Hans.

Hans and the Mechanical Turk weren’t doing what amazed spectators thought they were, but they were both more than hoaxes. They were remarkable achievements in engineering and training. The chess machine deployed levers and magnets to let the person hidden inside mirror their moves on the board above, like a pre-industrial mech suit. Hans the horse displayed a remarkable ability to read the subtle desires of a completely different species (humans) and give them what they wanted. The ancestors of modern horses and humans have been evolving in different directions for tens of millions of years, but Hans was able to understand something about us we didn’t even know about ourselves.

Of course, that’s not what amazed audiences. What drew crowds and sold newspapers was the illusion that a machine or a horse could think like humans do. In the same way, stories about using AI for a particular everyday benefit are often buried under more spectacular tales of a chatbot showing some sign of self-awareness or sentience, or a desire to be called by a specific personal name.

Once again, we’re emphasizing the wrong thing. The true wonder of what’s happening flies under the radar.

The lesson of Hans isn’t that we’re such special creatures because we can say “two plus two equals four.” We’re special creatures because we can teach others — human or otherwise — to say “four” when we ask about “two plus two.”

A lot of time and energy in the law is spent on providing signals of understanding. We use unique abbreviations, formulations, and formats to indicate knowledge, like knowing that GAL refers to a guardian ad litem, not a place that Caesar invaded (that was GAUL). It’s unclear what the fundamental difference is between a human who indicates an understanding of the reference to GAL and a clever machine that indicates an ‘understanding’ of the reference to GAL.

A quieter Star Trek character.

When we start talking about the future of artificial intelligence, we tend to fixate on flashy fictional characters like Data from Star Trek: The Next Generation. But a more relevant analogy requires us to look a little deeper: the disembodied voice aboard the starships, typically just called “computer.”

“Computer” doesn’t go on wild adventures, take up poker, or fall in love with other cast members. But it does send and receive valuable information in natural language, and it is a clutch partner when the crew needs to get oriented and solve problems.

New AI has the power to radically shift the labor market and shape the development of all industries. But it’s not in any position to make legal minds redundant. Natural language processing — no matter how powerful — is different from thought. It’s a tool for thought to wield.

That means, as Filevine CEO Ryan Anderson noted in a recent LEX Summit speech, AI will not replace the lawyers who learn to adapt.

Staying rooted in the tech-storm.

Technology will continue to develop rapidly, hitting us with news stories that promise new fantasies and stir up old anxieties. But amid the chatter, I want to remember the cycles of hype and disappointment from the past, and learn to focus instead on the steady growth of science, ingenuity, and engineering. 

I spend less time worrying about android lawyers and more time deploying new breakthroughs to create tools for you: tools that anticipate what you might be looking for, find what you need rapidly, offer up actionable insights, expand your reach into new practice areas and languages, connect your firm more meaningfully with your clients, and allow you to harness new forms of profitability.

For my part, I don’t think we have to be overly terrified or excited about the novelty of AI in law. We can simply focus on becoming more skillful at prompting and training the digital partners we help build, to ensure stronger legal work, higher productivity, and better representation for your clients.