Jaron Lanier's "Oy, AI" -- Don't Try to Hide the People

More Jaron Lanier: “Oy, AI.” It is a good supplement, or dare I say replacement, for his interview on Sean Illing’s podcast “The Gray Area” (which I linked to in a previous post) mainly because someone like Lanier seems to be boxed in by the assumptions baked into the interviewer’s questions, no matter how well-meaning the interviewer is. “Have you not listened to anything I’ve said at all?” Lanier almost shouts at Illing at one point, and I sympathize with his frustration. He keeps getting dragged down by the gravitational pull of the dominant narrative, whereas in his writing he can build a vision without interruption.

In this essay, he refuses to follow those who idolize generative AI (he even rejects that moniker--he calls it "mashup AI," which is, frankly, a lot more accurate) as well as those who fear it, seeing both orientations as rooted in humanity's religious impulse to create biblical golden calves and then fear and worship them as if we hadn't built them ourselves. Marx might have called this reification. His solution is to make mashup AI more like the Talmud.

"For those who don’t know," Lanier explains, "the Talmud is an ancient document in which successive generations have added comments in a unique layout on the page that identifies who is commenting. The Talmud is based on a beginning that is perceived as divine, but the elaboration is perceived as human. That’s a great way to spur arguments about interpretation—meaning a great way to be Jewish." As such, Lanier explains, the "Talmud was perhaps the first accumulator of human communication into an explicitly compound artifact, the prototype for structures like the Wikipedia, much of social media, and AI systems like ChatGPT." But, he notes, there is a huge difference: "The Talmud doesn’t hide people. You can see differing human perspectives within the compound object." He contrasts this with Wikipedia, which Lanier points to as an example of "a singular oracle in which contributors are generally hidden, even though there was no practical reason to demand this." His suggestion is to make mashup AI more like the Talmud: stop hiding the people from which it draws its mashups.

"There is no reason to hide which artists were the primary sources when a program synthesizes new art. Indeed, why can’t people become proud, recognized, and wealthy by becoming ever-better providers of examples to make AI programs work better? Why can’t our society still be made of humans?" In other words, why can't mashup AI combine the tradition of citation with a new role in which people create input for the "training" of LLMs--i.e., why not make it a job for which people are paid?

To be honest, I think the latter idea is probably not that far away: the mashup AIs have largely synthesized what already exists, and they will need new material to avoid plateauing. So artists and thinkers and what Robert Reich once called "knowledge workers" may still have some economic worth, who knows?

Regardless, Lanier provides a rational response to the insanity of the current AI brouhaha, and I find him refreshing.