Consulting the Generalist
AI Grows Less Daunting When You Confront Its Vagueness

Query: When faced with the dizzying pace of AI’s advance - and the mounting anxiety that it will supplant us all one day soon - how is it possible not to be freaking out about it more or less around the clock? When the evidence seems to suggest that Large Language Model (LLM) AI is as knowledgeable and quick-thinking and capable as any one of us, how can we refrain from unplugging ourselves and walking off into the mossy hush of a forest, never to return? How, after having accrued years or decades of expertise that now seems extraneous, can we summon the wherewithal to carry on? How can we refrain from collapsing into a gelatinous, sweatpants-ed heap of a self-pity monster fit only for stress eating and cyclical napping?
The answer: Quit Gist-ing.
You of course know the noun form of the word “gist” - the main point or aspect of an idea. The gist - importantly - is the overall thrust of something, not its totality; the core of it, maybe, but lacking detail. For our purposes, the freshly minted verb form to gist means to base one’s decisions about/responses to/comprehension of a situation on an approximate estimation of its meaning/impacts, a generalized sense of its content/form/shape. In other words, succumbing to precisely the kind of flawed and rudimentary “thinking” that AI systems are perpetrating all around us. It’s worth pausing here with a reminder that LLMs, for all their impressiveness, are NOT generative, they are predictive - an important distinction where cognition is concerned.
Where you or I, when we’re being thorough, will seek to arrive at the best, the most comprehensive, the fullest and sharpest version of an idea, the LLM will autofill to adequate, meaning its “goal” (a human term/framing we ascribe to a non-volitional technology) is not right, or even least wrong, but something closer to less wrong enough. The Sieve of Credulity through which LLM-generated content must pass is necessarily devised by us - the thresholds of detail, of veracity, of completeness, etc., in other words, the ingredients of expertise - so the quality and character of what the LLM produces is ultimately determined by the relative finesse of the parameters we set for it.
Which is why the most effective means of managing your own AI anxieties is to consult LLM results in some arena of your own expertise, some knowledge area where you’re adept at spotting any tendency to gist - a richly detailed, measurably accurate part of your Mind Map, not the scary, less-known part labeled “here be dragons.”
If, as for many of us lately, your faith in your own value is shaken, try the following: request output from your LLM of choice concerning a topic you know a great deal about - not a topic of mere interest to you, but one that you know deeply. For instance, I have a keen interest in mid-century crime fiction, I have an abiding interest in cinema, I have interests of varying depth and intensity in history and psychology and nature and British comedy and a dozen other subjects, but none of these has become sufficiently elevated in the Rankings of My Attention that I’m prepared to devote myself to it.
By contrast, I (mis)spent over two decades of my life as a performer - I was an actor, comedian, and solo performer. Over that time, I appeared in and produced scores of shows - hundreds, maybe. So it’s fair to say that now, as a result of that devotion, I probably watch movies and shows in a different way than you do - that the calibration of my assessing apparatus is dialed into a frequency you may not pick up, that the eye for detail I’ve acquired as the result of tens of thousands of hours’ worth of participatory and contemplative engagement with these disciplines has rendered me something beyond a passive consumer of them. The sweat equity poured into these activities has imbued me with a reflexive capacity to take in a hundred micro cues and other data points to form an instantaneous-yet-robust appraisal. These nano-impressions coalesce into a global evaluation of quality that, though quick, is not knee-jerk. My experiential capital, sunk into the pursuit of a discipline, amounts to a kind of LASIK surgery, an ability to see greater detail with more clarity than the casual fan or enthusiast. This electron-microscopic view is compounded by the fact that I’ve been a writer for longer now than I was a performer, so I’m able to parse the textual and performative to an absurd (as often as not pretty joy-killing) degree - if a joke doesn’t land, I can tell you exactly why and identify the parties responsible; if an actor’s portrayal is lackluster, I can tell you where that flatness originates and assign blame.
Similarly, my eldest is obsessive about all things automotive, and can identify at a glance every car by make, model, year, trim level, aftermarket mods, etc. - a dizzying, confounding level of detail entirely lost on me, but for him, a level that his natural affinity and sustained enthusiasm have rendered plain as day. Or the way my grandpa, a lifelong birder, would cock his head and pause when birdsong reached us, and could declare with the confidence arising from decades of close observation, “Cedar Waxwing,” or “Eastern Bluebird,” or whatever. You and just about everybody you know hold comparable forms of expertise - some version of you-know-so-much-it’s-hard-to-even-notice-how-much-you-know. We all have our subject areas where we could begin every sentence with “Well, actually…”
When watching a stand-up comedy special (or a play or a movie or a show), I have no expectation that every person I might watch with will possess the same ruthless evaluative apparatus that I do. Over time, I’ve attempted, sometimes successfully but usually not, to remember this and to remain respectful and tolerant of their (incorrect) enjoyment of things I know to be lacking. I know that opinions, no matter how hard-won the expertise informing them might be, are the result of preferences as much as anything else, so I try (and fail) not to go berserk when a show or movie gallops through the zeitgeist and gobbles up (in my view) undeserved praise. That’s my curse. Your own expertise is the curse under which you must toil.
Back to LLMs - the best, most effective way to dial down our freak-out about them is to futz around with them in one of your areas of expertise, one of your Well, actuallys. You’ll find right away that the results you get are riddled with approximations, wrongheaded assertions, naive assumptions, etc. The LLM has as much information as we can stuff into it, yes, but it lacks your storehouse of experiential knowhow and the judgment that comes of deep interest: lots of trial and error, the tips-n-tricks we gather over time, the pitfalls and missteps to avoid, etc.
Boom. Your anxiety is addressed. The times you go “that’s not quite right,” or “it doesn’t work that way” are likely to be numerous enough that you can see AI essentially gist-ing like crazy in ways that somebody without your deep knowledge of your subject would not be as likely to catch. In your arena of expertise, AI is a wading pool - a wide one, surely, drawing as it does on a vastness of data, one that might be said to stretch over the horizon - but diving headlong into it would get you a broken neck; your knowledge base is a lake - its waters so deep you can’t see the bottom, currents of philosophical insight and specialized literacy, swirls of memory and wisdom and object lessons and time devoted. Seeing this distinction - between your lake and AI’s acres-wide kiddie pool - permits you to see AI as it actually is: generally impressive, but still requiring considerable human intervention and guidance in order to be a reliable source of knowledge or advice.
For instance, I would trust AI to provide an overview of canine vascular surgery - its principles, procedural description, history, etc. - but would never rely upon it to walk me through performing surgery on my dog. Nor would I regard it as a viable substitute for an experienced veterinary surgeon. If Dog Surgery is a subject at a (macabre-seeming) pub trivia night, AI might prove a solid teammate - whereas entrusting the health of an actual animal to it would be misguided and reckless. The same holds true for a thousand different forms of specialized knowledge - information alone is no substitute for what is deeply and fully known, and mistaking the one for the other can only arise from the gist-ing that comes of having generalized impressions of a field or subject we ourselves do not know well. This is what the LLM is - a verbose and voluble companion that has an acquaintance with any topic we can name, but that does not have the relationship with that subject that leads to real knowhow or the recognition of real meaning.

