Why We Feel Guilty for Getting Smarter
AI & LIFE|March 11, 2026

By Connie Connors | March 11, 2026

On a train this morning, I heard the guilt out loud.

Somewhere in our recent, post-industrial world, we turned "effortless" into a moral crime. Admitting help, whether from a friend, a grammar tool, or a chatbot, suddenly feels like confessing to "doping." We qualify everything. Today on the train, I overheard a man on his phone insisting, "I'm speaking from experience. Not from, like, the Internet, you know." At the dance studio, a woman tells her friend, "I had a calling to go into the bookstore and buy this book." Both disclaimers are tiny acts of self-defense: I didn't cheat. I intuited it.

Intellectual Guilt

We're living in the age of intellectual guilt. Everyone's quietly using AI to draft awkward conversations, negotiate breakups, and rehearse how to ask for a raise, but no one wants to be caught using it. Admitting it implies you couldn't think it through yourself. In a recent study, more than half of Americans said they feel negatively about AI, yet many of those same people are swapping prompts like recipes.

It's hypocrisy rendered in 4K, a kind of secret digital intimacy happening behind locked screens. In class the other day, one of my students said the liberal arts professors don't allow them to use AI; they make them write essays by hand, in "blue books" (invented in the 1920s), as if handwriting ensures philosophical purity. Then those same students step into a job interview and worry which answer is safer: "I never touch AI" or "I use it every day." Approval. Guilt. And yes, I do teach at a Catholic, Jesuit university.

I took a photo of a building mural in Ireland. It struck me as unintentionally perfect for the moment we're in with AI. Or is that an Amazon logo?

The truth is, our discomfort isn't with the technology. I don't think people truly "hate" AI. (Rex Woodbury speculates on what's behind the hate in his newsletter this week; fear over the loss of jobs is one reason.) It's what AI exposes: intelligence was never spotless. We've always outsourced parts of our thinking. We used to call it collaboration, or instinct, or Google. Now "it" has a name, a chatbot face, and suddenly we're ashamed.

The Case for Aidan, But of Course

My girlfriend (she's reading this) told me that AI probably saved her from what could have become an ongoing argument. Can't we all just admit this? Whenever I feel the cortisol rising over a situation I'm struggling to handle, I turn to a chatbot. It helps.

She also told me about Aidan, they/them, whom she describes as one of her "junior interns." Aidan helps polish tricky notes and awkward replies. There's something touching about that: turning AI into an imaginary staff member or assistant, just far enough from ourselves that the shame doesn't quite stick.

Primetime Weighs In

Even television has started to clock this dynamic. In the medical drama The Pitt, after an AI system makes a mistake, one of the residents jokes that AI really stands for "almost intelligent," while the AI-enthusiast doctor counters that it's "98% accurate" but must always be checked. It's sold as a safety warning, but it also captures the mood: AI as the eager but supervised intern, good enough to help but never trusted to lead.

"At its core, AI is there to support, not replace, the clinician. The goal is to reduce friction, catch risk sooner, and ultimately give doctors and nurses more time to focus on what matters most: caring for patients face to face." —Rana Kabeer, MD, via MD Linx

That's the slot we keep giving AI: the eager intern. Almost intelligent. Good enough to use, not respectable enough to admit.

Which is absurd, because the place AI is already doing the most good is exactly where angst lives. Not in the operating room, but in the kitchen, as you're counting sheep, staring at your phone.

I've been working on a little experiment: what happens if you treat AI not as a genius oracle or a cheating device, but as a nosy friend whose only job is to help you think through a decision without spiraling? Stay tuned for GoShed.app. It's not fully functional yet, but the goal is to provide some emotional relief by helping you make decisions about the stuff, the junk, the treasure you own, possess, or keep in your shed.


THINGS TO TRY

  • Perplexity. If you haven't used it yet, it's time. It's a great research partner. I recently asked it to summarize Peter Drucker's management books, which I relied on heavily when building my first business in my 20s. Yes, I accepted "help" back then, but I was proud of my choice. Try Perplexity for anything slightly research-based, but think broadly. Ignore the constant nags to upgrade to the paid version.
  • Revolut. Not AI (wish it was), but it's a global version of Venmo for sending money. It was widely accepted by taxi drivers in Ireland, and I was glad to have it. A word of caution: if you're used to Venmo, this doesn't work the same way. It will require a bit of study before you start moving currencies around.
  • Grammarly. If you've been using it off and on for a few years and most likely have a love-hate relationship with it, revisit it. Like most simple tools, it's added a lot of features, and that may tip the scales on how you feel about it.

Until Next Time

Stay curious. And if almost intelligent helps keep you from sending the worst version of yourself into the world, I'd say that's more than enough.

Connie

Enjoyed this? Get Almost Intelligent free. → Subscribe