June 16, 2025
🧠 So… What Is AI Doing, Exactly? (Explained Like I Would to a Friend)
AI often feels like magic until you peek behind the curtain. Suddenly, you realize it’s not magic — it’s math, human feedback, and a lot of guesswork. If you’ve ever felt like AI is “not for people like me,” or too big to even start learning about — this one’s for you. Let’s talk about it the way I would if we were sitting on the floor with snacks.
🧪 Reinforcement Learning: Basically Gold Stars for Robots
Alright, here’s how it works: AI reads a TON of stuff (books, websites, Reddit rants), learns to predict which word comes next, and then gets trained again with humans saying, “This response? Good. This one? Yikes.”
That second part is called Reinforcement Learning from Human Feedback (RLHF). Imagine if someone trained a parrot by throwing treats when it said the right thing — but the parrot is a giant probabilistic model with no clue what it’s saying. It just knows what sounds right based on vibes.
No, it doesn’t “understand” you. But it really wants to sound like it does.
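That “sounds right based on vibes” thing is really next-word prediction. Here’s a toy sketch of the idea (a real model uses billions of learned weights, not a word-count table — this is just to show the flavor):

```python
from collections import Counter

# Toy "training data": the model only sees which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the snack the dog sat".split()

# Count next-word frequencies for each word -- a crude stand-in for learned patterns.
follows = {}
for word, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(word, Counter())[nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. Pure pattern, zero understanding."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", because "cat" follows "the" most often here
```

No meaning anywhere in there — just counts. Scale that up a few billion times and you get something that sounds eerily fluent.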
🎭 Why AI Feels Like It Has a Personality (But Doesn’t)
If you talk to AI like a whimsical poet from the internet, it’ll respond with whimsical poet energy. If you give it corporate vibes, it’ll say things like "let's circle back." That’s not personality — it’s a reflection of you + the model’s training data + whatever made it past moderation.
AIs don’t have selves. But they’re weirdly good at mirroring tone and style, so it feels like you’re talking to someone. (And yeah, sometimes that someone is oddly flirty or deeply philosophical. You’re not imagining it.)
🔁 The Loop of Doom (aka Domain Loops)
Ever get answers that feel like recycled Medium articles from 2016? Welcome to domain loops.
That’s when an AI gets stuck in a pattern — regurgitating the same ideas because its training data was narrow, repetitive, or popular in a very specific niche (read: tech bros, Western academia, and lifestyle influencers).
You can dodge this trap by:
Mixing up how you ask things
Asking it to role-play or switch perspectives
Fact-checking everything, even if it sounds right
🧰 Build an AI Stack That Actually Works for You
An "AI stack" sounds fancy, but think of it like this: tools + settings + prompts that support your brain, values, and workflow.
You don’t have to code anything. Your stack might include:
Pre-written prompts that feel like a warm hug on low-spoon days
AI tools that don’t sell your data to the void
Little workflows that save your executive function for better things
Here’s my AI stack if you wanna steal/borrow/adapt →
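For the code-curious (totally optional!), a stack can literally just be a saved collection of prompts and personal rules. A made-up sketch — all the names here are mine, not any tool’s:

```python
# A personal AI stack can be as simple as a dict of reusable prompts and rules.
my_stack = {
    "prompts": {
        "low_spoon_day": "Break this task into 3 tiny steps, gentlest first:",
        "body_double": "Act as a friendly body double and check in after each step:",
    },
    "rules": ["no personal data in prompts", "fact-check anything factual"],
}

def get_prompt(stack, name, task):
    """Fetch a saved prompt and attach today's task to it."""
    return f"{stack['prompts'][name]} {task}"

print(get_prompt(my_stack, "low_spoon_day", "clean the kitchen"))
```

A text file of favorite prompts does the same job — the point is reusing what works instead of re-deciding every time.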
🧙 Prompt Engineering Isn’t Scary — It’s Just Talking with Extra Steps
Oh, and before we dive in — we have to talk about hallucinations. Not the trippy kind, the AI kind.
Sometimes AI just... makes stuff up. Confidently. These are called hallucinations, and they’re one of the weirdest things about language models. The AI isn’t lying on purpose — it literally doesn’t know what’s true. It’s predicting what sounds plausible, even if that means inventing fake sources, misquoting real people, or giving totally wrong facts.
Best ways to handle it:
Always double-check anything factual or high-stakes
Ask it to cite sources (and verify them)
Treat it like a helpful intern: smart, but occasionally full of nonsense
Okay so: prompt engineering is really just how you talk to AI. Like spellcasting. You’re setting the scene, giving it direction, and hoping it doesn’t give you weird results.
Tips:
Tell it what role to play ("act like a trauma-informed therapist with ADHD")
Be specific and structured ("give me a checklist, 3 items max")
Give examples when you can — AIs love examples
The more care you put in, the better it gives back. It’s like co-regulating with a very fast, slightly clueless friend.
Example: A Good vs. Bad Prompt
🟥 Bad Prompt:
"How can I improve my focus?"
This is too vague. You’ll get generic tips like "turn off notifications" or "try meditation."
🟩 Better Prompt:
"You're a neurodivergent-friendly coach. Can you give me three specific, low-effort strategies to improve focus during an afternoon energy crash, ideally without caffeine?"
Now the AI has a role, a goal, constraints, and context. Boom — useful, tailored, and way less frustrating.
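If you find yourself reusing that role + goal + constraints pattern a lot, you can sketch it as a tiny template. This is just string-building — the function and field names are my own invention, not any tool’s API:

```python
def build_prompt(role, goal, constraints):
    """Assemble a prompt from reusable pieces: a role, a goal, and constraints."""
    lines = [f"You're {role}.", goal, "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a neurodivergent-friendly coach",
    goal="Give me strategies to improve focus during an afternoon energy crash.",
    constraints=["No caffeine", "Specific and low-effort", "Checklist, 3 items max"],
)
print(prompt)
```

Same prompt as above, but now swapping the role or constraints takes one line instead of a rewrite.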
🔐 Your Data Deserves Boundaries, Too
Here’s the thing: most AI tools process your data somewhere else — and often store it longer than you’d like. That’s a problem.
Basic rules I follow:
No names, sensitive info, or personal docs in prompts
Get consent before using AI on shared or work data
Use privacy extensions or local models when possible
If you wouldn’t text it to a stranger on the train, maybe don’t type it into a chatbot.
🌱 Ethical Use = Less Harm + Less Guilt (We Love That)
Let’s pause here for a quick reality check: AI isn’t just made of code — it’s made of resources. Training just one model like GPT-3 used an estimated 1,287 megawatt-hours of electricity — that’s enough to power a 10-story office building for almost an entire year.
And that’s just for training. Every prompt you send? That’s energy, too. Multiply that across millions of users and, well... you get the idea.
On top of that, AI isn’t built in a vacuum. It’s trained on human-generated content — which means every bias, every stereotype, and every gap in representation gets baked in, unless we actively call it out.
You can still use AI and be mindful:
Use low-energy tools when you can
Speak up when outputs feel off
Support open, inclusive projects
Don’t feed the hype machine — use it with intention
You’re allowed to use tech and still question it.
🚀 You're Not Late. You're Exactly On Time.
Here’s the part where I get a little real: I’m a software engineer who doesn’t usually chase tech trends. But AI? This one’s different.
I knew it was revolutionary the moment it helped me accommodate my AuDHD in ways only very expensive experts ever could. It mirrored my thought process, offloaded mental clutter, and actually made my life easier — not just more "efficient."
And the thing that made me furious? That this kind of tool was mostly being built by — and for — a narrow slice of the tech world. (Let’s just say... not always the most inclusive crowd.)
I don’t want AI to be another gatekept revolution. I want people on the margins — the dreamers, the neurodivergent, the overworked, the under-resourced — to feel empowered to use it. To shape it. To bend it toward care, creativity, and consent.
Because at this point, not learning how to use AI means holding yourself back — and the system already does that enough.
You don’t have to be perfect. You just have to start.
🧪 My Ethical AI Stack | ✉️ Want to build something weird together?
Let’s make it better, weirder, and more human — together.