May 30, 2025
Custom GPT for Good
✨ Intro: From Skepticism to Custom System
I’ve been wary of AI ever since large language models hit the mainstream.
At first, it felt like just another buzzword—blockchain, bitcoin, now this. If you’ve worked in tech for any length of time, you know the cycle. Every year brings a new hype curve, and I didn’t see a reason to jump on the LLM train.

Then came the headlines about energy consumption. As someone who deeply values environmental preservation, I couldn’t justify contributing to something with such massive ecological repercussions. It felt extractive, excessive—yet another tool designed with scale in mind, not sustainability.
So I stayed away.
That is, until last week—when I was prepping for an upcoming move and decided to try using AI to get interior design suggestions tailored to my new AuDHD diagnosis. I expected generic Pinterest noise.
What I got instead stopped me in my tracks.
AI was surfacing highly specific, ND-informed lighting setups, ergonomic workstations, and even queer- and neurodivergent-friendly community spaces in the Brooklyn neighborhood I’m moving into. What would’ve taken me hours of executive-function-draining research was handled in 30 minutes—with clarity, flexibility, and a surprising level of care.
That’s when the power of large language models clicked for me.
But my concerns didn’t vanish. I still questioned the biases, privacy tradeoffs, and environmental impact baked into these tools. I wanted to use this technology—but only if I could use it on my own terms.
So I started building my own mode.

⚙️ Building My Ethical AI Mode: Blog42 Stack
Over the course of a few weeks, I started shaping the way I interacted with ChatGPT—not just in content, but in structure, consent, and cognition.
What emerged was the Blog42 AI Stack: a personalized protocol for using AI in a way that reflects my values as a queer, neurodivergent privacy engineer and eco-conscious human being.
Here’s what it looks like:
1. 🧪 Source-Check Mode
“If it sounds factual, cite it. If it’s a guess, say so.”
I asked ChatGPT to flag speculative claims, provide sources when possible, and warn me when something can't be verified. I wanted epistemic transparency—not hallucinations in a lab coat.
2. 🌍 Eco-Deep Dive Mode
“Warn me before doing high-compute tasks. Let me choose low-energy alternatives.”
Whether it’s generating images or running long outputs, I requested a consent checkpoint to protect against unnecessary compute load. It’s a tiny gesture in a big system, but it reflects a belief: AI shouldn’t be frictionless if it’s resource-heavy.
3. 🌀 ND Navigator Protocol
“Adapt to my brain. Give me low-stim modes. Ask before deep-diving.”
With AuDHD, the format of information matters just as much as the content. I asked for:
⚡ Low-Compute Mode: Simpler, less visually overstimulating answers.
📐 Threading Control: The ability to toggle between linear and branching ideas.
🔒 Emotional Load Consent: No surprise depth dives unless I ask.
This isn’t about dumbing things down—it’s about being invited, not flooded.
4. 🔍 Bias Radar
“Flag problematic sources, corporate greenwashing, or ethical red flags.”
From ableism to performative ESG claims, I wanted a mode that didn’t just deliver content—it contextualized it. Now, ChatGPT proactively highlights ethical concerns in companies, media, and sources it references.
🧠 Values as System Design Inputs
This whole setup wasn’t about “optimizing productivity.”
It was about reclaiming relational agency in a system that otherwise flattens nuance.
As a privacy engineer, I know how often values get sidelined for convenience. But with this mode, I realized something quietly radical: it’s possible to teach the machine to meet you where you are.
Not through code (though that helps)—but through conversation, context, and boundaries.
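That said, the same boundaries can be expressed in code. As an illustration only (this isn’t part of my original setup, and the mode wording below is my own paraphrase), here’s a sketch of encoding the Blog42 Stack as a reusable system prompt that travels with every request:

```python
# Sketch: encoding the Blog42 Stack as a reusable system prompt.
# The mode descriptions are paraphrased from this post; there is no
# official "ethical mode" API -- this is plain prompt engineering.

BLOG42_MODES = {
    "Source-Check Mode": "If it sounds factual, cite it. If it's a guess, say so.",
    "Eco-Deep Dive Mode": "Warn me before high-compute tasks and offer low-energy alternatives.",
    "ND Navigator Protocol": "Default to low-stim formatting and ask before deep-diving.",
    "Bias Radar": "Flag problematic sources, greenwashing, and ethical red flags.",
}

def build_system_prompt(modes: dict[str, str]) -> str:
    """Assemble the modes into a single system-prompt string."""
    lines = ["You are operating in Ethical AI Mode. Follow these rules:"]
    for name, rule in modes.items():
        lines.append(f"- {name}: {rule}")
    return "\n".join(lines)

system_prompt = build_system_prompt(BLOG42_MODES)
print(system_prompt)

# To use it with the OpenAI Python SDK (assumes OPENAI_API_KEY is set):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[
#         {"role": "system", "content": system_prompt},
#         {"role": "user", "content": "Suggest low-stim lighting for a studio."},
#     ],
# )
```

The design choice here mirrors the post: the values live in one explicit, inspectable place (the system prompt) instead of being scattered across ad-hoc requests.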
🧠 Final Reflections: The AI Is Not Neutral—So Let’s Not Be Either
LLMs are trained on the loudest parts of the internet. That means default interactions often reflect dominant narratives: fast answers, capitalist logic, sanitized bias.
But with intention, we can flip that.
By designing for ethical slowness, for friction, for neurodivergent processing styles—we’re not just hacking ChatGPT. We’re prototyping what respectful, adaptive AI could look like in the hands of people who don’t see themselves in its training data.
Ethical AI mode isn’t perfect. But it’s honest. And it’s mine.
🛠️ Bonus Quick Guide: Make Your Own Ethical AI Mode
If you’d like to try a version of the Blog42 Stack for yourself, here’s how to implement it—whether you’re using free ChatGPT or ChatGPT Plus (GPT-4):
🧭 Step-by-Step (Both Free and Premium Users):
1. Start a new chat.
2. Paste a set-up prompt that spells out the four modes of the stack (source-checking, eco-consent checkpoints, ND navigation, and bias flagging).
3. Wait for ChatGPT to confirm it’s in Ethical AI Mode.
4. You’re ready to go! Prompt normally, and it will apply those principles by default for the rest of the session.
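If you need a starting point, here’s one way such a set-up prompt could read (my paraphrase of the four modes; adapt the wording to your own needs):

```
From now on, operate in Ethical AI Mode for this session:

1. Source-Check: If something sounds factual, cite a source; if it's a guess or can't be verified, say so explicitly.
2. Eco-Consent: Before any high-compute task (image generation, very long outputs), warn me and offer a lower-energy alternative.
3. ND Navigation: Default to low-stim, clearly structured answers. Ask before deep-diving.
4. Bias Radar: Flag problematic sources, corporate greenwashing, and other ethical red flags in anything you reference.

Confirm you're in Ethical AI Mode, then wait for my first question.
```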
💡 Pro Tip for ChatGPT Plus Users (Custom GPTs)
If you use ChatGPT Plus (GPT-4), you can create a Custom GPT that remembers this behavior permanently:
1. Go to Explore GPTs → click Create a GPT.
2. In the “Instructions” field, paste the same Ethical AI Mode set-up prompt from above.
3. Customize its name (e.g., MashaCore Stack or Ethical ND Navigator).
4. Save, and use it anytime!