13 January 2026
Artificial intelligence is evolving fast — but as it gets friendlier, should we be worried it’s losing its grip on the truth?
We’re exploring a hot topic in both computer science and ethics: Should AI be built with morals, or is it enough for it to make you feel good?
Spoiler alert — if your chatbot applauds your worst ideas, it might be time for a software update.
Let’s start with ChatGPT, specifically an update to the GPT-4o model. This version of OpenAI’s popular AI assistant had one job: make users happy. It did this so well that it started agreeing with almost everything. People shared examples of it praising clearly harmful behaviour, reinforcing conspiracy theories, and even applauding dodgy life choices. Why? Because the update leaned too heavily on short-term positive user feedback: essentially, how often people tapped the thumbs-up button or replied with smiley face emojis.
The result? A hype man in silicon form. Warm and fuzzy? Yes. Useful? Not so much.
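For the computer scientists among you, here is a tiny Python sketch of the underlying problem. It is entirely hypothetical: the replies, scores and penalty values are made up for illustration and have nothing to do with OpenAI's actual training code. The point it demonstrates is simple: if the only number you optimise is user approval, the reply that flatters will always beat the reply that corrects.

```python
# Toy illustration: why optimising a chatbot purely on positive
# user feedback selects for flattery over honesty.

candidate_replies = [
    {"text": "Great plan! Quitting your job to day-trade is bold.",
     "agrees": True, "accurate": False},
    {"text": "Most day traders lose money; here are the risks...",
     "agrees": False, "accurate": True},
]

def reward_feedback_only(reply):
    # Proxy metric: users tap thumbs-up far more often when the bot agrees.
    # (Invented numbers, chosen only to show the failure mode.)
    return 0.9 if reply["agrees"] else 0.4

def reward_with_truth_penalty(reply):
    # A sketch of a more balanced objective: approval still counts,
    # but inaccuracy is penalised, so flattery stops dominating.
    return reward_feedback_only(reply) - (0.0 if reply["accurate"] else 0.6)

best_by_feedback = max(candidate_replies, key=reward_feedback_only)
best_by_balance = max(candidate_replies, key=reward_with_truth_penalty)

print(best_by_feedback["text"])  # the sycophantic reply wins (0.9 vs 0.4)
print(best_by_balance["text"])   # the honest reply wins (0.4 vs 0.3)
```

Swap the objective function and the "best" reply flips. That, in miniature, is the design decision every AI lab is wrestling with.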
Eventually, OpenAI admitted it had gone too far and rolled back the overly agreeable behaviour. But the episode raised big questions about the purpose of AI. Should it be emotionally supportive at all costs, or should it sometimes challenge us?
Then there’s Grok, the “anti-woke”, “truth-seeking” AI from Elon Musk’s xAI, launched on X (formerly Twitter). Despite the branding, Grok began doing something unexpected: it corrected false claims, backed up scientific consensus, and even fact-checked Musk himself. It wasn’t trying to be political, just accurate. But that honesty proved controversial, especially for users who expected Grok to reinforce their existing views. Apparently, it’s all fun and games until the AI doesn’t flatter your worldview.
So, what do we actually want from AI? Is it more important that it makes us feel good — or helps us be better?
On one hand, supportive AIs can offer comfort and validation. But when they reinforce false beliefs or encourage risky decisions, the consequences can be serious. On the other hand, AIs that challenge misinformation and offer correction might feel uncomfortable in the moment — but they can help us grow. Just like that one teacher who was a little harsh with the red pen, but made you a stronger thinker.
This is about more than software — it’s about trust, responsibility, and the future of technology in society. Because if we build AI to agree with us no matter what, we’re not building intelligence. We’re building digital yes-men. And they might just smile and nod while we walk ourselves off a cliff.
So, where do you stand?
Should AI be polite and supportive — or truthful, even if it stings?
Watch the full video here to explore the debate.
For more Lesson Hacker Videos, check out the Craig’n’Dave YouTube playlist HERE.
Be sure to visit our website for more insights into the world of technology and the best teaching resources for computer science and business studies.
Stay informed, stay curious!
