AI continues to move at a breakneck pace, and this week’s discussion unpacks the tension between innovation and reliability. We start with surprising research on how often AI assistants send users to broken links, revealing a hidden trust gap in AI search. From there, we explore OpenAI’s major restructuring that puts personality tuning at the core of development, plus the abrupt shutdown of Dot, an AI companionship app raising serious safety and ethical questions.
The conversation widens to global regulation as the EU hits Google with a multi-billion-dollar fine while Meta adds guardrails to its AI ad tools. We also highlight AI’s looming energy demands, new tools like GenStore and LLM Scout, and a practical prompt of the week designed to sharpen your fact-checking skills. It’s a tour through both the promise and pitfalls of today’s rapidly evolving AI landscape.
Key Segments:
00:01:52 – Broken links and “hallucinated” URLs in AI search results
00:07:28 – The AI personality crisis: OpenAI’s internal shakeup and user backlash
00:10:06 – Dot shuts down: safety concerns in AI companionship
00:12:02 – EU fines Google $2.95B for antitrust violations
00:14:01 – Meta’s new ad controls and brand safety guardrails
00:16:00 – AI’s massive energy footprint and infrastructure demands
00:17:37 – Tool of the Week: GenStore, LLM Scout, and other emerging players
00:19:05 – Prompt of the Week: fact-checking AI with credible sources
00:19:53 – Closing reflections: balancing speed, safety, and responsibility
Read "Teams Shift, Companions End, and Broken Links" on Substack right now!
https://open.substack.com/pub/theaivaults/p/teams-shift-companions-end-and-broken