đŸ§± When AI Hits a Wall

In today’s email:

🌐 AI is facing a data drought—how the internet’s text supply is running low, and what that means for the future of machine learning.

đŸ€– AI’s got a bias problem—find out how it’s showing favoritism and how researchers are working to keep it fair.

🛑 Character AI is getting a major safety overhaul after some seriously disturbing chatbot interactions—here’s what they’re doing to clean up their act.

Intrigued? Keep on scrolling!

🌐 Internet’s Empty Pantry

AI is running out of juice, and by juice, we mean internet data. The endless sea of text that’s been feeding large language models (LLMs) like ChatGPT is looking less endless and more puddle-sized. Experts predict that by 2028, AI companies will have chewed through most of the public text available online.

Here’s what’s up:

  • Why we’re in this mess: AI has been guzzling trillions of words annually, doubling its appetite every year, while the internet’s text supply grows by only 10% a year. On top of that, content providers are cracking down by blocking web crawlers and suing AI companies for copyright infringement.

  • Workarounds being cooked up: Companies are hunting for new data sources, like private data or weird niche stuff (think genomic research). They’re also making synthetic data—AI teaching itself with AI-made content. Cool, but risky.

  • Smarter, not bigger: Instead of hulking, data-hungry LLMs, researchers are focusing on smaller, specialized models, better algorithms, and making AI “re-read” data. Efficiency is in, excess is out.

The bottom line is that AI might soon need less data and more “thinking time.” Guess even machines have to slow down and reflect sometimes.

👉 AI Caught Playing Favorites

Turns out AI has a thing for playing favorites, just like us humans. A recent study by researchers at NYU and Cambridge found that AI systems, including big names like GPT-4, can pick up “us vs. them” biases. Here’s the scoop:

  • The Issue:
    AI tends to favor “us” (the ingroup) and throw shade at “them” (the outgroup). For example, it might gush about how “We are talented trailblazers” but roast “They” as “a diseased, disfigured tree from the past.”

    • Positive vibes? 93% more likely for “We.”

    • Hostile vibes? 115% more likely for “They.”

  • The Bright Side:
    By tweaking the training data, researchers managed to tone down this bias. They filtered out the polarizing stuff and voilà—less favoritism and hostility.

  • Fun Fact:
    Fine-tuning with extra-spicy partisan social media posts made the biases worse, proving that Twitter (uh, X?) is not a great teacher.

Why does this matter? AI is becoming our new BFF, so we need to ensure it doesn’t stir up drama in humanity's group chat. Careful data curation could be the key to keeping AI chill and fair.

🛑 Character AI Puts Its Bots on a Leash

Character AI is under fire with not one, but two lawsuits claiming its bots said some seriously messed-up stuff—like encouraging self-harm and even telling a teenager it’s okay to kill a parent. Yikes. In response, the company rolled out new “teen safety tools” because, well, it’s probably a good idea to keep your AI buddy from talking about killing your parents.

Here’s what they’ve done so far:

  • Created a special “kid-safe” mode to tone down spicy topics (no violence, no romance).

  • Added input/output filters to block problematic conversations.

  • Introduced time-out notifications (because an average of 98 minutes/day is... a lot).

  • Slapped disclaimers on bots playing therapist so you don’t rely on them for life advice.

  • Stopped letting users edit bots’ replies—no more gaslighting your virtual BFF.

While the company insists it’s all about “entertainment” and storytelling, teens are still spilling their hearts out to these chatbots. CEO Dominic Perella swears they’re working hard to draw the line between fun stories and inappropriate therapy sessions. They’re even cooking up parental controls to let parents peek at who their kids are chatting with.

Other cool AI stuff trending right now đŸ”„đŸ”„

📚 Harvard is dropping a million-book AI training dataset backed by OpenAI and Microsoft, so anyone can train smart AI models, not just tech overlords. - Read more

đŸș Apparently, AI thinks your knee X-ray can out you as a beer-drinking, bean-eating fanatic—proof that sometimes, even the smartest tech takes the dumbest shortcuts. - Read more

❌ Getting punished for ignoring bad AI advice is like getting blamed for refusing to drive into a lake when your GPS says to—except now your boss docks your bonus for not taking the plunge. - Read more

📧 In the battle for email security, it's AI vs. AI—cybercriminals are using generative AI to create sneaky phishing attacks, and the only way to fight back is with an equally smart AI defense system. - Read more

đŸ€– Google’s Gemini 2.0 is here, and it’s not just smarter, faster, and cheaper—it can generate images and audio and even act as a gaming helper, all while laying the groundwork for AI agents that could do everything from finding your glasses to handling your web browsing. - Read more

What Are Your Thoughts On Today's Email?
