Hi there, this is your daily Neuronix.

📰 In today's Neuronix:

  • 🧠 Google debuts Gemma 4 open-weight models LINK

  • ⚙️ Microsoft adds a ‘mid-class’ AI model LINK

  • ⚖️ Judges are turning to AI in court work LINK

  • 🛡️ Anthropic’s takedowns hit thousands of repos LINK

  • 🚸 Call to ban AI ‘slop’ from YouTube Kids LINK

  • 12 other news & articles you might like

  • 🧰 6 trending tools

  • 📚 3 trending papers & reports

🧠 Google debuts Gemma 4 open-weight models LINK

  • Google released Gemma 4, a new family of open‑weight models it touts as “byte for byte” the most capable among open models.

  • The models are commercially usable under the Apache 2.0 license.

  • Developer access is available via Hugging Face and Ollama on day one.

⚙️ Microsoft launches ‘mid-class’ AI model as compute limits bite LINK

  • The new mid‑tier model aims to balance capability and cost as GPU supply remains tight.

  • It broadens Microsoft’s AI lineup beyond top‑end systems to better fit mainstream enterprise workloads.

  • The move is designed to sustain Copilot and Azure AI adoption without runaway infrastructure spend.

⚖️ Judges are increasingly using AI to draft rulings and prepare for hearings LINK

  • Courts across the U.S. are testing AI to summarize filings, draft orders, and prep for hearings.

  • Scholars and attorneys warn about transparency, bias, and litigants’ ability to challenge AI‑assisted work.

  • Judicial systems are crafting guidelines to define acceptable use and disclosure of AI tools.

🛡️ Anthropic’s attempt to yank its leaked source code took down thousands of GitHub repos, removals the company says were accidental LINK

  • In a bid to remove leaked Claude source code, Anthropic triggered mass takedowns that swept up unrelated repositories.

  • The company said the removals were accidental and affected projects were restored.

  • The incident highlights collateral risks of broad takedown actions for open‑source developers.

🚸 AI ‘slop’ is flooding YouTube Kids, and more than 200 groups and experts are calling for a ban LINK

  • Over 200 child advocates and experts urged YouTube to ban AI‑generated videos from its Kids platform.

  • They argue the low‑quality, algorithmic clips are harmful to children while driving ad revenue.

  • The push increases pressure on Google to impose stricter guardrails on AI content for kids.

🌐 Other news & articles you might like

  • Microsoft’s new ‘superintelligence’ game plan is all about business LINK

  • Amazon adds 3.5% fuel and logistics surcharge for sellers LINK

  • Tesla shares drop after disappointing Q1 deliveries LINK

  • Visa is bringing AI to credit card charge disputes LINK

  • Claude’s new limits are frustrating its most devoted users LINK

  • AI tractor startup collapses after burning $240M LINK

  • Microsoft hit ‘audacious’ Copilot goals after Wall Street input LINK

  • What happened when they installed ChatGPT on a nuclear supercomputer LINK

  • More students are switching majors due to AI: poll LINK

  • Nexon: Arc Raiders is a ‘Trojan Horse’ for AI-built AAA with smaller teams LINK

  • Users say Adobe Creative Cloud rewrote hosts file LINK

  • LinkedIn is illegally searching your computer, campaign alleges LINK

🧰 Trending tools

  • Gemma 4: Google’s new open‑weight LLM family; commercially usable (Apache 2.0) with strong efficiency claims. LINK

  • Droidrun: Natural‑language agent framework for automating Android apps. LINK

  • Hermes Agent: Autonomous AI agent framework with memory and self‑improvement (setup walkthrough). LINK

  • Visa Dispute Management AI: Visa’s new AI tools help merchants and issuers triage and resolve chargeback disputes. LINK

  • Gen‑1: Video‑to‑video generative model demo for stylizing and transforming footage. LINK

  • Gemma 4 on Ollama: Run Gemma 4 locally via a simple ollama pull. LINK
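If you want to try the local route described above, a minimal sketch of the setup, assuming Ollama is already installed and that the weights are published under a `gemma4` tag (the exact model tag is an assumption; check the Ollama model library for the published name and available sizes):

```shell
# Download the open weights locally.
# NOTE: the `gemma4` tag is an assumption — verify the published
# name in the Ollama model library before pulling.
ollama pull gemma4

# Run a one-off prompt against the locally hosted model.
ollama run gemma4 "Summarize today's AI news in one sentence."
```

Once pulled, the model runs fully offline, which is the main draw of the open-weight release for developers experimenting locally.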

📚 Trending papers & reports

  • Policy analysis argues mainstream AI assistants may promote cultural homogenization and narrow worldviews. LINK

  • Full Fact investigation finds TikTok’s moderation can penalize posts that debunk misinformation due to automated rules. LINK

  • Incident report details a supply‑chain attack via a compromised LiteLLM package that led to ~4TB of data exfiltration at Mercor. LINK

Keep Reading