Hi there, this is your daily Neuronix.
📰 In today's Neuronix:
🕵️‍♂️ Lawsuit calls Perplexity’s ‘Incognito Mode’ a sham LINK
🛡️ Hackers weaponize Claude Code leak on GitHub LINK
🇯🇵 Microsoft to invest $10B in Japan for AI and cyber LINK
🇨🇳 China moves to regulate digital humans, curb addictive apps LINK
➕ 10 other news stories & articles you might like
🧰 4 trending tools
📚 3 trending papers & reports
🕵️‍♂️ Lawsuit says Perplexity’s ‘Incognito Mode’ is a sham that shares chats with ad platforms LINK
A new complaint alleges Perplexity’s “Incognito Mode” misleads users while their chats are shared with Google and Meta to boost ad revenue.
Plaintiffs claim “millions” of private chats were transmitted and seek class-action status, damages, and an injunction.
The case intensifies scrutiny of privacy disclosures and data flows across AI chat products.
🛡️ Claude Code leak exploited to push info‑stealer malware on GitHub LINK
Attackers used repositories tied to the Claude Code source leak as lures, delivering info‑stealer malware disguised as developer tools.
The campaign leverages cloned repos and malicious binaries to harvest credentials and other sensitive data.
Security teams urge developers to verify sources, signatures, and checksums before installing “leaked” tools.
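As a minimal sketch of that verification step (the filename and workflow here are illustrative placeholders, not tied to any real release; in practice the expected hash comes from the publisher’s official release page, not from the download itself):

```shell
# Verify a downloaded artifact against a published SHA-256 checksum before
# installing it. "tool.tar.gz" is a stand-in for a real download; normally
# "expected" would be copied from the vendor's signed release notes.
printf 'example artifact contents\n' > tool.tar.gz
expected=$(sha256sum tool.tar.gz | awk '{print $1}')

actual=$(sha256sum tool.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK: safe to proceed"
else
  echo "checksum MISMATCH: do not install" >&2
  exit 1
fi
```

For signed releases, `gpg --verify` against the publisher’s known key adds a second, stronger check, since a checksum hosted alongside a tampered file can be tampered with too.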
🇯🇵 Microsoft to invest $10 billion in Japan to expand AI and cybersecurity LINK
Microsoft will spend $10B to grow AI infrastructure and cybersecurity capabilities in Japan, including additional data center capacity.
The initiative is framed as a public‑private effort to accelerate digital transformation and defense readiness.
It underscores intensifying regional investment as providers race to localize compute, talent, and partnerships.
🇨🇳 China unveils rules to regulate digital humans and bans addictive services for minors LINK
New measures target AI‑generated avatars and tighten controls on services deemed addictive for children.
Platforms and vendors will face stricter obligations around content, safety, and identity standards.
The policy continues Beijing’s push to shape AI usage via youth protection and compliance enforcement.
🌐 Other news & articles you might like
Claude Code leak raises privacy alarms over data collection LINK
UK’s leading AI research institute ordered to make ‘significant’ changes LINK
Economists see stronger links between AI and jobs LINK
Microsoft warns Copilot isn’t for serious advice, Tom’s Hardware reports LINK
AI architects warn on job losses, but few listen LINK
Courts test legal shields as cases target Meta and Google LINK
Reddit’s r/programming bans LLM content LINK
Experts say Anthropic’s next model could reshape cybersecurity LINK
Colorado right‑to‑repair law faces pushback from tech giants LINK
Report: Banks seeking SpaceX IPO allocation asked to subscribe to Grok LINK
🧰 Trending tools
Gemma 4 (Open‑weight models): Google’s next‑gen open‑weight LLMs for fast, lightweight deployment and fine‑tuning. LINK
Microsoft AI Agent Governance (open source): Runtime security framework to govern and secure AI agents during execution. LINK
Cursor Coding Agent Platform: An AI agent experience to generate code, analyze projects, and fix errors via natural language. LINK
DeepSeek Next‑Gen Model (preview): Upcoming AI model positioned to benefit Huawei’s ecosystem and hardware stack. LINK
📚 Trending papers & reports
Anthropic researchers identify distinct, causally influential “emotion” circuits emerging inside a large language model. LINK
Researchers demonstrate a low‑cost, portable AI‑powered eye scanner to expand community screening access. LINK
MIT study argues the near‑term AI job apocalypse is overstated, with adoption costs slowing displacement. LINK