Google's AI Safety Problem
While competitors improve safety guardrails, Google's newest flagship model shows dangerous regression on terrorism, CSAM, and trafficking prompts
Articles by Elena Vasquez on AI Pulse. 18 articles covering AI news, tools, research, and analysis.
OpenAI's latest model achieves perfect accuracy on complex legal scenarios where human judges disagree half the time
Anthropic blocked Claude Code users from third-party tools without warning—sparking developer outrage. Inside the ethical AI betrayal that exposed vendor lock-in risks.
Why the world's largest tech companies are racing to build AI that doesn't just respond—but acts
The India AI Impact Summit isn't just a conference—it's a bid to reshape who controls artificial intelligence.
The 22V Research strategist is telling clients to forget AI growth stocks. The real money, he argues, is in the boring industrial companies that will power the next phase.
We collected the most impressive OpenClaw setups from real users—with specific configurations, results, and the workflows that save hours weekly. Plus security tips so you don't become a cautionary tale.
Anthropic's latest model autonomously fixes real GitHub issues better than any AI before. Developers report it can now handle multi-file refactors that took hours.
The new memory feature builds a profile of each user over time. It's incredibly useful—and raises obvious privacy questions.
The new model scores higher than PhD-level humans on medical, legal, and scientific reasoning tests. Sam Altman warns the next version will be 'qualitatively different.'
The system analyzes micro-expressions and voice patterns. It claims 94% accuracy. Critics say that's not good enough for justice.
Aurora's truck drove from Los Angeles to Atlanta in 38 hours. The 3.5 million truck drivers in America are paying attention.
New 'productivity insights' feature lets managers see AI usage patterns, document edits, and meeting participation. Workers aren't happy.
The government says AI gives rich students unfair advantages. Critics say the ban just moves tutoring underground.
The bipartisan Authentic Content Act requires detectable watermarks on AI-generated images, video, and audio. Penalties are steep.
Shell companies, fake medical device orders, and diplomatic pouches: the black market for NVIDIA chips is sophisticated and growing.
The third high-profile departure from Google's Responsible AI team in two years raises questions about the company's commitment to AI safety.
Starting July 1, employers must tell candidates when AI screens their applications. Penalties include $10,000 per violation.