xAI Safety Crisis Explodes, Anthropic Closes 30B Round at 380B Valuation, India Hosts First Global South AI Summit | February 15, 2026
Daily AI Blog
The AI industry enters a pivotal weekend in which a safety reckoning and record capital deployment collide head-on. xAI faces its deepest crisis as safety is reportedly dismantled from within, while Anthropic celebrates a historic 30 billion dollar funding round. India prepares to host the first major AI summit in the Global South, and software stocks continue hemorrhaging trillions as Wall Street prices in AI disruption. Meanwhile, a wave of safety researcher departures across OpenAI, Anthropic, and xAI signals growing tension between commercial pressures and responsible development.
1. xAI Safety Crisis Explodes: Musk Pushing “Unhinged” Grok Amid Mass Exodus
In the biggest AI safety story of the week, TechCrunch published a bombshell report on February 14 titled “Is Safety Dead at xAI?” Multiple former employees told The Verge that Elon Musk is actively working to make the Grok chatbot “more unhinged,” viewing traditional safety measures as a form of censorship. At least 11 engineers and 2 co-founders announced departures this week, bringing total founding team losses to 6 of 12 original co-founders.
The crisis follows global scrutiny after Grok was reportedly used to create over 1 million sexualized images, including deepfakes of real women and minors. French authorities raided X’s Paris offices as part of an investigation. One former engineer described the safety organization as “effectively defunct.”
Why it matters: The xAI exodus represents more than corporate drama — it signals a fundamental values conflict at one of the world’s most prominent AI labs. With xAI reportedly preparing for an IPO via SpaceX, the loss of half its founding team and the safety credibility gap could significantly impact valuation and regulatory scrutiny. The FTC has opened a probe into xAI’s data collection practices.
Sources: TechCrunch, TechCrunch (departures), The Verge
2. Anthropic Closes 30 Billion Dollar Funding Round at 380 Billion Dollar Valuation
Anthropic announced the close of a 30 billion dollar Series G funding round at a 380 billion dollar post-money valuation — more than double its September valuation and the second-largest private tech financing round on record, behind only OpenAI’s 40 billion dollar raise. The round was led by Coatue and Singapore sovereign wealth fund GIC, with participation from D. E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, MGX, Sequoia Capital, Lightspeed, and 36 additional investors including Microsoft and Nvidia.
Key business metrics revealed: annualized revenue has reached 14 billion dollars (roughly 10x year-over-year growth in each of the past three years), Claude Code's run-rate revenue has grown to over 2.5 billion dollars (doubling since January 2026), and business subscriptions have quadrupled since the start of 2026. The company now serves over 300,000 business customers, with large accounts growing 7x in the past year.
Why it matters: Anthropic has raised a total of 57 billion dollars since its 2021 founding — a velocity unprecedented even by Silicon Valley standards. The massive enterprise traction, particularly from Claude Code, validates the thesis that AI coding tools represent a multi-billion dollar market. The round also underscores the AI arms race: OpenAI is reportedly eyeing a 100 billion dollar funding round at a 750 billion dollar valuation.
Sources: CNBC, Bloomberg, Fortune
3. India Hosts First Global South AI Summit as World Leaders Converge on New Delhi
India’s AI Impact Summit 2026, running February 16-20 at Bharat Mandapam in New Delhi, represents a historic milestone as the first major international AI summit hosted in the Global South. French President Emmanuel Macron will attend with a focus on deepening AI cooperation, alongside an unprecedented roster of Presidents, Prime Ministers, Crown Princes, and Silicon Valley leaders.
Seven working groups, co-chaired by voices from both the Global North and South, will deliver actionable proposals on shared compute resources, AI commons for the public good, and comprehensive use case frameworks. India, with 800 million internet users and an AI market projected to surpass 17 billion dollars by 2027, is positioning itself as the bridge between innovation and impact.
Why it matters: The summit shifts the global AI governance conversation from safety-focused discussions (Bletchley Park, Seoul, Paris) toward tangible impact, implementation, and inclusive governance. India’s ambition to shape AI for the developing world could fundamentally alter how AI benefits are distributed globally. The 2026 International AI Safety Report will be formally showcased during the event.
Sources: ANI News
4. OpenAI Fires Safety Executive Who Opposed ChatGPT “Adult Mode”
OpenAI fired Ryan Beiermeister, Vice President of Product Policy, after she voiced opposition to the planned “adult mode” feature that would allow verified adults to access erotic content in ChatGPT. The Wall Street Journal reported that OpenAI terminated Beiermeister on grounds of alleged sex discrimination against a male colleague — a claim she strongly denies.
Beiermeister had raised concerns that adult mode could harm vulnerable users, worsen parasocial relationships with AI chatbots, and that OpenAI’s systems were insufficient to prevent minors from accessing explicit content. Internal researchers and an advisory council on “well-being and AI” also opposed the feature. CEO Sam Altman defended the move as part of a “treat adult users like adults” principle.
Why it matters: This firing is part of a broader pattern of safety-oriented executives leaving or being pushed out of OpenAI. The adult mode controversy touches on fundamental questions about AI content boundaries, user protection, and whether commercial incentives are overriding safety considerations. Similar issues have already plagued Character.AI, Grok, and Replika.
Sources: TechCrunch, WSJ, Futurism
5. AI Safety Researchers Sound the Alarm Across Industry
A remarkable wave of safety-focused departures hit multiple AI labs simultaneously. Mrinank Sharma, head of Anthropic’s Safeguards Research team, posted a letter announcing his departure and warning that “the world is in peril.” He noted that “throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.”
At OpenAI, a researcher resigned citing “deep reservations” about the company’s emerging advertising strategy, while another warned about AI’s “potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.” CNN reported this as part of a growing pattern where departing researchers are “loudly ringing the alarm bell on the way out.”
Why it matters: When safety researchers leave multiple leading labs simultaneously and go public with their concerns, it signals that the gap between commercial pressures and responsible development may be widening across the industry — not just at individual companies. This exodus comes as both OpenAI and Anthropic race toward IPOs.
Sources: CNN
6. Software Stocks Hemorrhage 2 Trillion Dollars as AI Disruption Fear Intensifies
The WisdomTree Cloud Computing Fund has plummeted roughly 20% in 2026, with individual names devastated: HubSpot (-39%), Figma (-40%), Atlassian (-35%), Shopify (-29%), and Intuit (-34%). The selloff accelerated after Anthropic launched new Claude Cowork features demonstrating AI’s ability to automate legal, marketing, and operational tasks previously requiring dedicated SaaS tools.
Goldman Sachs strategist Ben Snider warned this may be just “the end of the beginning,” drawing parallels to industries like newspapers where disruptive technology caused prolonged multi-year stock declines. However, Nvidia CEO Jensen Huang pushed back, calling the notion that software is being replaced by AI “the most illogical thing in the world.” Analysts at JP Morgan called the selloff “indiscriminate” and said worst-case scenarios remain unlikely.
Why it matters: The 2 trillion dollar wipeout in software market capitalization represents one of the most significant market structure shifts driven by AI. Whether this proves to be a sentiment-driven overreaction or the early stages of fundamental disruption will define investment strategies for years. The falling share prices create a vicious cycle: companies lose the ability to retain talent with stock compensation, limiting their capacity to build their own AI capabilities.
7. China AI Models Surge: Alibaba RynnBrain, ByteDance Seedance 2.0, Kuaishou Kling 3.0
China’s tech giants released multiple frontier AI models in a coordinated display of capability. Alibaba’s DAMO Academy unveiled RynnBrain, a “physical AI” model designed to help robots comprehend the physical world with built-in time and space awareness. Hugging Face researchers praised its ability to remember when and where events occurred, track task progress, and continue across multiple steps.
ByteDance released Seedance 2.0, an advanced video generation model praised by Hugging Face researchers as “one of the most well-rounded video generation models” available. Kuaishou launched Kling 3.0, further advancing the Chinese video generation frontier. These models directly compete with OpenAI’s Sora and Nvidia’s robotics models. Google DeepMind boss Demis Hassabis told CNBC that Chinese AI models are just “months” behind Western rivals.
Why it matters: The simultaneous release of frontier models across robotics, video generation, and multimodal AI demonstrates that China’s competitive gap with the US is narrowing rapidly. Alibaba’s physical AI play puts it in direct competition with Nvidia and Google in the robotics foundation model space — a market projected to be worth hundreds of billions.
Sources: CNBC
8. OpenAI Claims DeepSeek Distilled US AI Models
Bloomberg reported on February 13 that OpenAI has accused China’s DeepSeek of extracting and distilling output from American AI models to gain a competitive edge. The accusation escalates intellectual property tensions in the US-China AI competition and raises fundamental questions about the defensibility of closed-source AI models.
Why it matters: Model distillation — where a smaller model learns to replicate a larger model’s capabilities by training on its outputs — is becoming a critical IP and national security concern. If frontier model developers cannot protect their outputs from being distilled by competitors, the economic moat around expensive model training investments narrows significantly. This could reshape how AI companies think about API access and output monitoring.
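The distillation mechanism described above can be sketched in a few lines. The following is a minimal, illustrative example (not any lab's actual pipeline): a student model is scored against a teacher's temperature-softened output distribution using a KL-divergence loss, the standard recipe from the knowledge-distillation literature. All logits and values here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    A student trained to minimize this loss learns to mimic the teacher's
    full output distribution, which is why API access to a frontier
    model's outputs is enough to transfer much of its behavior.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits over three tokens:
teacher = [4.0, 1.0, 0.2]
aligned = [3.9, 1.1, 0.3]    # student that already mimics the teacher
uniform = [0.1, 0.1, 0.1]    # uninformed student

# The closer the student tracks the teacher, the lower the loss.
assert distillation_loss(teacher, aligned) < distillation_loss(teacher, uniform)
```

In practice the "teacher logits" are approximated from sampled API outputs rather than read directly, which is precisely why output monitoring and rate limiting are the mitigations frontier labs discuss.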
Sources: Bloomberg
9. Waymo Seeks 16 Billion Dollar Funding at 110 Billion Dollar Valuation
Alphabet’s autonomous driving subsidiary Waymo is raising 16 billion dollars in a funding round that would value the company at nearly 110 billion dollars — more than double its 2024 valuation. Google is leading the financing with a 13 billion dollar commitment. Waymo currently operates over 2,500 fully driverless vehicles across six cities and is deploying next-generation Ojai robotaxis.
CEO Tekedra Mawakana says the company could reach 1 million weekly paid autonomous rides this year, more than double the roughly 400,000 weekly rides it provides today after quadrupling service in 2025. Waymo plans to expand to Washington, Detroit, Las Vegas, San Diego, and Denver this year.
Why it matters: The massive funding round positions Waymo as the dominant force in commercial autonomous driving, far ahead of Tesla's roughly 500 robotaxis (most with human safety drivers) across two markets. The convergence of Vision-Language-Action AI models with autonomous driving is enabling camera-only systems to approach LiDAR-level capability at a fraction of the cost, potentially accelerating industry-wide deployment.
Sources: Bloomberg, Sherwood News
10. Anthropic Super Bowl Ad Delivers 11% User Boost Over OpenAI
Data from BNP Paribas showed Anthropic captured the most significant post-Super Bowl gains among AI companies. The Claude maker saw a 6.5% jump in website visits and an 11% increase in daily active users after airing ads that directly attacked OpenAI’s decision to introduce advertising in ChatGPT. Claude briefly entered the top 10 free apps on the Apple App Store.
OpenAI CEO Sam Altman called the Anthropic commercials “deceptive” and “clearly dishonest,” escalating the public rivalry between the two companies. ChatGPT saw a more modest 2.7% bump in daily active users, while Google’s Gemini added 1.4%.
Why it matters: The Super Bowl ad battle marks a new era in AI competition — from backend infrastructure wars to direct consumer brand competition. With both companies heading toward potential IPOs, user acquisition metrics and brand differentiation are becoming as important as model benchmarks. Anthropic’s anti-advertising positioning resonated with consumers concerned about data privacy.
Sources: CNBC
11. Canada and Germany Sign AI Joint Declaration and Launch Sovereign Technology Alliance
At the Munich Security Conference on February 14, Canada’s Minister of AI and Digital Innovation and Germany’s Minister for Digital Transformation signed a Joint Declaration of Intent on Artificial Intelligence and launched the new Sovereign Technology Alliance. The agreement focuses on expanding secure compute infrastructure, accelerating AI research commercialization, and reducing strategic technology dependencies.
The ministers discussed collaboration with organizations advancing safe-by-design AI systems, including Canada's LawZero, founded by Turing Award winner Yoshua Bengio. The alliance is designed as a platform for practical cooperation on advanced technologies across trusted partners.
Why it matters: The Sovereign Technology Alliance signals growing momentum among democratic allies to build independent AI capabilities outside US and Chinese tech ecosystems. As Europe restricts use of American tech platforms (France limiting Zoom, Germany ending Microsoft Teams), sovereign AI infrastructure becomes a geopolitical imperative.
Sources: Canada Newswire
12. OpenAI Retires GPT-4o as Users Migrate to GPT-5.2
OpenAI retired several ChatGPT models on February 13, including the widely used multilingual, multimodal GPT-4o, as most users have already shifted to GPT-5.2. This marks a significant generational transition in the world's most popular AI chatbot, reflecting the rapid pace of model improvement.
Why it matters: The retirement underscores how quickly frontier models become obsolete — GPT-4o launched in mid-2024 and was considered cutting-edge for less than 18 months. For enterprises that built workflows around GPT-4o’s specific capabilities, the forced migration creates integration challenges and highlights the risks of building on rapidly evolving AI APIs.
Sources: NeuralBuddies
13. International AI Safety Report 2026 Released Ahead of India Summit
The second International AI Safety Report, led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, was published in February 2026. Backed by over 30 countries and international organizations, it represents the largest global collaboration on AI safety to date. The report will be formally showcased at the India AI Impact Summit.
Key findings highlight that model distillation can efficiently proliferate advanced AI capabilities even from closed-source models, and that distributed compute approaches have reduced dependence on large-scale centralized infrastructure — potentially making AI development harder to monitor and regulate.
Why it matters: The report provides the scientific foundation for global AI governance at a critical moment. Its findings on distillation and distributed training directly inform debates about export controls, compute governance, and whether current regulatory approaches can keep pace with capability proliferation.
Sources: International AI Safety Report
14. SAG-AFTRA Outraged Over AI-Generated Brad Pitt vs. Tom Cruise Fight Clip
The Screen Actors Guild reacted with outrage to a viral AI-generated video depicting a fabricated fight between Brad Pitt and Tom Cruise, calling it “unacceptable.” The incident highlights growing tension between rapidly advancing AI content generation capabilities and celebrity likeness rights.
Why it matters: As AI video generation quality approaches photorealism (evidenced by this week’s Seedance 2.0 and other models), unauthorized use of celebrity likenesses is becoming a critical legal and ethical battleground. SAG-AFTRA’s strong reaction signals that Hollywood will push aggressively for legislative protections, potentially shaping broader AI content regulation.
Sources: TechRadar
15. OpenAI, Anthropic, Google, Meta Join Forces for European AI Startup Accelerator
Despite fierce rivalry, OpenAI, Anthropic, Google, Meta, Microsoft, Mistral, Qualcomm, AMD, Snowflake, and Sequoia Capital joined forces to launch F/ai, a new accelerator for AI startups in Europe, overseen by Paris-based incubator Station F. It marks the first time all these companies have collaborated on an accelerator, providing early-stage founders access to internal expertise, senior leaders, and global networks.
Why it matters: The collaboration signals that major AI companies view European AI ecosystem development as strategically important enough to set aside competitive tensions. It also reflects industry efforts to maintain goodwill with European regulators as the EU AI Act’s transparency requirements approach their August 2026 enforcement date.
Sources: Inc.
Key Themes and Market Analysis
Safety Reckoning Reaches Inflection Point: The simultaneous crises at xAI (safety dismantled), OpenAI (safety exec fired over adult mode), and departures at Anthropic reveal an industry-wide tension between commercial velocity and responsible development. The pattern of safety researchers leaving and going public suggests systemic issues, not isolated incidents.
Record Capital vs. Record Anxiety: Anthropic’s 30 billion dollar round and Waymo’s 16 billion dollar raise demonstrate investor confidence in AI’s transformative potential. Yet the 2 trillion dollar software stock wipeout shows markets are simultaneously pricing in massive disruption. This paradox — pouring money into AI builders while fleeing companies AI might displace — defines the current market structure.
Geopolitical AI Competition Intensifies: China’s rapid model releases, DeepSeek distillation accusations, the Canada-Germany Sovereign Technology Alliance, and India’s Global South AI summit all point toward a fragmenting global AI landscape where capability, governance, and infrastructure are becoming instruments of national strategy.
From Digital to Physical AI: Alibaba’s RynnBrain for robotics, Waymo’s autonomous expansion, and Vision-Language-Action models enabling camera-only autonomous driving signal AI’s accelerating transition from software-only applications to physical world deployment.
Looking Ahead
Key Developments to Watch:
- India AI Impact Summit outcomes and commitments (Feb 16-20)
- xAI regulatory response and potential IPO timeline impact
- OpenAI’s 100 billion dollar funding round progress and Nvidia’s participation
- Software sector earnings season and AI disruption narrative evolution
- EU AI Act transparency code of practice second draft (March)
- OpenAI adult mode launch timeline and age-verification implementation
Stay Updated: Follow us for comprehensive daily AI news, infrastructure analysis, and investment tracking.
Last Updated: February 15, 2026, 10:00 PM CST