Anthropic

Anthropic is a US AI safety company founded in 2021 by former OpenAI executives, including siblings Dario and Daniela Amodei. It is best known for the Claude family of language models and pairs frontier model development with AI safety research.

Claude Models (2026)

  • Claude Opus 4.5 — dominated “AI Twitter” hype around coding in early 2026; nathan-lambert’s primary model for coding and philosophy; known for strong reasoning, voice capabilities, and extended thinking mode (inference-time scaling; see the API sketch after this list)
  • Claude models emphasize safety, coding capability, and Constitutional AI alignment techniques
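
Extended thinking is exposed as a per-request option on Anthropic's Messages API: the caller allots a token budget for the model's reasoning before it writes its final answer, which is the "inference-time scaling" referenced above. A minimal sketch using the anthropic Python SDK; the model alias and token budgets are illustrative assumptions, not details from the source:

```python
# Minimal sketch: calling Claude with extended thinking enabled via the
# Anthropic Messages API. The model alias and token budgets below are
# illustrative assumptions, not values taken from the source.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",  # assumed alias; check current model listings
    max_tokens=2048,
    # Inference-time scaling knob: reserve up to 1024 tokens of reasoning
    # before the model produces its final answer.
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[
        {
            "role": "user",
            "content": "Refactor this function to remove the nested loops: ...",
        }
    ],
)

# The response interleaves "thinking" blocks (the reasoning trace) with
# ordinary "text" blocks that hold the final answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200])
    elif block.type == "text":
        print(block.text)
```

Larger thinking budgets trade latency and cost for harder reasoning; the budget must stay below max_tokens.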

Culture

Anthropic’s culture is described as coding-focused, with model development heavily oriented toward software engineering use cases. The company maintains a reputation for safety-conscious research practices relative to other frontier labs.

A landmark legal case: Anthropic lost a $1.5B copyright lawsuit in 2026 over torrenting (rather than purchasing) books for use as training data. The key distinction the court drew was the acquisition method, torrenting versus lawful purchase or licensing, not the use of copyrighted material in training per se. It is one of the first major judicial rulings to attach direct financial liability to training-data decisions, and it is expected to push the industry toward licensed corpora and synthetic-data pipelines.


Source: fridman-lambert-raschka-2026-state-of-ai