Anthropic, a startup funded by Google, has launched its highly anticipated AI chat assistant, Claude. The startup was founded by ex-OpenAI employees and has been working with partners like Quora, DuckDuckGo, and Notion to develop Claude’s capabilities. According to Autumn Besselman, head of people and comms at Quora, users describe Claude’s answers as detailed and easily understood, and they like that exchanges feel like natural conversation.
Claude is built to help users with search, collaborative writing, coding, summarization, Q&A and much more. It can be accessed through a chat interface and is capable of a wide variety of conversational and text-processing tasks. One of Claude’s key points of differentiation is that it’s built to produce less harmful outputs than many of the other AI chatbots that came before it. The company describes Claude as a “helpful, honest, and harmless AI system.”
Anthropic is now offering Claude via API to support businesses and nonprofits. Pricing for API access has not yet been revealed. According to the company, Claude is “much less likely to produce harmful outputs, easier to converse with, and more steerable — so you can get your desired output with less effort.” It can take direction on personality, tone and behavior, making it a prime candidate for customer service and other business solutions that engage with customers.
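As a rough illustration of what that steerability might look like in practice, a client could send a style instruction alongside the user's message. The field names, model identifier, and request shape below are assumptions for illustration only, not Anthropic's documented API schema:

```python
# Hypothetical sketch of steering an assistant's personality and tone.
# Field names ("model", "system", "messages", "max_tokens") and the
# model name are illustrative assumptions, not a published schema.

def build_request(system_style: str, user_message: str) -> dict:
    """Assemble an illustrative chat request that sets tone and behavior."""
    return {
        "model": "claude",          # placeholder model identifier
        "system": system_style,     # steers personality, tone, behavior
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 256,
    }

payload = build_request(
    system_style="You are a friendly, concise customer-support agent.",
    user_message="How do I reset my password?",
)
print(payload["system"])
```

The idea is that the steering instruction travels separately from the end user's text, so a business can fix the assistant's persona once and reuse it across every customer conversation.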
The company is currently offering two versions of Claude: Claude and Claude Instant. Claude is a high-performance model, while Claude Instant is lighter, less expensive and much faster. Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence, and autonomy.
According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning and a reinforcement learning phase: “As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.”
Anthropic was founded in 2021 by researchers who left OpenAI. It gained attention last April when it suddenly announced a whopping $580 million in funding, most of which came from Sam Bankman-Fried and the folks at FTX. Anthropic has also been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru recently called out in a Wired opinion piece as a “dangerous brand of AI safety.”