Anthropic, one of the world’s largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a variety of tasks, from captioning images and writing emails to solving math and coding challenges.
With Anthropic’s model ecosystem growing so quickly, it can be difficult to keep track of which Claude models do what. To help, we’ve put together a guide to Claude, which we’ll keep updated as new models and upgrades arrive.
Claude models
Claude models are named after forms of literature: Haiku, Sonnet, and Opus. The latest are:
- Claude 3.5 Haiku, a lightweight model.
- Claude 3.7 Sonnet, a midrange hybrid reasoning model. This is currently Anthropic’s flagship model.
- Claude 3 Opus, a large model.
Counterintuitively, Claude 3 Opus, the largest and most expensive model Anthropic offers, is currently the least capable Claude model. That is likely to change, however, when Anthropic releases an updated version of Opus.
Recently, Anthropic released Claude 3.7 Sonnet, its most advanced model yet. It differs from Claude 3.5 Haiku and Claude 3 Opus in that it is a hybrid reasoning model, capable of giving both real-time answers and more considered, "thought-out" answers to questions.
When using Claude 3.7 Sonnet, users can choose whether to activate the model’s reasoning abilities, which prompt the model to "think" for a short or long period of time.
With reasoning enabled, Claude 3.7 Sonnet will spend anywhere from a few seconds to several minutes in a "thinking" phase before answering. During this phase, the model breaks the user’s prompt down into smaller parts and checks its answers.
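As a rough sketch of how this toggle looks in practice, here is a minimal example that builds a request body with reasoning on or off. It assumes the Anthropic Messages API shape, where extended thinking is enabled via a `thinking` parameter with a token budget; treat the exact field names and the model alias as assumptions, not a definitive integration.

```python
# Sketch of a Messages API request body with extended thinking enabled.
# The "thinking"/"budget_tokens" fields follow Anthropic's documented
# extended-thinking parameter, but treat their exact shape as an assumption.

def build_request(prompt: str, think: bool, budget_tokens: int = 4096) -> dict:
    """Build a request body; `thinking` is attached only when reasoning is on."""
    body = {
        "model": "claude-3-7-sonnet-latest",  # hypothetical model alias
        "max_tokens": 8192,
        "messages": [{"role": "user", "content": prompt}],
    }
    if think:
        # The budget caps how many tokens the model may spend in its
        # "thinking" phase before it starts answering.
        body["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    return body

fast = build_request("Summarize this memo.", think=False)
slow = build_request("Prove this lemma.", think=True, budget_tokens=16000)
```

A larger budget generally trades latency for more deliberate answers, which matches the short-versus-long "thinking" choice described above.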
Claude 3.7 Sonnet is Anthropic’s first model that can "reason," a technique many AI labs have turned to as traditional methods of improving AI performance taper off.
Even with its reasoning disabled, Claude 3.7 Sonnet remains one of the tech industry’s top-performing models.
In November, Anthropic released an improved and pricier version of its lightweight model, Claude 3.5 Haiku. This model outperforms Anthropic’s Claude 3 Opus on some benchmarks, but it can’t analyze images the way Claude 3 Opus and Claude 3.7 Sonnet can.
All Claude models, which have a standard 200,000-token context window, can also follow multistep instructions, use tools (e.g., stock tickers), and produce structured output in formats like JSON.
A context window is the amount of data a model like Claude can analyze before generating new data, while tokens are subdivided bits of raw data (like the syllables "fan," "tas," and "tic" in the word "fantastic"). Two hundred thousand tokens is equivalent to about 150,000 words, or a 600-page novel.
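The token-to-word conversion above is just a ratio, so it can be checked with back-of-the-envelope arithmetic. The sketch below assumes the article’s figures (200,000 tokens ≈ 150,000 words) and a common rule of thumb of roughly 250 words per printed page:

```python
# Back-of-the-envelope token math, using the article's ratio of
# 200,000 tokens ~= 150,000 words (0.75 words per token).
WORDS_PER_TOKEN = 150_000 / 200_000  # 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate English word count for a given token count."""
    return round(tokens * WORDS_PER_TOKEN)

def words_to_pages(words: int, words_per_page: int = 250) -> int:
    # ~250 words per page is a common rule of thumb for a novel.
    return round(words / words_per_page)

context_words = tokens_to_words(200_000)  # 150000 words
pages = words_to_pages(context_words)     # 600 pages
```

Real tokenizers don’t split text at a fixed ratio, so these numbers are estimates, but they show why a 200,000-token window comfortably fits a full-length book.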
Unlike many leading generative AI models, Claude models can’t access the internet, meaning they aren’t particularly good at answering questions about current events. They also can’t generate images, only simple line diagrams.
As for the major differences between Claude models, Claude 3.7 Sonnet is faster than Claude 3 Opus and better at understanding nuanced and complex instructions. Haiku struggles with sophisticated prompts, but it is the fastest of the three models.
Claude model pricing
Claude models are available through Anthropic’s API and managed platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.
Here is Anthropic’s API pricing:
- Claude 3.5 Haiku costs 80 cents per million input tokens (roughly 750,000 words), or $4 per million output tokens
- Claude 3.7 Sonnet costs $3 per million input tokens, or $15 per million output tokens
- Claude 3 Opus costs $15 per million input tokens, or $75 per million output tokens
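Since the rates above are quoted per million tokens, estimating a request’s cost is simple multiplication. Here is a small helper using the article’s rate card (the dictionary keys are illustrative names, not official model identifiers):

```python
# Rate card from the article: USD per million (input, output) tokens.
# The keys are illustrative labels, not official API model IDs.
PRICES = {
    "claude-3.5-haiku": (0.80, 4.00),
    "claude-3.7-sonnet": (3.00, 15.00),
    "claude-3-opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g., a 10,000-token prompt with a 2,000-token reply on Claude 3.7 Sonnet:
cost = estimate_cost("claude-3.7-sonnet", 10_000, 2_000)  # $0.06
```

The same request on Claude 3 Opus would cost five times as much, which is why the cheaper tiers matter for high-volume workloads.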
Anthropic offers prompt caching and batching to deliver additional runtime savings.
Prompt caching lets developers store specific "prompt contexts" that can be reused across API calls to a model, while asynchronous batch processing handles groups of lower-priority (and therefore cheaper) model inference requests.
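To make the caching idea concrete, the sketch below marks a large, rarely changing context block as cacheable so later calls can reuse it. It assumes the Anthropic API’s `cache_control` field with an `"ephemeral"` cache type; the message shape is a simplified assumption, and `STYLE_GUIDE` is a hypothetical placeholder document.

```python
# Sketch of a message list that flags a reusable context block for caching.
# The cache_control field mirrors Anthropic's documented "ephemeral" cache
# type, but treat the exact payload shape as an assumption.

STYLE_GUIDE = "Headlines use sentence case. Avoid jargon. ..."  # stand-in text

def build_cached_messages(question: str) -> list:
    """Pair a cacheable context block with a per-request question."""
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": STYLE_GUIDE,
                    # Marking this block cacheable lets subsequent API calls
                    # reuse it instead of re-processing it at full price.
                    "cache_control": {"type": "ephemeral"},
                },
                {"type": "text", "text": question},
            ],
        }
    ]

msgs = build_cached_messages("Does this headline follow the guide?")
```

Only the question changes between calls; the bulky style guide is the part worth caching.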
Claude plans and apps
For individual users and companies simply looking to interact with Claude models via web, Android, and iOS apps, Anthropic offers a free Claude plan with rate limits and other usage restrictions.
Upgrading to one of the company’s subscriptions removes those limits and unlocks new functionality. The current plans are:
- Claude Pro, which costs $20 a month, comes with 5x higher rate limits, priority access, and previews of upcoming features.
- Team, aimed at businesses, costs $30 per user per month and adds a dashboard for user and billing management, plus integrations with data repositories such as codebases and customer relationship management platforms (e.g., Salesforce). A toggle switches citations on or off to verify AI-generated claims. (Like all models, Claude occasionally hallucinates.)
Both Pro and Team subscribers get Projects, a feature that grounds Claude’s outputs in knowledge bases, which can be style guides, interview transcripts, and so on. These customers, along with free-tier users, can also tap into Artifacts, a workspace where users can edit and add to content like code, apps, website designs, and other documents generated by Claude.
For customers who need even more, there’s Claude Enterprise, which lets companies upload proprietary data into Claude so that it can analyze that information and answer questions about it. Claude Enterprise also comes with a larger context window (500,000 tokens), GitHub integration so engineering teams can sync their GitHub repositories with Claude, plus Projects and Artifacts.
A word of caution
As is the case with all generative AI models, there are risks associated with using Claude.
The models occasionally make mistakes when summarizing or answering questions because of their tendency to hallucinate. They are also trained on public web data, some of which may be copyrighted or under a restrictive license. Anthropic and many other AI vendors argue that the fair-use doctrine shields them from copyright claims. But that hasn’t stopped data owners from filing lawsuits.
Anthropic offers policies to protect certain customers from courtroom battles arising from fair-use challenges. However, they don’t resolve the ethical quandary of using models trained on data without permission.
This article was originally published on October 19, 2024. It was updated on February 25, 2025, to include new details about Claude 3.7 Sonnet and Claude 3.5 Haiku.