# anthropic.cr
A Crystal client for the Anthropic Messages API.
## Installation

Add to your `shard.yml`:

```yaml
dependencies:
  anthropic:
    github: crys-ai/anthropic.cr
```

Then run `shards install`.
## Quick Start

```crystal
require "anthropic"

client = Anthropic::Client.new

response = client.messages.create(
  model: Anthropic::Model.sonnet,
  messages: [Anthropic::Message.user("Hello!")],
  max_tokens: 1024
)

puts response.text
```
## Configuration

By default, the client reads the API key from the `ANTHROPIC_API_KEY` environment variable:

```crystal
client = Anthropic::Client.new
```

Or pass it explicitly:

```crystal
client = Anthropic::Client.new(api_key: "sk-ant-...")
```

Custom base URL and timeout:

```crystal
client = Anthropic::Client.new(
  api_key: "sk-ant-...",
  base_url: "https://custom.api",
  timeout: 60.seconds
)
```
## Models

Type-safe model selection with the `Model` enum:

```crystal
Anthropic::Model.opus   # Claude Opus 4.5 (latest)
Anthropic::Model.sonnet # Claude Sonnet 4.5 (latest)
Anthropic::Model.haiku  # Claude Haiku 3.5 (latest)

# Or use specific versions
Anthropic::Model::ClaudeOpus4_5
Anthropic::Model::ClaudeSonnet4_5
Anthropic::Model::ClaudeOpus4
Anthropic::Model::ClaudeSonnet4
Anthropic::Model::ClaudeSonnet3_5
Anthropic::Model::ClaudeHaiku3_5
```
## Messages API

### Basic message

```crystal
response = client.messages.create(
  model: Anthropic::Model.sonnet,
  messages: [Anthropic::Message.user("What is Crystal?")],
  max_tokens: 1024
)

puts response.text
```
### With system prompt

```crystal
response = client.messages.create(
  model: Anthropic::Model.sonnet,
  messages: [Anthropic::Message.user("Hello!")],
  max_tokens: 1024,
  system: "You are a helpful assistant who speaks like a pirate."
)
```
### Multi-turn conversation

```crystal
messages = [
  Anthropic::Message.user("What's the capital of France?"),
  Anthropic::Message.assistant("The capital of France is Paris."),
  Anthropic::Message.user("What's its population?"),
]

response = client.messages.create(
  model: Anthropic::Model.sonnet,
  messages: messages,
  max_tokens: 1024
)
```
### With parameters

```crystal
response = client.messages.create(
  model: Anthropic::Model.opus,
  messages: [Anthropic::Message.user("Write a haiku")],
  max_tokens: 100,
  temperature: 0.9,
  top_p: 0.95,
  stop_sequences: ["\n\n"]
)
```
### Response

```crystal
response.id          # "msg_01XFDUDYJgAACzvnptvVoYEL"
response.model       # "claude-sonnet-4-5-20251101"
response.role        # Anthropic::Message::Role::Assistant
response.stop_reason # Anthropic::Messages::Response::StopReason::EndTurn
response.text        # Combined text from all content blocks

# Token usage
response.usage.input_tokens  # 25
response.usage.output_tokens # 42
response.usage.total_tokens  # 67
```
## Error Handling

```crystal
begin
  response = client.messages.create(...)
rescue ex : Anthropic::AuthenticationError
  puts "Invalid API key"
rescue ex : Anthropic::RateLimitError
  puts "Rate limited, retry later"
rescue ex : Anthropic::OverloadedError
  puts "API overloaded, retry later"
rescue ex : Anthropic::InvalidRequestError
  puts "Bad request: #{ex.error_message}"
rescue ex : Anthropic::APIError
  puts "API error: #{ex.status_code} - #{ex.error_message}"
end
```
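Transient failures like `RateLimitError` and `OverloadedError` are good candidates for retrying. One way to wrap that up is an exponential-backoff helper; `with_retries` below is a sketch and not part of the shard's API:

```crystal
# Retry a block on transient API errors with exponential backoff.
# Sketch only: the error classes come from the shard's hierarchy
# shown above; this helper itself is hypothetical.
def with_retries(max_attempts = 3, &)
  attempt = 0
  loop do
    attempt += 1
    begin
      return yield
    rescue ex : Anthropic::RateLimitError | Anthropic::OverloadedError
      raise ex if attempt >= max_attempts
      sleep (2 ** attempt).seconds # back off 2s, 4s, 8s, ...
    end
  end
end

response = with_retries do
  client.messages.create(
    model: Anthropic::Model.sonnet,
    messages: [Anthropic::Message.user("Hello!")],
    max_tokens: 1024
  )
end
```

Non-transient errors such as `AuthenticationError` or `InvalidRequestError` are re-raised immediately, since retrying them cannot succeed.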
## CLI

A simple CLI is included for testing:

```shell
# Basic usage
crystal run examples/cli.cr -- message "Hello!"

# With options
crystal run examples/cli.cr -- message "Hello!" -m opus -t 2048 -v

# Show help
crystal run examples/cli.cr -- -h
```

Models can be given as aliases (`opus`, `sonnet`, `haiku`) or as enum names (see the `--help` output, e.g. `claude_opus4_5`). Invalid model or option values print `Error: ...` followed by the usage text and exit with code 1.

Options:

```text
-m MODEL   Model alias or enum name
-t TOKENS  Max tokens
-s SYSTEM  System prompt
-v         Verbose (show token usage)
-h         Help
```
## Development

```shell
# Install dependencies
shards install

# Run all checks (format, lint, test)
bin/hace all

# Individual checks
crystal tool format
bin/ameba
crystal spec
```
## License

MIT