# anthropic-cr (v0.4.0)
An unofficial Anthropic API client for Crystal. Access Claude AI models with idiomatic Crystal code.
**Status:** Feature complete. Full Messages API, Batches API, Models API, Files API, tool runner, web search, extended thinking, structured outputs, citations, prompt caching, Schema DSL, and beta Anthropic-hosted features such as skills, MCP servers, and containers. The API design is inspired by official Ruby SDK patterns.

> **Note:** A large portion of this library was written with the assistance of AI (Claude), including code, tests, and documentation.
## Features

- ✅ Messages API (create and stream)
- ✅ Streaming with Server-Sent Events (`stream` and `open_stream`)
- ✅ Tool use / function calling
- ✅ Schema DSL - type-safe tool definitions (no more `JSON::Any`)
- ✅ Typed Tools - Ruby `BaseTool`-like pattern with struct inputs
- ✅ Tool runner (automatic tool execution loop)
- ✅ Web Search - built-in web search via server-side tool
- ✅ Agent Tools - `BashTool`, `TextEditorTool`, `ComputerUseTool` for agentic workflows
- ✅ Web Fetch - built-in web page fetching via server-side tool
- ✅ Memory - persistent memory tool for cross-conversation context
- ✅ Code Execution - sandboxed code execution via server-side tool
- ✅ Strict Mode - enforce strict schema validation on tool definitions
- ✅ Extended Thinking - enable Claude's reasoning process (including adaptive thinking)
- ✅ Redacted Thinking - parse and preserve redacted thinking blocks in multi-turn conversations
- ✅ Context Management - beta auto-compaction, clear tool uses, clear thinking, compaction streaming delta
- ✅ MCP Servers - beta `mcp_servers` parameter for server-side MCP server definitions
- ✅ Container/Skills - beta `container` parameter for skills-based tool use
- ✅ Tool Search - BM25 and regex tool search for deferred tool loading
- ✅ Legacy Tool Versions - October 2024 tool versions (`BashToolLegacy`, `TextEditorToolLegacy`, `ComputerUseToolLegacy`)
- ✅ Skills API - full CRUD for skills and skill versions (beta)
- ✅ Extended Tool Fields - beta `allowed_callers`, `defer_loading`, `input_examples`, `eager_input_streaming`
- ✅ Effort Control - control output effort level via `output_config`
- ✅ Inference Geo - data residency control for inference requests
- ✅ Structured Outputs - type-safe JSON responses via beta API
- ✅ Citations - document citations with streaming support
- ✅ Beta Namespace - `client.beta.messages`, matching the Ruby SDK
- ✅ Vision (image understanding)
- ✅ System prompts and temperature control
- ✅ Message Batches API (create, list, retrieve, results, cancel, delete)
- ✅ Models API (list and retrieve)
- ✅ Auto-pagination helpers
- ✅ Enhanced streaming helpers (`text`, `tool_use_deltas`, `thinking`, `citations`)
- ✅ Comprehensive error handling with automatic retries
- ✅ Type-safe API with full compile-time checking
- ✅ Files API (upload, download, delete)
- ✅ Token counting API
- ✅ Prompt caching with TTL control
- 🚧 AWS Bedrock & Google Vertex support (future)
## Installation

1. Add the dependency to your `shard.yml`:

   ```yaml
   dependencies:
     anthropic-cr:
       github: amscotti/anthropic-cr
   ```

2. Run `shards install`
## Quick Start

```crystal
require "anthropic-cr"

# Initialize the client (uses ANTHROPIC_API_KEY from the environment)
client = Anthropic::Client.new

# Create a message
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 1024,
  messages: [
    {role: "user", content: "Hello, Claude!"}
  ]
)

puts message.text
# => "Hello! I'm Claude, an AI assistant..."
```
## Usage Examples

### Basic Message

```crystal
client = Anthropic::Client.new(api_key: "sk-ant-...")

message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 1024,
  messages: [{role: "user", content: "What is Crystal?"}]
)

puts message.text
puts "Used #{message.usage.input_tokens} input tokens"
```
### Streaming

Use `stream` for simple event-by-event handling:

```crystal
client.messages.stream(
  model: Anthropic::Model::CLAUDE_HAIKU_4_5,
  max_tokens: 1024,
  messages: [{role: "user", content: "Write a haiku about programming"}]
) do |event|
  if event.is_a?(Anthropic::ContentBlockDeltaEvent)
    print event.text if event.text
    STDOUT.flush
  end
end
```
Use `open_stream` when you want richer helpers like `text`, `collect_text`, and `final_message` while the stream is open:

```crystal
client.messages.open_stream(
  model: Anthropic::Model::CLAUDE_HAIKU_4_5,
  max_tokens: 1024,
  messages: [{role: "user", content: "Write a haiku about programming"}]
) do |stream|
  print stream.collect_text
  final_message = stream.final_message
  puts "\nStop reason: #{final_message.try(&.stop_reason)}"
end
```
### Tool Use with Schema DSL (Recommended)

The Schema DSL provides clean, type-safe tool definitions without verbose `JSON::Any` syntax:

```crystal
# Define a tool with the Schema DSL
weather_tool = Anthropic.tool(
  name: "get_weather",
  description: "Get current weather for a location",
  schema: {
    "location" => Anthropic::Schema.string("City name, e.g. San Francisco"),
    "unit"     => Anthropic::Schema.enum("celsius", "fahrenheit", description: "Temperature unit"),
  },
  required: ["location"]
) do |input|
  location = input["location"].as_s
  unit = input["unit"]?.try(&.as_s) || "fahrenheit"
  "Sunny, 72°#{unit == "celsius" ? "C" : "F"} in #{location}"
end

# Use it
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 1024,
  messages: [{role: "user", content: "What's the weather in Tokyo?"}],
  tools: [weather_tool]
)

if message.tool_use?
  message.tool_use_blocks.each do |tool_use|
    result = weather_tool.call(tool_use.input)
    puts result
  end
end
```

The Schema DSL supports `string`, `number`, `integer`, `boolean`, `enum`, `array`, and nested `object` types.
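As an illustration of the nested types, here is a hedged sketch; the exact `Anthropic::Schema.array` and `Anthropic::Schema.object` signatures are assumptions extrapolated from the flat `string`/`enum` helpers above, so check the shard's source for the real ones:

```crystal
require "anthropic-cr"

# Hypothetical sketch: a schema describing an array of nested objects.
# The .array/.object signatures are assumed, not verified against the shard.
contacts_schema = {
  "contacts" => Anthropic::Schema.array(
    Anthropic::Schema.object({
      "name"  => Anthropic::Schema.string("Full name"),
      "email" => Anthropic::Schema.string("Email address"),
    }),
    description: "List of contacts to save"
  ),
}
```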
### Typed Tools (Ruby `BaseTool`-like)

For type-safe inputs, define structs and use the `Anthropic.tool` macro:

```crystal
# Define the input struct with annotations
struct GetWeatherInput
  include JSON::Serializable

  @[JSON::Field(description: "City name, e.g. San Francisco")]
  getter location : String

  @[JSON::Field(description: "Temperature unit")]
  getter unit : TemperatureUnit?
end

enum TemperatureUnit
  Celsius
  Fahrenheit
end

# Create a typed tool - the handler receives a typed struct!
weather_tool = Anthropic.tool(
  name: "get_weather",
  description: "Get weather for a location",
  input: GetWeatherInput
) do |input|
  # input.location is a String, not JSON::Any!
  unit = input.unit || TemperatureUnit::Fahrenheit
  "Sunny, 72° in #{input.location}"
end
```
### Structured Outputs

Get type-safe JSON responses with defined schemas:

```crystal
# Define the output struct
struct SentimentResult
  include JSON::Serializable

  getter sentiment : String
  getter confidence : Float64
  getter summary : String
end

# Create a schema from the struct
schema = Anthropic.output_schema(
  type: SentimentResult,
  name: "sentiment_result"
)

# Use it with the beta API
message = client.beta.messages.create(
  betas: [Anthropic::STRUCTURED_OUTPUT_BETA],
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 512,
  output_schema: schema,
  messages: [{role: "user", content: "Analyze: 'Great product!'"}]
)

# Parse directly into the typed struct
result = SentimentResult.from_json(message.text)
puts result.sentiment  # Type-safe access
puts result.confidence # No .as_f casting needed!
```
### Web Search

Let Claude search the web for current information:

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 2048,
  server_tools: [Anthropic::WebSearchTool.new],
  messages: [{role: "user", content: "What are the latest developments in Crystal programming?"}]
)

puts message.text # Response includes web search results
```

Configure web search with domain limits or a user location:

```crystal
# Limit search to specific domains
search = Anthropic::WebSearchTool.new(
  allowed_domains: ["github.com", "crystal-lang.org"],
  max_uses: 3
)

# Location-aware search
# Note: user_location affects search RANKING; use the system prompt to make Claude itself location-aware
search = Anthropic::WebSearchTool.new(
  user_location: Anthropic::UserLocation.new(city: "San Francisco", country: "US")
)
# Pair with a system prompt such as: "The user is located in San Francisco, California."
```
### Extended Thinking

Enable Claude's reasoning process for complex problems:

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 8192,
  thinking: Anthropic::ThinkingConfig.enabled(budget_tokens: 4000),
  messages: [{role: "user", content: "Solve this logic puzzle..."}]
)

# The response includes both thinking and the final answer
message.content.each do |block|
  case block
  when Anthropic::ThinkingContent
    puts "Thinking: #{block.thinking}"
  when Anthropic::TextContent
    puts "Answer: #{block.text}"
  end
end
```
### Adaptive Thinking (Opus 4.6+)

With Claude Opus 4.6, you can use adaptive thinking, where the model decides how much to think:

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_OPUS_4_6,
  max_tokens: 16384,
  thinking: Anthropic::ThinkingConfig.adaptive,
  messages: [{role: "user", content: "Explain quantum computing"}]
)
```
### Effort Control

Control how much effort Claude puts into a response:

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_OPUS_4_6,
  max_tokens: 16384,
  thinking: Anthropic::ThinkingConfig.adaptive,
  output_config: Anthropic::OutputConfig.new(effort: "high"),
  messages: [{role: "user", content: "Write a detailed analysis..."}]
)
```

Effort levels: `"low"`, `"medium"`, `"high"`, `"max"`.
### Inference Geo

Control where your request is processed for data residency:

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_OPUS_4_6,
  max_tokens: 16384,
  inference_geo: "us",
  messages: [{role: "user", content: "Hello!"}]
)
```
### Vision

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 1024,
  messages: [
    Anthropic::MessageParam.new(
      role: Anthropic::Role::User,
      content: [
        Anthropic::TextContent.new("Describe this image"),
        Anthropic::ImageContent.base64("image/png", base64_data)
      ] of Anthropic::ContentBlock
    )
  ]
)
```
### System Prompts & Parameters

```crystal
message = client.messages.create(
  model: Anthropic::Model::CLAUDE_OPUS_4_5,
  max_tokens: 2048,
  system: "You are a helpful coding assistant specializing in Crystal.",
  temperature: 0.7,
  messages: [{role: "user", content: "How do I create an HTTP server?"}]
)
```
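The feature list also mentions a token counting API, which is not shown elsewhere in this README. Here is a hedged sketch: the `count_tokens` method name and the `input_tokens` accessor on the result are assumptions borrowed from the official Anthropic SDKs, not verified against this shard:

```crystal
require "anthropic-cr"

client = Anthropic::Client.new

# Hypothetical sketch: count tokens before sending a request.
# The count_tokens method and input_tokens accessor are assumed names.
count = client.messages.count_tokens(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  messages: [{role: "user", content: "How many tokens is this prompt?"}]
)
puts "Prompt uses #{count.input_tokens} input tokens"
```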
### Message Batches (Phase 2)

Process multiple messages in a single batch for cost-effective, high-throughput workloads:

```crystal
# Create batch requests
requests = [
  Anthropic::BatchRequest.new(
    custom_id: "req-1",
    params: Anthropic::BatchRequestParams.new(
      model: Anthropic::Model::CLAUDE_HAIKU_4_5,
      max_tokens: 100,
      messages: [Anthropic::MessageParam.user("What is 2+2?")]
    )
  ),
  Anthropic::BatchRequest.new(
    custom_id: "req-2",
    params: Anthropic::BatchRequestParams.new(
      model: Anthropic::Model::CLAUDE_HAIKU_4_5,
      max_tokens: 100,
      messages: [Anthropic::MessageParam.user("What is the capital of France?")]
    )
  ),
]

# Create and monitor the batch
batch = client.messages.batches.create(requests: requests)
puts batch.id # => "msgbatch_..."

# Check status
status = client.messages.batches.retrieve(batch.id)
puts status.processing_status # => "in_progress" | "ended"

# When ended, fetch the results
client.messages.batches.results(batch.id) do |result|
  puts "#{result.custom_id}: #{result.result.message.try(&.text)}"
end
```
### Models API (Phase 2)

List and retrieve available Claude models:

```crystal
# List all models
response = client.models.list
response.each do |model|
  puts "#{model.display_name} (#{model.id})"
end

# Retrieve a specific model
model = client.models.retrieve(Anthropic::Model::CLAUDE_SONNET_4_6)
puts model.display_name # => "Claude Sonnet 4.6"
```
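The feature list advertises comprehensive error handling with automatic retries, but no example appears in this README. A hedged sketch follows; the exception class names `Anthropic::RateLimitError` and `Anthropic::APIError` are assumptions modeled on the official SDKs, so check the shard's source for the real hierarchy:

```crystal
require "anthropic-cr"

client = Anthropic::Client.new

# Hypothetical sketch: the exception class names below are assumptions,
# not verified parts of this shard's public API.
begin
  message = client.messages.create(
    model: Anthropic::Model::CLAUDE_SONNET_4_6,
    max_tokens: 1024,
    messages: [{role: "user", content: "Hello!"}]
  )
  puts message.text
rescue ex : Anthropic::RateLimitError
  # Retries are automatic; this is only raised once they are exhausted
  STDERR.puts "Rate limited: #{ex.message}"
rescue ex : Anthropic::APIError
  STDERR.puts "API error: #{ex.message}"
end
```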
### Tool Runner (Beta)

Automatic tool execution loop, with no manual handling required:

```crystal
# Define tools
calculator = Anthropic.tool(...) { |input| calculate(input) }
time_tool  = Anthropic.tool(...) { |input| Time.local.to_s }

# Create a runner (in the beta namespace, matching the Ruby SDK)
runner = client.beta.messages.tool_runner(
  model: Anthropic::Model::CLAUDE_SONNET_4_6,
  max_tokens: 1024,
  messages: [Anthropic::MessageParam.user("What time is it? Also calculate 15 + 27")],
  tools: [calculator, time_tool] of Anthropic::Tool
)

# Iterate through the conversation (tools are executed automatically)
runner.each_message { |msg| puts msg.text unless msg.tool_use? }

# Or just get the final answer
final = runner.final_message
puts final.text
```
### Skills API (Beta)

Manage reusable skills that can be attached to containers for agentic workflows:

```crystal
# Create a skill by uploading files
skill = client.beta.skills.create(
  files: [
    Anthropic::FileUpload.new(
      io: File.open("skill/SKILL.md"),
      filename: "my-skill/SKILL.md",
      content_type: "text/markdown"
    ),
    Anthropic::FileUpload.new(
      io: File.open("skill/tool.py"),
      filename: "my-skill/tool.py",
      content_type: "text/x-python"
    ),
  ],
  display_title: "My Skill"
)

# List skills
skills = client.beta.skills.list(limit: 10)
skills.data.each { |s| puts "#{s.display_title} (#{s.id})" }

# Retrieve a skill
skill = client.beta.skills.retrieve("skill_abc123")

# Create a new version
client.beta.skills.versions.create(
  skill_id: skill.id,
  files: [
    Anthropic::FileUpload.from_path(
      "updated/tool.py",
      filename: "my-skill/tool.py"
    )
  ]
)

# List versions
versions = client.beta.skills.versions.list(skill_id: skill.id)
versions.data.each { |v| puts "Version #{v.version} from #{v.created_at}" }

# Delete (all versions must be deleted first)
versions.data.each do |v|
  client.beta.skills.versions.delete(skill_id: skill.id, version: v.version)
end
client.beta.skills.delete(skill.id)
```
Note: each skill requires a `SKILL.md` file with YAML frontmatter:

```markdown
---
name: my-skill
description: A brief description of what this skill does.
---

# My Skill

Detailed documentation about the skill...
```
## Model Constants

```crystal
Anthropic::Model::CLAUDE_OPUS_4_6   # Latest Opus (4.6)
Anthropic::Model::CLAUDE_OPUS_4_5   # Opus 4.5
Anthropic::Model::CLAUDE_SONNET_4_6 # Latest Sonnet
Anthropic::Model::CLAUDE_HAIKU_4_5  # Latest Haiku

# Or use shorthands
Anthropic.model_name(:opus)     # => "claude-opus-4-6"
Anthropic.model_name(:opus_4_5) # => "claude-opus-4-5-20251101"
Anthropic.model_name(:sonnet)   # => "claude-sonnet-4-6"
Anthropic.model_name(:haiku)    # => "claude-haiku-4-5-20251001"
```
## Examples

See the `examples/` directory for complete working examples.

**Phase 1 (Core API):**

- `01_basic_message.cr` - Simple message creation
- `02_streaming.cr` - Real-time streaming responses
- `03_tool_use.cr` - Function calling with tools
- `04_vision.cr` - Image understanding
- `05_system_prompt.cr` - System prompts and temperature
- `06_error_handling.cr` - Error handling and retries

**Phase 2 (Advanced Features):**

- `07_list_models.cr` - List and retrieve models
- `08_batches.cr` - Message batches (batch processing)
- `09_tool_runner.cr` - Automatic tool execution loop
- `10_pagination.cr` - Auto-pagination helpers

**Phase 2.5 (Enhanced):**

- `11_schema_dsl.cr` - Type-safe Schema DSL for tool definitions
- `12_web_search.cr` - Web search server-side tool
- `13_extended_thinking.cr` - Extended thinking / reasoning
- `14_citations.cr` - Document citations with streaming
- `15_structured_outputs.cr` - Type-safe JSON responses (Schema DSL + typed structs)
- `16_tools_streaming.cr` - Tool input streaming
- `17_web_search_streaming.cr` - Web search with streaming
- `18_typed_tools.cr` - Ruby BaseTool-like typed inputs
- `19_files_api.cr` - File uploads and management
- `20_chatbot.cr` - Interactive chatbot example
- `21_token_counting.cr` - Token counting for context management
- `22_prompt_caching.cr` - Prompt caching for efficiency
- `23_auto_compaction.cr` - Automatic context compaction
- `24_advanced_streaming.cr` - Advanced streaming patterns with `open_stream`
- `25_ollama.cr` - Ollama local model integration
- `26_opus_46.cr` - Claude Opus 4.6 (adaptive thinking, effort, inference geo)
- `27_agent_tools.cr` - Agent tools (bash, text editor, computer use, web fetch, memory)
- `28_advanced_features.cr` - Redacted thinking, cache_control on tools, metadata, extended tool fields, CompactionDelta
- `29_beta_params.cr` - MCP servers, container/skills, tool search tools, beta parameters
- `30_skills_api.cr` - Skills API (CRUD, versions, container integration)
- `31_open_stream.cr` - Richer block-scoped streaming with `open_stream`

Run examples with:

```shell
crystal run examples/01_basic_message.cr
```
## Development

```shell
# Install dependencies
shards install

# Run tests
crystal spec

# Format code
crystal tool format

# Run linter
./bin/ameba

# Type check
crystal build --no-codegen src/anthropic-cr.cr
```
## Contributing

1. Fork it (https://github.com/amscotti/anthropic-cr/fork)
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request

## Contributors

- Anthony Scotti - creator and maintainer
## License

MIT License