# OpenAI Crystal Client
A comprehensive Crystal client for the OpenAI API, auto-generated from the official OpenAPI specification.
## Features
- Full coverage of the OpenAI API (2480 types, 220 endpoints)
- Type-safe request and response objects with JSON serialization
- Server-Sent Events (SSE) streaming support for chat completions
- Multipart form data support for file uploads
- Comprehensive error handling with typed exceptions
- Provider extension system for OpenAI-compatible APIs (DeepSeek, Groq, etc.)
- Auto-generated from OpenAPI spec v2.3.0
## Installation

1. Add the dependency to your `shard.yml`:

   ```yaml
   dependencies:
     openai:
       github: dsisnero/openai
   ```

2. Run `shards install`
## Usage

### Basic Setup

```crystal
require "openai"

# Create a client with your API key
client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Or use a custom base URL (for Azure OpenAI, proxies, etc.)
client = OpenAI::Client.new(
  api_key: ENV["OPENAI_API_KEY"],
  base_url: "https://your-resource.openai.azure.com"
)
```
### Chat Completions

```crystal
# Using helper methods (recommended)
request = OpenAI::CreateChatCompletionRequest.new(model: "gpt-4o")
  .add_system_message("You are a helpful assistant")
  .add_user_message("Hello, how are you?")

response = client.create_chat_completion(request)
puts response.choices.first.message.content
```
### Using the ChatMessages Module

```crystal
# Using the ChatMessages factory module
request = OpenAI::CreateChatCompletionRequest.new(
  model: "gpt-4o",
  messages: [
    OpenAI::ChatMessages.system("You are a helpful assistant"),
    OpenAI::ChatMessages.user("What is 2 + 2?"),
  ] of OpenAI::ChatCompletionRequestMessage
)

response = client.create_chat_completion(request)
puts response.choices.first.message.content
```
### Parsing Messages from JSON

```crystal
# Parse messages from JSON (useful for loading conversations)
json_messages = %([
  {"role": "system", "content": "You are a helpful assistant"},
  {"role": "user", "content": "Hi there!"}
])

request = OpenAI::CreateChatCompletionRequest.new(model: "gpt-4o")
  .add_messages_from_json(json_messages)

response = client.create_chat_completion(request)
puts response.choices.first.message.content
```
### Streaming Chat Completions

```crystal
request = OpenAI::CreateChatCompletionRequest.new(model: "gpt-4o")
  .add_user_message("Tell me a story")
request.stream = true

client.create_chat_completion_stream(request) do |chunk|
  chunk.choices.each do |choice|
    if content = choice.delta.content
      print content
    end
  end
end
puts
```
### Embeddings

```crystal
request = OpenAI::CreateEmbeddingRequest.new(
  model: "text-embedding-3-small",
  input: "The quick brown fox jumps over the lazy dog"
)

response = client.create_embedding(request)
embedding = response.data.first.embedding
puts "Embedding dimension: #{embedding.size}"
```
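Embedding vectors are typically compared with cosine similarity. The helper below is not part of the shard, just a plain Crystal sketch for working with the `embedding` arrays returned by `create_embedding`:

```crystal
# Cosine similarity between two embedding vectors: the dot product
# divided by the product of the vector magnitudes. Returns a value
# in [-1.0, 1.0]; closer to 1.0 means more similar.
def cosine_similarity(a : Array(Float64), b : Array(Float64)) : Float64
  raise ArgumentError.new("size mismatch") unless a.size == b.size
  dot = 0.0
  norm_a = 0.0
  norm_b = 0.0
  a.each_index do |i|
    dot += a[i] * b[i]
    norm_a += a[i] ** 2
    norm_b += b[i] ** 2
  end
  dot / (Math.sqrt(norm_a) * Math.sqrt(norm_b))
end

puts cosine_similarity([1.0, 0.0], [1.0, 0.0]) # identical vectors => 1.0
```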
### Image Generation

```crystal
request = OpenAI::CreateImageRequest.new(
  model: "dall-e-3",
  prompt: "A white siamese cat",
  n: 1,
  size: OpenAI::CreateImageRequestSize::N1024X1024
)

response = client.create_image(request)
puts response.data.first.url
```
### Error Handling

```crystal
begin
  response = client.create_chat_completion(request)
rescue ex : OpenAI::AuthenticationError
  puts "Invalid API key: #{ex.message}"
rescue ex : OpenAI::RateLimitError
  puts "Rate limited: #{ex.message}"
rescue ex : OpenAI::BadRequestError
  puts "Bad request: #{ex.message}"
rescue ex : OpenAI::APIError
  puts "API error: #{ex.message}"
end
```
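A common pattern on top of these typed exceptions is retrying on `RateLimitError` with exponential backoff. The helper below is a sketch, not shard functionality; the attempt count and delay schedule are arbitrary choices:

```crystal
# Hypothetical retry helper (not part of the shard): retries the block
# on OpenAI::RateLimitError with exponential backoff (1s, 2s, 4s, ...),
# re-raising after max_attempts failures.
def with_retries(max_attempts = 3)
  attempt = 0
  loop do
    attempt += 1
    begin
      return yield
    rescue ex : OpenAI::RateLimitError
      raise ex if attempt >= max_attempts
      sleep (2 ** (attempt - 1)).seconds
    end
  end
end

response = with_retries(max_attempts: 5) do
  client.create_chat_completion(request)
end
```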
## OpenAI-Compatible Providers
This client supports OpenAI-compatible APIs through a provider extension system. Providers allow you to override types and methods that differ from the OpenAI spec.
### Using the DeepSeek Provider

```crystal
require "openai"
require "openai/providers/deepseek"

# Create a DeepSeek client
client = OpenAI::Providers::DeepSeek::Client.new(
  api_key: ENV["DEEPSEEK_API_KEY"]
)

# List models (uses DeepSeek-compatible types)
models = client.list_models
models.data.each { |m| puts m.id }

# Chat completions work the same as OpenAI
request = OpenAI::CreateChatCompletionRequest.new(model: "deepseek-chat")
  .add_system_message("You are a helpful assistant")
  .add_user_message("Hello!")

response = client.create_chat_completion(request)
puts response.choices.first.message.content

# Streaming also works
request.stream = true
client.create_chat_completion_stream(request) do |chunk|
  chunk.choices.each do |choice|
    if content = choice.delta.content
      print content
    end
  end
end
```
### Creating a Custom Provider
To add support for another OpenAI-compatible API, create a provider module:
```crystal
# src/openai/providers/my_provider.cr
require "../client"
require "../types"

module OpenAI
  module Providers
    module MyProvider
      BASE_URL = "https://api.myprovider.com/v1"

      # Override types that differ from OpenAI.
      # Example: MyProvider doesn't return a `created` timestamp for models.
      class Model
        include JSON::Serializable

        property id : String

        @[JSON::Field(emit_null: false)]
        property created : Int64? # Made optional

        property object : String
        property owned_by : String

        def initialize(
          @id : String = "",
          @created : Int64? = nil,
          @object : String = "model",
          @owned_by : String = ""
        )
        end
      end

      class ListModelsResponse
        include JSON::Serializable

        property object : String
        property data : Array(Model)

        def initialize(
          @object : String = "list",
          @data : Array(Model) = [] of Model
        )
        end
      end

      # Create a client that inherits from OpenAI::Client
      class Client < OpenAI::Client
        def initialize(
          api_key : String = ENV["MY_PROVIDER_API_KEY"]? || "",
          base_url : String = BASE_URL,
          timeout : Time::Span = OpenAI::Client::DEFAULT_TIMEOUT,
          organization : String? = nil
        )
          super(
            api_key: api_key,
            base_url: base_url,
            timeout: timeout,
            organization: organization
          )
        end

        # Override methods that need different return types
        def list_models : ListModelsResponse
          path = "/models"
          response = request("GET", path)
          ListModelsResponse.from_json(response.body)
        end
      end
    end
  end
end
```
### Provider Extension Guidelines

- **Only override what's different** - most endpoints work without changes
- **Use `@[JSON::Field(emit_null: false)]`** - for optional fields not present in the provider's response
- **Inherit from `OpenAI::Client`** - get all standard functionality for free
- **Use `protected` methods** - `request()` and `request_stream()` are available to subclasses
- **Add conversion methods** - for interoperability with standard OpenAI types
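The last guideline, conversion methods, might look like this sketch for the hypothetical `MyProvider::Model` shown above. It assumes the generated `OpenAI::Model` accepts the standard `id`/`created`/`object`/`owned_by` fields; check the generated types before copying:

```crystal
module OpenAI
  module Providers
    module MyProvider
      class Model
        # Convert to the standard generated type so provider results can
        # be passed to code written against OpenAI::Model. `created` is
        # substituted with 0 when the provider omits it.
        def to_openai : OpenAI::Model
          OpenAI::Model.new(
            id: @id,
            created: @created || 0_i64,
            object: @object,
            owned_by: @owned_by
          )
        end
      end
    end
  end
end
```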
### Known Provider Differences

| Provider | Differences |
|---|---|
| DeepSeek | `Model.created` not returned by `/models` |
| Groq | TBD |
| Together | TBD |
| Anthropic | Different API structure (not directly compatible) |
## API Coverage
The client covers all OpenAI API endpoints organized by category:
| Category | Endpoints | Description |
|---|---|---|
| Chat | 12 | Chat completions, including streaming |
| Completions | 1 | Legacy text completions |
| Embeddings | 1 | Text embeddings |
| Images | 3 | Image generation and editing |
| Audio | 3 | Speech, transcription, translation |
| Files | 5 | File upload and management |
| Fine-tuning | 13 | Model fine-tuning |
| Assistants | 23 | Assistants API |
| Models | 3 | Model listing and info |
| Moderations | 1 | Content moderation |
| Batch | 4 | Batch processing |
| Uploads | 4 | Large file uploads |
| Vector Stores | 16 | Vector storage for RAG |
| Evals | 12 | Model evaluations |
## Development

### Regenerating the Client from OpenAPI Spec
The client is auto-generated from the OpenAPI specification. To regenerate after updating the spec:
#### Quick Rebuild

```sh
crystal run build.cr
```
#### Full Rebuild with Validation

```sh
# 1. Remove old generated files (keeps providers/)
rm -rf src/openai/api src/openai/types src/openai/client.cr src/openai/error.cr \
  src/openai/multipart.cr src/openai/streaming.cr src/openai/types.cr
rm -rf spec/openai

# 2. Regenerate from the OpenAPI spec
crystal run build.cr

# 3. Format generated code
crystal tool format src/openai/ spec/

# 4. Verify compilation
crystal build src/openai.cr --no-codegen

# 5. Run tests
crystal spec

# 6. Type-check the examples (--no-codegen skips execution)
crystal run examples/completion.cr --no-codegen
```
#### Customizing the Generator

The `build.cr` generator can be modified for custom needs:

```crystal
# Key areas in build.cr:

# Type mapping (line ~380)
#   - Modify how OpenAPI types map to Crystal types

# Endpoint parsing (line ~1000)
#   - Customize how API methods are generated

# Chat helpers (line ~1730)
#   - Add/modify convenience methods for chat

# Client generation (line ~2200)
#   - Customize HTTP client behavior
```
### Project Structure

```
src/openai/
  client.cr              # Main HTTP client (generated)
  types.cr               # All generated types (2480+)
  error.cr               # Error hierarchy (generated)
  multipart.cr           # Multipart form encoding (generated)
  streaming.cr           # SSE streaming support (generated)
  api/                   # API methods by category (generated)
    chat.cr
    embeddings.cr
    ...
  types/                 # Type references by category (generated)
    chat.cr
    embeddings.cr
    ...
  providers/             # Provider extensions (NOT generated, manually maintained)
    deepseek.cr
    ...
build.cr                 # Code generator
openapi.documented.yml   # OpenAPI specification
```
### Running Tests

```sh
# Install dependencies
shards install

# Run all tests
crystal spec

# Run a specific test file
crystal spec spec/openai/types/chat_spec.cr
```
### Running Examples

```sh
# Set API keys
export OPENAI_API_KEY="your-openai-key"
export DEEPSEEK_API_KEY="your-deepseek-key"

# Run examples
crystal run examples/completion.cr
crystal run examples/image.cr
crystal run examples/deepseek.cr
crystal run examples/list_models.cr
```
## Contributing
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
### Contributing a Provider

1. Create `src/openai/providers/your_provider.cr`
2. Add an example in `examples/your_provider.cr`
3. Document differences in the README
4. Submit a PR
## Support

- **Issues**: use GitHub Issues for bug reports and feature requests
- **Discussions**: use GitHub Discussions for questions and general discussion
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Changelog
See CHANGELOG.md for a list of changes.
## Contributors
- Dominic Sisneros - creator and maintainer
## Acknowledgments
- OpenAI for their comprehensive API
- The Crystal community for excellent tooling and support
- All contributors who help improve this project