# shrine.cr

Shrine is a toolkit for file attachments in Crystal applications. It is heavily inspired by Shrine for Ruby.
## Installation

1. Add the dependency to your `shard.yml`:

   ```yaml
   dependencies:
     shrine:
       github: jetrockets/shrine.cr
   ```

2. Run `shards install`
## Usage

Shrine.cr is under active development.

First, configure Shrine storages:

```crystal
require "shrine"

Shrine.configure do |config|
  config.storages["cache"] = Shrine::Storage::FileSystem.new("uploads", prefix: "cache")
  config.storages["store"] = Shrine::Storage::FileSystem.new("uploads")
end
```

Upload files directly:

```crystal
Shrine.upload(file, "store")
```

Custom filenames are supported via metadata:

```crystal
Shrine.upload(file, "store", metadata: { "filename" => "foo.bar" })
```
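`Shrine.upload` returns a `Shrine::UploadedFile`. A sketch of inspecting the result, assuming the accessors listed under Feature Progress below (`#original_filename`, `#size`, `#mime_type`, `#url`, `#exists?`):

```crystal
# Accessor names are taken from the feature list below (assumed API),
# not verified against a specific shrine.cr release.
uploaded = Shrine.upload(file, "store")

uploaded.original_filename # filename recorded in metadata, e.g. "foo.bar"
uploaded.size              # size in bytes
uploaded.mime_type         # MIME type, if a MIME analyzer is loaded
uploaded.url               # URL served by the backing storage
uploaded.exists?           # whether the file is still present in storage
```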
## Custom uploaders

Create custom uploader classes by inheriting from `Shrine`:

```crystal
class FileImport::AssetUploader < Shrine
  def generate_location(io : IO | UploadedFile, metadata, context, **options)
    name = super(io, metadata, **options)

    File.join("imports", context[:model].id.to_s, name)
  end
end

FileImport::AssetUploader.upload(file, "store", context: { model: YOUR_ORM_MODEL })
```
## S3 Storage

Amazon S3 and S3-compatible storage support.

### Basic Setup

```crystal
require "awscr-s3"

# Create an S3 client
client = Awscr::S3::Client.new(
  region: "us-east-1",
  aws_access_key: ENV["AWS_ACCESS_KEY"],
  aws_secret_key: ENV["AWS_SECRET_KEY"]
)

# Configure Shrine storage
Shrine.configure do |config|
  config.storages["store"] = Shrine::Storage::S3.new(
    bucket: "my-app-bucket",
    client: client,
    prefix: "files"
  )
end
```
### Public Access

Make uploaded files publicly accessible:

```crystal
storage = Shrine::Storage::S3.new(
  bucket: "my-app-bucket",
  client: client,
  public: true # adds the "x-amz-acl: public-read" header to uploads
)
```
### Custom Upload Options

Add extra headers to every upload:

```crystal
storage = Shrine::Storage::S3.new(
  bucket: "my-app-bucket",
  client: client,
  upload_options: {
    "x-amz-storage-class"          => "STANDARD_IA",
    "x-amz-server-side-encryption" => "AES256"
  }
)
```
### S3-Compatible Services

Works with any S3-compatible service (DigitalOcean Spaces, MinIO, Wasabi):

```crystal
client = Awscr::S3::Client.new(
  region: "nyc3",
  aws_access_key: ENV["DO_SPACES_KEY"],
  aws_secret_key: ENV["DO_SPACES_SECRET"],
  endpoint: "https://nyc3.digitaloceanspaces.com"
)

storage = Shrine::Storage::S3.new(
  bucket: "my-space",
  client: client,
  public: true
)
```
### Presigned URLs

Generate temporary URLs for secure access:

```crystal
# 1-hour access
url = storage.url("files/document.pdf", expires: 3600)

# 1-day access
url = storage.url("files/document.pdf", expires: 86400)
```
### File Operations

```crystal
storage = Shrine::Storage::S3.new(bucket: "my-bucket", client: client)

# Upload
storage.upload(file, "photos/vacation.jpg")

# Check existence
storage.exists?("photos/vacation.jpg") # => true

# Generate a presigned URL
storage.url("photos/vacation.jpg")

# Download
io = storage.open("photos/vacation.jpg")
content = io.gets_to_end

# Delete
storage.delete("photos/vacation.jpg")
```
## SQLite storage

SQLite storage is built-in, with no additional dependencies required:

```crystal
storage = Shrine::Storage::SQLite.new("files.db")
storage.upload(file, "document.pdf")
```
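Reading a stored file back should go through the same storage interface as the other backends. A minimal sketch, assuming SQLite storage implements `exists?`, `open`, and `delete` like the FileSystem and S3 backends shown above:

```crystal
# Assumption: SQLite storage exposes the same exists?/open/delete
# interface as the FileSystem and S3 backends.
storage = Shrine::Storage::SQLite.new("files.db")

if storage.exists?("document.pdf")
  io = storage.open("document.pdf") # read the stored blob back as an IO
  content = io.gets_to_end
end

storage.delete("document.pdf") # remove the file's row from the database
```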
## ORM usage examples

ORM adapters are in development.

### Lucky (Avram)

LuckyCast tutorial on file uploads

### Granite

```crystal
class FileImport < Granite::Base
  connection pg
  table file_imports

  column id : Int64, primary: true
  column asset_data : Shrine::UploadedFile, converter: Granite::Converters::Json(Shrine::UploadedFile, JSON::Any)

  after_save do
    if @asset_changed && @asset_data
      @asset_data = FileImport::AssetUploader.store(@asset_data.not_nil!, move: true, context: { model: self })
      @asset_changed = false
      save!
    end
  end

  def asset=(upload : Amber::Router::File)
    @asset_data = FileImport::AssetUploader.cache(upload.file, metadata: { filename: upload.filename })
    @asset_changed = true
  end
end
```
### Jennifer

```crystal
class FileImport < Jennifer::Model::Base
  @asset_changed : Bool | Nil

  with_timestamps

  mapping(
    id: Primary32,
    asset_data: JSON::Any?,
    created_at: Time?,
    updated_at: Time?
  )

  after_save :move_to_store

  def asset=(upload : Amber::Router::File)
    self.asset_data = JSON.parse(FileImport::AssetUploader.cache(upload.file, metadata: { filename: upload.filename }).to_json)
    asset_changed! if asset_data
  end

  def asset
    Shrine::UploadedFile.from_json(asset_data.not_nil!.to_json) if asset_data
  end

  def asset_changed?
    @asset_changed || false
  end

  private def asset_changed!
    @asset_changed = true
  end

  private def move_to_store
    if asset_changed?
      self.asset_data = JSON.parse(FileImport::AssetUploader.store(asset.not_nil!, move: true, context: { model: self }).to_json)
      @asset_changed = false
      save!
    end
  end
end
```
## Plugins

Shrine.cr provides a plugin system, similar to Shrine.rb's, for extending uploader functionality.

### Determine MIME Type

Extracts the MIME type from uploaded files using one of several analyzers:

```crystal
require "shrine/plugins/determine_mime_type"

class Uploader < Shrine
  load_plugin(
    Shrine::Plugins::DetermineMimeType,
    analyzer: Shrine::Plugins::DetermineMimeType::Tools::File
  )

  finalize_plugins!
end
```

#### Analyzers

| Name | Description |
|------|-------------|
| `File` | Uses the `file` utility to determine the MIME type from content (default) |
| `Mime` | Uses `MIME.from_filename` to determine the type from the file extension |
| `ContentType` | Reads the `#content_type` attribute of the IO object |
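Switching analyzers is just a matter of passing a different tool when loading the plugin. A sketch using the extension-based `Mime` analyzer from the table above (the tool constant is assumed to follow the same naming as `Tools::File`):

```crystal
require "shrine/plugins/determine_mime_type"

# Determine the MIME type from the file extension via MIME.from_filename,
# avoiding the shell-out to the `file` utility that the default uses.
class ExtensionUploader < Shrine
  load_plugin(
    Shrine::Plugins::DetermineMimeType,
    analyzer: Shrine::Plugins::DetermineMimeType::Tools::Mime
  )

  finalize_plugins!
end
```

The `Mime` analyzer is cheaper but trusts the extension; prefer the content-based `File` default when uploads come from untrusted clients.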
### Add Metadata

Add custom metadata to uploaded files:

```crystal
require "base64"
require "shrine/plugins/add_metadata"

class Uploader < Shrine
  load_plugin(Shrine::Plugins::AddMetadata)

  add_metadata :signature, -> {
    Base64.encode(io.gets_to_end)
  }

  finalize_plugins!
end
```

Access custom metadata:

```crystal
image.metadata["signature"]
```

Extract multiple metadata values at once:

```crystal
class Uploader < Shrine
  load_plugin(Shrine::Plugins::AddMetadata)

  add_metadata :multiple_values, -> {
    text = io.gets_to_end

    Shrine::UploadedFile::MetadataType{
      "custom_1" => text,
      "custom_2" => text * 2
    }
  }

  finalize_plugins!
end
```
### Store Dimensions

Extract image dimensions and store them in metadata. Requires fastimage.cr:

```crystal
require "fastimage"
require "shrine/plugins/store_dimensions"

class Uploader < Shrine
  load_plugin(Shrine::Plugins::StoreDimensions,
    analyzer: Shrine::Plugins::StoreDimensions::Tools::FastImage)

  finalize_plugins!
end
```

Access dimensions:

```crystal
image.metadata["width"]
image.metadata["height"]
```

#### Analyzers

| Name | Description |
|------|-------------|
| `FastImage` | Uses the FastImage library (default) |
| `Identify` | Wraps ImageMagick's `identify` command |
## Storage Backends

| Storage | Status |
|------------|---------|
| FileSystem | Stable |
| Memory | Stable |
| S3 | Stable |
| SQLite | New |
| Redis | Planned |
| PostgreSQL | Planned |
## Feature Progress

- Shrine core
- `Shrine::UploadedFile`
  - `#original_filename`
  - `#extension`
  - `#size`
  - `#mime_type`
  - `#close`
  - `#url`
  - `#exists?`
  - `#open`
  - `#download`
  - `#stream`
  - `#replace`
  - `#delete`
- `Shrine::Attacher`
- `Shrine::Attachment`
- Storage backends
  - FileSystem
  - Memory
  - S3
  - SQLite
- Custom uploaders
- Derivatives
- ORM adapters
  - Granite
  - Jennifer
  - Avram (Lucky)
  - Crecto
- Plugin system
- Background processing
## Contributing

1. Fork it (https://github.com/jetrockets/shrine.cr/fork)
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
## Contributors

- Igor Alexandrov - creator and maintainer
- Arina Shmeleva - S3 Storage
- Mick Wout - Plugins and Lucky integration
- slick_phantom - SQLite storage
## License

MIT License
