# Latch
File attachments for Crystal. Cache, promote, process, and serve uploads with pluggable storage, metadata extraction, and file variant generation.
- Two-stage uploads. Cache first, promote later for safer form handling.
- File processing. Constrain originals, create variants, run in parallel.
- Avram integration. Attach files to models with a single macro.
- Pluggable storage. FileSystem, S3, and Memory out of the box.
- Metadata extraction. Filename, MIME type, size, and image dimensions.
- Framework-agnostic. Built-in Lucky support, adaptable to Kemal or any other Crystal framework.
The name is short for Lucky Attachment. While originally created for Lucky, Latch can be used with any Crystal framework.
## Table of contents
- Quick start
- Installation
- Configuration
- Uploaders
- Avram integration
- Processors
- Storage backends
- Metadata extractors
- Working with stored files
- Other frameworks
- API docs
## Quick start
Set up your uploader:
```crystal
# src/uploaders/avatar_uploader.cr
struct AvatarProcessor
  include Latch::Processor::Magick

  original resize: "2000x2000>"
  variant thumb, resize: "200x200", gravity: "center"
end

struct AvatarUploader
  include Latch::Uploader

  extract dimensions, using: Latch::Extractor::DimensionsFromMagick
  process versions, using: AvatarProcessor
end
```

```crystal
# src/models/user.cr
class User < BaseModel
  include Latch::Avram::Model

  table do
    attach avatar : AvatarUploader::StoredFile?
  end
end
```

```crystal
# src/operations/save_user.cr
class User::SaveOperation < User::BaseOperation
  attach avatar, process: true
end
```
Upload a file:
```crystal
user = User::SaveOperation.create!(avatar_file: uploaded_file)

user.avatar.url                # => "/uploads/user/1/avatar/a1b2c3d4.jpg"
user.avatar.versions_thumb.url # => "/uploads/user/1/avatar/a1b2c3d4/versions_thumb.jpg"
user.avatar.width              # => 2000
```
## Installation
- Add the dependency to your `shard.yml`:

  ```yaml
  dependencies:
    latch:
      github: wout/latch
  ```

- Run `shards install`

- Require Latch with your framework integration:

  ```crystal
  require "latch"
  require "latch/lucky/avram" # Lucky + Avram
  ```

  Other combinations:

  ```crystal
  require "latch/lucky/uploaded_file" # Lucky without Avram
  require "latch/avram/model"         # Avram without Lucky
  ```
## Configuration
```crystal
Latch.configure do |settings|
  settings.storages["cache"] = Latch::Storage::FileSystem.new(
    directory: "uploads", prefix: "cache"
  )
  settings.storages["store"] = Latch::Storage::FileSystem.new(
    directory: "uploads"
  )
  settings.path_prefix = ":model/:id/:attachment"
end
```
For tests, use the in-memory backend:
```crystal
Latch.configure do |settings|
  settings.storages["cache"] = Latch::Storage::Memory.new
  settings.storages["store"] = Latch::Storage::Memory.new
end
```
## Uploaders
An uploader defines how files are stored and what metadata is extracted.
```crystal
struct ImageUploader
  include Latch::Uploader
end
```
Every uploader automatically extracts `filename`, `mime_type`, and `size`.
These are available as methods on the returned `StoredFile`.
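For example, the default metadata can be read directly off the stored file. A quick sketch: `uploaded_file` stands in for your framework's upload object, and the values in the comments are illustrative, not fixed outputs:

```crystal
stored = ImageUploader.store(uploaded_file)

stored.filename  # e.g. "photo.jpg"
stored.mime_type # e.g. "image/jpeg"
stored.size      # e.g. 102400 (bytes)
```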
### Uploading files
```crystal
# Cache (temporary storage, e.g. between form submissions)
cached = ImageUploader.cache(uploaded_file)

# Promote from cache to permanent storage
stored = ImageUploader.promote(cached)

# Or store directly
stored = ImageUploader.store(uploaded_file)
```
### Custom upload locations

Override `generate_location` to customize where files are stored; `super` returns the default location:
```crystal
struct ImageUploader
  include Latch::Uploader

  def generate_location(uploaded_file, metadata, **options) : String
    date = Time.utc.to_s("%Y/%m/%d")
    File.join("images", date, super)
  end
end
```
### Custom storage keys
By default, uploaders use the `"cache"` and `"store"` storages. Override them with the `storages` macro:
```crystal
struct ImageUploader
  include Latch::Uploader

  storages cache: "tmp", store: "offsite"
end
```
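Each key refers to a storage registered in the configuration. A sketch of a matching setup, reusing the `Latch.configure` API shown earlier (the `"tmp"` and `"offsite"` names and the S3 choice are illustrative):

```crystal
Latch.configure do |settings|
  # Temporary storage under a dedicated prefix
  settings.storages["tmp"] = Latch::Storage::FileSystem.new(
    directory: "uploads", prefix: "tmp"
  )
  # Permanent storage on an S3-compatible service
  settings.storages["offsite"] = Latch::Storage::S3.new(
    bucket: "my-bucket",
    region: "eu-west-1",
    access_key_id: ENV["AWS_ACCESS_KEY_ID"],
    secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
  )
end
```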
## Avram integration
Latch integrates with Avram for model-level file attachments with automatic caching, promotion, and cleanup.
### Model setup
Use the `attach` macro inside a `table` block. The column should be a `jsonb` type in your migration:
```crystal
class User < BaseModel
  include Latch::Avram::Model

  table do
    attach avatar : ImageUploader::StoredFile?
  end
end
```

```crystal
# In your migration
add avatar : JSON::Any?
```
### SaveOperation setup
The `attach` macro registers a file attribute and lifecycle hooks:
```crystal
class User::SaveOperation < User::BaseOperation
  attach avatar
end
```
The file attribute defaults to `avatar_file`. A custom name can be provided:
```crystal
attach avatar, field_name: "avatar_upload"
```
For nilable attachments, a `delete_avatar` attribute is added automatically:
```crystal
User::SaveOperation.update!(user, delete_avatar: true)
```
### Processing after upload
To run processors after promotion, pass `process: true`:
```crystal
attach avatar, process: true
```
For background processing, pass a block instead. For example, using Mel:
```crystal
attach avatar do |record|
  User::AvatarProcessingJob.run(record_id: record.id)
end
```
The background job:
```crystal
struct User::AvatarProcessingJob
  include Mel::Job::Now

  def initialize(@record_id : Int64)
  end

  def run
    user = UserQuery.find(@record_id)
    # For nilable attachments:
    user.avatar.try(&.process)
    # Otherwise simply:
    user.avatar.process
  end
end
```
### Validating attachments
Validate file size and MIME type in a `before_save` block:
```crystal
class User::SaveOperation < User::BaseOperation
  attach avatar

  before_save do
    validate_file_size_of avatar_file, max: 5_000_000
    validate_file_mime_type_of avatar_file, in: %w[image/png image/jpeg image/webp]
  end
end
```
MIME types can also be validated with a pattern:
```crystal
validate_file_mime_type_of avatar_file, with: /image\/.*/
```
### Upload lifecycle
- **Before save** the file is cached to temporary storage
- **After commit** the cached file is promoted to permanent storage
- **After promotion** processors run (if configured)
- **On update** the old file is replaced
- **On delete** the attached file is removed
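Roughly the same flow can be driven by hand with the uploader API from the Uploaders section. A sketch of what the operation does for you, with error handling and record bookkeeping omitted:

```crystal
# Before save: cache the upload to temporary storage
cached = AvatarUploader.cache(uploaded_file)

# After commit: promote the cached file to permanent storage
stored = AvatarUploader.promote(cached)

# After promotion: run any registered processors
stored.process

# On delete: remove the file from storage
stored.delete
```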
## Processors
Processors transform uploaded files into variants and can optionally modify the original. Processing is decoupled from uploading, runs in parallel for variants, and can be triggered inline or in a background job.
### ImageMagick processor
The built-in `Latch::Processor::Magick` module wraps `magick convert`. Define variants with compile-time validated options:
```crystal
struct AvatarProcessor
  include Latch::Processor::Magick

  original resize: "2000x2000>"
  variant large, resize: "800x800"
  variant thumb, resize: "200x200", gravity: "center"
end
```
Typos are caught at compile time. All built-in options are optional, but custom processors can declare required options.
#### Available options
- `auto_orient: true` (Bool) → fix orientation from EXIF data
- `background: "white"` (String) → background color, e.g. `"white"`, `"transparent"`
- `colorspace: "sRGB"` (String) → convert color model, e.g. `"sRGB"`, `"Gray"`
- `crop: "200x200+10+10"` (String) → cut a region
- `density: 72` (Int32 | String) → resolution in DPI, e.g. `72` or `"72x72"`
- `extent: "800x600"` (String) → pad/canvas size
- `flatten: true` (Bool) → merge layers into one
- `gaussian_blur: "0x3"` (String) → blur effect
- `gravity: "center"` (String) → anchor point, e.g. `"center"`, `"north"`
- `interlace: "Plane"` (String) → progressive rendering
- `quality: 85` (Int32 | String) → compression quality
- `resize: "800x600"` (String) → scale to fit, e.g. `"800x600"`, `"200x200>"`
- `rotate: 90` (Int32 | String) → rotate by degrees
- `sampling_factor: "4:2:0"` (String) → chroma subsampling
- `sharpen: "0x1"` (String) → sharpen
- `strip: true` (Bool) → remove all metadata and profiles
- `thumbnail: "200x200"` (String) → like resize but strips profiles for smaller files
> [!IMPORTANT]
> Requires ImageMagick to be installed.
### FFmpeg processor
The built-in `Latch::Processor::FFmpeg` module wraps `ffmpeg` for video and audio transformations:
```crystal
struct VideoProcessor
  include Latch::Processor::FFmpeg

  original video_codec: "libx264", crf: "23", preset: "fast"
  variant preview, scale: "640:-1", video_codec: "libx264", crf: "28"
  variant thumb, frames: "1", format: "image2", scale: "320:-1"
end
```
#### Available options
- `audio_bitrate: "128k"` (String) → audio bitrate
- `audio_codec: "aac"` (String) → audio codec, e.g. `"aac"`, `"libopus"`
- `audio_filter: "volume=0.5"` (String) → custom audio filter
- `crf: 23` (Int32 | String) → constant rate factor (quality)
- `duration: 10` (Int32 | String) → max duration, e.g. `10` or `"00:01:30"`
- `format: "webm"` (String) → output format, e.g. `"mp4"`, `"webm"`, `"image2"`
- `frame_rate: 30` (Int32 | String) → output frame rate
- `frames: 1` (Int32 | String) → number of frames to output (for thumbnails)
- `no_audio: true` (Bool) → strip audio track
- `preset: "fast"` (String) → encoding speed/quality, e.g. `"fast"`, `"slow"`
- `scale: "1280:720"` (String) → resize, e.g. `"1280:720"`, `"-1:480"`
- `start: "00:00:05"` (String) → start time
- `video_bitrate: "1M"` (String) → video bitrate, e.g. `"1M"`, `"500k"`
- `video_codec: "libx264"` (String) → video codec, e.g. `"libx264"`, `"libx265"`
- `video_filter: "transpose=1"` (String) → custom video filter
> [!IMPORTANT]
> Requires FFmpeg to be installed.
### Vips processor
The built-in `Latch::Processor::Vips` module uses `vipsthumbnail` for resize operations and `vips copy` for metadata/format changes:
```crystal
struct AvatarProcessor
  include Latch::Processor::Vips

  original resize: "2000x2000>", strip: true
  variant large, resize: "800x800"
  variant thumb, resize: "200x200", crop: true, quality: 85
end
```
#### Available options
- `auto_orient: true` (Bool) → fix orientation from EXIF data
- `crop: true` (Bool) → crop to fill instead of shrink-to-fit
- `format: "webp"` (String) → output format, e.g. `"webp"`, `"png"`
- `linear: true` (Bool) → process in linear color space (higher quality)
- `quality: 85` (Int32 | String) → JPEG/WebP compression quality (1-100)
- `resize: "200x200"` (String) → bounding box, e.g. `"200x200"`, `"800x"`, `"2000x2000>"`
- `smartcrop: "attention"` (String) → smart crop mode, e.g. `"attention"`, `"entropy"`
- `strip: true` (Bool) → remove all metadata and profiles
> [!IMPORTANT]
> Requires libvips to be installed.
### Processing the original
The `original` macro processes the uploaded file in place without creating a copy. Variants are always processed first, so they use the maximum available quality.
```crystal
struct AvatarProcessor
  include Latch::Processor::Magick

  original resize: "2000x2000>"
end
```
> [!NOTE]
> If `original` is not declared, the uploaded file remains as-is.
### Registering and running processors
Register a processor on an uploader with the `process` macro:
```crystal
struct AvatarUploader
  include Latch::Uploader

  process versions, using: AvatarProcessor
end
```
Processing runs separately from uploading:
```crystal
stored = AvatarUploader.store(uploaded_file)
stored.process
```
Variant accessors are generated on `StoredFile`, prefixed with the processor name:
```crystal
stored.versions_large.url     # => "/uploads/abc123/versions_large.jpg"
stored.versions_thumb.url     # => "/uploads/abc123/versions_thumb.jpg"
stored.versions_thumb.exists? # => true
```
### Custom processors
Create a module with `@[Latch::VariantOptions(...)]` and use the `process` macro to define per-variant logic. The block should return an `IO`:
```crystal
@[Latch::VariantOptions(quality: Int32)]
module MyQualityProcessor
  include Latch::Processor

  process do
    do_your_thing_with_the(tempfile, variant_options) # return an IO
  end
end

struct QualityProcessor
  include MyQualityProcessor

  variant high, quality: 95
  variant low, quality: 30
end
```
The block runs with `stored_file`, `storage`, `name`, `tempfile`, `variant_name`, and `variant_options` in scope.
For full control, bypass the `process` macro and generate `self.process` directly with an `included` macro:
```crystal
@[Latch::VariantOptions(quality: Int32)]
module MyQualityProcessor
  include Latch::Processor

  macro included
    def self.process(
      stored_file : Latch::StoredFile,
      storage : Latch::Storage,
      name : String,
      **options,
    ) : Nil
      stored_file.download do |tempfile|
        VARIANTS.each do |variant_name, variant_options|
          location = stored_file.variant_location("\#{name}_\#{variant_name}")
          io = do_your_thing_with_the(tempfile, variant_options)
          storage.upload(io, location)
        end
      end
    end
  end
end
```
## Storage backends
### FileSystem
```crystal
Latch::Storage::FileSystem.new(
  directory: "uploads",
  prefix: "cache", # optional subdirectory
  clean: true,     # clean empty parent dirs on delete (default)
  permissions: File::Permissions.new(0o644),
  directory_permissions: File::Permissions.new(0o755)
)
```
### S3
Works with AWS S3 and any S3-compatible service (RustFS, Tigris, Cloudflare R2):
> [!NOTE]
> RustFS is the open-source successor to MinIO, whose repository has been archived.
```crystal
Latch::Storage::S3.new(
  bucket: "my-bucket",
  region: "eu-west-1",
  access_key_id: ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
  endpoint: "http://localhost:9000", # optional, for S3-compatible services
  prefix: "uploads",                 # optional key prefix
  public: false,                     # set to true for public-read ACL
  upload_options: {                  # optional default headers
    "Cache-Control" => "max-age=31536000",
  }
)
```
> [!NOTE]
> S3 storage requires the `awscr-s3` shard. Add it to your `shard.yml`:
>
> ```yaml
> dependencies:
>   awscr-s3:
>     github: taylorfinnell/awscr-s3
> ```
Presigned URLs are supported:
```crystal
stored_file.url(expires_in: 1.hour)
```
### Memory
In-memory storage for testing:
```crystal
storage = Latch::Storage::Memory.new(
  base_url: "https://cdn.example.com" # optional
)

storage.clear! # reset between tests
```
### Custom storage
Inherit from `Latch::Storage` and implement five methods:
```crystal
class MyStorage < Latch::Storage
  def upload(io : IO, id : String, **options) : Nil
  end

  def open(id : String, **options) : IO
  end

  def exists?(id : String) : Bool
  end

  def url(id : String, **options) : String
  end

  def delete(id : String) : Nil
  end
end
```
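As a sketch of that contract, here is a minimal in-memory backend built on a `Hash`. The `base_url` argument and URL shape are illustrative assumptions, not part of the required interface:

```crystal
class HashStorage < Latch::Storage
  def initialize(@base_url : String = "")
    @files = {} of String => Bytes
  end

  # Read the whole IO into memory under the given id
  def upload(io : IO, id : String, **options) : Nil
    @files[id] = io.getb_to_end
  end

  def open(id : String, **options) : IO
    IO::Memory.new(@files[id])
  end

  def exists?(id : String) : Bool
    @files.has_key?(id)
  end

  def url(id : String, **options) : String
    "#{@base_url}/#{id}"
  end

  def delete(id : String) : Nil
    @files.delete(id)
  end
end
```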
## Metadata extractors
### Built-in extractors
Every uploader registers three extractors by default:
- `FilenameFromIO` (`filename`) → original filename from the upload
- `MimeFromIO` (`mime_type`) → MIME type from the Content-Type header
- `SizeFromIO` (`size`) → file size in bytes
Additional extractors can be registered with the `extract` macro:
- `MimeFromExtension` (`mime_type`) → MIME type from the file extension
- `MimeFromFile` (`mime_type`) → requires the `file` CLI tool
- `DimensionsFromMagick` (`width`, `height`) → requires `magick` or `identify`
- `DimensionsFromVips` (`width`, `height`) → requires `vipsheader`
```crystal
struct ImageUploader
  include Latch::Uploader

  extract mime_type, using: Latch::Extractor::MimeFromFile
  extract dimensions, using: Latch::Extractor::DimensionsFromMagick
end
```
### Custom extractors
Create a struct that includes `Latch::Extractor`:
```crystal
struct PageCountExtractor
  include Latch::Extractor

  def extract(uploaded_file, metadata, **options) : Int32?
    count_pages(uploaded_file.tempfile)
  end
end
```
Register it and access the value on the stored file:
```crystal
struct PdfUploader
  include Latch::Uploader

  extract pages, using: PageCountExtractor
end
```

```crystal
stored = PdfUploader.store(uploaded_file)
stored.pages # => 24
```
## Working with stored files
`StoredFile` objects are JSON-serializable and provide convenience methods for accessing, downloading, and streaming files:
```crystal
stored.url       # storage URL
stored.exists?   # check existence
stored.extension # file extension
stored.delete    # remove from storage

stored.open { |io| io.gets_to_end }          # read content
stored.download { |tempfile| tempfile.path } # download to tempfile
stored.stream(response.output)               # stream to IO
```
Each uploader generates its own `StoredFile` subclass, which can be extended with custom methods:
```crystal
struct ImageUploader
  include Latch::Uploader

  # This extractor extracts `width` and `height` and creates methods for them
  extract dimensions, using: Latch::Extractor::DimensionsFromMagick

  class StoredFile
    def ratio : Float64
      width.to_f / height
    end
  end
end
```

```crystal
stored = ImageUploader.store(uploaded_file)
stored.ratio # => 1.5
```
`StoredFile` serializes to a format compatible with Shrine. Values from registered extractors are also stored in the `metadata` object:
```json
{
  "id": "uploads/a1b2c3d4.jpg",
  "storage": "store",
  "metadata": {
    "filename": "photo.jpg",
    "size": 102400,
    "mime_type": "image/jpeg",
    "width": 2000,
    "height": 1333
  }
}
```
## Other frameworks
Latch works with any Crystal framework. Include the `Latch::UploadedFile` module in your framework's upload class:
```crystal
module Latch::UploadedFile
  abstract def tempfile : File
  abstract def filename : String

  # Optional overrides with sensible defaults:
  # def path : String          -> tempfile.path
  # def content_type : String? -> nil
  # def size : UInt64          -> tempfile.size
end
```
### Kemal example
```crystal
require "kemal"
require "latch"

struct Kemal::FileUpload
  include Latch::UploadedFile

  def filename : String
    @filename || "upload"
  end

  def content_type : String?
    headers["Content-Type"]?
  end
end

post "/upload" do |env|
  upload = env.params.files["image"]
  stored = ImageUploader.store(upload)
  stored.url
end
```
## API docs
Online API documentation is available at [wout.github.io/latch](https://wout.github.io/latch).
## Contributing
- Fork it (https://github.com/wout/latch/fork)
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
## Contributors
- Wout - creator and maintainer