Latch

File attachments for Crystal. Cache, promote, process, and serve uploads with pluggable storage, metadata extraction, and file variant generation.

The name is short for Lucky Attachment. While originally created for Lucky, Latch can be used with any Crystal framework.



Quick start

Set up your uploader:

# src/uploaders/avatar_uploader.cr

struct AvatarUploader
  include Latch::Uploader

  struct VersionsProcessor
    include Latch::Processor::Magick

    original resize: "2000x2000>"
    variant thumb, resize: "200x200", crop: "200x200+0+0", gravity: "center"
  end

  extract dimensions, using: Latch::Extractor::DimensionsFromMagick
  process versions, using: VersionsProcessor
end

# src/models/user.cr

class User < BaseModel
  include Latch::Avram::Model

  table do
    attach avatar : AvatarUploader::StoredFile?
  end
end

# src/operations/save_user.cr

class User::SaveOperation < User::BaseOperation
  attach avatar, process: true
end

Upload a file:

user = User::SaveOperation.create!(avatar_file: uploaded_file)
user.avatar.url # => "/uploads/user/1/avatar/a1b2c3d4.jpg"
user.avatar.versions_thumb.url # => "/uploads/user/1/avatar/a1b2c3d4/versions_thumb.jpg"
user.avatar.width # => 2000

Installation

  1. Add the dependency to your shard.yml:

    dependencies:
      latch:
        github: wout/latch

  2. Run shards install

  3. Require Latch with your framework integration:

    require "latch"
    require "latch/lucky/avram" # Lucky + Avram

    Other combinations:

    require "latch/lucky/uploaded_file" # Lucky without Avram
    require "latch/avram/model"         # Avram without Lucky

Configuration

Latch.configure do |settings|
  settings.storages["cache"] = Latch::Storage::FileSystem.new(
    directory: "uploads", prefix: "cache"
  )
  settings.storages["store"] = Latch::Storage::FileSystem.new(
    directory: "uploads"
  )
  settings.path_prefix = ":model/:id/:attachment"
end

For tests, use the in-memory backend:

Latch.configure do |settings|
  settings.storages["cache"] = Latch::Storage::Memory.new
  settings.storages["store"] = Latch::Storage::Memory.new
end

Uploaders

An uploader defines how files are stored and what metadata is extracted.

struct ImageUploader
  include Latch::Uploader
end

Every uploader automatically extracts filename, mime_type, and size. These are available as methods on the returned StoredFile.
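
For example (values are illustrative; uploaded_file is whatever upload object your framework hands you):

```crystal
stored = ImageUploader.store(uploaded_file)
stored.filename  # e.g. "photo.jpg"
stored.mime_type # e.g. "image/jpeg"
stored.size      # e.g. 102400
```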

Uploading files

# Cache (temporary storage, e.g. between form submissions)
cached = ImageUploader.cache(uploaded_file)

# Promote from cache to permanent storage
stored = ImageUploader.promote(cached)

# Or store directly
stored = ImageUploader.store(uploaded_file)

Custom upload locations

struct ImageUploader
  include Latch::Uploader

  def generate_location(uploaded_file, metadata, **options) : String
    date = Time.utc.to_s("%Y/%m/%d")
    File.join("images", date, super)
  end
end

Custom storage keys

By default, uploaders use "cache" and "store". Override with the storages macro:

struct ImageUploader
  include Latch::Uploader

  storages cache: "tmp", store: "offsite"
end

Avram integration

Latch integrates with Avram for model-level file attachments with automatic caching, promotion, and cleanup.

Model setup

Use the attach macro inside a table block. The column should be a jsonb type in your migration:

class User < BaseModel
  include Latch::Avram::Model

  table do
    attach avatar : ImageUploader::StoredFile?
  end
end

# In your migration
add avatar : JSON::Any?

SaveOperation setup

The attach macro registers a file attribute and lifecycle hooks:

class User::SaveOperation < User::BaseOperation
  attach avatar
end

The file attribute defaults to avatar_file. A custom name can be provided:

attach avatar, field_name: "avatar_upload"

For nilable attachments, a delete_avatar attribute is added automatically:

User::SaveOperation.update!(user, delete_avatar: true)

Processing after upload

To run processors after promotion, pass process: true:

attach avatar, process: true

For background processing, pass a block instead. For example, using Mel:

attach avatar do |record|
  User::AvatarProcessingJob.run(record_id: record.id)
end

The background job:

struct User::AvatarProcessingJob
  include Mel::Job::Now

  def initialize(@record_id : Int64)
  end

  def run
    user = UserQuery.find(@record_id)

    # For nilable attachments:
    user.avatar.try(&.process)

    # Otherwise simply:
    user.avatar.process
  end
end

Validating attachments

Validate file size and MIME type in a before_save block:

class User::SaveOperation < User::BaseOperation
  attach avatar

  before_save do
    validate_file_size_of avatar_file, max: 5_000_000
    validate_file_mime_type_of avatar_file, in: %w[image/png image/jpeg image/webp]
  end
end

MIME types can also be validated with a pattern:

validate_file_mime_type_of avatar_file, with: /image\/.*/

Upload lifecycle

  1. Before save the file is cached to temporary storage
  2. After commit the cached file is promoted to permanent storage
  3. After promotion processors run (if configured)
  4. On update the old file is replaced
  5. On delete the attached file is removed
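
Outside of Avram, the same lifecycle can be driven by hand with the uploader API. A sketch of the equivalent calls (step 4, replacing on update, is simply a new cache/promote cycle):

```crystal
cached = AvatarUploader.cache(uploaded_file) # 1. cache to temporary storage
stored = AvatarUploader.promote(cached)      # 2. promote to permanent storage
stored.process                               # 3. run registered processors
stored.delete                                # 5. remove from storage
```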

Processors

Processors transform uploaded files into variants and can optionally modify the original. Processing is decoupled from uploading, runs in parallel for variants, and can be triggered inline or in a background job.

ImageMagick processor

The built-in Latch::Processor::Magick module wraps the magick convert command. Define variants with compile-time validated options:

struct AvatarProcessor
  include Latch::Processor::Magick

  original resize: "2000x2000>"
  variant large, resize: "800x800"
  variant thumb, resize: "200x200", crop: "200x200+0+0", gravity: "center"
end

Typos are caught at compile time. All built-in options are optional, but custom processors can declare required options.


[!IMPORTANT] Requires ImageMagick to be installed.

FFmpeg processor

The built-in Latch::Processor::FFmpeg module wraps ffmpeg for video and audio transformations:

struct VideoProcessor
  include Latch::Processor::FFmpeg

  original video_codec: "libx264", crf: "23", preset: "fast"
  variant preview, scale: "640:-1", video_codec: "libx264", crf: "28"
  variant thumb, frames: "1", format: "image2", scale: "320:-1"
end

[!IMPORTANT] Requires FFmpeg to be installed.

Vips processor

The built-in Latch::Processor::Vips module uses vipsthumbnail for resize operations and vips copy for metadata/format changes:

struct AvatarProcessor
  include Latch::Processor::Vips

  original resize: "2000x2000>", strip: true
  variant large, resize: "800x800"
  variant thumb, resize: "200x200", crop: true, quality: 85
end

[!IMPORTANT] Requires libvips to be installed.

Processing the original

The original macro processes the uploaded file in place without creating a copy. Variants are always processed first so they use the maximum available quality.

struct AvatarProcessor
  include Latch::Processor::Magick

  original resize: "2000x2000>"
end

[!NOTE] If original is not declared, the uploaded file remains as-is.

Registering and running processors

Register a processor on an uploader with the process macro:

struct AvatarUploader
  include Latch::Uploader

  process versions, using: AvatarProcessor
end

Processing runs separately from uploading:

stored = AvatarUploader.store(uploaded_file)
stored.process

Variant accessors are generated on StoredFile, prefixed with the processor name:

stored.versions_large.url     # => "/uploads/abc123/versions_large.jpg"
stored.versions_thumb.url     # => "/uploads/abc123/versions_thumb.jpg"
stored.versions_thumb.exists? # => true

Nilable accessors are also available, returning nil if the variant hasn't been processed yet:

# Returns nil before processing, the StoredFile after
stored.versions_thumb?.try(&.url)

# Useful in templates
if thumb = user.avatar.versions_thumb?
  img src: thumb.url
end

# The non-nilable accessor always returns a StoredFile,
# even if the file doesn't exist in storage yet
user.avatar.versions_thumb.url

Custom processors

Create a module with @[Latch::VariantOptions(...)] and use the process macro to define per-variant logic. The block should return an IO:

@[Latch::VariantOptions(quality: Int32)]
module MyQualityProcessor
  include Latch::Processor

  process do
    do_your_thing_with_the(tempfile, variant_options) # return an IO
  end
end

struct QualityProcessor
  include MyQualityProcessor

  variant high, quality: 95
  variant low, quality: 30
end

The block runs with stored_file, storage, name, tempfile, variant_name, and variant_options in scope.

For full control, bypass the process macro and generate self.process directly with an included macro:

@[Latch::VariantOptions(quality: Int32)]
module MyQualityProcessor
  include Latch::Processor

  macro included
    def self.process(
      stored_file : Latch::StoredFile,
      storage : Latch::Storage,
      name : String,
      **options,
    ) : Nil
      stored_file.download do |tempfile|
        VARIANTS.each do |variant_name, variant_options|
          location = stored_file.variant_location("\#{name}_\#{variant_name}")
          io = do_your_thing_with_the(tempfile, variant_options)
          storage.upload(io, location)
        end
      end
    end
  end
end

Storage backends

FileSystem

Latch::Storage::FileSystem.new(
  directory: "uploads",
  prefix: "cache",                # optional subdirectory
  clean: true,                    # clean empty parent dirs on delete (default)
  permissions: File::Permissions.new(0o644),
  directory_permissions: File::Permissions.new(0o755)
)

S3

Works with AWS S3 and any S3-compatible service (RustFS, Tigris, Cloudflare R2):

[!NOTE] RustFS is the open-source successor to MinIO, whose repository has been archived.

Latch::Storage::S3.new(
  bucket: "my-bucket",
  region: "eu-west-1",
  access_key_id: ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
  endpoint: "http://localhost:9000",   # optional, for S3-compatible services
  prefix: "uploads",                   # optional key prefix
  public: false,                       # set to true for public-read ACL
  upload_options: {                    # optional default headers
    "Cache-Control" => "max-age=31536000",
  }
)

[!NOTE] S3 storage requires the awscr-s3 shard. Add it to your shard.yml:

dependencies:
  awscr-s3:
    github: taylorfinnell/awscr-s3

Presigned URLs are supported:

stored_file.url(expires_in: 1.hour)

Memory

In-memory storage for testing:

storage = Latch::Storage::Memory.new(
  base_url: "https://cdn.example.com"  # optional
)
storage.clear!  # reset between tests
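
To reset storage between specs, clear every registered Memory backend in a before_each hook. A sketch, assuming the Habitat-style Latch.settings accessor that pairs with Latch.configure:

```crystal
# spec/spec_helper.cr
Spec.before_each do
  Latch.settings.storages.each_value do |storage|
    storage.clear! if storage.is_a?(Latch::Storage::Memory)
  end
end
```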

Custom storage

Inherit from Latch::Storage and implement five methods:

class MyStorage < Latch::Storage
  def upload(io : IO, id : String, **options) : Nil
  end

  def open(id : String, **options) : IO
  end

  def exists?(id : String) : Bool
  end

  def url(id : String, **options) : String
  end

  def delete(id : String) : Nil
  end
end

Metadata extractors

Built-in extractors

Every uploader registers three extractors by default, covering filename, mime_type, and size.

Additional extractors can be registered with the extract macro:

struct ImageUploader
  include Latch::Uploader

  extract mime_type, using: Latch::Extractor::MimeFromFile
  extract dimensions, using: Latch::Extractor::DimensionsFromMagick
end

Custom extractors

Create a struct that includes Latch::Extractor:

struct PageCountExtractor
  include Latch::Extractor

  def extract(uploaded_file, metadata, **options) : Int32?
    count_pages(uploaded_file.tempfile)
  end
end

Register it and access the value on the stored file:

struct PdfUploader
  include Latch::Uploader
  extract pages, using: PageCountExtractor
end

stored = PdfUploader.store(uploaded_file)
stored.pages # => 24

An extractor can also write multiple values to metadata directly. Use the @[Latch::MetadataMethods] annotation to generate typed accessor methods for each value:

@[Latch::MetadataMethods(width : Int32, height : Int32)]
struct DimensionsExtractor
  include Latch::Extractor

  def extract(uploaded_file, metadata, **options) : Nil
    metadata["width"] = 800
    metadata["height"] = 600
  end
end

stored = ImageUploader.store(uploaded_file)
stored.width  # => 800
stored.height # => 600

Working with stored files

StoredFile objects are JSON-serializable and provide convenience methods for accessing, downloading, and streaming files:

stored.url       # storage URL
stored.exists?   # check existence
stored.extension # file extension
stored.delete    # remove from storage

stored.open { |io| io.gets_to_end }          # read content
stored.download { |tempfile| tempfile.path } # download to tempfile
stored.stream(response.output)               # stream to IO

Each uploader generates its own StoredFile subclass, which can be extended with custom methods:

struct ImageUploader
  include Latch::Uploader

  # This extractor extracts `width` and `height` and creates methods for them
  extract dimensions, using: Latch::Extractor::DimensionsFromMagick

  class StoredFile
    def ratio : Float64
      width.to_f / height
    end
  end
end

stored = ImageUploader.store(uploaded_file)
stored.ratio # => 1.5

StoredFile serializes to a format compatible with Shrine. Values from registered extractors are also stored in the metadata object:

{
  "id": "uploads/a1b2c3d4.jpg",
  "storage": "store",
  "metadata": {
    "filename": "photo.jpg",
    "size": 102400,
    "mime_type": "image/jpeg",
    "width": 2000,
    "height": 1333
  }
}
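
Since the format is plain JSON, a stored file can be rebuilt from its serialized form. A sketch, assuming the generated StoredFile follows the usual JSON::Serializable conventions:

```crystal
json = stored.to_json
restored = ImageUploader::StoredFile.from_json(json)
restored.url # points at the same file in storage as stored.url
```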

Other frameworks

Latch works with any Crystal framework. Include the Latch::UploadedFile module in your framework's upload class and implement its abstract methods:

module Latch::UploadedFile
  abstract def tempfile : File
  abstract def filename : String

  # Optional overrides with sensible defaults:
  # def path : String         -> tempfile.path
  # def content_type : String? -> nil
  # def size : UInt64         -> tempfile.size
end

Kemal example

require "kemal"
require "latch"

struct Kemal::FileUpload
  include Latch::UploadedFile

  def filename : String
    @filename || "upload"
  end

  def content_type : String?
    headers["Content-Type"]?
  end
end

post "/upload" do |env|
  upload = env.params.files["image"]
  stored = ImageUploader.store(upload)
  stored.url
end

API docs

Online API documentation is available at wout.github.io/latch.

Contributing

  1. Fork it (https://github.com/wout/latch/fork)
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Contributors