class Latch::Storage::S3

Overview

S3-compatible storage backend. Supports AWS S3 and any S3-compatible service such as RustFS, Tigris, or Cloudflare R2 via a custom endpoint.

Requires the awscr-s3 shard to be added to your shard.yml:

dependencies:
  awscr-s3:
    github: taylorfinnell/awscr-s3

AWS S3

Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  region: "eu-west-1",
  access_key_id: ENV["KEY"],
  secret_access_key: ENV["SECRET"]
)

RustFS or other S3-compatible services

Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  region: "eu-west-1",
  access_key_id: ENV["KEY"],
  secret_access_key: ENV["SECRET"],
  endpoint: "http://localhost:9000"
)

Bring your own client

client = Awscr::S3::Client.new("eu-west-1", ENV["KEY"], ENV["SECRET"])
Latch::Storage::S3.new(bucket: "lucky-bucket", client: client)
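
However constructed, the storage is then used through the common Latch::Storage interface. A minimal end-to-end sketch using only the methods documented below (bucket name, region, and keys are placeholders, and a live S3-compatible service is assumed):

```crystal
require "latch"

storage = Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  region: "eu-west-1",
  access_key_id: ENV["KEY"],
  secret_access_key: ENV["SECRET"]
)

# Upload from a File to benefit from automatic multipart uploads
File.open("photo.jpg") do |file|
  storage.upload(file, "uploads/photo.jpg")
end

storage.exists?("uploads/photo.jpg") # => true

# Presigned URL valid for one hour
storage.url("uploads/photo.jpg", expires_in: 3600)
```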

Defined in:

latch/storage/s3.cr

Constructors

Instance Method Summary

Instance methods inherited from class Latch::Storage

delete(id : String) : Nil
exists?(id : String) : Bool
move(io : IO, id : String, **options) : Nil
move(file : Latch::StoredFile, id : String, **options) : Nil
open(id : String, **options) : IO
upload(io : IO, id : String, **options) : Nil
url(id : String, **options) : String

Constructor Detail

def self.new(bucket : String, region : String, access_key_id : String, secret_access_key : String, prefix : String | Nil = nil, endpoint : String | Nil = nil, public : Bool = false, upload_options : Hash(String, String) = Hash(String, String).new) #

Initialises a storage from explicit credentials.

storage = Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  region: "eu-west-1",
  access_key_id: "key",
  secret_access_key: "secret",
  endpoint: "http://localhost:9000"
)

[View source]
def self.new(bucket : String, client : Awscr::S3::Client, prefix : String | Nil = nil, public : Bool = false, upload_options : Hash(String, String) = Hash(String, String).new) #

Initialises a storage with a pre-built Awscr::S3::Client. Useful when you need full control over the client configuration, or when substituting a stub client in tests.

client = Awscr::S3::Client.new("eu-west-1", "key", "secret")
storage = Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  client: client
)

[View source]

Instance Method Detail

def bucket : String #

[View source]
def client : Awscr::S3::Client #

[View source]
def delete(id : String) : Nil #

Deletes the object for the given key. Does not raise if the object does not exist.

storage.delete("uploads/photo.jpg")

[View source]
def exists?(id : String) : Bool #

Returns whether an object exists in the bucket.

storage.exists?("uploads/photo.jpg")
# => true

[View source]
def move(file : Latch::StoredFile, id : String, **options) : Nil #

Promotes a file efficiently using a server-side S3 copy when the source is a StoredFile in the same bucket, avoiding the download/re-upload. Falls back to a regular upload for plain IO sources.
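
Both code paths are driven by the type of the source; a short sketch (the `file` and `io` variables are assumed to exist):

```crystal
# `file` is a Latch::StoredFile in the same bucket:
# promoted with a server-side S3 copy, no download/re-upload.
storage.move(file, "uploads/photo.jpg")

# `io` is a plain IO: falls back to a regular upload.
storage.move(io, "uploads/photo.jpg")
```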


[View source]
def object_key(id : String) : String #

Returns the full object key, including any configured prefix.

storage.object_key("photo.jpg")
# => "photo.jpg" (no prefix configured)

[View source]
def open(id : String, **options) : IO #

Opens the S3 object and returns an IO::Memory for reading.

io = storage.open("uploads/photo.jpg")
content = io.gets_to_end
io.close

Raises Latch::FileNotFound if the object does not exist.


[View source]
def prefix : String | Nil #

[View source]
def public? : Bool #

[View source]
def upload(file : File, id : String, **options) : Nil #

Uploads a File to the given key in the bucket using FileUploader, which automatically switches to multipart uploads for files larger than 5MB.

storage.upload(File.open("photo.jpg"), "uploads/photo.jpg")

[View source]
def upload(io : IO, id : String, **options) : Nil #

Uploads an IO to the given key in the bucket. The IO is fully read into memory before uploading because awscr-s3 requires a sized body.

NOTE Prefer the File overload when possible to avoid reading the entire file into memory and to benefit from automatic multipart uploads.

storage.upload(io, "uploads/photo.jpg")
storage.upload(io, "uploads/photo.jpg", metadata: {
  "filename"  => "photo.jpg",
  "mime_type" => "image/jpeg",
})

[View source]
def upload_options : Hash(String, String) #

[View source]
def url(id : String, **options) : String #

Returns the URL for accessing the object. When expires_in is provided (in seconds), a presigned URL is returned. Otherwise a plain public URL is constructed without any HTTP round-trip.

storage.url("uploads/photo.jpg")
# => "https://s3-eu-west-1.amazonaws.com/lucky-bucket/uploads/photo.jpg"

storage.url("uploads/photo.jpg", expires_in: 3600)
# => "https://s3-eu-west-1.amazonaws.com/lucky-bucket/uploads/photo.jpg?X-Amz-Signature=..."

[View source]