class Latch::Storage::S3
- Latch::Storage::S3
- Latch::Storage
- Reference
- Object
Overview
S3-compatible storage backend. Supports AWS S3 and any S3-compatible service such as RustFS, Tigris, or Cloudflare R2 via a custom endpoint.
Requires the awscr-s3 shard to be added to your shard.yml:
dependencies:
  awscr-s3:
    github: taylorfinnell/awscr-s3
AWS S3
Latch::Storage::S3.new(
bucket: "lucky-bucket",
region: "eu-west-1",
access_key_id: ENV["KEY"],
secret_access_key: ENV["SECRET"]
)
RustFS or other S3-compatible services
Latch::Storage::S3.new(
bucket: "lucky-bucket",
region: "eu-west-1",
access_key_id: ENV["KEY"],
secret_access_key: ENV["SECRET"],
endpoint: "http://localhost:9000"
)
Bring your own client
client = Awscr::S3::Client.new("eu-west-1", ENV["KEY"], ENV["SECRET"])
Latch::Storage::S3.new(bucket: "lucky-bucket", client: client)
Defined in:
latch/storage/s3.cr
Constructors
-
.new(bucket : String, region : String, access_key_id : String, secret_access_key : String, prefix : String | Nil = nil, endpoint : String | Nil = nil, public : Bool = false, upload_options : Hash(String, String) = Hash(String, String).new)
Initialises a storage using credentials.
-
.new(bucket : String, client : Awscr::S3::Client, prefix : String | Nil = nil, public : Bool = false, upload_options : Hash(String, String) = Hash(String, String).new)
Initialises a storage with a pre-built
Awscr::S3::Client.
Instance Method Summary
- #bucket : String
- #client : Awscr::S3::Client
-
#delete(id : String) : Nil
Deletes the object for the given key.
-
#exists?(id : String) : Bool
Tests if an object exists in the bucket.
-
#move(file : Latch::StoredFile, id : String, **options) : Nil
Promotes a file efficiently using a server-side S3 copy when the source is a StoredFile in the same bucket, avoiding the download/re-upload.
-
#object_key(id : String) : String
Returns the full object key including any configured prefix.
-
#open(id : String, **options) : IO
Opens the S3 object and returns an IO::Memory for reading.
- #prefix : String | Nil
- #public? : Bool
-
#upload(file : File, id : String, **options) : Nil
Uploads a File to the given key in the bucket using FileUploader, which automatically switches to multipart uploads for files larger than 5MB.
-
#upload(io : IO, id : String, **options) : Nil
Uploads an IO to the given key in the bucket.
- #upload_options : Hash(String, String)
-
#url(id : String, **options) : String
Returns the URL for accessing the object.
Instance methods inherited from class Latch::Storage
delete(id : String) : Nil
delete,
exists?(id : String) : Bool
exists?,
move(io : IO, id : String, **options) : Nil
move(file : Latch::StoredFile, id : String, **options) : Nil
move,
open(id : String, **options) : IO
open,
upload(io : IO, id : String, **options) : Nil
upload,
url(id : String, **options) : String
url
Constructor Detail
Initialises a storage using credentials.
storage = Latch::Storage::S3.new(
bucket: "lucky-bucket",
region: "eu-west-1",
access_key_id: "key",
secret_access_key: "secret",
endpoint: "http://localhost:9000"
)
Initialises a storage with a pre-built Awscr::S3::Client. Useful when you
need full control over the client configuration, or in tests.
client = Awscr::S3::Client.new("eu-west-1", "key", "secret")
storage = Latch::Storage::S3.new(
bucket: "lucky-bucket",
client: client
)
Instance Method Detail
Deletes the object for the given key. Does not raise if the object does not exist.
storage.delete("uploads/photo.jpg")
Tests if an object exists in the bucket.
storage.exists?("uploads/photo.jpg")
# => true
Promotes a file efficiently using a server-side S3 copy when the source is
a StoredFile in the same bucket, avoiding the download/re-upload. Falls
back to a regular upload for plain IO sources.
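A minimal sketch of both paths, assuming `storage` is an S3 storage as constructed above and `stored_file` is a Latch::StoredFile returned by an earlier upload to the same bucket (both names are illustrative):
```crystal
# Server-side copy: source and destination are in the same bucket,
# so no bytes pass through the application.
storage.move(stored_file, "uploads/photo.jpg")

# A plain IO source falls back to a regular upload.
storage.move(IO::Memory.new("raw bytes"), "uploads/raw.bin")
```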
Returns the full object key including any configured prefix.
storage.object_key("photo.jpg")
# => "photo.jpg"
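When a prefix is configured, it is prepended to the given id. A sketch, assuming the prefix and id are joined with a "/" separator:
```crystal
storage = Latch::Storage::S3.new(
  bucket: "lucky-bucket",
  region: "eu-west-1",
  access_key_id: ENV["KEY"],
  secret_access_key: ENV["SECRET"],
  prefix: "uploads"
)
storage.object_key("photo.jpg")
# => "uploads/photo.jpg"
```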
Opens the S3 object and returns an IO::Memory for reading.
io = storage.open("uploads/photo.jpg")
content = io.gets_to_end
io.close
Raises Latch::FileNotFound if the object does not exist.
Uploads a File to the given key in the bucket using FileUploader, which
automatically switches to multipart uploads for files larger than 5MB.
storage.upload(File.open("photo.jpg"), "uploads/photo.jpg")
Uploads an IO to the given key in the bucket. The IO is fully read into
memory before uploading because awscr-s3 requires a sized body.
NOTE Prefer the File overload when possible to avoid reading the
entire file into memory and to benefit from automatic multipart uploads.
storage.upload(io, "uploads/photo.jpg")
storage.upload(io, "uploads/photo.jpg", metadata: {
"filename" => "photo.jpg",
"mime_type" => "image/jpeg",
})
Returns the URL for accessing the object. When expires_in is provided
(in seconds), a presigned URL is returned. Otherwise a plain public URL is
constructed without any HTTP round-trip.
storage.url("uploads/photo.jpg")
# => "https://s3-eu-west-1.amazonaws.com/lucky-bucket/uploads/photo.jpg"
storage.url("uploads/photo.jpg", expires_in: 3600)
# => "https://s3-eu-west-1.amazonaws.com/lucky-bucket/uploads/photo.jpg?X-Amz-Signature=..."