I was working on an app where user image uploads needed to be resized and converted before hitting storage, partly to keep storage lean but mostly to serve faster-loading images to users.
My options:
Supabase Image Transformations. We were already using Supabase for storage. Supabase has a built-in feature for on-the-fly image resizing. Works great but:
- It costs $5 per 1,000 origin images and requires the Pro plan.
- It only solves serving. The raw files still land in storage at full size.
A serverless function with Sharp. This is what I tried first. Every upload hit a function that resized and stripped metadata before writing to storage. The problem: 1000ms of latency on every save. Get that out of here.
Client-side processing. Obvious in retrospect. The browser has a Canvas API. It can resize images, convert formats, and return File objects before anything touches the network. Free, fast, and the files are already small by the time they reach Supabase.
Overview of the Pipeline
We’re building a client-side pipeline that:
- Takes raw image files from an <input type="file">
- Resizes them to a max dimension of 1200px (preserving aspect ratio)
- Converts them to WebP at 0.85 quality, with JPEG fallback
- Returns new File objects with the same interface as the originals
- Handles failures gracefully by returning the original file unchanged
- Cleans up memory properly (no blob URL leaks, no stranded canvas buffers)
As a bonus, I’ll also show the interactive cropping UI that runs before optimization, and how I chain them together.
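Before diving in, it's worth noting that the resize rule in the list above is just aspect-ratio math. Here it is isolated as a pure function; this is a sketch for illustration (the pipeline below inlines the same math inside optimizeImage, and the helper name is mine):

```javascript
// Sketch: the max-dimension resize rule as a standalone helper.
// The real pipeline inlines this math inside optimizeImage.
function fitWithinMaxDimension(width, height, maxDimension = 1200) {
  if (width <= maxDimension && height <= maxDimension) {
    return { width, height }; // Already small enough: no resize
  }
  if (width >= height) {
    // Landscape (or square): cap the width, scale the height to match
    return {
      width: maxDimension,
      height: Math.round((height / width) * maxDimension),
    };
  }
  // Portrait: cap the height, scale the width to match
  return {
    width: Math.round((width / height) * maxDimension),
    height: maxDimension,
  };
}
```

Keeping the math pure like this makes it trivial to unit-test without a browser.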
Part 1: Image Optimization
This is the heart of the pipeline. Here’s the optimizeImage function:
```javascript
async function optimizeImage(file) {
  return new Promise((resolve, reject) => {
    const blobUrl = URL.createObjectURL(file);
    const img = new Image();

    img.onload = () => {
      // Revoke immediately — we have the image data, we don't need the URL anymore
      URL.revokeObjectURL(blobUrl);

      // Calculate new dimensions, capping at 1200px on the longest side
      let { width, height } = img;
      const MAX_DIMENSION = 1200;
      if (width > MAX_DIMENSION || height > MAX_DIMENSION) {
        if (width >= height) {
          height = Math.round((height / width) * MAX_DIMENSION);
          width = MAX_DIMENSION;
        } else {
          width = Math.round((width / height) * MAX_DIMENSION);
          height = MAX_DIMENSION;
        }
      }

      const canvas = document.createElement("canvas");
      canvas.width = width;
      canvas.height = height;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0, width, height);

      // Try WebP first
      canvas.toBlob(
        (webpBlob) => {
          if (webpBlob) {
            // Free the canvas memory
            canvas.width = 0;
            canvas.height = 0;
            const newName = file.name.replace(/\.[^.]+$/, ".webp");
            const optimizedFile = new File([webpBlob], newName, {
              type: "image/webp",
              lastModified: Date.now(),
            });
            resolve(optimizedFile);
          } else {
            // WebP failed, fall back to JPEG
            canvas.toBlob(
              (jpegBlob) => {
                canvas.width = 0;
                canvas.height = 0;
                if (jpegBlob) {
                  const newName = file.name.replace(/\.[^.]+$/, ".jpg");
                  const optimizedFile = new File([jpegBlob], newName, {
                    type: "image/jpeg",
                    lastModified: Date.now(),
                  });
                  resolve(optimizedFile);
                } else {
                  reject(new Error("Both WebP and JPEG conversion failed"));
                }
              },
              "image/jpeg",
              0.85,
            );
          }
        },
        "image/webp",
        0.85,
      );
    };

    img.onerror = () => {
      URL.revokeObjectURL(blobUrl);
      reject(new Error(`Failed to load image: ${file.name}`));
    };

    img.src = blobUrl;
  });
}
```
Then wrap it for batch processing with a fallback:
```javascript
async function optimizeImages(files) {
  const results = await Promise.all(
    files.map(async (file) => {
      // Skip videos entirely
      if (file.type.startsWith("video/")) return file;
      try {
        return await optimizeImage(file);
      } catch (err) {
        console.warn(`Optimization failed for ${file.name}, using original:`, err);
        // Never block the upload — if optimization fails, just use the original
        return file;
      }
    }),
  );
  return results;
}
```
Gotcha #1: canvas.width = 0; canvas.height = 0?
Browsers don’t always garbage-collect canvas buffers even after the canvas element goes out of scope, so the underlying pixel buffer can stick around. Setting the dimensions to zero explicitly releases the backing memory.
Gotcha #2: Why revoke the blob URL right after img.onload?
Once the image is loaded into the HTMLImageElement, the browser has decoded it into memory. The blob URL is just a handle to the original File blob. It’s no longer needed.
Revoking early rather than at the end of the function means we’re not holding that reference any longer than necessary. Accumulated unreleased blob URLs will cause subtle memory growth.
Part 2: Interactive Cropping
Before optimization, I run a cropping step.
The idea: show the user their photos one at a time, let them choose between a few common aspect ratios, and let them nudge the crop position. For the specific app I was working on, this was very important.
Here’s the cropImage function:
```javascript
async function cropImage(
  file,
  aspectWidth,
  aspectHeight,
  positionOffset = { x: 0, y: 0 },
) {
  return new Promise((resolve, reject) => {
    const blobUrl = URL.createObjectURL(file);
    const img = new Image();

    img.onload = () => {
      URL.revokeObjectURL(blobUrl);

      const imgAspect = img.width / img.height;
      const targetAspect = aspectWidth / aspectHeight;
      let srcX, srcY, srcWidth, srcHeight;

      if (imgAspect > targetAspect) {
        // Image is wider than target — crop width
        srcHeight = img.height;
        srcWidth = img.height * targetAspect;
        srcX = (img.width - srcWidth) / 2 + positionOffset.x;
        srcY = 0;
      } else {
        // Image is taller than target — crop height
        srcWidth = img.width;
        srcHeight = img.width / targetAspect;
        srcX = 0;
        srcY = (img.height - srcHeight) / 2 + positionOffset.y;
      }

      // Clamp to image bounds
      srcX = Math.max(0, Math.min(srcX, img.width - srcWidth));
      srcY = Math.max(0, Math.min(srcY, img.height - srcHeight));

      const canvas = document.createElement("canvas");
      canvas.width = srcWidth;
      canvas.height = srcHeight;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, srcX, srcY, srcWidth, srcHeight, 0, 0, srcWidth, srcHeight);

      canvas.toBlob((blob) => {
        canvas.width = 0;
        canvas.height = 0;
        // toBlob can hand back null (e.g. zero-sized canvas); don't pass that to File
        if (!blob) {
          reject(new Error(`Failed to crop image: ${file.name}`));
          return;
        }
        const croppedFile = new File([blob], file.name, {
          type: file.type,
          lastModified: Date.now(),
        });
        resolve(croppedFile);
      }, file.type);
    };

    img.onerror = () => {
      URL.revokeObjectURL(blobUrl);
      reject(new Error(`Failed to load image for cropping: ${file.name}`));
    };

    img.src = blobUrl;
  });
}
```
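Like the resize rule, the source-rectangle math above is pure, so the clamping behavior can be verified without a browser. A sketch with a hypothetical helper name (the real function inlines this):

```javascript
// Sketch: compute the centered crop rectangle for a target aspect ratio,
// shifted by a position offset and clamped so it never leaves the image.
function computeCropRect(imgWidth, imgHeight, aspectWidth, aspectHeight, offset = { x: 0, y: 0 }) {
  const imgAspect = imgWidth / imgHeight;
  const targetAspect = aspectWidth / aspectHeight;
  let srcX, srcY, srcWidth, srcHeight;
  if (imgAspect > targetAspect) {
    // Image is wider than target: crop width
    srcHeight = imgHeight;
    srcWidth = imgHeight * targetAspect;
    srcX = (imgWidth - srcWidth) / 2 + offset.x;
    srcY = 0;
  } else {
    // Image is taller than target: crop height
    srcWidth = imgWidth;
    srcHeight = imgWidth / targetAspect;
    srcX = 0;
    srcY = (imgHeight - srcHeight) / 2 + offset.y;
  }
  // Clamp to image bounds
  srcX = Math.max(0, Math.min(srcX, imgWidth - srcWidth));
  srcY = Math.max(0, Math.min(srcY, imgHeight - srcHeight));
  return { srcX, srcY, srcWidth, srcHeight };
}
```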
The position offset is the part that makes this actually usable. Default centering is almost always wrong for product photos. The item is usually offset, or there’s a barcode sticker we wanted to avoid. I give users arrow buttons that call:
```javascript
function adjustPosition(container, deltaX, deltaY) {
  const state = container._croppingState;
  state.positionOffset.x += deltaX;
  state.positionOffset.y += deltaY;
  // Re-render the preview with the new offset
  updateCropPreview(container);
}
```
Cropping Workflow
The full workflow manages state on the container element itself (I find this cleaner than a separate module-level state object when multiple dialogs could theoretically be open):
```javascript
async function processCroppingWorkflow(files, container) {
  const imageFiles = files.filter((f) => f.type.startsWith("image/"));
  const videoFiles = files.filter((f) => f.type.startsWith("video/"));
  if (imageFiles.length === 0 || !CROPPING_ENABLED) {
    return files; // Nothing to crop
  }

  // Attach state to the container DOM node
  container._croppingState = {
    originalFiles: [...imageFiles],
    processedFiles: [],
    currentIndex: 0,
    positionOffset: { x: 0, y: 0 },
    currentPreviewUrl: null,
  };

  return new Promise((resolve) => {
    // Show cropping UI, hide main form
    showCroppingUI(container);
    showCurrentImage(container);

    // "Keep original" button
    container.querySelector(".btn-crop-original").onclick = async () => {
      const state = container._croppingState;
      state.processedFiles.push(state.originalFiles[state.currentIndex]);
      await advanceOrFinish(container, videoFiles, resolve);
    };

    // "Apply crop" button
    container.querySelector(".btn-crop-apply").onclick = async () => {
      const state = container._croppingState;
      const file = state.originalFiles[state.currentIndex];
      const aspectRatio = getSelectedAspectRatio(container); // "1:1" or "3:2"
      const [w, h] = aspectRatio.split(":").map(Number);
      const cropped = await cropImage(file, w, h, state.positionOffset);
      state.processedFiles.push(cropped);
      await advanceOrFinish(container, videoFiles, resolve);
    };
  });
}
```
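advanceOrFinish isn't shown in full here, but both buttons depend on it. A minimal sketch of what it needs to do (hideCroppingUI is a hypothetical counterpart to showCroppingUI; showCurrentImage and cleanupCroppingUrls appear elsewhere in this post):

```javascript
// Sketch of advanceOrFinish: step to the next image, or finish the workflow.
// hideCroppingUI is hypothetical; the real version also restores the main form.
async function advanceOrFinish(container, videoFiles, resolve) {
  const state = container._croppingState;
  state.currentIndex += 1;
  if (state.currentIndex < state.originalFiles.length) {
    // More images left: show the next one in the cropping UI
    showCurrentImage(container);
  } else {
    // All images handled: clean up preview URLs, restore the form,
    // and hand back the processed images plus the untouched videos
    cleanupCroppingUrls(container);
    hideCroppingUI(container);
    resolve([...state.processedFiles, ...videoFiles]);
  }
}
```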
```javascript
function showCurrentImage(container) {
  const state = container._croppingState;
  // Revoke previous preview
  if (state.currentPreviewUrl) {
    URL.revokeObjectURL(state.currentPreviewUrl);
  }
  const file = state.originalFiles[state.currentIndex];
  state.currentPreviewUrl = URL.createObjectURL(file);
  state.positionOffset = { x: 0, y: 0 }; // Reset offset for each new image
  const previewImg = container.querySelector(".crop-preview-image");
  previewImg.src = state.currentPreviewUrl;
}
```
And when the workflow finishes, clean up any remaining preview URLs:
```javascript
function cleanupCroppingUrls(container) {
  const state = container._croppingState;
  if (state?.currentPreviewUrl) {
    URL.revokeObjectURL(state.currentPreviewUrl);
    state.currentPreviewUrl = null;
  }
}
```
Part 3: Hooking It Into the File Input
Here’s where it all connects. When the user selects files:
```javascript
fileInput.addEventListener("change", async (e) => {
  const newFiles = Array.from(e.target.files);
  // Reset the input so the same file can be re-selected later
  e.target.value = "";

  // Validate (size limits, count limits, one video max, etc.)
  const validFiles = validateMediaUploads(newFiles, existingCount);
  if (!validFiles) return;

  // Step 1: Crop
  const croppedFiles = await processCroppingWorkflow(validFiles, container);

  // Step 2: Optimize
  const optimizedFiles = await optimizeImages(croppedFiles);

  // Step 3: Add to pending state
  state.filesToUpload.push(...optimizedFiles);

  // Step 4: Re-render preview grid
  renderMediaGrid(container);
});
```
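validateMediaUploads is app-specific, so I won't reproduce it exactly; the shape is a function that returns the file array when everything passes and null otherwise. A sketch, where the limits (10 files total, 10MB per file, one video) are illustrative assumptions rather than the real app's values:

```javascript
// Sketch of validateMediaUploads. The limits here are illustrative
// assumptions, not the real app's values.
function validateMediaUploads(newFiles, existingCount, limits = {}) {
  const { maxFiles = 10, maxBytes = 10 * 1024 * 1024, maxVideos = 1 } = limits;

  if (existingCount + newFiles.length > maxFiles) {
    console.warn(`Too many files: max is ${maxFiles}`);
    return null;
  }

  const videoCount = newFiles.filter((f) => f.type.startsWith("video/")).length;
  if (videoCount > maxVideos) {
    console.warn(`Too many videos: max is ${maxVideos}`);
    return null;
  }

  const oversized = newFiles.find((f) => f.size > maxBytes);
  if (oversized) {
    console.warn(`File too large: ${oversized.name}`);
    return null;
  }

  return newFiles; // Valid: hand the array back unchanged
}
```

Returning null (rather than throwing) keeps the change handler's early-exit check simple.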
The result: users get a cropping UI for each image, every image gets resized and converted to WebP, and by the time uploadMediaToStorage() runs, the files are already small and properly formatted. The server function just does a straight upload. No transformation logic required.
Part 4: Preview Object URL Cleanup
There’s one more place leaks can happen: preview thumbnails. When you render uploaded files as <img> tags using URL.createObjectURL(), those URLs live until you revoke them or the page unloads.
```javascript
function cleanupObjectUrls(container) {
  const previewImages = container.querySelectorAll(".preview-image");
  previewImages.forEach((img) => {
    if (img.src.startsWith("blob:")) {
      URL.revokeObjectURL(img.src);
    }
  });
}
```
In our app, we call this before:
- Closing the modal/dialog that shows images.
- Re-rendering the preview grid.
- Removing an individual preview item.
And when removing a single preview item, revoke before removing:
```javascript
function removePreviewItem(index, container) {
  const state = container._imageState;
  const previewImg = container.querySelector(`[data-index="${index}"] img`);
  if (previewImg?.src.startsWith("blob:")) {
    URL.revokeObjectURL(previewImg.src);
  }
  state.filesToUpload.splice(index, 1);
  renderMediaGrid(container); // Re-render without the removed item
}
```
Results
Before: ~1000ms for the image upload step (client → serverless function → Sharp → Supabase → response).
After: ~200–400ms (client-side processing is nearly instant for 1–2MB phone photos, then straight upload to Supabase).
File sizes dropped significantly too. Like a 90%+ reduction.