Image Filters on Canvas: From CSS filter to Convolution Kernels#

Photo filters look like magic, but underneath they’re just math on pixel values. I recently built an online image filter tool and want to share how the core rendering pipeline works — combining CSS filters for simple adjustments with raw pixel convolution for custom effects.

Two Approaches to Image Filtering#

The browser gives us two paths for image filtering:

  1. CSS filter property — declarative, GPU-accelerated, great performance
  2. Canvas getImageData + manual pixel ops — full control, supports custom kernels

In our tool, we use a hybrid: basic adjustments go through CSS filters, while sharpening uses a convolution kernel on raw pixel data. Here’s why.

CSS Filters for Basic Adjustments#

When you draw an image on Canvas, setting ctx.filter applies CSS filter functions before rendering:

const ctx = canvas.getContext('2d')

const filters: string[] = []
filters.push(`brightness(${100 + brightness}%)`)
filters.push(`contrast(${100 + contrast}%)`)
filters.push(`saturate(${100 + saturation}%)`)

if (blur > 0) {
  filters.push(`blur(${blur}px)`)
}

ctx.filter = filters.join(' ')
ctx.drawImage(img, 0, 0, canvas.width, canvas.height)

A few gotchas:

  • The baseline for percentage-based filters is 100%, not 0%. A slider range of -100 to +100 maps to 0%–200%.
  • Multiple filter functions are space-separated in a single string — no commas.
  • GPU acceleration can degrade on some mobile browsers, and ctx.filter shipped in Safari much later than in Chrome and Firefox — feature-detect and always test on real devices.
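The slider-to-percentage mapping from the first gotcha can be spot-checked in isolation (toPercent is a hypothetical helper name used only for this sketch, not from the tool's source):

```typescript
// Maps a -100..+100 slider value onto the 0%–200% range that
// brightness()/contrast()/saturate() expect, with 0 meaning "no change".
const toPercent = (v: number): string => `${100 + v}%`

toPercent(-100)  // "0%"   — fully dark / zero contrast
toPercent(0)     // "100%" — identity
toPercent(100)   // "200%" — doubled
```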

CSS filters are great for standard effects, but they can’t do custom operations like sharpening. That’s where convolution comes in.

Convolution Kernels: The Math Behind Filters#

A convolution kernel (or filter matrix) is an N×N matrix that slides over every pixel, computing a weighted sum of its neighborhood. The sharpening kernel is a classic 3×3:

Sharpening kernel:
  0  -1   0
 -1   5  -1
  0  -1   0

This is a Laplacian sharpening kernel: the identity kernel minus the discrete Laplacian (a second-derivative operator). Subtracting the second derivative amplifies high-frequency detail — edges — while leaving flat regions unchanged (the weights sum to 1).
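The identity-minus-Laplacian relationship can be verified with a few lines of arithmetic (a standalone sketch, not the tool's code):

```typescript
// The 3×3 identity kernel (passes the centre pixel through unchanged)
const identity = [0, 0, 0, 0, 1, 0, 0, 0, 0]
// The 4-neighbour discrete Laplacian
const laplacian = [0, 1, 0, 1, -4, 1, 0, 1, 0]
// identity - laplacian reproduces the sharpening kernel above
const sharpen = identity.map((v, i) => v - laplacian[i])
// sharpen is [0, -1, 0, -1, 5, -1, 0, -1, 0]
```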

Here’s the implementation:

const kernel = [0, -1, 0, -1, 5, -1, 0, -1, 0]
const output = new Uint8ClampedArray(data.length)

for (let y = 1; y < height - 1; y++) {
  for (let x = 1; x < width - 1; x++) {
    for (let c = 0; c < 3; c++) {  // R, G, B channels
      let val = 0
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          const pixelIdx = ((y + ky) * width + (x + kx)) * 4 + c
          const kernelIdx = (ky + 1) * 3 + (kx + 1)
          val += data[pixelIdx] * kernel[kernelIdx]
        }
      }
      output[(y * width + x) * 4 + c] = Math.min(255, Math.max(0, val))
    }
    output[(y * width + x) * 4 + 3] = data[(y * width + x) * 4 + 3]  // keep alpha
  }
}
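To see what the loop computes, here is the weighted sum for a single pixel on a tiny synthetic patch (single channel, values picked arbitrarily for illustration):

```typescript
// A 3×3 single-channel neighbourhood with a bright centre pixel
const patch = [
  10, 10, 10,
  10, 50, 10,
  10, 10, 10,
]
const kernel = [0, -1, 0, -1, 5, -1, 0, -1, 0]
// Element-wise multiply and sum — exactly what the inner ky/kx loops do
const centre = patch.reduce((sum, p, i) => sum + p * kernel[i], 0)
// 5·50 − 4·10 = 210: the centre moves further from its neighbours,
// which is what "sharpening" means at the pixel level
```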

Key implementation details:

Why Uint8ClampedArray: ImageData.data is a Uint8ClampedArray. Allocating the output buffer with the same type means it can be handed straight back to new ImageData() / putImageData, and any value outside 0–255 is clamped automatically on assignment — the buffer type itself guards against overflow.

Convolution result clamping: The summed value can exceed 255 or drop below 0. The explicit Math.min(255, Math.max(0, val)) makes the intent obvious, but with a Uint8ClampedArray output it is strictly redundant: both index assignment and the set() method clamp out-of-range values automatically.
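The clamping behaviour is easy to verify directly:

```typescript
const buf = new Uint8ClampedArray(3)
buf[0] = 300    // values above 255 clamp to 255
buf[1] = -12    // negative values clamp to 0
buf[2] = 1.5    // fractional values round, ties to even: 1.5 → 2
```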

Edge pixels: Pixels on the image border (x = 0, x = width−1, y = 0, y = height−1) don't have a full 3×3 neighborhood, so the convolution loops run from 1 to width−2 and 1 to height−2. Border pixels are copied from the source:

// top and bottom rows
for (let i = 0; i < width * 4; i++) {
  output[i] = data[i]
  output[((height - 1) * width) * 4 + i] = data[((height - 1) * width) * 4 + i]
}
// left and right columns
for (let y = 1; y < height - 1; y++) {
  for (let c = 0; c < 4; c++) {
    output[(y * width) * 4 + c] = data[(y * width) * 4 + c]
    output[(y * width + width - 1) * 4 + c] = data[(y * width + width - 1) * 4 + c]
  }
}

The strict approach uses padded borders (replicating edge pixels to virtual rows), but for an online tool, skipping edges is perfectly adequate.
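The replicate-border approach needs only a coordinate clamp. Here is a sketch with a hypothetical clampIndex helper (not code from the tool):

```typescript
// Clamp a coordinate into [0, max - 1], replicating the nearest edge pixel
const clampIndex = (v: number, max: number): number =>
  Math.min(max - 1, Math.max(0, v))

// Inside the kernel loops, the raw neighbour coordinates would become:
//   const px = clampIndex(x + kx, width)
//   const py = clampIndex(y + ky, height)
//   const pixelIdx = (py * width + px) * 4 + c
// which lets the outer loops run over every pixel, borders included.
```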

Performance Comparison#

Testing on a 1920×1080 image in Chrome:

Operation      CSS Filter       Pixel-level
Brightness     < 1ms            ~8ms
Blur (5px)     ~2ms             ~45ms
Sharpen        not supported    ~15ms

The GPU acceleration of CSS filters is significant: the filter work runs inside the browser's rendering pipeline rather than in JavaScript, so even the blur stays in the low milliseconds.

Other Common Convolution Kernels#

Changing the kernel values creates completely different effects:

// Sobel edge detection (horizontal edges)
[-1, -2, -1,
  0,  0,  0,
  1,  2,  1]

// Emboss
[-2, -1,  0,
 -1,  1,  1,
  0,  1,  2]

// Gaussian blur (5×5, must normalize)
[1,  4,  6,  4, 1,
 4, 16, 24, 16, 4,
 6, 24, 36, 24, 6,
 4, 16, 24, 16, 4,
 1,  4,  6,  4, 1]
// Divide by 256 to normalize

The Gaussian blur kernel's weights approximate a 2D normal distribution; a 5×5 window captures more of the bell curve than a 3×3 one. The weights sum to 256, so each convolution result must be divided by 256 — otherwise every pixel is scaled up 256× and the image blows out.
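The normalization claim is easy to sanity-check: after dividing, the weights must sum to 1, so a flat region keeps its brightness (a standalone sketch):

```typescript
const gaussian5 = [
  1,  4,  6,  4, 1,
  4, 16, 24, 16, 4,
  6, 24, 36, 24, 6,
  4, 16, 24, 16, 4,
  1,  4,  6,  4, 1,
]
const divisor = gaussian5.reduce((a, b) => a + b, 0)  // 256
// Normalized weights sum to exactly 1 (all terms are dyadic fractions)
const normalized = gaussian5.map(v => v / divisor)
```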

Summary#

The strategy for implementing image filters is straightforward: use CSS filters whenever possible (better performance, less code), and fall back to convolution kernels for custom effects. A convolution kernel is nothing more than a weight matrix — once you understand that, edge detection, emboss, and blur are all the same algorithm with different numbers.

The full source is available at: Image Filter Tool
