Pure Frontend Image Background Removal: Canvas API and Color Distance Algorithm#

I needed to process some images with solid color backgrounds recently. I considered using remove.bg’s API, but the free tier is limited and requires uploading to a third-party server. So I built my own solution using just the Canvas API.

Core Concept: Color Distance#

Background removal is essentially: find the background color, make pixels similar to it transparent.

The key is defining “similar”. The simplest approach is Euclidean distance:

// Distance in RGB color space
function colorDistance(r1: number, g1: number, b1: number, 
                      r2: number, g2: number, b2: number): number {
  const dr = r1 - r2
  const dg = g1 - g2
  const db = b1 - b2
  return Math.sqrt(dr * dr + dg * dg + db * db)
}

It’s the straight-line distance between two colors in the RGB cube. Smaller distance = more similar colors.
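A quick numeric check gives a feel for the scale (the function is repeated here so the snippet runs standalone):

```typescript
// Same function as above, repeated so the snippet runs on its own
function colorDistance(r1: number, g1: number, b1: number,
                       r2: number, g2: number, b2: number): number {
  const dr = r1 - r2
  const dg = g1 - g2
  const db = b1 - b2
  return Math.sqrt(dr * dr + dg * dg + db * db)
}

// Black vs. white: the full diagonal of the RGB cube
console.log(colorDistance(0, 0, 0, 255, 255, 255))  // ≈ 441.67
// Pure red vs. a slightly lighter red: only a small distance apart
console.log(colorDistance(255, 0, 0, 255, 16, 16))  // ≈ 22.63
```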

Implementation Steps#

1. Load Image to Canvas#

const canvas = document.createElement('canvas')
canvas.width = img.width
canvas.height = img.height
const ctx = canvas.getContext('2d')!
ctx.drawImage(img, 0, 0)

// Get pixel data
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height)
const { data } = imageData  // Uint8ClampedArray, RGBA values in groups of 4

data is a one-dimensional array where every 4 elements represent one pixel’s RGBA values.
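That layout means the pixel at (x, y) starts at index (y * width + x) * 4. A small helper (a sketch, not from the original code) makes the mapping explicit:

```typescript
// Index of pixel (x, y) in the flat RGBA array; the four channels
// follow at offsets +0 (R), +1 (G), +2 (B), +3 (A)
function pixelIndex(x: number, y: number, width: number): number {
  return (y * width + x) * 4
}

// In a 100px-wide image, pixel (2, 1) starts at (1 * 100 + 2) * 4 = 408
console.log(pixelIndex(2, 1, 100))  // 408
```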

2. Iterate Pixels and Remove Background#

const targetR = parseInt(bgColor.slice(1, 3), 16)
const targetG = parseInt(bgColor.slice(3, 5), 16)
const targetB = parseInt(bgColor.slice(5, 7), 16)

const maxDist = tolerance * 4.42  // Tolerance coefficient

for (let i = 0; i < data.length; i += 4) {
  const dist = colorDistance(
    data[i], data[i+1], data[i+2],
    targetR, targetG, targetB
  )
  
  if (dist < maxDist) {
    data[i + 3] = 0  // Set Alpha to 0, making it transparent
  }
}

ctx.putImageData(imageData, 0, 0)
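The three parseInt calls at the top can be factored into a tiny helper (assuming a #RRGGBB string, as the code above does):

```typescript
// Parse "#RRGGBB" into [r, g, b] — same slicing as above
function hexToRgb(hex: string): [number, number, number] {
  return [
    parseInt(hex.slice(1, 3), 16),
    parseInt(hex.slice(3, 5), 16),
    parseInt(hex.slice(5, 7), 16),
  ]
}

console.log(hexToRgb('#FF8800'))  // [255, 136, 0]
```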

3. Export as PNG#

const result = canvas.toDataURL('image/png')

PNG supports the Alpha channel, so transparent areas are preserved correctly.

Auto-Detect Background Color#

Users might not know the exact background color. My solution: sample edge pixels and find the dominant color.

function detectBackgroundColor(img: HTMLImageElement): string {
  const canvas = document.createElement('canvas')
  canvas.width = img.width
  canvas.height = img.height
  const ctx = canvas.getContext('2d')!
  ctx.drawImage(img, 0, 0)

  // Sample 8 points: 4 corners + 4 edge midpoints
  const margin = Math.floor(Math.min(img.width, img.height) * 0.05)
  const positions = [
    [margin, margin],                              // top-left
    [img.width - margin - 1, margin],              // top-right
    [margin, img.height - margin - 1],             // bottom-left
    [img.width - margin - 1, img.height - margin - 1], // bottom-right
    [Math.floor(img.width / 2), margin],           // top-center
    [Math.floor(img.width / 2), img.height - margin - 1], // bottom-center
    [margin, Math.floor(img.height / 2)],          // left-center
    [img.width - margin - 1, Math.floor(img.height / 2)]  // right-center
  ]

  const samples: number[][] = []
  for (const [x, y] of positions) {
    const pixel = ctx.getImageData(x, y, 1, 1).data
    if (pixel[3] > 0) {  // ignore transparent pixels
      samples.push([pixel[0], pixel[1], pixel[2]])
    }
  }

  // Clustering: group similar colors
  const clusters: { color: number[]; count: number }[] = []
  for (const sample of samples) {
    let matched = false
    for (const cluster of clusters) {
      const dist = colorDistance(
        cluster.color[0], cluster.color[1], cluster.color[2],
        sample[0], sample[1], sample[2]
      )
      if (dist < 30) {
        cluster.count++
        matched = true
        break
      }
    }
    if (!matched) {
      clusters.push({ color: sample, count: 1 })
    }
  }

  // Return the most frequent color (fall back to white if every sample was transparent)
  clusters.sort((a, b) => b.count - a.count)
  const dominant = clusters[0]?.color ?? [255, 255, 255]
  
  return '#' + dominant
    .map(v => v.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase()
}

This algorithm assumes the background is solid or nearly solid, so edge pixels represent it well.
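One refinement worth considering (not in the version above): average each matched sample into its cluster instead of keeping only the first color, so the result is less sensitive to which corner happens to be sampled first. A sketch of the clustering with a running average:

```typescript
// Same distance function as earlier, repeated so the snippet runs standalone
function colorDistance(r1: number, g1: number, b1: number,
                       r2: number, g2: number, b2: number): number {
  const dr = r1 - r2, dg = g1 - g2, db = b1 - b2
  return Math.sqrt(dr * dr + dg * dg + db * db)
}

// Cluster samples by similarity and return the dominant color,
// averaging matched samples into the cluster as we go
function dominantColor(samples: number[][], threshold = 30): number[] {
  const clusters: { color: number[]; count: number }[] = []
  for (const s of samples) {
    const hit = clusters.find(c =>
      colorDistance(c.color[0], c.color[1], c.color[2], s[0], s[1], s[2]) < threshold)
    if (hit) {
      // Running average: pull the cluster color toward the new sample
      hit.color = hit.color.map((v, i) => (v * hit.count + s[i]) / (hit.count + 1))
      hit.count++
    } else {
      clusters.push({ color: [...s], count: 1 })
    }
  }
  clusters.sort((a, b) => b.count - a.count)
  return clusters[0].color.map(Math.round)
}

// Two near-white samples outvote one dark outlier
console.log(dominantColor([[250, 250, 250], [252, 252, 252], [10, 10, 10]]))  // [251, 251, 251]
```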

The Tolerance Parameter#

Tolerance determines how “similar” colors need to be:

  • Too low: Only removes exact matches, leaving jagged edges
  • Too high: Removes foreground parts that are similar to background

const maxDist = tolerance * 4.42  // Empirical coefficient

Why 4.42? In RGB space, the maximum distance between two colors is √(255² + 255² + 255²) ≈ 441. Tolerance range 1-100 multiplied by 4.42 covers the entire color space.
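A quick sanity check of that arithmetic (not part of the original code):

```typescript
// Diagonal of the RGB cube: the largest possible color distance
const MAX_RGB_DISTANCE = Math.sqrt(3 * 255 * 255)
console.log(MAX_RGB_DISTANCE)        // ≈ 441.67

// Dividing by the tolerance range (100) gives the coefficient
console.log(MAX_RGB_DISTANCE / 100)  // ≈ 4.4167, rounded up to 4.42
```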

Color Picker Feature#

Sometimes auto-detection isn’t accurate. Implementing click-to-pick:

const handlePreviewClick = (e: React.MouseEvent<HTMLDivElement>) => {
  const img = previewRef.current?.querySelector('img')
  if (!img || !imgRef.current) return
  const imgRect = img.getBoundingClientRect()
  
  // Map the click position to coordinates on the original image
  // (use naturalWidth/naturalHeight — the displayed size may be scaled)
  const scaleX = imgRef.current.naturalWidth / imgRect.width
  const scaleY = imgRef.current.naturalHeight / imgRect.height
  
  const px = Math.floor((e.clientX - imgRect.left) * scaleX)
  const py = Math.floor((e.clientY - imgRect.top) * scaleY)
  
  // Read the pixel color from an offscreen canvas
  const canvas = document.createElement('canvas')
  canvas.width = imgRef.current.naturalWidth
  canvas.height = imgRef.current.naturalHeight
  const ctx = canvas.getContext('2d')!
  ctx.drawImage(imgRef.current, 0, 0)
  
  const pixel = ctx.getImageData(px, py, 1, 1).data
  const hex = '#' + [pixel[0], pixel[1], pixel[2]]
    .map(v => v.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase()
  
  setBgColor(hex)
  processImage(imgRef.current, hex, tolerance)
}

Note the scaling: the displayed image may be smaller than the original, so the click position has to be mapped back to real pixel coordinates through the scale ratio.
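That coordinate mapping can be pulled out into a pure, testable helper (a sketch; `rect` mirrors the shape `getBoundingClientRect` returns):

```typescript
// Map a click position (clientX/clientY) to pixel coordinates on the
// original image, given the displayed bounding box and intrinsic size
function toImageCoords(
  clientX: number, clientY: number,
  rect: { left: number; top: number; width: number; height: number },
  naturalWidth: number, naturalHeight: number
): [number, number] {
  const px = Math.floor((clientX - rect.left) * (naturalWidth / rect.width))
  const py = Math.floor((clientY - rect.top) * (naturalHeight / rect.height))
  return [px, py]
}

// A 400×200 image displayed at half size (200×100), clicked at its center
console.log(toImageCoords(200, 100, { left: 100, top: 50, width: 200, height: 100 }, 400, 200))  // [200, 100]
```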

Performance Optimization#

requestAnimationFrame#

Image processing is CPU-intensive and blocks the main thread. Deferring the work with requestAnimationFrame gives the browser a chance to paint the "processing" state first. Note that a single callback runs *before* the next paint, so a nested call is needed to land after it:

setProcessing(true)
requestAnimationFrame(() => {
  // The first callback fires before the paint; nest a second one
  // so the loading state is actually rendered before we block
  requestAnimationFrame(() => {
    processImage(img, color, tolerance)
    setProcessing(false)
  })
})

Web Worker (Advanced)#

For large images, move processing to a Web Worker:

// worker.ts
self.onmessage = (e: MessageEvent) => {
  const { data, targetColor, tolerance } = e.data
  // Process pixels...
  self.postMessage(processedData)
}

// main.tsx — bundler syntax (Vite / webpack 5 compile the worker file)
const worker = new Worker(new URL('./worker.ts', import.meta.url), { type: 'module' })
// Transfer the underlying ArrayBuffer instead of copying it
// (imageData.data becomes unusable on this thread afterwards)
worker.postMessage({ data: imageData.data, targetColor, tolerance }, [imageData.data.buffer])
worker.onmessage = (e) => {
  const result = e.data
  // Write the result back into the canvas with putImageData
}

Edge Cases#

1. Transparent Images#

Skip pixels that are already transparent:

if (data[i + 3] === 0) continue  // Already transparent, skip

2. Jagged Edges#

Hard cuts create jagged edges. Use alpha gradient for smooth transition:

if (dist < maxDist) {
  // Fully transparent
  data[i + 3] = 0
} else if (dist < maxDist * 1.2) {
  // Semi-transparent transition
  const alpha = (dist - maxDist) / (maxDist * 0.2)
  data[i + 3] = Math.round(alpha * 255)
}
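The same ramp, expressed as a standalone function for testing (a sketch; it maps a pixel's color distance to its new alpha):

```typescript
// Alpha for a pixel at the given color distance: fully transparent
// inside maxDist, a linear ramp up to 1.2 * maxDist, opaque beyond
function edgeAlpha(dist: number, maxDist: number): number {
  if (dist < maxDist) return 0
  if (dist < maxDist * 1.2) {
    return Math.round(((dist - maxDist) / (maxDist * 0.2)) * 255)
  }
  return 255
}

console.log(edgeAlpha(50, 100))   // 0   (well inside the tolerance)
console.log(edgeAlpha(110, 100))  // 128 (halfway through the ramp)
console.log(edgeAlpha(150, 100))  // 255 (clearly foreground)
```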

3. Complex Backgrounds#

This algorithm works for solid or near-solid backgrounds. Complex backgrounds (gradients, textures) require advanced algorithms:

  • GrabCut: Graph-cut based foreground extraction
  • Deep Learning: U²-Net, MODNet models

But these need backend support or WebAssembly, making pure frontend implementation costly.

The Result#

Based on these ideas, I built: Background Remover

Features:

  • Auto-detect background color
  • Manual color picking
  • Tolerance adjustment
  • Real-time preview
  • Drag & drop, paste to upload

The implementation isn’t complex, but handling edge cases properly requires careful consideration. Hope this helps.


Related: Image Compression | Image Format Converter