Summed images for efficient motion blur

[Image strips: frames spin0-spin7 and motion-blurred frames motion0-motion7]

When you've got a lot of still frames, it's easy enough to create an animation by playing back the frames in order. To compute motion blur, though, you need to average a range of frames, and that is slow: computing the average of frames 100-200, for example, means reading and summing about a hundred images.
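
For concreteness, the brute-force version looks something like this. This is a sketch only: it assumes grayscale OpenEXR stills with a single 'R' channel in a hypothetical stills/ directory, and uses numpy for the pixel arithmetic rather than the vop module used in the code further down.

import numpy as np
import OpenEXR
import Imath

FLOAT = Imath.PixelType(Imath.PixelType.FLOAT)

def read_frame(path):
    # Read the 'R' channel of a grayscale EXR as a flat float32 array
    f = OpenEXR.InputFile(path)
    pixels = np.frombuffer(f.channel('R', FLOAT), dtype=np.float32)
    f.close()
    return pixels

def blur_slow(first, last):
    # Read and sum every frame in [first, last], then divide by the count
    total = read_frame("stills/%06d.exr" % first).copy()
    for i in range(first + 1, last + 1):
        total += read_frame("stills/%06d.exr" % i)
    return total / (last - first + 1)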

Here's a simple hack to speed up the process massively: make one pass through the images, computing a running sum, and write that sum out as a series of summed images, so that summed image N is the sum of images 0...N. To compute the sum of frames 100-200, read summed images 200 and 99 and subtract them. Only two images need to be read: a 50X speedup.
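
The preprocessing pass is just a running sum, written back out as it goes. Here is a minimal sketch of that pass under the same assumptions as above (numpy instead of vop, with made-up directory names: frames/ for the source stills, summed/ for the output):

import numpy as np
import OpenEXR
import Imath

FLOAT = Imath.PixelType(Imath.PixelType.FLOAT)
NFRAMES = 400      # number of source stills, matching the clip used below
running = None

for i in range(NFRAMES):
    # Read source frame i (grayscale, single 'R' channel)
    f = OpenEXR.InputFile("frames/%06d.exr" % i)
    dw = f.header()['dataWindow']
    w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1
    frame = np.frombuffer(f.channel('R', FLOAT), dtype=np.float32)
    f.close()

    # Summed image i is the sum of frames 0..i
    running = frame.copy() if running is None else running + frame

    # Write it out as 32-bit FLOAT so long sums keep their precision
    hdr = OpenEXR.Header(w, h)
    hdr['channels'] = {'R': Imath.Channel(FLOAT)}
    out = OpenEXR.OutputFile("summed/%06d.exr" % i, hdr)
    out.writePixels({'R': running.astype(np.float32).tobytes()})
    out.close()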

This idea also applies to sprites in games running on modern graphics hardware. All that's required is a trivial fragment program that reads the two summed texture frames, subtracts them, and divides by the number of frames between them.

One wrinkle is the precision required for the summed images. Clearly, the last image in the sequence has to have enough precision to accurately represent the sum of all the preceding images, and every doubling of that sum costs roughly one bit of per-frame precision. If color precision is 11 bits - which it is in the popular HALF 16-bit floating-point format - then a 64 (2^6) frame animation has just 5 bits of precision left per image. For this reason a 16-bit integer or even 32-bit FLOAT format may be more appropriate for long animations.
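
To make that arithmetic concrete, here's a tiny back-of-the-envelope helper (the 24-bit figure is the effective mantissa precision of 32-bit FLOAT):

def bits_per_frame(n_frames, mantissa_bits):
    # The sum of n roughly-equal frames is about n times larger than one
    # frame, which costs about log2(n) bits of per-frame precision.
    return mantissa_bits - (n_frames - 1).bit_length()

for n in (64, 400, 4096):
    print("%5d frames: HALF leaves %3d bits, FLOAT leaves %3d bits"
          % (n, bits_per_frame(n, 11), bits_per_frame(n, 24)))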

Here's some sample Python code to read in some summed images (in OpenEXR format) and construct a simple animation from them:

import Image
import math
import OpenEXR
import Imath
from vop import *    # vop supplies fromstring() and the per-element array arithmetic used below

sz = (256,256)
szm = sz[0] * sz[1]

def iadd(i0, i1):
    # Add two [R, G, B] image sums channel by channel
    return [(i1[c] + i0[c]) for c in range(3)]

def fetch(i):
    # Return summed image i as a list of [R, G, B] channel arrays
    i = int(i)
    if i > 399:
        # The 400-frame clip loops, so a sum past frame 399 is some whole
        # number of loops (f399 * m) plus the remaining partial sum
        m = i // 400
        i0 = [(f399[c] * m) for c in range(3)]
        i1 = fetch(i % 400)
        return iadd(i0, i1)
    d = OpenEXR.InputFile("stills/%06d.exr" % i)
    r = fromstring(d.channel('R'))
    d.close()
    # The source is grayscale, so replicate R into all three channels
    return [r, r, r]

f399 = fetch(399)    # sum of one complete 400-frame loop, used for the wrap-around above

# p is position
p = 0
for t in range(600):
    # Velocity term that gives a gentle acceleration over 600 frames
    v = 1 + 50 * (1 + math.sin(-0.5 * math.pi + math.pi * t / 600))

    # (t0, t1) are the first and last sample times: the blur spans half of this frame's motion
    t0 = int(p)
    t1 = int(p + v / 2)
    print t, t0, t1

    # Apply the velocity
    p += v

    # Sort t0,t1
    if t1 < t0:
        (t0,t1) = (t1,t0)

    # i0 and i1 are the two image sums
    i0 = fetch(t0)
    i1 = fetch(t1 + 1)

    # Subtract i0 from i1 and divide by the number of frames covered
    im = [(i1[c] - i0[c]) / ((t1 + 1) - t0) for c in range(3)]

    result = Image.merge("RGB", [Image.fromstring("L", sz, x.tostring()) for x in im])
    result.save("blur_%06d.png" % t)

This code and the source frames: here.

Resulting movie here.