What Is Optical Flow?

Mal Baxter

Optical flow describes the computerized tracking of moving objects by analyzing content differences between video frames. In a video, both the object and the observer may be in motion; the computer locates cues that mark the boundaries, edges, and regions of individual still images. Detecting how these cues progress allows the computer to follow an object through time and space. The technology is employed in industry and research, including the operation of unmanned aerial vehicles (UAVs) and security systems.

Quadcopters, when used as unmanned aerial vehicles, are often equipped with digital cameras and their associated software.

Two primary methods drive this form of computer vision: gradient-based and feature-based motion detection. Gradient-based optical flow measures changes in image intensity through space and time, producing a dense flow field across the whole image plane. Feature-based methods instead track distinctive features, such as the edges of objects, from frame to frame to mark their progress.
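The gradient-based idea can be sketched in a few lines. The snippet below is a minimal, single-window illustration in NumPy (the function name and the synthetic test image are my own, not from the article): it assumes brightness constancy, so the intensity gradients and the frame-to-frame change satisfy Ix·u + Iy·v + It ≈ 0 at every pixel, and it solves for the one (u, v) that best fits all pixels in a least-squares sense.

```python
import numpy as np

def gradient_flow(frame1, frame2):
    """Estimate one global (u, v) motion vector from image gradients.

    Assumes brightness constancy: Ix*u + Iy*v + It = 0 at each pixel,
    solved in a least-squares sense over the whole frame.
    """
    Iy, Ix = np.gradient(frame1)          # spatial intensity gradients
    It = frame2 - frame1                  # temporal intensity change
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test: a smooth intensity bump shifted one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 200.0)
frame2 = np.exp(-((x - 33.0) ** 2 + (y - 32.0) ** 2) / 200.0)
u, v = gradient_flow(frame1, frame2)
print(round(u, 2), round(v, 2))   # u close to 1.0, v close to 0.0
```

Real systems solve this per small window rather than per frame, which yields a dense field of vectors instead of a single one, but the arithmetic is the same.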

This technique resembles camcorder image stabilization, which locks a computed field of vision into the frame despite camera shake. Optical flow algorithms calculate matches between images in sequence. The computer divides each image into square grids, then overlays the two images and compares them to find the best match for each square. When the computer locates a match, it draws a line between the matched points; these displacement lines are sometimes called needles.
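The grid-matching step described above can be sketched directly. The brute-force block matcher below is an illustrative construction (function name and parameters are mine): each square of the first frame is compared, by summed squared difference, against nearby squares of the second frame, and the winning offset becomes that square's needle.

```python
import numpy as np

def block_needles(f1, f2, block=8, search=4):
    """Return one (dx, dy) 'needle' per grid square of f1.

    Each block of f1 is compared against every nearby block of f2;
    the offset with the smallest summed squared difference wins.
    """
    h, w = f1.shape
    needles = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = f1[y:y + block, x:x + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        err = np.sum((ref - f2[yy:yy + block, xx:xx + block]) ** 2)
                        if err < best_err:
                            best, best_err = (dx, dy), err
            needles[(x, y)] = best
    return needles

# Random texture shifted 3 pixels right and 2 pixels down.
rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = np.roll(f1, shift=(2, 3), axis=(0, 1))
counts = {}
for n in block_needles(f1, f2).values():
    counts[n] = counts.get(n, 0) + 1
print(max(counts, key=counts.get))   # dominant needle: (3, 2)
```

Blocks near the border cannot find their match inside the search window, which previews the "planar voids" weakness discussed later in the article.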

Algorithms work systematically from coarse to fine resolutions. This lets large movements, which would be hard to track at full resolution, be detected at coarse scales and then progressively refined. The computer does not recognize objects; it only detects and follows those characteristics of objects that can be compared between frames.
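The coarse-to-fine idea can be illustrated with a simple image pyramid. In the sketch below (names are mine, not from the article), each level halves the resolution by 2×2 averaging; a motion of d pixels at full resolution appears as roughly d/2^k pixels at level k, so a large displacement becomes small enough to detect at the coarsest level and the estimate is doubled and refined on the way back down.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Coarse-to-fine pyramid: each level is a 2x2 average of the one below."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        h2, w2 = h // 2, w // 2
        pyr.append(pyr[-1][:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).mean(axis=(1, 3)))
    return pyr

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
for level, p in enumerate(build_pyramid(img)):
    print(level, p.shape)   # (64, 64), then (32, 32), then (16, 16)

# An 8-pixel shift at full resolution is only a 2-pixel shift at level 2,
# well within a small search window like the one in the block matcher above.
```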

Computing optical flow vectors can detect and track objects and also extract an image's dominant plane. This aids robotic navigation and visual odometry, the estimation of a robot's orientation and position from its own camera. The technique registers not only objects but also the surrounding environment in three dimensions, giving robots more lifelike spatial awareness. Vectors computed in a plane allow the processor to infer and respond to movements extracted from the frames.
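As a toy illustration of visual odometry (entirely my own construction, not a method from the article), the sketch below takes a sequence of per-frame flow fields, estimates each field's dominant motion with a median, which is robust to a few outlier vectors from independently moving objects, and accumulates the opposite of that motion as the camera's path: the scene appearing to slide left means the camera moved right.

```python
import numpy as np

def odometry(flow_fields):
    """Integrate the dominant flow of each frame pair into a camera track.

    flow_fields: list of (N, 2) arrays of per-pixel (dx, dy) flow vectors.
    Returns the accumulated camera displacement after each frame.
    """
    pos = np.zeros(2)
    track = []
    for field in flow_fields:
        dominant = np.median(field, axis=0)   # robust global-flow estimate
        pos = pos - dominant                  # scene slides left => camera moved right
        track.append(pos.copy())
    return track

# Five frames of mostly uniform (-1, 0) flow, with a few outlier vectors
# from a small independently moving object mixed in.
fields = []
for _ in range(5):
    field = np.tile([-1.0, 0.0], (100, 1))
    field[:5] = [4.0, 3.0]                    # the moving object's vectors
    fields.append(field)
print(odometry(fields)[-1])                   # camera has moved (5, 0)
```

Real visual odometry also accounts for rotation and depth, but this translational core shows how per-frame flow vectors accumulate into position.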

Some weaknesses of the optical flow technique include data loss from squares the computer cannot match between images. These unmatched areas remain vacant and create planar voids, reducing accuracy. Regions with clear edges or stable elements such as corners, by contrast, anchor the flow analysis.

Detailed features may be obscured if the observer itself is in motion, since the computer cannot always distinguish individual elements from frame to frame. The analysis divides motion into the apparent global flow induced by the observer's own movement, called egomotion, and localized object motion. Spatial-temporal changes in edges or image intensity can get lost in the motion of the camera and the global flow of the moving environment. Analysis improves if the computer can eliminate the effect of the global flow.
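Eliminating the global flow can be sketched simply: estimate the camera-induced component as the median of all flow vectors, subtract it, and whatever residual motion remains marks objects moving independently of the observer. The example below is an illustrative NumPy construction with made-up numbers, not an implementation from the article.

```python
import numpy as np

# A 16x16 flow field: global flow (2, 0) everywhere from camera egomotion,
# plus a 4x4 patch of an object moving with its own velocity (5, 1).
flow = np.tile(np.array([2.0, 0.0]), (16, 16, 1))
flow[4:8, 4:8] = [5.0, 1.0]

# Estimate and remove the global component; the median is robust because
# the independently moving object covers only a small fraction of pixels.
global_flow = np.median(flow.reshape(-1, 2), axis=0)
residual = flow - global_flow
moving_mask = np.linalg.norm(residual, axis=2) > 0.5

print(global_flow)                       # camera-induced component: [2. 0.]
print(np.argwhere(moving_mask).min(axis=0),
      np.argwhere(moving_mask).max(axis=0))   # object bounds recovered
```

Once the global component is removed, only the object's residual vectors survive the threshold, so the tracker can lock onto it despite the moving camera.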


Discussion Comments


@nony - I have image stabilization on my camera, but honestly I can’t really tell the difference with or without it. It must be useful for only limited camera movement. I can’t see how the camera could be effective in determining optical flow from my rather shaky handheld motions.


@Mammmood - I think optical flow sensors for robots probably use gradient-based detection, which uses image intensity. I say this because robots (some of them) are used to detect the motion of other human beings.

The best way to do this is to look for infrared radiation, since humans generate more body heat in a plane of vision than regular inanimate objects. That radiation will be sure to register as intensified images and the robots can latch on to that and begin tracking the object.


Optical flow sounds very similar to motion detection. I am not an expert, but I've found a few motion detection libraries on the Internet that I've been able to use to build my own programs.

Basically motion detection compares two or more frames of an image to determine if there are any differences. It sets a threshold for variances, so if the differences are above a certain threshold then that means that motion has taken place.

To extend the use to optical flow applications, once the computer identifies that motion has taken place, it can then define boundaries for the object and begin tracking it on a frame by frame basis. I think they use this technology in security cameras in addition to the other applications mentioned in this piece.
