Wednesday, March 8, 2017

Detecting motion with RPi

Using the OpenCV library and a few lines of Python code, it's easy to create a simple motion detector with a Raspberry Pi.

This could be accomplished with other methods, like using a PIR sensor, but sometimes you just want to use a webcam without adding further hardware.

For this post I'm using my Sphaera configuration. This means that the video from a PiCamera is available as a web stream, but it would be easy to get your video frames from different sources...

Let's start with the following two frames:

If you click on them you can better see where the movement is. Actually there are two "things" moving: the cat and the shadows.

The following code is what I'm using to detect the motion:

#!/usr/bin/env python
#-*- coding: utf-8 -*-
# Detect motion

import cv2, numpy, time, urllib

# Get an image from the stream and return it as an OpenCV image
def GetFrame():
    req = urllib.urlopen('') # the URL of your video stream goes here
    arr = numpy.asarray(bytearray(req.read()), dtype=numpy.uint8)
    image = cv2.imdecode(arr, -1)
    return image

while True:
    g1 = cv2.cvtColor(GetFrame(), cv2.COLOR_BGR2GRAY) # Convert first image to gray scale
    time.sleep(.1) # Wait for the next capture
    g2 = cv2.cvtColor(GetFrame(), cv2.COLOR_BGR2GRAY) # Convert second image to gray scale
    d1 = cv2.absdiff(g1, g2) # Absolute difference between the frames
    rv, dt = cv2.threshold(d1, 35, 255, cv2.THRESH_BINARY) # Set a threshold for B/W
    n = cv2.countNonZero(dt) # Count the white dots
    if n > 150: # Set the detection result
        s = "Motion detected!"
    else:
        s = "no motion..."
    print s

After importing the needed libraries, we find the GetFrame function. This function just gets a video frame and returns an OpenCV image. If you wish to get your frames from a different source, this is where you need to change the code.

Next comes the main code for detecting motion. The while True statement of course means a never-ending loop.

The first thing to do is to get two frames and convert them to gray scale. This is done by the first three lines inside the main loop. Between the creation of the two gray images there is a small delay of 0.1 seconds. You can change this value depending on your needs.

This is how the two frames above will become after these instructions:

Nothing big: we just removed all the color information from the images.

The next step is computing the pixel-by-pixel difference between the two images. As we only need the magnitude of the change, I'm using the absdiff function from OpenCV.
Again, here is the result of the operation:

Where the image is black, the pixels were the same, so no movement is detected in those spots. On the contrary, where you see gray pixels, something changed between the two frames, so that's where the motion is.
With this image you can clearly see that the cat is not the only thing that is moving.

Now it's just a matter of counting the pixels that are not black, but there is a problem... MJPEG video streams (produced by most webcams) introduce artifacts in every frame. This means that even if nothing actually changed between two frames, the code will detect the noise caused by the artifacts. Fortunately these are small differences between two pixels in the same place, so they are easy to ignore.

To remove the noise we just use the threshold function from OpenCV. In the code above we create a new image from the difference above, where all the pixels at or below a value of 35 become black and everything else becomes white.

I used 35 as the threshold value after having tried different values; this was the one that performed best.

This is the thresholded image:

Now we can count the white points using the countNonZero function. Since there are things in the video that can move but should not trigger an alarm (i.e. the shadows), I report motion only if the number of white pixels is greater than a specific value (150 in the example code). This value was calibrated by checking the difference between two frames where only the shadows are changing.

Of course this method is not the best one, but it works well (at least for me). False detections can occur if the shadows change too much, or in case of big changes in the lighting conditions. Sometimes nothing is detected even when motion is present, because it's too small. Anyway, it performs well for places without too much variation.

In the code above I just print a short message saying whether motion was detected or not, but you can do many other things.

In my Sphaera project, when motion is detected, a message with a photo (taken from the video stream) is sent to my phone using Telegram (no more than one photo every 5 minutes), and every two seconds a frame is sent to an FTP server outside my house, so I have a sort of time-lapse of what is happening in the room.
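The "no more than one photo every 5 minutes" rule is just a timestamp check. Here is a minimal sketch: the actual Telegram call is left out, and maybe_alert (a name I made up for this example) only decides whether it's time to send.

```python
import time

ALERT_INTERVAL = 300  # seconds: at most one alert every 5 minutes
last_alert = 0.0

def maybe_alert(now=None):
    """Return True (and remember the time) if enough time has passed
    since the last alert; the caller would then send the Telegram photo."""
    global last_alert
    if now is None:
        now = time.time()
    if now - last_alert >= ALERT_INTERVAL:
        last_alert = now
        return True
    return False

print(maybe_alert(1000.0))  # True  - first alert always goes out
print(maybe_alert(1100.0))  # False - only 100 s later, suppressed
print(maybe_alert(1301.0))  # True  - 301 s after the last sent alert
```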

I hope you like this simple way to detect motion using a Raspberry Pi.


  1. I just bought a Raspberry Pi Zero and some other stuff. I am using it for image processing!
    Can you guys tell me whether the camera used for the Raspberry Pi 3 will also be compatible with the Pi Zero, or will I have to buy a new Pi Zero specific camera?

    1. The camera is the same, but you have to change the flat cable, because the connector on the Pi Zero is smaller than the one on the other RPi models.

  2. Nice blog, how can I contact you?

  3. Hello sir, I'm working on a project that must detect motion, then detect a human face if one is there, take a picture and send that picture to a phone. Can you help? I have a Raspberry Pi 3 and a Pi camera.

    1. In the code above, instead of printing the result of the detection, you can use a piece of code like the one you can find in another post on this blog to detect faces, using one of the two captured frames (g1 or g2 in the example above).
      If a face is detected, you can then send the picture, maybe using email or Telegram. For the latter you can again find some code in this blog.
      You can also take a look at the site that has tons of info and code examples about computer vision.