Single Shot MultiBox Detector is an algorithm that locates and identifies objects within an image, returning multiple boxes, each labeled with what the algorithm thinks that region contains. This post will help get you started with it.
Single Shot MultiBox Detector, or SSD for short (which doesn’t make Googling for it any easier), is a paper and corresponding software which is designed to pick out objects from within a larger image, and classify them based on annotated image sets.
It does not require any previous segmentation of the image (i.e. dividing up the areas of interest into their own images), and does a good job even when objects are partially occluded (blocked) by other objects. It isn’t as fast as something like You Only Look Once (which I’ll cover in a later post), but has provided better accuracy in my use cases.
As I mentioned in my intro post, if you’re familiar with Docker, you can simply pull my SSD+CUDA docker image.
If you’re not familiar with Docker, I’ll provide some basic installation instructions, but these are pretty much a copy paste of my Dockerfile (which is used to build a docker image).
All of my instructions assume an Ubuntu 16.04 LTS (xenial) release, using the BASH shell.
To build SSD, you’ll need a number of software packages. Luckily Ubuntu provides most of these through apt already, so you won’t need to build anything special. You can install these packages by running the following commands:
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    git \
    libatlas-base-dev \
    libboost-all-dev \
    libgflags-dev \
    libgoogle-glog-dev \
    libhdf5-serial-dev \
    libleveldb-dev \
    liblmdb-dev \
    libopencv-dev \
    libprotobuf-dev \
    libsnappy-dev \
    protobuf-compiler \
    python-dev \
    python-numpy \
    python-pip \
    python-setuptools \
    python-scipy
Download SSD’s Code
Like so many open-source projects today, SSD is hosted on GitHub. Run these commands to download the software using git:
export SSD_ROOT=$HOME/local/src/ssd
mkdir -p "$SSD_ROOT"
git clone -b ssd https://github.com/weiliu89/caffe.git "$SSD_ROOT"
You may want to consider adding the export SSD_ROOT=... line to your $HOME/.bashrc file as well, as it may be relied upon by future additions to that file, and/or future commands run.
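If you do add it, a guarded append keeps the file from accumulating duplicate lines on repeated runs. Here’s a minimal sketch (using a temporary file for illustration; point RC_FILE at "$HOME/.bashrc" in practice):

```shell
# Idempotently append the SSD_ROOT export to a shell startup file.
# RC_FILE is a temporary file here purely for illustration.
RC_FILE="$(mktemp)"
LINE='export SSD_ROOT="$HOME/local/src/ssd"'
grep -qxF "$LINE" "$RC_FILE" || echo "$LINE" >> "$RC_FILE"
grep -qxF "$LINE" "$RC_FILE" || echo "$LINE" >> "$RC_FILE"   # second run is a no-op
grep -cxF "$LINE" "$RC_FILE"   # the line appears exactly once
```

The `grep -qxF` guard matches the whole line literally, so re-running your setup script won’t stack duplicate exports.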
Download Python Pre-requisites
There are a handful more pre-requisites for building the Python module, which are specified within a file of the cloned SSD repository. Unfortunately, as of this writing, I’ve experienced issues if I didn’t include ‘wheel’ in addition to the packages specified by the SSD installation instructions, so I’ve added it to the commands below:
cd "$SSD_ROOT"
sudo pip install --upgrade pip
for req in wheel pydot $(cat python/requirements.txt); do
    sudo pip install $req
done
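If you want to confirm that the extra wheel package actually installed, a small hedged check (assuming python is the same interpreter pip installed into) reports rather than fails, so it is safe to run any time:

```shell
# Report whether the wheel module is importable from the default python.
if python -c 'import wheel' 2>/dev/null; then
  WHEEL_STATUS="ok"
else
  WHEEL_STATUS="missing"
fi
echo "wheel: $WHEEL_STATUS"
```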
Build SSD
SSD supports CMake, which, IMHO, makes the build process significantly easier than doing it the “old fashioned way.” The following commands will create the build directory and compile SSD:
mkdir "$SSD_ROOT/build"
cd "$SSD_ROOT/build"
cmake -DUSE_CUDNN=1 -DUSE_OPENCV=1 ..
make -j$(nproc)
- If you aren’t using CUDNN, modify the cmake line above to read:
cmake -DUSE_OPENCV=1 ..
- If you want to build for the CPU instead of the GPU, modify the cmake line above to read:
cmake -DCPU_ONLY=1 ..
The make command may take a few minutes, or over an hour, depending on the speed of your machine.
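Once make finishes, a quick sanity check is to look for the main caffe binary (the path below assumes the build directory used above; it simply reports rather than failing):

```shell
# Report whether the build produced the caffe command-line tool.
CAFFE_BIN="$SSD_ROOT/build/tools/caffe"
if [ -x "$CAFFE_BIN" ]; then
  echo "caffe binary found at $CAFFE_BIN"
else
  echo "caffe binary not found - check the build output"
fi
```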
Environment Variables & Library Registration
Python relies heavily on the PYTHONPATH variable to locate modules. If you don’t install the modules built under $SSD_ROOT/build/ into a directory on your PYTHONPATH, you won’t be able to import them. If you’d like to run them from their current location, be sure to run the following command in any terminal you plan on running SSD from, and consider adding it to your $HOME/.bashrc file as well (the CMake build links the compiled module back into $SSD_ROOT/python, which is why that is the directory added here):

export PYTHONPATH="$SSD_ROOT/python:$PYTHONPATH"
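Once the PYTHONPATH is set, a quick hedged sanity check (assuming the build above completed and pycaffe landed on your PYTHONPATH) is to attempt the import; it reports rather than fails, so it is safe to run any time:

```shell
# Try importing the caffe module and report the result.
if python -c 'import caffe' 2>/dev/null; then
  PYCAFFE_STATUS="importable"
else
  PYCAFFE_STATUS="not importable yet"
fi
echo "pycaffe is $PYCAFFE_STATUS"
```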
You will also want to inform your system of where it can find the new libraries. You can do this either at a system level (if you have sudo, and would like to), or at a user-specific level. For system-wide registration (note that sudo with a shell redirect won’t work here, so tee is used instead):

echo "$SSD_ROOT/build/lib" | sudo tee /etc/ld.so.conf.d/ssd.conf
sudo ldconfig

For a user-specific setup instead, set LD_LIBRARY_PATH:

export LD_LIBRARY_PATH="$SSD_ROOT/build/lib:$LD_LIBRARY_PATH"

Please note that you may also want to add that line to your $HOME/.bashrc file as well.
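To see whether the system-wide registration took effect, you can query the linker cache (a hedged check; a count of zero just means the library hasn’t been registered yet):

```shell
# Count how many caffe entries the dynamic linker cache currently knows about.
REGISTERED=$(ldconfig -p 2>/dev/null | grep -ci caffe || true)
echo "libcaffe entries in linker cache: $REGISTERED"
```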
You have now built SSD from source (or leveraged my docker image). My next post will cover how to download pre-trained models, datasets, etc. so you can get started quickly.