Review: Pixy CMUcam5 Image Sensor

My original intent was to use this hardware as an opportunity for the kids at my son's robotics club to learn how to evaluate something and to hone their critical thinking skills.  We ended up with a younger set of kids, mostly middle schoolers and high school freshmen, who just didn't yet have the skills to do the review justice, so I felt it made sense for me to do it.

The result is that this took a lot longer than planned, since I am a busy guy, and I apologize for how long it has taken.  I kept thinking that I should play with it a bit more and try different use cases, but I never got to it.  So I have been sitting on this for over a year.


The Pixy CMU Camera is a sensor that can be configured to view colored objects.  It is easy to set up, easy to use, and within minutes one can be identifying colors and where they are on the screen.  This information can be transmitted to a processor such as an Arduino or another similar device.

The Pixy CMU Camera makes it very easy to track and identify solid colored objects in consistent lighting conditions.

Before It Arrived

Even after 16 years as a software engineer, I love anything shiny and new.  So it was with great excitement that I looked up the info on the Pixy CMU Camera.  I have some experience with vision programming in OpenCV, but I am certainly no expert, and I was frustrated at how any change in light would make OpenCV see a different color.  I was impressed that the Pixy's documentation stated very clearly that its algorithms identified colors consistently across varying light conditions, something I was never able to get OpenCV to do.  I also liked the interface.  The directions stated that the object should be in the middle of the camera's view; press a button on the side, and it would then know to look for that color of object.  Using an Arduino or other MCU, it would give a pixel position and object number back to the MCU over SPI or another protocol.  I couldn't wait to get my hands on it!
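That pixel-position-and-object-number payload is quite compact.  As a sketch of what one tracked-object record roughly looks like on the wire, based on my later reading of the Pixy's serial protocol (the sync value, word order, and checksum rule here are my recollection, so treat them as assumptions rather than a spec):

```python
import struct

SYNC = 0xAA55  # start-of-block marker in the Pixy serial stream (assumed)

def parse_block(raw: bytes):
    """Parse one 14-byte Pixy block: a sync word, a checksum, then
    signature, x, y, width, height as little-endian 16-bit words.
    The checksum is the 16-bit sum of the five payload words."""
    sync, checksum, sig, x, y, w, h = struct.unpack("<7H", raw)
    if sync != SYNC:
        raise ValueError("not aligned on a block boundary")
    if checksum != (sig + x + y + w + h) & 0xFFFF:
        raise ValueError("bad checksum")
    return {"signature": sig, "x": x, "y": y, "width": w, "height": h}
```

The MCU-side libraries hide all of this behind a `getBlocks()`-style call, so in practice you never parse it by hand; this is just to show how little data actually comes back per object.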

First impressions on opening the box were that the item was very small, compact, and thin.  The box it came in was slick with nice graphics, which gave the impression of a very professional release.  Someone had obviously put thought and hard work into the packaging, which seemed to bode well for the actual device.  Inside was some nice mounting hardware that would make it a simple addition to all but the smallest robots.  Any robot would consider the Pixy a handsome addition.


I installed PixyMon on my Windows computer and plugged the Pixy into the USB port.  PixyMon is their tool for configuring the Pixy so it knows what to identify and look for.  You point it at an object about 6-12” away, aim to put the desired object in the center, hold down the button on the side, and wait for the Pixy's LED to turn the color of the object you are looking for.  Release the button and the Pixy will know that is the color of object it is intended to search for.

It took about a half hour of tinkering, starting and restarting PixyMon, before PixyMon finally recognized the Pixy CMU.  From reading other people's experiences with the Pixy, they didn't have these issues, so it is probably something specific to my computer's setup.  With my laptop, which runs Windows 10, it sometimes takes some fiddling, plugging and unplugging the Pixy from the USB port, before it recognizes that the Pixy is connected, but I am always able to get it to work.  Other times, I plug the Pixy in, start up PixyMon, and it comes right up.  According to the instructions, it can take a while to get good at setting up the Pixy, and eventually one may not need PixyMon to set it up at all.  PixyMon also has a set of parameters that can be tweaked to give better search results.

I tried a number of different objects with varying levels of success.  Objects with multiple colors were by far the least successful searches, since the Pixy homes in on one color only.  It can be trained to find particular colors but seems to get confused by multiple colors, especially colors that are different shades of one another.  The Pixy did best with objects that are a single solid color.

One object I tested was the dark red trackball from my mouse.  I placed it on a white notebook to ensure better contrast.  After a bit of playing with it, I finally got the Pixy to find the object.  I was testing in my dining room, which has a dimmer, so I dimmed the lights to see what the difference in recognition would be.  Here is a screenshot from PixyMon with the lights full on.  The number of squares on the red ball indicates how well the Pixy identifies the object; the more squares, the better the recognition.

The dimmer lighting definitely had an impact on the Pixy finding that red ball.  In the last example, with the lighting at its dimmest, it couldn't find the object at all.



Middle brightness:


Least bright:


I tried several different objects with varying colors to test the Pixy's efficacy.  After several hours of playing with the Pixy and the dimmer in my dining room, it appeared that lighter-colored objects were less sensitive to light changes (kind of makes sense, doesn't it?).  The Pixy's recognition was definitely sensitive to changes in lighting, but it was an improvement over just using OpenCV to look for blobs of particular colors, as I have in the past.  With OpenCV, just about any change in light would bring about a color change and make the blob unidentifiable, since OpenCV would treat it as a different color.
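A toy illustration of the difference: a naive fixed RGB threshold (the kind of blob filter I used with OpenCV) fails the moment the lights dim, whereas hue is unchanged by a uniform brightness scale, which I assume is roughly the property the Pixy's algorithm leans on.  The numbers below are made up, and I don't know the Pixy's actual internals:

```python
import colorsys

def in_rgb_box(rgb, lo, hi):
    """Naive fixed-range RGB threshold: in range on every channel."""
    return all(l <= c <= h for c, l, h in zip(rgb, lo, hi))

def dim(rgb, factor):
    """Simulate turning the dimmer down: scale all channels equally."""
    return tuple(c * factor for c in rgb)

red = (0.8, 0.1, 0.1)                       # bright-room reading off the red ball
lo, hi = (0.7, 0.0, 0.0), (0.9, 0.2, 0.2)   # box tuned for full light

print(in_rgb_box(red, lo, hi))              # True in full light
print(in_rgb_box(dim(red, 0.5), lo, hi))    # False once dimmed: R drops to 0.4

# Hue, by contrast, is unchanged by a uniform brightness scale:
h1 = colorsys.rgb_to_hsv(*red)[0]
h2 = colorsys.rgb_to_hsv(*dim(red, 0.5))[0]
print(abs(h1 - h2) < 1e-9)                  # True: same hue either way
```

Real scenes are messier than a uniform scale (shadows shift hue too), which is presumably why even the Pixy still loses the ball at the dimmest setting.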

I also downloaded the Arduino sample code and pretty quickly had the little robot I am working on following the red ball, with the dining room light on full.  I changed their code only slightly and had my little bot chasing it around in just a few minutes.  The integration samples for Arduino, Raspberry Pi, and BeagleBone Black appear to be outstanding.  I only worked with the Arduino, but comments I have read online speak to how easy the integration is with the RasPi and BB.
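The follow behavior in that kind of sample boils down to: read the blob's center x, steer proportionally to its offset from frame center.  A minimal sketch of that loop's core, written in Python rather than the sample's Arduino C++ (the function names, gain, and motor range here are mine, not the sample's; the 320-pixel frame width is my understanding of the CMUcam5):

```python
FRAME_WIDTH = 320  # assumed Pixy frame width in pixels

def steer(block_x, gain=0.5):
    """Return (left, right) motor commands in [-100, 100] that turn
    the robot toward a blob whose center is at block_x pixels."""
    error = block_x - FRAME_WIDTH // 2          # positive: blob is to the right
    turn = int(max(-100, min(100, gain * error)))
    base = 60                                   # constant forward speed
    left = max(-100, min(100, base + turn))     # speed up left wheel to turn right
    right = max(-100, min(100, base - turn))
    return left, right

print(steer(160))   # blob dead center -> (60, 60): drive straight
print(steer(260))   # blob well to the right -> (100, 10): turn right
```

On the real hardware this runs once per frame with whatever block the Pixy library hands back, and tuning `gain` is most of the "slight changes" needed to keep the bot from oscillating.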

One thing I wanted to comment on was how surprised I was at how “washed out” the images the camera returned were.  The camera that comes with the Pixy seems to have poor color saturation.  I compared it against a $5 camera I had bought a few months ago, and as you can tell, there is a substantial difference in the color returned.  I used the brightest setting on the dimmer for the image below.


It seems to me that if the device uses color to identify objects on the screen, better color saturation on the camera could yield better and more accurate results.  There may be a technical reason why they chose that camera for the Pixy CMU Camera, but it seems a strange choice.


From a technology perspective, the Pixy CMU Camera is a pretty nifty device.  The interface to set it up is easy to use and works well within certain parameters.  The programming samples, at least for the Arduino, are easy to use, and I had it working within an hour.  It took longer to read the directions than it did to get it working with the Pixy.

The Pixy CMU Camera has a use case where it makes sense.  If you know ahead of time that the light will be consistent, that the objects will be a solid color, and what those colors will be, this is an excellent option.  The interface makes it very easy to set up for exactly those situations: an M&M sorter, a ball sorter, a LEGO sorter, or anything where the light doesn't change drastically and the objects to be found are solid colors.  For instance, someone at our local Nashua Robotics Group had a robot that used a Pixy to find and pick up a solid blue ball.  The lighting didn't change and the object was a solid color, so it worked very well.

So, within those parameters, I definitely recommend the Pixy CMU Camera.  Its application is fairly narrow, so be careful with how you decide to use it.

Good to know!
Very useful information. Now I think that the Pixy might not exactly be the quality that I would like for my (later) robots after all…

Thank you for sharing your thoughts, Bill.


It’s nice to see the image - wondering if you’d be best to start with a solid object against a white background and try it with a pan/tilt? Good, unbiased review. Thanks!

I’ve had a Pixy cam for

I’ve had a Pixy cam for about a year now, and have basically come to the same conclusion as this review. It’s really cool, but the lighting conditions have to be very consistent. I’ve tried hard to build a “follow the ball” robot like the video at Adafruit. I’ve used the Zumo platform, got the gimbal, used their code, tried my own code, used it via Python on a Raspberry Pi, and I’ve never been really happy with the results. I always wind up trying it on my kitchen floor, and I think there are just too many colors to distract the thing when it’s looking for a neon green ball. There’s a reason the super-impressive YouTube video of that “pixy pet robot” has a very solid-colored floor, one very different from the ball it’s chasing. Nevertheless, it’s a lot of fun to play with, and I don’t consider my money wasted at all. It can kind of work with nothing more than the Pixy, an Arduino Nano, a cheap motor controller, and a couple of cheap motors.

Pixy Experiences

I would agree with the review and the comments here…I had similar experiences with getting it working quickly, its good points, and its limitations.

I was impressed with the refresh rate of the sensor, the simplicity, the ability to track multiple colors and objects (up to 7) at the same time, and the ability to use color “codes” in combinations.  In practice, I was happy tracking about 4 of the 7 colors at a time, like pink, bright reds, blue, green, and yellow, but had a great deal of trouble with darker colors like purple, brown, and darker reds.  The range of each color can be widened to improve the chance of getting a match.

Unfortunately, I feel like this sensor is a bit of a dead end without better software.  Face tracking, recognition, facial expressions, or OCR would be a nice add.  I was trying to increase the situational awareness of my bot at a rapid refresh rate, which I did, but I rapidly reached its limits.  I had hoped to use the color codes to create visual landmarks.  I would probably recommend not bothering with a video cam of any kind unless it can track faces or connect into something with a lot more tools, like OpenCV.  I was able to do a lot more with a phone running OpenCV: a slower refresh rate, but so many more use cases.

I spent a great deal of time working with the Pixy and designing my Ava bot's head to use this sensor and its supporting hardware.  The downstream burden of having so much data to pipe around became a bigger challenge than anything else.  I wish I had better evaluated the value of that data before spending so much time implementing this sensor.  Better software is needed to support visual navigation or some other use case that makes this sensor worth it; tracking a ball is not enough and can be done other ways.  In retrospect, I wish I had designed around a depth sensor or some other configuration, like two Pi cameras running OpenCV.  I still like thermal cams.


I haven’t used it myself,

I haven’t used it myself, yet, but the Intel RealSense cameras look interesting for smaller mobile robots.

I wrote up a summary of some “research” I made in another comment thread:

I didn’t say it in the review

but it would be cool if the Pixy source code were open source (assuming there were resources on the chip to make add-ons).  I could imagine a pretty good ecosystem evolving that could be hooked into.


yep…that would be an awesome step forward.

They could also add a lot of value by adding more feature detectors (with their source code too) that could be turned on/off to provide more functionality without requiring people to tackle something like OpenCV.  OpenCV itself could be made a great deal easier if it were redesigned around ease of use and “add-ons” like you mention.

I often design around having some kind of engine that does a lot of things automatically and can be turned on/off or configured, while also having a “plug-in” model so people can customize and add new features in a much simpler way.  Unfortunately, people with formal training often can't see the forest for the trees and don't design simple, easily extensible models.  Perhaps if these people had worked on large software teams they would have incorporated some of these lessons.

These are perhaps unfair criticisms of Pixy, as their point was to create something simple and fast; extensibility should have been added to those goals.  They did imply when they first released it that they would add new features, though.  I haven't seen any progress, but I haven't looked lately.  Sad, because a cam coupled with processing power on a board has so much more potential.

You can’t please all the people all the time.

I was curious about when

I was curious about when their last cut of code was made, and it was September 2015, with a version for Lego.  So, not a lot going on with it anymore.

I misspoke.  It IS open source.  The firmware is a hex file, though, so I'm not sure what exactly they consider “open source”.  The only files they have are for working with Arduino, BB, or RasPi as far as I can tell.

It would be nice if this evolved into something that folks could hook into.  

This looks like it might be what we are talking about.

Being able to hook into it from Python is kind of cool.  THIS could be pretty interesting.




Pixy C source is available and open


You can get the actual Pixy source code here:

You’ll need the Keil/ARM compiler or the GCC compiler installed.

BTW weren’t you and Noah at the meeting when I brought my Pixycam robot in and had it performing the fetch the red objects task?




Thanks Rich.

I didn’t see it on the site, so good to know. 

I refer to at least one of the times you brought your Pixycam robot to the club in my review.  I thought it was a blue ball it was picking up that time, so maybe I missed the red-objects task (or I misremember the color).