Wolfspec 2.0 – Spectrometry with the Raspicam

This post is a reprint of an article I wrote on my earlier website.  I’ve tried to update the links and images, but may have missed a few.

I recently purchased the camera that attaches to a Raspberry Pi and thought about how one might make a spectrophotometer using the camera as a CCD-like detector. This work is still in progress, but with relatively few steps I was able to get an instrument up and running (and even calibrated – sort of).

Setting up the instrument

Here are some pictures of the instrument. As with my other spectrophotometer implementation, I use a white LED as the light source and a bunch of legos as my optical bench. I’ve switched to actual (disposable) cuvettes and replaced the DVD diffraction grating with a transmission diffraction grating. I still use my mini 10x magnifying lens as a condensing lens.


The big difference is the addition of the camera, which is mounted using whatever legos I had around the house. The camera takes a picture of the light that is projected onto a mounted business card. You may note that this setup does have the GPIO cable connected to a breadboard, but I don’t use the GPIO for anything other than powering the source.

So the idea behind this instrument is the following: take a picture of the diffracted light with and without a sample in the light path, and then use Mathematica to do some image processing and generate a spectrum. My samples will be red and green food coloring. I want to control a number of parameters while acquiring the images, so I use Import instead of DeviceRead. Something like this works well for me:

 cmd = "!raspistill -n -t 1 -sh 0 -co 0 -br 50 -sa 0 -ISO 100 -ex verylong -ev 0 -awb none -o -";
 imgsymbol = Import[cmd, "JPG"];

In the commands above, imgsymbol is actually substituted with blank, red, green, or whatever symbol name I want to use to describe the image. The images I collected for this experiment can be found here: empty, blank, green, and red. Here’s what the empty image looks like:


There is an awful lot of wasted space, but the RaspiCam has no optical zoom. It also doesn’t have a way to focus. (Actually, it does, but I wanted to get a few blog entries in before I potentially damaged my camera.) I’ll say it now, and probably repeat it later: there are many optimizations still to be made to the instrument. Consider this article a snapshot of my journey towards building a functional spectrometer based on Wolfram/RPi.

Processing the data

So I took three additional pictures: one with a water-filled cuvette and two with cuvettes containing red and green dye, respectively. The first step in processing the data was to crop the images so that I was working with datasets containing only the diffracted light. Note that I have probably lost some resolution when I imported/exported the images, since the images from the RPi are approximately 2 MB in size but the Mathematica-exported images are only 97 kB (although they are essentially all black). This is something to consider when working with this instrument for something more than a blog post. I loaded the images and then used ImageTake to grab just the portion of the image that contained useful information.

 {blank, empty, green, red} = Import[#] & /@ {"blank.jpg", "empty.jpg", "green.jpg", "red.jpg"};


Then I can apply those parameters to each of the images, making a set of spectrographs from which to work.

spectrographs = ImageTake[#, {781, 853}, {1498, 1663}] & /@ {empty, blank, green, red};
TableForm[{spectrographs}, TableHeadings -> {None, {"empty", "blank", "green", "red"}}]
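As an aside for anyone following along in Python instead of the Wolfram Language, the same crop is just a NumPy slice; the only subtlety is that ImageTake uses 1-based, inclusive row/column ranges while NumPy slices are 0-based and half-open. A sketch with a synthetic frame (the array stands in for a full-resolution RaspiCam still):

```python
import numpy as np

def image_take(img, rows, cols):
    """Mimic Wolfram's ImageTake[img, {r1, r2}, {c1, c2}]:
    1-based, inclusive row and column ranges."""
    (r1, r2), (c1, c2) = rows, cols
    return img[r1 - 1:r2, c1 - 1:c2]

# Synthetic stand-in for a 1944 x 2592 RaspiCam frame (rows x cols x RGB)
frame = np.zeros((1944, 2592, 3), dtype=np.uint8)
strip = image_take(frame, (781, 853), (1498, 1663))
print(strip.shape)  # (73, 166, 3)
```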


Qualitatively, these images look good: water doesn’t look very different from the blank, the green dye appears to be absorbing all of the red, and the red dye appears to be absorbing some of the blue and green. To convert these images into spectra, we need to do three things:

  1. Process the 2D images into lines representing light intensity
  2. Find transmittance and absorbance
  3. Calibrate the x axis to convert it to wavelength

Image processing

Here, I’m going to assume that each row of the image contains the same information, which allows me to average the intensities in each row and obtain a better signal-to-noise ratio. Additionally, since the columns contain the wavelength information, there is no need to have three separate channels of colors, so I will convert the image to grayscale.

{edata, bdata, gdata, rdata} = Mean@ImageData@ColorConvert[#, "Grayscale"] & /@ spectrographs;
ListLinePlot[{edata, bdata, gdata, rdata}, PlotStyle -> {{Black, Dashed}, Black, Green, Red}]
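A rough Python/NumPy equivalent of the grayscale-and-average step (a sketch: I use the common Rec. 601 luma weights here, which may differ slightly from the weighting ColorConvert applies):

```python
import numpy as np

def intensity_profile(rgb):
    """Convert a (rows, cols, 3) RGB array of 0-255 values to grayscale,
    then average down the rows to get one intensity value per column
    (i.e. one value per wavelength position)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luma weights
    return gray.mean(axis=0) / 255.0              # scale to 0-1, like ImageData

# Tiny synthetic "spectrograph": every row identical, so the row average
# simply recovers the per-column gray level.
rgb = np.tile([[[255, 255, 255], [0, 0, 0]]], (4, 1, 1))
print(intensity_profile(rgb))  # [1. 0.]
```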


We observe more blue intensity in the blank than in the empty spectrometer, which suggests that some reflected light reaches the detector when the cuvette is inserted into the instrument. I’ll have to think about ways to improve this in future versions of my instrument. For now, I think I’ll ignore it.

Converting to absorbance spectra

Transmittance is the fraction of light transmitted relative to the blank, and absorbance is the negative base-10 logarithm of the transmittance; for the green dye, T = gdata/bdata, so A = -Log10[T] = Log10[bdata/gdata]. After a little bit of algebra, we can obtain the absorbance spectra using the following command.

 ListLinePlot[{Log10[bdata/gdata], Log10[bdata/rdata]}, PlotStyle -> {Green, Red}]

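The same algebra in Python/NumPy, where `blank` and `sample` stand for the row-averaged intensity profiles from the previous step (a sketch, assuming 0-1 intensity values):

```python
import numpy as np

def absorbance(blank, sample):
    """A = -log10(T) with T = sample / blank, i.e. log10(blank / sample)."""
    return -np.log10(np.asarray(sample) / np.asarray(blank))

# If the sample transmits 10% of the blank's light, A = 1;
# equal transmission gives A = 0.
print(absorbance([0.8, 0.5], [0.08, 0.5]))  # [1. 0.]
```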
Lastly, we need to calibrate the x axis. There are definitely more sophisticated ways to do this, but I’m interested in proof of concept at this stage. Looking at the Wikipedia entry for LEDs, we find that a white RGB LED emits three colors of approximately equal intensity. We can use this spectrum to calibrate the spectrographs. The three maxima appear at approximately 480, 560, and 620 nm. Let’s take a closer look at the empty spectrograph image:


Now I can use the maxima in this plot (at pixel numbers 45, 92, and 128) to find the transform from pixel units to wavelength, first creating a linear fit to the calibration points and then using it to create a common ordinate for the data:

lm = LinearModelFit[{{45, 480}, {92, 560}, {128, 620}}, {1, x}, x];
ord = lm /@ Range@Length@bdata;
ListLinePlot[Transpose[{ord, #}] & /@ {Log10[bdata/gdata], Log10[bdata/rdata]}, PlotStyle -> {Green, Red},
 Frame -> True, Axes -> None, FrameLabel -> {"Wavelength (nm)", "Absorbance"},
 PlotLegends -> LineLegend[{Green, Red}, {"Green dye", "Red dye"}]]
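The equivalent calibration in Python is a least-squares straight-line fit through the same three (pixel, wavelength) pairs (a sketch; with these nearly collinear points the fit reproduces them to within about 1 nm):

```python
import numpy as np

pixels = np.array([45, 92, 128])
wavelengths = np.array([480, 560, 620])  # nm, from the LED emission maxima

# Degree-1 least-squares fit: wavelength ~ slope * pixel + intercept
slope, intercept = np.polyfit(pixels, wavelengths, 1)

def pixel_to_nm(p):
    return slope * p + intercept

# Common wavelength ordinate for all 166 columns of the cropped image
ord_nm = pixel_to_nm(np.arange(1, 167))
print(np.round(pixel_to_nm(pixels), 1))  # [480.2 559.5 620.3]
```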


Comparison to lit values

The Vernier website has example spectra of food coloring dyes:

Given the trivial method of calibration, a light source that is not continuous through the visible spectrum, and an unoptimized detection system, the results are decent. The general shape of the absorption curves is correct, and the maxima are good to about 10 nm. The spectral features in our instrument are much broader than in a commercial instrument, which is not surprising given the limitations. However, it is very likely that we can make improvements in the instrument design, data acquisition, and processing to improve the performance of the Wolfspec.

6 thoughts on “Wolfspec 2.0”

  1. Thank you for this 🙂 As part of my food product development masters, I’ve had to study food adulteration (mixing in an undeclared item, or falsifying a food item’s content to make a profit). For my essay in this module, I’m writing about the use of spectrometry to identify whether sunflower oil has been added to olive oil.

    This article has helped me understand the process better, and now I’m building my own Spectrometer using a pi (and a noir camera + uv/ir/white leds) for my research. I might even be able to use it for my dissertation 😀 I’ll make sure to reference you 🙂
    thanks again

    • Glad to hear this helped. I am certainly interested in other people’s DIY spectrometer designs so I hope you come back to share.

      • I’m facing an issue I hoped you could help with :/

        I’m rewriting this in Python (since I don’t know Wolfram), and I’m struggling with getting the intensity from the image. I get very different results from yours when using your spectrum image.

        I don’t understand what the Wolfram code is doing to work out intensity. Any guidance would be greatly appreciated 🙂


        • Daniel, I have not tried this with Python yet, but in principle you should get similar results as I did (probably not the same, since I manipulated the images before posting them on the website). Your Python code will need to do these things: (1) crop the image to get rid of the dark edges; (2) convert the image to greyscale; (3) convert the greyscale image into a nested list of numbers corresponding to intensity; (4) average the rows of numbers to get a single row of data points that correspond to pixel intensity. Breaking your code down into those four parts should set you on the right track (or at least help you identify where the problem is).
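          A minimal Python sketch of those four steps, assuming the image is already a NumPy array of 0–255 RGB values (the crop coordinates below are placeholders, and the greyscale weights are the usual Rec. 601 ones):

```python
import numpy as np

def spectrum_from_image(rgb, rows, cols):
    """(1) crop, (2) greyscale, (3) 0-1 intensities, (4) average the rows."""
    crop = rgb[rows[0]:rows[1], cols[0]:cols[1]]   # (1) keep only the spectrum
    gray = crop @ np.array([0.299, 0.587, 0.114])  # (2) Rec. 601 luma weights
    return (gray / 255.0).mean(axis=0)             # (3) scale, (4) row average

# Synthetic 4x3 test image: left column fully bright, the rest dark
img = np.zeros((4, 3, 3), dtype=float)
img[:, 0] = 255
print(spectrum_from_image(img, (0, 4), (0, 3)))  # [1. 0. 0.]
```

          Dividing the 0–255 grey values by 255 matches the 0–1 real values that Mathematica’s ImageData returns by default.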

          • Thanks. I’ll have a play, and see what works.

            I think my problems may have something to do with the python colorsys function for intensity. Going greyscale and doing it that way will probably give better results. Would I be right in assuming that to get intensity, you divided the grey value (0-255) by 255 to get a value of 0-1?
