I’m working on a project that will include using LEDs as light sensors, and one of the first tasks is to learn a bit more about the wavelengths of light emitted by an array of LEDs. Since I recently created a Mathematica interface to an Ocean Optics spectrometer (on a Raspberry Pi, naturally), this task was pretty straightforward.
Looking at the three spectra from the RGB LED, I noticed that not all of the emission has a “normal” Gaussian shape. I tried fitting each of the spectra to a skewed normal distribution to get the peak wavelength, the bandwidth, and the skewness, which is a measure of how far the spectrum deviates from a symmetric, normal-like shape.
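For anyone who wants to try the same analysis, the fit amounts to a few lines of Mathematica. Here’s a minimal sketch, assuming a spectrum imported as {wavelength, counts} pairs; the file name and starting values are placeholders rather than my exact code:

    (* import one spectrum as {wavelength (nm), counts} pairs; "blue.csv" is a placeholder *)
    data = Import["blue.csv", "CSV"];

    (* fit an amplitude-scaled skew-normal profile to the band *)
    model = NonlinearModelFit[data,
       amp PDF[SkewNormalDistribution[mu, sigma, alpha], x],
       {{amp, 10^5}, {mu, 470}, {sigma, 15}, {alpha, 0}}, x];

    params = model["BestFitParameters"];
    peak = NArgMax[{model[x], 400 <= x <= 550}, x];  (* wavelength of maximum intensity *)
    skew = Skewness[SkewNormalDistribution[mu, sigma, alpha]] /. params;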
Note how the blue LED is more symmetric than the red or green ones. It has the profile that most closely resembles a normal distribution, which is indicated by a skewness of zero in the table below. Why is this information useful? No idea. I just wanted to do it and was having fun playing with my Raspberry Pi-controlled spectrometer.
Here’s the data for the remaining LEDs (with the exception of the white one, which wouldn’t fit nicely to a normal distribution). Note that the intensity in the plot below doesn’t mean anything, since I adjusted the spectrometer’s integration time for each of the LEDs in order to get the highest intensity possible. You’ll note that the IR LED does not have a very strong feature even though I was close to the maximum integration time on my spectrometer.
…and here are the metrics for these LEDs. The values in parentheses are the Adafruit product numbers for the LEDs.
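(Related to the integration-time caveat above: if you want to compare the band shapes directly, one quick and equally hackish option is to scale each raw spectrum to its own peak before plotting, along the lines of

    (* scale counts so every spectrum tops out at 1; only the shapes remain comparable *)
    normalize[spec_] := Transpose[{spec[[All, 1]], spec[[All, 2]]/Max[spec[[All, 2]]]}];

which I didn’t bother to do here.)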
Interesting, but it looks to me as if the blue LED data is a very poor match to the Gaussian you fit to it – the lower right side shows a clear systematic mismatch. This shows pretty clearly in the second figure – the maximum data points are both to the left of the fitted maximum. I’m no expert in solid-state theory, but I don’t think a Gaussian is a particularly good model for LED emission, either.
Thanks for commenting. The folks at Ocean Optics reached out to me about these measurements and said that if I wanted to make them correctly, I would need to use a calibrated source or have a setup to do relative irradiance (see the measurement notes here). Some of the odd band shape is likely due to the detector sensitivity at each wavelength. My spectra are simply “raw” values from the detector.
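If I ever redo this properly, my understanding is that the relative-irradiance correction amounts to ratioing each spectrum against a reference lamp of known color temperature. A rough Mathematica sketch of that idea (the function names and reference temperature are my own assumptions, not Ocean Optics code):

    (* Planck blackbody curve for the reference lamp; lambda in meters, t in kelvin *)
    planck[lambda_, t_] := (2 h c^2/lambda^5)/(Exp[h c/(lambda k t)] - 1) /.
        {h -> 6.626*10^-34, c -> 2.998*10^8, k -> 1.381*10^-23};

    (* sample, reference, and dark are count vectors on a common wavelength grid *)
    relIrradiance[sample_, reference_, dark_, lambdas_, tRef_] :=
        planck[lambdas, tRef] (sample - dark)/(reference - dark);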
From my casual reading, emission bands are assumed to be Gaussian or Lorentzian, but I have not dug deep enough into the primary literature to understand why, and my basic understanding of MO/band theory leads me to believe that they would not be symmetric. I suspect “it’s easy and tells us what we need to know” is a big part of the reason.
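For reference, the two textbook line shapes are easy enough to write down and swap into the same kind of fit if someone wants to compare them (the parameterization below is just illustrative):

    gaussian[x_, a_, x0_, w_] := a Exp[-(x - x0)^2/(2 w^2)];   (* symmetric, rapidly decaying wings *)
    lorentzian[x_, a_, x0_, w_] := a w^2/((x - x0)^2 + w^2);   (* symmetric, much heavier wings *)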
Very interesting, could you share the code of the Mathematica interface? Did you read in the spectrometer signal directly using Mathematica?
The code I used can be found in my GitHub repository: https://github.com/bobthechemist/seafim. I’m pretty sure I followed those directions as indicated, but the original code is a couple of years old, so I might have needed to change version numbers and such. Essentially, I modified the Seabreeze sample program to output a CSV file and then call that program from Mathematica. Definitely a hackish approach; however, it served my purpose, so I didn’t see a reason to refactor.
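Stripped down, the hack looks roughly like this (the executable name, arguments, and paths are placeholders, not the exact ones in the repository):

    (* run the modified Seabreeze sample program, which writes the spectrum to a CSV... *)
    RunProcess[{"/home/pi/seafim/capture-spectrum", "--integration-time", "100000"}];

    (* ...then read it back into Mathematica as {wavelength, counts} rows *)
    spectrum = Import["/home/pi/seafim/spectrum.csv", "CSV"];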
Thanks for sharing. I just did it with seabreeze python. Also hackish…
I’ve got a few more years of experience with Mathematica than with Python, but Python makes sense from a resource perspective, especially on an RPi. Good luck with your project – I’m interested in hearing more about it at some point.