Wednesday, March 19, 2014

LED Planet Software

In my last post, I discussed the process of building a spiral-sphere of LEDs that could be controlled from my laptop. The purpose was to create visuals for each movement of Gustav Holst's The Planets for an upcoming concert of the Boulder Symphony Orchestra. I wrote about the math that went into the design, the process of 3D printing the structure in many pieces, and powering the LED strip that was glued to the outside.


The work required to get this sphere ready for filming did not end there. In fact, building the sphere was fairly simple compared to what followed. While the sphere satisfied all of the physical requirements for the film, there was still a set of requirements on how the sphere would act. I've said that the LEDs would be controlled from my laptop, but in more concrete terms, the list of requirements was:
  • Independent control over each LED from laptop
  • Full-sphere refresh rate of at least 30 Hz
  • Pre-coded, modifiable effects (pulsing, twinkling, etc)
  • Image and video projection capabilities
  • Effect mixing
  • Keyframed effect attributes
Each task on its own was a formidable challenge. Implementing all of these together would require code written on multiple platforms, from a tiny 8-bit microcontroller running at 16 MHz to my workstation-laptop with a 64-bit quad-core processor running at 2.4 GHz. Luckily, every platform I had to work with could be programmed in C, so I could stick to one language throughout the whole project. While the entire code for this post (as well as my last three posts) is hosted on GitHub, it's an ugly beast of a codebase. I'll try to include relevant code examples that have been isolated from the main code and cleaned up for legibility.

I considered splitting this software post into multiple sub-posts talking about the different levels of software I had to write to get everything to work. Instead of doing this, I've decided to lump it all into this post. I figure that it is the collaborative effort of all of these levels that makes the software side of this project interesting. To help organize this post, I've split it up into 5 sections:

Part 1 - Driving the LED Strip
Part 2 - Data Transfer
Part 3 - Parallel Protocol
Part 4 - Simple Effects
Part 5 - Sequenced Effects

As I step through the different levels of the software, I will be talking about how long it takes to run certain segments of code. Since I had a target frame rate that I needed to hit, timing was of utmost importance. A frame rate of 30 Hz means that every bit of software has to work together to produce a new frame in less than 33 milliseconds. To get across the idea of how long various bits of code take relative to 33 ms, I'll be using a basic timing diagram:


The idea here is to visualize how long different blocks of code take by displaying them as, well, blocks. The horizontal bar that contains the red, yellow, orange, and white blocks represents what a given processor is doing. As time passes from left to right, the processor performs different tasks, each colored differently. The lower bar that remains the same color represents the LED strip. It does not do any appreciable data processing, so I'll treat it as a passive item. The vertical arrows from the processor to the strip indicate the flow of data, and are positioned in the center of a code block that handles the transfer. Finally, the vertical lines in the background give the time-scale of things. I have placed these 33 ms apart, since that is our target update time. Their spacing on the screen may change between images as I zoom in and out of different features, but they will always represent the same time step. The goal is to transfer new data to the LED strip at least every 33 ms in order to get an update rate of 30 Hz, so in this kind of diagram we want to see one of those black arrows occur at least as often as the background lines. In the example above, the processor serving data to the LED strip is not fast enough.

I've measured the time needed to run various blocks of code by hooking up a Saleae Logic probe to a pin of the relevant computing platform and triggering responses by inserting commands in the code to toggle the pin high or low. In most instances, the toggling takes around 0.1 microseconds. Since I'll probably be rounding most timing figures to the nearest millisecond, I've treated the toggling as instantaneous.

Part 1 - Driving the LED Strip

As with many of my projects, I used a flexible strip of WS2812s as my LEDs. These are surface-mount RGB LEDs with on-board memory and driving. Data for what color the LED should be is shifted down the strip to each LED through a single line. The datasheet for these LEDs gives details about how data must be formatted in order to properly communicate with them. The key to reliable data transfer to the strip is precise timing. Luckily, there are a few freely-available libraries on the internet that handle this timing. I happen to prefer the Adafruit Neopixel library, but I have heard that there are others just as good. This library includes sections of Assembly code that are hand-tuned for various architectures and processor speeds, all intended for use on an Arduino-compatible microcontroller. Since I'm just barely comfortable reading Assembly, I'm glad someone else has taken the time to work out this library for others.

For testing, I used my workhorse Arduino Mega. It runs at 16 MHz and has plenty of memory and 5V digital I/O pins. The interface for using the library is very simple:

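A minimal sketch of that interface, with the data pin number and the test color as placeholders:

```cpp
#include <Adafruit_NeoPixel.h>

#define NUMPIXELS 233
#define DATA_PIN  6   // whichever digital pin the strip is wired to

Adafruit_NeoPixel strip(NUMPIXELS, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();   // initialize the data pin and pixel buffer
  strip.show();    // push all-off values so the strip starts dark
}

void loop() {
  for (int i = 0; i < NUMPIXELS; i++) {
    strip.setPixelColor(i, 255, 0, 0);   // everything red, as a test
  }
  strip.show();    // ~7 ms for 233 pixels, timing dictated by the strip
}
```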

To reach the target frame rate, I didn't need to be concerned with the time it took to initialize things. All that mattered was how often I could call strip.show(). Since the timing of the data transfer is determined by the strip, and not the processor sending data, updating 233 pixels takes roughly 7 ms. In my timing diagram, the above code looks like:


If all we want to do is keep telling the strip to update, we can get a frame rate of over 130 Hz. Unfortunately, this doesn't allow for changing any of the LED colors, ever. Hardly seems worth the frame rate if the image never changes. What we want is for a separate computer to do the complicated calculations that determine what image should be displayed, then send the LED values to the Arduino Mega over USB. The data is parsed on the Mega, and then the LED strip is updated.

Part 2 - Data Transfer

I wrote a short C program that ran on my computer, calculated some LED values, and sent them out to the Mega. It packaged the information for 233 LEDs into a 2048-byte packet: 5 bytes per LED plus some padding on the end. The color for any given LED is specified by 3 bytes, but I included a 'pixel start' byte and a 'pixel index' byte to reduce visual glitches. Based on the USB specification, padding my packet up to the nearest 64 bytes should have worked, but for some reason padding to the nearest 1024 worked better. The Mega ran code that waited for incoming serial data from the USB-to-serial converter, copied the incoming data to a processing buffer, parsed through the buffer to set LED values, then updated the LED strip. The additional code (neglecting some of the elements from the last snippet for brevity) looks like this:

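A sketch of that receive-and-parse loop. It assumes the `strip` object from the earlier snippet, and the start-marker value is my guess, not the one I actually used:

```cpp
#define PACKET_SIZE 2048
#define PIXEL_START 0xFF   // assumed marker value for the 'pixel start' byte

uint8_t buffer[PACKET_SIZE];

void loop() {
  // Pull one full packet into the processing buffer, byte by byte
  int count = 0;
  while (count < PACKET_SIZE) {
    if (Serial.available() > 0) {
      buffer[count++] = Serial.read();
    }
  }

  // Parse 5-byte records: start marker, pixel index, then R, G, B
  for (int i = 0; i + 4 < PACKET_SIZE; i += 5) {
    if (buffer[i] != PIXEL_START) continue;  // skip padding or glitched records
    uint8_t idx = buffer[i + 1];
    strip.setPixelColor(idx, buffer[i + 2], buffer[i + 3], buffer[i + 4]);
  }
  strip.show();
}
```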

On my laptop, I wrote a short program to begin serial communications, package LED color values into a 2048-byte packet, and send them along. The code to open the serial port in Linux was ripped from the example code here:

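That boilerplate looks roughly like the following; the baud rate and termios settings shown are typical choices rather than necessarily the ones I used:

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial device in raw mode at 115200 baud.
   Returns the file descriptor, or -1 on any failure. */
int open_serial(const char *path) {
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    struct termios tty;
    if (tcgetattr(fd, &tty) != 0) { close(fd); return -1; }

    cfmakeraw(&tty);             /* raw mode: no echo, no line buffering */
    cfsetispeed(&tty, B115200);
    cfsetospeed(&tty, B115200);
    tty.c_cc[VMIN]  = 0;         /* reads return as soon as data arrives... */
    tty.c_cc[VTIME] = 10;        /* ...or after a 1 second timeout */

    if (tcsetattr(fd, TCSANOW, &tty) != 0) { close(fd); return -1; }
    return fd;
}
```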

The data packaging code assumes it is given a struct called a 'strip' that contains arrays of length NUMPIXELS for the red, green, and blue channels:

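A sketch of that packaging routine; the start-marker value is a stand-in:

```c
#include <stdint.h>
#include <string.h>

#define NUMPIXELS   233
#define PACKET_SIZE 2048
#define PIXEL_START 0xFF   /* assumed value for the 'pixel start' marker byte */

struct strip {
    uint8_t red[NUMPIXELS];
    uint8_t green[NUMPIXELS];
    uint8_t blue[NUMPIXELS];
};

/* Pack the strip's colors into the fixed 2048-byte packet: five bytes
   per LED (start marker, index, R, G, B), zero padding on the end. */
void package(const struct strip *s, uint8_t *packet) {
    memset(packet, 0, PACKET_SIZE);
    for (int i = 0; i < NUMPIXELS; i++) {
        packet[5*i + 0] = PIXEL_START;
        packet[5*i + 1] = (uint8_t)i;
        packet[5*i + 2] = s->red[i];
        packet[5*i + 3] = s->green[i];
        packet[5*i + 4] = s->blue[i];
    }
}
```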

The timing diagram for the Mega code and laptop code working together is as follows:


I've added a new bar to show what my laptop was doing. The blue blocks are the USB transfer and the white blocks are delays introduced to keep things synchronized. On the Mega, the orange block still shows updating the LED strip, the yellow block is parsing the data buffer, and the red block is receiving the USB data. As you can see, the data transfer rate is slow compared to the background vertical lines. My Mega was only able to copy data in at around 55 kB/s, resulting in almost 38 ms for the data transfer alone. This is close to the peak transfer speeds found in others' experiments. My data parsing code (yellow) also took around 20 ms. Not enough to push the frame time past 33 ms by itself, but it certainly wasn't helping. At this point, I was updating the strip at 15 Hz. Too slow by a factor of two.

Luckily, I had a solution. If the Mega was too slow, I just had to use a faster microcontroller! I happened to have a few Teensy 3.1s sitting around, which boast USB transfer rates up to 20x faster than the Mega. The Teensy is Arduino-compatible, but uses a slightly different library for driving LED strips such as the one I was working with. Still, the interface was basically the same, and it took the same amount of time to update the strip (again, determined by the strip, not the controller). I ported my Mega code over to the Teensy and ran it with the laptop still supplying USB data.


Blindingly fast! Well, at least compared to the sluggish Mega. The code on the Teensy is similar to that running on the Mega, but colored differently. Lime is the strip update, green is the data parsing, and light blue is the data receive. With the higher performance USB communication and 96 MHz clock (versus 16 MHz on the Mega), both the data transfer and the data parsing have sped up immensely. The frame rate here is about 77 Hz, more than twice what I need.

So that's it, right? I can run at 77 Hz, assuming I can get my laptop to churn out a new frame every 13 ms. Unfortunately, no. The Teensy runs at 3.3V as opposed to the Mega's 5V, so the data signal to update the LED strip needs to be buffered. I used a 74HCT245 octal bus transceiver as suggested by this site to bring the 3.3V digital signal up to 5V. It didn't work. For some reason, the LED strip did not like this voltage level. The only way I could get the strip to accept data was to drop the power supply 5V line to around 4.5V, but this was not something I could do continuously due to the nature of my power supply. Without an oscilloscope, it was nearly impossible to determine what the quality of the data line was like. It was entirely possible that the transceiver was giving me a crummy signal, but I had no way of knowing for sure. I was under a strict deadline to get the entire LED system up and running two days after this unfortunate discovery, so I had to consider my options:

 - the Mega could drive the strip, but couldn't get data fast enough
 - the Teensy could get data quickly, but couldn't drive the strip (for unknown reasons)

The answer was suddenly clear: use both! The Teensy would receive data from the laptop, pass it to the Mega, and the Mega would update the strip. The only catch was that I needed a way of passing data from the Teensy to the Mega faster than the Mega could handle USB data.

Part 3 - Parallel Protocol

Let's start by looking at how you can read in data manually to the Mega. Using some bits of non-Arduino AVR-C, the code to quickly read in a single bit of data from a digital I/O pin is:

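Something along these lines, using the ATmega's register names (a reconstructed sketch; the exact pin and mask may differ):

```c
DDRA = 0x00;               /* configure all eight PORTA pins as inputs */
uint8_t val = PINA & 0x01; /* read the state of pin A0, masking off the rest */
```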

Not bad for two lines of code. The port values (DDRA, PINA) are found in tables listing the internal registers of the Mega and the Arduino pin mappings. I use these bitwise commands instead of the standard Arduino digitalRead() in order to speed up the process of reading pin values. The above code isn't too useful on its own, since each new bit read in overwrites the previous one before it can be stored anywhere. But before I go into how to manage data as it arrives, we can look at how to speed up the existing code that reads in data. You may think: how can you possibly speed up a single line of code that probably only takes a handful of clock cycles? Easy:

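Again a sketch; the DDRA setup stays the same, only the read changes:

```c
uint8_t val = PINA;   /* all eight bits of PORTA in a single register read */
```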

By removing some of the code, we've sped up data input by a factor of 8. What exactly did I do? I removed the mask that blocked out the other 7 pins available on PORTA, allowing a mapping of 8 digital I/O pins to 8 bits of internal memory. Now the code will read in 8 bits simultaneously every time that line of code is executed. Not a bad improvement. How does this compare to the serial USB communication from before? The average data rate I could get before was 55 kB/s, or roughly 284 clock cycles of the Mega for each byte. The act of grabbing a whole byte of data using this new method takes only a single clock cycle, leading to an impressive 15 MB/s. This is, of course, completely unrealistic. The Mega doesn't only have to read the value of the port, but also store the value somewhere in memory. If I want to do something useful with the incoming data, the Mega also has to make sure each new byte gets stored in a unique location. Then there is the issue of synchronization with whatever is supplying the incoming data. You wouldn't want to miss an incoming byte of data, or read the same byte twice before it is updated. Solving each of these issues creates overhead that will drastically slow down the data rate. The hope is that even with appropriate data management and synchronization, this method is still faster than using serial communication.

To manage where each new byte of data goes in the Mega, I used a 2048-byte buffer that could store the entire package sent for each update for the LED strip. As each byte is read in, it is placed in the buffer, and a counter is incremented to keep track of where in the buffer the next byte should go. To handle synchronization, I added two more data pins to the existing 8 pins of this parallel protocol. One pin is the Mega-Ready pin (RDY), which signals to the data source that it is prepared to accept data. The other new pin is the Data-Clock pin (CLK), which the data source uses to tell the Mega that a new byte has been made available and that it should read it in. I used an interrupt on the Mega triggered by the CLK signal to read in a byte. The Mega code to read in data and process it once there is enough looks like this:

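A sketch under those assumptions; the pin numbers and the parseAndShow() helper are hypothetical stand-ins:

```cpp
#define PACKET_SIZE 2048
#define CLK_PIN 2   // interrupt-capable pin on the Mega (placeholder)
#define RDY_PIN 3   // any output pin (placeholder)

volatile uint8_t  buffer[PACKET_SIZE];
volatile uint16_t count = 0;
volatile bool packetReady = false;

// Fires on each rising edge of CLK from the data source
void readByte() {
  buffer[count++] = PINA;          // grab the whole byte in one register read
  if (count >= PACKET_SIZE) {
    packetReady = true;
    digitalWrite(RDY_PIN, LOW);    // hold off the source while we parse
    count = 0;
  }
}

void setup() {
  DDRA = 0x00;                     // all PORTA pins as inputs
  pinMode(RDY_PIN, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(CLK_PIN), readByte, RISING);
  digitalWrite(RDY_PIN, HIGH);     // signal that we're ready for data
}

void loop() {
  if (packetReady) {
    parseAndShow();                // parse buffer into LED values, then strip.show()
    packetReady = false;
    digitalWrite(RDY_PIN, HIGH);   // ready for the next packet
  }
}
```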

I found that the process of jumping to the interrupt routine and running the code within took around 6 microseconds. Assuming the data source can instantaneously provide a new byte once the Mega has finished reading the previous one, this allows for an overall data rate of 162 kB/s. Not nearly as good as the overly-ideal 15 MB/s, but roughly three times better than the original serial 55 kB/s.

Teensy on left, Mega on right. 8 data lines plus RDY and CLK.

The data source for the Mega is the Teensy controller. Since it can handle fast communication with the laptop and has a faster clock speed than the Mega, it handles reading in USB data and splitting each byte up into one bit per pin of the Mega.

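The Teensy side might look like the following sketch; the dataPins array, pin numbers, and the 6 µs strobe delay are stand-ins matched to the ISR timing above (the real code likely used direct port writes rather than digitalWrite for speed):

```cpp
#define CLK_PIN 11   // placeholder pin assignments
#define RDY_PIN 12

const int dataPins[8] = {2, 3, 4, 5, 6, 7, 8, 9};  // hypothetical wiring

// For each byte of the packet: put its 8 bits on the parallel lines,
// then strobe CLK so the Mega's interrupt routine reads them in.
void sendPacket(const uint8_t *packet, int len) {
  while (digitalRead(RDY_PIN) == LOW) ;   // wait until the Mega is ready
  for (int i = 0; i < len; i++) {
    uint8_t b = packet[i];
    for (int bit = 0; bit < 8; bit++) {
      digitalWrite(dataPins[bit], (b >> bit) & 1);
    }
    digitalWrite(CLK_PIN, HIGH);          // "new byte available"
    delayMicroseconds(6);                 // give the Mega's ISR time to run
    digitalWrite(CLK_PIN, LOW);
  }
}
```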

Going back to the timing diagrams, we can look at how the laptop, Teensy, and Mega work together to pass data along to the LED strip in an optimal way:


Starting with the top bar, the laptop spends its time computing what color each LED should be (black), sending the 2048-byte packet over USB to the Teensy (purple), and sitting around waiting for everyone else to catch up (white). Next, the Teensy waits for data from the laptop (white), receives and stores the data (blue), then starts arranging the data and sending it on to the Mega using the parallel protocol (green). The Mega receives the parallel data from the Teensy (red), parses it into LED values (yellow), then updates the LED strip (orange). The most important part of this figure is that by using the parallel protocol, the overall time taken to update the LED strip is less than the 33 ms vertical lines. This means the entire system can finally run at faster than the initial goal of 30 Hz.

It's interesting to see the relative time it takes to do various tasks on different computing platforms. The laptop can perform all sorts of complicated mathematical operations to determine what pattern will appear on the sphere in less time than it takes the Mega to just pass on the LED values to the strip. It's also neat to see how each platform independently handles its own share of the work, then joins with another platform to ensure a steady flow of data.

And with that, I had a reliable way of updating every LED on the spiral-sphere at just above 30 Hz. While it may not have been the simplest solution out there, I ended up very fond of this method. It was an excellent exercise in simple data transfer protocols and the limitations of different hardware platforms. For the rest of this post, I will move past the Mega+Teensy hardware and focus only on what the laptop has to do to produce the desired LED colors at each time step.

Part 4 - Simple Effects

With the communication protocol worked out between each computing platform, my laptop had full control over the color of every LED and could update every value at 30 Hz. All it had to do was produce a properly-formatted 2048-byte packet containing the color values for each LED and send it off to the Teensy once every 33 ms. The next step in this project was to create a series of 'effects' that could be executed on the laptop and would determine what color values to send along at every update. These effects would later be the building blocks of the final visual product, so I needed a fairly general set that could be combined later.

I knew that while most effects would need a common set of parameters to determine their appearance (color, fade), they would often need their own specialized parameters based on the effect (pulsing rate, image location, etc). Instead of coding each effect to have a unique interface, I created a 'handle' struct in the code:

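A sketch of such a struct; the field names and array sizes are my guesses from the description:

```c
#include <stdint.h>

#define NUMATTR  8     /* assumed sizes -- the real ones may differ */
#define MAXFILES 4
#define PATHLEN  128

struct handle {
    float   fade;                     /* 0.0 = invisible, 1.0 = full on */
    uint8_t color1[3];                /* primary RGB color */
    uint8_t color2[3];                /* secondary RGB color */
    float   attr[NUMATTR];            /* effect-specific values (rate, size, ...) */
    char    file[MAXFILES][PATHLEN];  /* image/video paths for projection effects */
};
```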

The handle contained all of the parameters that any effect could need. It was then up to the individual effect to use the different values within a handle appropriately. The same handle could be applied to multiple effects, although I rarely did this. Simpler effects would use the fade and color variables, while more complicated ones would use the attr and file arrays for other effect attributes. The 'effect' struct contains miscellaneous variables related to a single effect (start time, which handle to use, unique pixel buffer, etc):

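Something like the following; again, field names are guesses from the description:

```c
#include <stdint.h>

#define NUMPIXELS 233

/* Bookkeeping for one running effect */
struct effect {
    double  start_time;            /* when the effect was launched, in seconds */
    int     handle_index;          /* which handle supplies its parameters */
    int     active;                /* nonzero while the effect is running */
    uint8_t pixels[NUMPIXELS][3];  /* this effect's own RGB frame buffer */
};
```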

When an effect is run, the appropriate function is called at every time step and passed an effect struct, a handle struct, and the current time so that the effect can know how far along it is. With this fairly straightforward interface worked out, I started creating effects.

Effect: Solid

Set every LED to the same value based on color1 and fade.
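A sketch of Solid, using minimal stand-ins for the handle and effect structs (the real ones carry more fields):

```c
#include <stdint.h>

#define NUMPIXELS 233

/* Minimal stand-ins for the structs described earlier */
struct handle { float fade; uint8_t color1[3]; };
struct effect { uint8_t pixels[NUMPIXELS][3]; };

/* Set every LED to color1, scaled by the handle's fade value */
void effect_solid(struct effect *e, const struct handle *h, double now) {
    (void)now;  /* Solid doesn't animate, so the current time is unused */
    for (int i = 0; i < NUMPIXELS; i++)
        for (int c = 0; c < 3; c++)
            e->pixels[i][c] = (uint8_t)(h->color1[c] * h->fade + 0.5f);
}
```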


Effect: Image

Use projection mapping to display an image from file. The projection method (planar, cylindrical, spherical) and the projection direction are specified in attr.


Effect: Pulse

Smoothly pulse the value of every LED between color1 and color2 at a rate determined by attr.


Effect: Circle

Set LEDs within a certain distance of a specified location on the sphere to color1. The location and size of the circle are specified in attr, as well as a tapering parameter. This effect loads information on the 3D location of each LED from elsewhere.


Effect: Flicker

Every time step, this effect tells some number of LEDs to begin lighting up to either color1 or color2. The number of LEDs to light up per second, the fade-in time, and the fade-out time are all specified in attr. I've used a Poisson distribution to calculate how many LEDs to light up at each time step.


Effect: Packet

Send a packet of light up or down the LED strip, ignoring the spherical geometry. The values in attr give the packet velocity, origin, and width.


Effect: Ring

Given a start and end LED, lights up LEDs between them with a travelling wave of color. The start LED, end LED, wave speed, and wave length are specified in attr.


I wrote three more effects that were mostly variations on the ones above. There is a Marching effect that is similar to Flicker, but synchronizes the LEDs into two groups instead of letting each be random. The Video effect uses the same projection mapping as Image, but projects a frame of a video based on the elapsed time of the effect. The Single-Line Packet is a slight variation on Packet, where the packet of light is constrained to move along a single loop of the spiral-sphere. These effects were all introduced with the specific intention of using them in the final video production.

As mentioned at the beginning of this section, each effect that needs to be running is called upon at each time step. If multiple effects are called, their unique pixel buffers are added together (additive blending) to create the final frame that is sent out to the electronics. For the Image and Video effects, I had the code pre-load the appropriate files before starting into the main loop. This reduced the amount of time spent calculating these effects during playback.

Part 5 - Sequenced Effects

With a complete set of effects written up, the next step of this project was to sort out how to create a sequence of effects that could be run as a unit. I not only needed multiple effects to run simultaneously, but to start and stop independent of each other. I also needed to be able to change the attributes within each handle as the sequence progressed. The simplest example of this would be a fade, where the fade attribute in a handle varies smoothly from 0 to 1 over the course of a few seconds, causing an effect to fade into view.

Ideally, I wanted key-framing. Every attribute of every handle would be able to change at any moment, and the changes would be defined by a set of key frames. Effects would be launched at specified times and produce visuals based on these key-framed attributes. Since I had two days to set up the framework for this, I went with text-based key framing instead of a GUI. This required creating a 'language' that my code would interpret and turn into key frames. To build the syntax for this language, I wrote out some pseudo-code for how I wanted it to work:

# allow comments starting with a "#"
at 0:00, set fade of handle 1 to 0.0
at 0:00, set color1 of handle 1 to (255, 0, 0)
at 0:00, launch effect "solid" using handle 1
at 0:10, set fade of handle 1 to 1.0

The above example would create a handle with color1 set to red and fade set to 0. The effect Solid would be linked to this handle and launched at the beginning, but since the fade would be set to 0, it would not appear. Over the first ten seconds of the sequence, the fade would gradually increase until it equaled 1.0 (full on) at the end.

This example shows the two main commands I needed in my language; adding a key frame to an attribute of a handle and launching an effect linked to a handle. Shortening the above example to the final syntax I chose:

# comments still work
0:00 h1 fade 0.0
0:00 h1 color1 255 0 0
0:00 effect solid 1
0:10 h1 fade 1.0

Writing the code to parse these commands was fairly straightforward. Lines in a sequence file would start either with a #, indicating a comment, or a time stamp. In the latter case, the next bit of text after the space would either be "effect", indicating the launching of an effect at that time stamp, or "h*", indicating the addition of a key frame for an attribute of a handle. Since string parsing is an ugly matter in C, I'll save you the burden of looking at the code I wrote for this.

An array of empty handles and effects would be created at the start of the main program and filled with information as the sequence file was parsed. To keep track of key frames, the handle struct was modified to include arrays of keys:

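Perhaps along these lines; the capacity and field names are guesses:

```c
#define MAXKEYS 64   /* assumed capacity */

/* One key frame: an attribute value pinned to a time in the sequence */
struct key {
    double time;    /* seconds from the start of the sequence */
    float  value;
};

/* Each keyable attribute of a handle carries its own sorted key array */
struct keyed_attr {
    struct key keys[MAXKEYS];
    int        nkeys;
};
```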

At each time step during the execution of the sequence, every attribute of every handle would be updated to reflect the appropriate value based on the nearest key frames:

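A sketch of that per-attribute evaluation, with the key structs repeated so the snippet stands alone: hold the first/last value outside the key range, linearly interpolate between the two bracketing keys inside it.

```c
#define MAXKEYS 64

struct key        { double time; float value; };
struct keyed_attr { struct key keys[MAXKEYS]; int nkeys; };

/* Value of a keyed attribute at time t, assuming keys sorted by time */
float eval_attr(const struct keyed_attr *a, double t) {
    if (a->nkeys == 0) return 0.0f;
    if (t <= a->keys[0].time) return a->keys[0].value;
    if (t >= a->keys[a->nkeys - 1].time) return a->keys[a->nkeys - 1].value;
    for (int i = 1; i < a->nkeys; i++) {
        if (t < a->keys[i].time) {
            double t0 = a->keys[i - 1].time, t1 = a->keys[i].time;
            double f  = (t - t0) / (t1 - t0);
            return (float)((1.0 - f) * a->keys[i - 1].value
                                 + f * a->keys[i].value);
        }
    }
    return a->keys[a->nkeys - 1].value;
}
```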

While this might not be the most efficient method of key framing, I only ever dealt with around 30 handles and key frames. If I had a need for hundreds or thousands, I would have worked some on optimizing this algorithm. I wrote up a quick testing sequence and recorded the result:

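A test sequence along these lines (the timings and handle numbers here are made up):

```
# fade a red solid in over five seconds, then pulse toward blue
0:00 h1 fade 0.0
0:00 h1 color1 255 0 0
0:00 h1 color2 0 0 255
0:00 effect solid 1
0:05 h1 fade 1.0
0:10 effect pulse 1
```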


Finally, with all of the effects playing nicely inside the sequencer, I began writing up the sequences for each movement of The Planets. I began by translating storyboards made by my collaborator on this project, then filled in gaps and introduced fades between different effects. After about 20 hours of translating pictures to text, I had the sequences written down in my made-up language:

In retrospect, maybe a GUI would have been worth an extra few days of work...

For filming the sphere in action, I would tell the main code to execute one of the sequences. It would pre-load any images and videos, then wait for my command to begin playback. With fabric draped around the sphere and cameras rolling, I would tell the code to play through the sequence. The visuals were completely determined by the written sequences, so we could do multiple takes and get identical results. I could start the sequences at any time stamp and fast-forward through the playback to get a quick preview before we committed to anything.

With around 55 minutes of content to produce, filming took quite a while. Writing sequences for the first time was a burden, but going through and editing them during the filming process was a nightmare. Even answering questions like "why is the sphere blue at 4:31" was difficult due to the nature of the sequencing language. The text-based key framing was a simple solution to a problem I had to solve in a short amount of time, but spending an extra few days implementing a graphical interface would have saved me quite a few headaches.

When the final full-length video is posted online, I will link to it here. For now, I can give you two still shots.

Venus, Bringer of Peace 

In all, I'm happy with the way this project turned out. Some of the technical bits may have been a little rough around the edges, but I've learned that's what happens when you have only a few weekends to get things working and an ideal budget of $0.

Uranus, the Magician

While the Planets project is over, the LED sphere and accompanying electronics are still fully-functional. I'll have to see if I can incorporate them in some other personal project down the road. It might even make for a nice hanging lamp in my apartment.

Monday, March 10, 2014

LED Planet Hardware

Well, I made a joke about how my blog posts are ending up just being about blinking a couple of color LEDs, and here I am talking about them again. I suppose that's what I get for thinking I can avoid falling into the pit of trendy maker projects. This is a continuation of my exploration of projection possibilities with LED strips, but also the start of a specific project to utilize my new toys.

A few weeks ago I was approached by my friend Julie to collaborate on a new art project. The goal was to create a video that could accompany a live performance of Gustav Holst's The Planets. The Planets is a 55-minute, seven-movement orchestral suite where each movement is named after a planet of the Solar System. With Julie's expert artistic skills and my graduate education in astrophysics, we figured we could have a lot of fun with this challenge. As I write this post, the video is being prepared for its debut in one week, so I'll save my lengthy discussion about the video itself until later. In this post and my next one, I'll cover the hardware and software (respectively) that went into creating the final product.

Without dragging on about the artistic implications, I should explain briefly what our goal was. We wanted to film a spiral-sphere of LEDs hanging in mid-air, surrounded by fabric. Both the fabrics and the images displayed on the sphere would change based on the movement being played.


The first step was to design the sphere. We wanted a spiral sphere, so I set off figuring out exactly what shape we wanted. I used parametric equations to describe the curve in 3D space:

\[ x = \cos(2 \pi N t) \sqrt{r^2 - z^2} \\ y = -\sin(2 \pi N t) \sqrt{r^2 - z^2} \\ z = r (2 t - 1)  \]

Here, $t$ goes from 0 to 1, $N$ is the number of turns on the spiral from top to bottom, and $r$ is the sphere radius.

Spiral ends at bottom when gnuplot gets tired of plotting.

I knew exactly how many LEDs I had on one strip (about 4 meters worth), and we wanted around 8 turns of the strip as it went down the sphere. This pinned the size of the sphere to a diameter of 19.7cm (7.75in). Our first idea was to bend stainless steel wire into the right shape and then glue the LED strip on the outside. Since I had no method of bending wire very accurately, and the precision of the spiral-sphere shape was important to the overall visual effect, we figured we needed a method less prone to human error. Luckily, I had a small 3D printer sitting on my desk.

Since they don't exactly offer pre-designed spiral-spheres of 19.7cm diameter on Thingiverse, I had to figure out how to design the 3D model myself. I saw online that OpenSCAD was a good choice for people comfortable with programming, so I went for that. The learning curve is not bad at all, at least for those with good spatial reasoning. Since I had already figured out the equations that described our spiral-sphere, I could get an incredibly accurate print of what we wanted. I've posted the .scad and .stl files for the entire sphere here.

A spiral-sphere for ants.

The build volume of the printer is small, about 4 inches on a side. There was no way of printing the entire sphere in one go, but I could print it piece by piece and glue it together in the end. I started by splitting the sphere into octants that could be joined along internal 'ribs'. Due to the curvature of each octant, I needed to allow for a lot of support material in the printing process.

3D printers are mostly good for scribbling in air.

The octant method did not go well. While my printer is a sturdy little fellow, it was not up to the task of printing for 16 hours on end without fault. This method of splitting up the sphere would not work. I decided a better way was to drop the support ribs and print small sections of the spiral one at a time. That way each print was only about an hour long, and any printing failure would no longer cost an entire eighth of the sphere. The spiral ended up in 32 pieces:

Suspiciously fewer than 32 pieces.


I then used hot glue to stick the LED strip on the outside of the sphere.

Slight deformation.

Without the internal supports I had originally designed for the spiral-sphere, it would deform under its own weight. I decided to print out the ribs I had removed and glue them to the inside of the sphere to help keep the spiral in line.


With the LED spiral-sphere built, the next step was to power it. The datasheet for the LEDs is readily available, so I could get the exact voltage and current requirements. The whole strip runs off 5V, and each color of each RGB LED draws about 20mA of current when on. With 233 LEDs, keeping the entire strip at full brightness requires about 14A (70W). This is a fair amount of power, certainly enough to cause some damage. It's far too much current to use a linear voltage regulator to drop any appreciable amount of voltage, and too low of a voltage for most wall adapters. Luckily, I happened to have an old computer power supply sitting unused in the corner of my apartment.

Relevant connector circled.

The power supply is capable of delivering a total of 30A on its 5V line, plenty for what I needed. There are quite a few tutorials online showing how to use a desktop power supply for hobby electronics, so I won't go into too much detail about how I got it to work. There is a standard 20ish pin plug (circled in red) on all power supplies like this one with a standard wiring scheme. Black is ground, red is 5V, and the rest don't matter as much. If I had simply routed a pair of black and red wires to my strip and flipped on the power supply, nothing would have happened. Before the power supply will supply anything on its main power lines, it has to be told to turn on by the circuit it is powering. There is a single green wire in the standard plug that, when shorted to ground, tells the power supply to turn on. I spliced apart another connector I picked up at my local hardware store and routed the wires I needed to a protoboard:


When the power supply is on (but not 'running') and plugged into this board, the PS Status LED lights up. When I flip the 5V Enable switch, the green power supply wire is shorted to ground and the power supply starts running. The 5V Status LED lights up when the 5V line is active. I hooked up the LED strip power lines to the blue screw terminal and had a reliable power supply.


The strip uses a single wire for sending data in. Using 5V logic and reasonably precise timing, data for what color each LED should show is sent down the strip. I used my workhorse Arduino Mega to interpret commands sent from my computer and send the appropriate data to the LED strip. It takes about 30 microseconds to send the data for a single LED, so around 7.3 milliseconds for the entire strip, allowing for some overhead. If you just wanted to keep updating the strip with the exact same colors over and over, or modify the colors in some trivial way, you could update the strip at a wonderful 130 Hz. I would eventually need a 'frame rate' of at least 24 Hz, so just the act of telling the strip what colors to show would not be a problem. I'll go into the details about the software side of this project in a later post, but I can assure you there were enough other problems to make it interesting.
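For concreteness, the arithmetic behind those numbers can be sketched in a few lines of C. The 30 microsecond figure follows from the strip's protocol: 24 bits per LED at 1.25 microseconds per bit, plus a reset gap between frames that I'm assuming here to be 50 microseconds (the exact gap varies by strip):

```c
#include <assert.h>

#define NUM_LEDS   233
#define US_PER_LED 30   /* 24 bits at 1.25 us per bit */
#define RESET_US   50   /* assumed idle gap between frames */

/* Microseconds needed to clock one full frame out to the strip */
static long frame_time_us(int num_leds)
{
    return (long)num_leds * US_PER_LED + RESET_US;
}

/* Upper bound on refresh rate, ignoring any processing overhead */
static long max_refresh_hz(int num_leds)
{
    return 1000000L / frame_time_us(num_leds);
}
```

For the 233-LED strip this gives roughly a 7 ms frame and a refresh ceiling in the 140 Hz range, comfortably above both the 130 Hz figure quoted above (which allows for overhead) and the 24 Hz target.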

Using the projections methods I've developed over the last two blog posts, I projected my standard test image, the SMPTE color bars:

Upside-down, but otherwise correct.

With a working spiral-sphere of LEDs, the final step was to suspend it inside a wooden frame that could support the fabrics that would complete the picture. I grabbed a few 8' long 1x2 strips at Home Depot, dug out my power tools (hand tools where I yell "POWER" as I use them), and got to work. Since boxes have significantly simpler geometry than a spiral-sphere, it was a simple task.

Ignore the screws sticking out everywhere.

And there we have it, an LED planet ready for its big film debut. My next post will cover the flow of data from computer to LEDs, and what software was needed to control the flow. After that (and after the film debut), I'll do a short writeup on the conceptual side of this planet project (artistic goals, scientific influence, historical context). For now, I'll leave you with a picture of the setup we used for filming.

Battlestation nearly ready.

Friday, March 7, 2014

LED Projection Mapping with Webcams (and Math)



In my last post, I covered the initial steps to setting up an LED projection system that can handle arbitrary LED locations. The LEDs were all constrained to a single strip, mostly because I can't commit to dedicating the LEDs to any single project and cutting the strip up. I wrapped the strip around a cylinder and was able to project a few images and a video with reasonable success. The biggest problem was the imperfect placement of the LEDs.

My projection system depends on knowing the 3D position of every LED in the display in order to properly project an image. Using simple math to describe the positions like I did before only works if the LEDs are placed very accurately. To allow for projections on to an LED strip that is not placed on any sort of simple mathematical curve, I've worked out a method of automatically locating every LED in 3D space using some webcams and some more math.

Single-Camera Mapping

To start, let's look at how a webcam can be used to recognize an LED.

An appropriately messy pile of LEDs.

In an image recorded from the camera, the strip appears as a mostly white strand with very small non-white regions marking the location of each LED. Not a great situation for automatic feature detection. What happens when we turn on an LED by sending the appropriate command to my Arduino LED strip controller to light exactly one?



Now we have a bright spot showing us the location of a single LED. To automatically locate this LED, we could write an algorithm that searches through the camera image for the brightest spot and returns its position. Unfortunately, this kind of method is easily confused by anything else bright in the camera image. A way to fix this is to subtract the LED-off image from the LED-on image, eliminating anything that hasn't changed between the two frames.



Now all we have left is a single bright point to search for. Using OpenCV to handle capturing and storing the webcam images, the location of the bright spot can be found like so:

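A minimal sketch of that search, operating on raw 8-bit grayscale buffers rather than OpenCV image types: it computes the intensity-weighted centroid of the background-subtracted image (the "center of light" discussed further down), which tolerates the LED lighting up nearby surfaces.

```c
#include <assert.h>
#include <math.h>

/* Locate a lit LED by finding the center of light in the background-
 * subtracted image.  Both frames are w*h, 8-bit grayscale, row-major.
 * Writes the centroid to (*px, *py); (-1, -1) if nothing got brighter. */
static void find_led(const unsigned char *frame_off,
                     const unsigned char *frame_on,
                     int w, int h, double *px, double *py)
{
    double total = 0.0, sx = 0.0, sy = 0.0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int diff = (int)frame_on[y * w + x] - (int)frame_off[y * w + x];
            if (diff > 0) {          /* ignore pixels that got darker */
                total += diff;
                sx += (double)diff * x;
                sy += (double)diff * y;
            }
        }
    }
    if (total > 0.0) { *px = sx / total; *py = sy / total; }
    else             { *px = -1.0; *py = -1.0; }
}
```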


These positions are not the full 3D positions of each LED that we will eventually need, but just the projection of each LED position on to the webcam field of view. We can loop through each LED in the strip and determine this projected position:
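A sketch of that loop, with `set_strip_led` and `capture_gray` as hypothetical stand-ins for the real Arduino-serial and OpenCV capture calls, simulated here so the example is self-contained:

```c
#include <assert.h>
#include <math.h>
#include <string.h>

#define W 8
#define H 8
#define NUM_LEDS 3

/* Hypothetical hardware layer, simulated for illustration.  The real
 * versions talk to the Arduino over serial (set_strip_led) and to a
 * webcam through OpenCV (capture_gray). */
static const int sim_x[NUM_LEDS] = {1, 4, 6};  /* where the fake camera */
static const int sim_y[NUM_LEDS] = {2, 5, 3};  /* "sees" each LED       */
static int lit = -1;                           /* currently-lit LED     */

static void set_strip_led(int i) { lit = i; }  /* -1 means all off */

static void capture_gray(unsigned char *frame)
{
    memset(frame, 10, W * H);                  /* dim, uniform background */
    if (lit >= 0)
        frame[sim_y[lit] * W + sim_x[lit]] = 255;
}

/* The mapping loop: record one (x, y) image position per LED by lighting
 * the LEDs one at a time and taking the centroid of the difference image. */
static void map_strip(double *xs, double *ys)
{
    unsigned char off[W * H], on[W * H];
    set_strip_led(-1);
    capture_gray(off);                         /* background frame */
    for (int i = 0; i < NUM_LEDS; i++) {
        set_strip_led(i);
        capture_gray(on);
        double total = 0, sx = 0, sy = 0;
        for (int p = 0; p < W * H; p++) {
            int d = (int)on[p] - (int)off[p];
            if (d > 0) { total += d; sx += d * (p % W); sy += d * (p / W); }
        }
        xs[i] = sx / total;                    /* center of light */
        ys[i] = sy / total;
    }
}
```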


You might notice that when an LED lights up a significant portion of the paper the strip is taped to, the LED-finding algorithm picks a point somewhere between the actual LED and the lit-up paper. This is because the algorithm finds the center of light in the image rather than the peak of light. I feel this is more appropriate in this case because some of the LEDs are pointed away from the camera and will only be seen in the light they shine on the paper. Instead of treating those LEDs as lost, I might as well register the lit-up paper as the source of light from the viewer's perspective and map it appropriately.



This is enough information to perform a projection mapping that is only viewable from the perspective of the webcam.




This single-camera method is not able to determine the distance from the camera to any of the LEDs. All it can do is tell you the projected position of each LED on the image plane. In order to generalize this projection mapping, we need some method of supplementing the 2D position information provided so far with an estimate of the camera-to-LED distance.

Multi-Camera Mapping

Consider a single camera looking at a single lit LED. The LED's position on the camera image gives us the angle, relative to the direction the camera is pointing, at which the LED sits. From this position (as determined by the above algorithm), we can work out a vector line that originates at the camera and hits the LED at some point:
\[ \vec{f} = \vec{c} + t\hat{\vec{n}} \]
Here, $\vec{c}$ is the position of the camera, $\hat{\vec{n}}$ is the normal vector originating from the camera and pointing towards the LED, and $t$ is the vector line distance parameter. The issue here is that we don't know how far along this vector the LED sits, or rather, what the appropriate value of $t$ is. But suppose we add a second camera that can see the single LED but is positioned elsewhere. Again we can define a vector line originating from this new camera that hits the LED. If we know the positions and orientations of the two cameras relative to each other, we know the equations for each line in a common coordinate system.
\[ \vec{f_1} = \vec{c_1} + t\hat{\vec{n_1}} \]
\[ \vec{f_2} = \vec{c_2} + s\hat{\vec{n_2}} \]
Ideally, these two lines will intersect ($\vec{f_1} = \vec{f_2}$) exactly where the LED sits in 3D space. Solving for this intersection point is high school algebra. In this way, we can use two cameras to determine the 3D position of each LED.

Unfortunately, these two vector lines will rarely intersect. Due to imperfections in the camera positions, the measurements thereof, and the camera optics, the two vector lines will most often be skew. Instead of finding the point of intersection, we need to find the closest point between these two lines. To do this, we need to find the values for $s$ and $t$ that minimize the distance between the two lines. One way to solve for these values is to write out the distance between the two lines for any ($s$,$t$) and set its derivative with respect to each quantity equal to zero. This is a pain to write out, so here's an easier way of doing it.

First we define the distance vector:

\[ \vec{d}(s,t) = \vec{f_1} - \vec{f_2} = \vec{c_1} - \vec{c_2} + t\hat{\vec{n_1}} - s\hat{\vec{n_2}} \]

We want to know the values of $s$ and $t$ that minimize the length of this vector. We also know from geometry that when the distance is minimized, this vector will be perpendicular to both original lines. We can express this by saying that the distance vector has no component along either original line:

\[ \hat{\vec{n_1}} \cdot \vec{d}(s,t) = 0 \]
\[ \hat{\vec{n_2}} \cdot \vec{d}(s,t) = 0 \]

Expanding these out and expressing the two equations with two unknowns as a matrix problem,

\[ \left( \begin{array}{cc}
\hat{\vec{n_1}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_1}} \cdot \hat{\vec{n_2}} \\
\hat{\vec{n_2}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_2}} \cdot \hat{\vec{n_2}} \end{array} \right)
\begin{pmatrix} t \\ s \end{pmatrix} =
\begin{pmatrix} -\hat{\vec{n_1}} \cdot (\vec{c_1} - \vec{c_2}) \\ -\hat{\vec{n_2}} \cdot (\vec{c_1} - \vec{c_2}) \end{pmatrix}
\]

The diagonal of the 2x2 matrix can of course be simplified to 1 and -1, since the normal vectors are unit length. Solving this matrix problem for $s$ and $t$ tells us where on each original line the distance vector attaches. Since I have no reason to favor one camera over the other, I assume the LED is located halfway between these two points.

With the matrix inversion expanded out, the code to find the LED based on input vectors looks like this:
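A sketch of what that code looks like; here the 2x2 system above is solved by substitution rather than an explicit matrix inversion, the vector helpers are my own, and the two rays are assumed non-parallel:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3 add(vec3 a, vec3 b) { vec3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r; }
static vec3 sub(vec3 a, vec3 b) { vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
static vec3 scale(double k, vec3 a) { vec3 r = {k * a.x, k * a.y, k * a.z}; return r; }

/* Estimate the LED position as the midpoint of the shortest segment
 * between the rays f1 = c1 + t*n1 and f2 = c2 + s*n2.  n1 and n2 must be
 * unit vectors, and the rays must not be parallel (|n1.n2| != 1). */
static vec3 locate_led(vec3 c1, vec3 n1, vec3 c2, vec3 n2)
{
    vec3 d = sub(c1, c2);
    double b = dot(n1, n2);
    /* The perpendicularity conditions reduce to:
     *       t - b*s = -n1.d
     *     b*t -   s = -n2.d   */
    double t = (b * dot(n2, d) - dot(n1, d)) / (1.0 - b * b);
    double s = b * t + dot(n2, d);
    vec3 p1 = add(c1, scale(t, n1));   /* closest point on ray 1 */
    vec3 p2 = add(c2, scale(s, n2));   /* closest point on ray 2 */
    return scale(0.5, add(p1, p2));    /* no reason to favor either camera */
}
```

For two rays that actually intersect, the midpoint collapses to the intersection point; for skew rays it splits the leftover distance evenly between the two cameras.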

This of course depends on having a decent estimate of the two camera-to-LED vectors. I've decided to place my two cameras so that they are both looking at a common point in space that I declare to be the origin of the coordinate system. This way, I can simply measure the location of each camera with a ruler and know not only the camera positions but also the normal vector produced when looking at the middle of each camera image. When a camera locates an LED in its field of view, the corresponding normal vector is then determined relative to the one pointing at the origin. The process of finding these normal vectors is a little tedious and requires a few 3D rotation matrices, so I'll leave the math and code for that process out of this post for simplicity.

To test out this method, I set up two identical webcams in two identical 3D printed frames pointed at a sphere of LEDs:

Spiral Sphere of LEDs, for another project

Similar to before, I march through each LED turning it off and on to produce background-subtracted images that make locating the LED easy. Instead of mapping the image coordinates directly to a source image, I run the two estimates of LED location through the 3D mapping algorithm described above. The resulting point cloud looks promising:

A few LEDs were poorly located by one or both cameras and ended up with strange 3D positions. The most obvious case is the single point floating outside of the sphere. A stronger criterion for whether or not a camera has successfully located an LED might eliminate these outliers. There is also a noticeable amount of distortion in what I expected to be one side of a perfect sphere. I'll be honest: I think I messed up the math in my code that converts an LED location in image coordinates to a normal vector. Before I wrote up this algorithm in C, I prototyped it in an easier language to make sure it worked. Somewhere in translating to C I must have dropped a minus sign, and it is hidden well. If I ever plan on using this method for a more serious project, I'll have to go back and find that bug to fix this distortion.

The reason so few LEDs made it into the final point cloud is that only LEDs visible through both cameras can be located in 3D space. Spacing the cameras out more increases the accuracy of this method, but decreases the number of LEDs on the sphere they can both see. One solution to this problem would be to add more cameras around the sphere. This would increase the number of LEDs seen with two cameras, and introduces the possibility of using three cameras at once to locate a single LED. When using three or more cameras, finding the point in space that minimizes the distance to each camera-to-LED vector line becomes a least squares problem, similar in form to the one I went through on my robot arm project.
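For the curious, the three-or-more-camera version stays closed-form. Minimizing the summed squared distance from a point $p$ to all of the rays leads to a 3x3 linear system: with projection matrices $P_i = I - \hat{n}_i\hat{n}_i^T$, solve $\left(\sum_i P_i\right) p = \sum_i P_i \vec{c}_i$. A sketch (my own helper types, Cramer's rule for the small solve):

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

/* Least-squares intersection of n rays c[i] + t*dir[i] (unit directions):
 * the point p minimizing the summed squared distances satisfies A p = b,
 * with A = sum_i (I - dir_i dir_i^T) and b = sum_i (I - dir_i dir_i^T) c_i. */
static vec3 ray_least_squares(const vec3 *c, const vec3 *dir, int n)
{
    double A[3][3] = {{0}}, b[3] = {0};
    for (int i = 0; i < n; i++) {
        double d[3]  = {dir[i].x, dir[i].y, dir[i].z};
        double ci[3] = {c[i].x, c[i].y, c[i].z};
        for (int r = 0; r < 3; r++) {
            for (int k = 0; k < 3; k++) {
                double m = (r == k ? 1.0 : 0.0) - d[r] * d[k];
                A[r][k] += m;          /* accumulate projection matrices */
                b[r] += m * ci[k];
            }
        }
    }
    /* Solve the 3x3 system by Cramer's rule (A assumed non-singular) */
    double det = det3(A), M[3][3];
    vec3 p;
    double *out[3] = {&p.x, &p.y, &p.z};
    for (int k = 0; k < 3; k++) {
        for (int r = 0; r < 3; r++)
            for (int cc = 0; cc < 3; cc++)
                M[r][cc] = (cc == k) ? b[r] : A[r][cc];
        *out[k] = det3(M) / det;
    }
    return p;
}
```

With exactly two rays this reduces to the midpoint solution above; with more cameras, each extra ray just adds another term to $A$ and $b$.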

The next step in this project on LEDs is to do something visually interesting with this mapping technique. So far I've just been projecting the colorbar test image to verify the mapping, but the real point is to project more complicated images and videos onto a mass of LEDs. In the next few weeks, I will be talking about the process of using this projection mapping to do something interesting with the 3D printed spiral sphere I've shown at the end of this post.

Wednesday, February 5, 2014

LED Projection Mapping with Math


I have a love-hate relationship with RGB-LED strips. Before starting my Daft Punk Helmet project last year, I was getting used to using individual 4-pin RGB-LEDs for my colored-lighting needs. Once I realized I would need 96 of these LEDs for the gold helmet (384 leads, 288 channels to multiplex), I broke down and ordered a strip of WS2812s. These are surface-mount RGB-LEDs with built-in memory and driving. With three wires at one end of the strip to provide power and data, you can easily control the color of each individual LED on the strip with an Arduino or similar controller. I was in love.



But then I started to see the dark side of easy-to-use LED strips like this. Like so many other LED strip projects floating around the internet, my ideas started to lack originality. Instead of exploring the visual capabilities of over 16 million unique colors possible per pixel, I was blinking back and forth between primary colors and letting rainbow gradients run wild. Maybe I was being overwhelmed with possibilities; maybe the low difficulty made me lazy. Either way, the quality of my projects was suffering, and I hated it.

Zero-effort xmas lights

It's entirely possible that I'm being overly dramatic, but this is probably not the case. Still, something has to be done to break the cycle of low-hanging fruit projects. To do this, I've decided to embark on a project to create an arbitrary-geometry LED display. I want to be able to bend, wrap, and coil an LED strip around any object and still be able to display an image with little distortion. I expect this project will lead me on a mathematical journey that will be covered in at least two (2!) posts.

To start this off, let's differentiate the goal of this project from the projection mapping commonly used in art and advertising these days. In standard projection mapping, one or more regular projectors are used to display a 2D image on an arbitrary 3D surface. Specialized software determines how to warp the projected images so that they appear normal on the projection surface. For my project, I have pixels that are placed arbitrarily on the arbitrary 3D surface. This added complication is like placing a glass cup in front of the projector in standard projection mapping and expecting the warped image to still yield a comprehensible picture. In a sense, two mappings have to be done: first, find the mapping of pixels to real space; second, find the mapping of real space to source image space.

While I'm using my laptop to do the heavy lifting of computing the appropriate pixel mapping, I need a microcontroller to interface with the LED strip. I'm using my prototyping Arduino Mega to interpret commands sent by the laptop via USB and send data to the LED strip. For my first arbitrary-geometry LED display, I wrapped about 4 meters of LEDs (243 total) around a foam roller.

Nothing says professional like duct tape and styrofoam.

The first step in the process is to find the physical coordinates of each LED in real space. Luckily, I chose a configuration that can be described mathematically. The strip wraps around a cylinder in a helix with some spacing between each successive loop. In my projection code, I slowly march up and around an imaginary cylinder, placing an LED at regular intervals along the way.

We can plot the locations of every LED to confirm this works:



Next, we have to project the real-space location of each LED back on to the source image we want to display. The real-space locations are three-dimensional while the source image is two-dimensional, so we need a way of 'flattening' the LEDs down. The way we choose to do this depends on both the display geometry and how we want to view the display. For our cylinder example, we can start by unwrapping the LEDs:

Plotting the location of each LED on the source image gives:

Source, Image Coordinates, Result

Here, I've chosen the LED colors based on the nearest color found in the SMPTE color bars. Displayed on the LED strip, this kind of mapping shows the source image stretched around the cylinder.
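Picking that nearest color amounts to a nearest-neighbor lookup into the source image at each LED's mapped coordinates. A sketch, with (u, v) normalized to [0, 1):

```c
#include <assert.h>

typedef struct { unsigned char r, g, b; } rgb;

/* Nearest-neighbor sample of a w*h source image at normalized (u, v). */
static rgb sample(const rgb *img, int w, int h, double u, double v)
{
    int x = (int)(u * w);
    int y = (int)(v * h);
    if (x < 0) x = 0;               /* clamp to the image bounds */
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return img[y * w + x];
}
```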



From any one direction, the source image is difficult to see. Another method of projecting the LED positions on to the source image is to flatten them along a single direction perpendicular to the cylinder axis.

Source, Image Coordinates, Result

This allows the viewer to see the entire image from one direction.
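This flattening amounts to dropping the depth coordinate: project each LED along the viewing direction and keep only its horizontal offset and height. A sketch, with the cylinder radius and height as assumed parameters:

```c
#include <assert.h>

/* Flatten an LED at (x, y, z) along the y axis (the viewing direction) on
 * a cylinder of radius r and height h.  Depth (y) is simply dropped, so
 * LEDs on the far side land on the same image column, mirrored. */
static void flatten(double x, double z, double r, double h,
                    double *u, double *v)
{
    *u = (x + r) / (2.0 * r);   /* -r..r  ->  0..1 */
    *v = z / h;                 /*  0..h  ->  0..1 */
}
```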



Of course, the image is also projected on to the other side of the cylinder, but appears backwards. With this more useful projection, we can try to watch a short video:


In this example, I'm using OpenCV to grab frames from a source video, compute the projection on to display pixels, and send the pixel values out to the Arduino Mega. Without spending too much time on optimization, I've been able to get this method to run at around 15 frames per second, so the recording above has been sped up a bit to match the source video.

An important thing to note in that recording is the low amount of image distortion. Even though the LEDs are wrapped around a cylinder and directed normal to the surface, the projection keeps the source image from appearing distorted when viewed from the correct angle. However, if you look at the various color bar tests I've done, you can see that vertical lines in the source image do not always appear completely vertical when projected. This is not due to any inherent flaw in the projection math, but is instead caused by improper assumptions about the LED locations. I've assumed that I wrapped the LED strip perfectly around the cylinder, so that my mathematical model of a helix matches the real locations exactly. In reality, the spacing and angle of each loop around the cylinder were slightly off, introducing variations between the real positions and the programmed positions that got worse towards the bottom. To increase the accuracy of this projection exercise, these variations need to be dealt with.

There are three ways I see of correcting for the issue of imperfect LED placement:

  1. Place them perfectly. Use a caliper and a lot of patience to get sub-millimeter accuracy on the placement of each LED.
  2. Place them arbitrarily, but measure the positions perfectly. Use accurate measurements of each LED position instead of a mathematical model.
  3. Place them arbitrarily, and let the computer figure out where they are.
Since options 1 and 2 would most certainly drive me insane, I'm going with option 3. It's like I always say: why spend 5 hours doing manual labor when you can spend 10 hours making a computer do it?

At this point, I've already made a decent amount of progress on letting the projection code figure out the location of each LED. I'm using a few stripped down webcams as a stereoscopic vision system to watch for blinking LEDs and determine their positions in 3D space. I'll save the details for later, but here's a quick shot of one of the cameras mounted in a custom printed frame:


The code for this project (including the stereoscopic vision part) can be found on my github.