My employer makes a nifty little 12-volt LED light that contains separate RGB (red/green/blue) diodes. It’s 3/4-inch in diameter and intended for rugged, outdoor environments — specifically decorative use on carnival rides. The cool thing is that it can change color on command. It has four separate wires — one power plus three separate ground leads — so that the three colors are independently controllable. The result is that the unit can display the eight colors of the 3-bit RGB palette by powering the diodes alone and in combination.
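With each diode either fully on or fully off, that gamut is simply every on/off combination of the three channels. A quick Python sketch (purely illustrative, not anything from the product) enumerates the palette:

```python
from itertools import product

# Each diode is either fully on (255) or off (0), so the display's
# gamut is every on/off combination of the R, G, and B channels.
palette = [(r, g, b) for r, g, b in product((0, 255), repeat=3)]

for color in palette:
    print(color)
```

The eight results are black, blue, green, cyan, red, magenta, yellow, and white: the classic 3-bit RGB palette.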
I began thinking about taking the technology for driving LED matrices and scaling it up to use this light. I thought an LED monitor of sorts made from the company’s products might be an interesting promotional tool, such as at trade shows. The larger lens size and greater viewing angle would make it more akin to an incandescent scoreboard than a desktop circuit board. That would be fairly well suited to scoreboard-style scrolling text and simple animations, but what if you want to display video?
Unfortunately, my idea of building a matrix of these runs into two problems: cost and flexibility. This light costs 50–100 times as much as those little 5mm LEDs that attach to a breadboard, and would have to be driven through relays. Also, it’s pretty much on or off; all three diodes share a common power lead, so you can’t tweak the color, only the brightness. Even the brightness range is very limited: the light doesn’t dim very noticeably until the voltage drops to around 6 volts, and if the power drops below about 4–4.5V, it just shuts off completely.
The result is that cost and design limitations dictate a very low-resolution, 8-color display. A 12-pixel-square array would take 144 lights, about as complicated and expensive as I would dare specify. Well, a 12 x 12, 3-bit photo or video display is going to be pretty tough to recognize. What the heck would that even look like? To find out, I created the lede video animation above.
How I Simulated It
I grabbed a few varied snippets of video from YouTube and imported them into Adobe Photoshop. I spliced them together, then cropped and scaled each clip to 180 pixels square, and saved that version. I then scaled the images down to 12 pixels square using Photoshop’s Bicubic Sharper resampling method, which attempts to create the most recognizable downscaled image. Next, I enlarged them back up to 180 x 180 pixels using the Nearest Neighbor option. This option doesn’t do any sort of interpolation; it simply expands each pixel into a 15x-larger block of identically colored pixels. To simulate the round shape of the carnival lights, I dropped a black mask over the whole shebang, and to restrict the colors to the 8 values available, I created a custom 3-bit RGB color table and applied it when I exported the file. I then married the two versions as the animated GIF you see above.
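For anyone who wants to approximate the effect without Photoshop, the same pipeline can be sketched in plain Python. This is my own rough stand-in, not the actual workflow described above: a box-average downscale substitutes for Bicubic Sharper (which is a sharpening variant of bicubic, not a box filter), a per-channel threshold substitutes for the custom 3-bit color table, and the upscale is true nearest-neighbor. The frame layout (a list of rows of (r, g, b) tuples) and all function names are my own conventions; a library like Pillow would normally handle the resampling.

```python
def downscale(frame, size=12):
    # Box filter: average each block of pixels down to one pixel.
    # Assumes a square frame whose side is divisible by `size`.
    block = len(frame) // size
    out = []
    for by in range(size):
        row = []
        for bx in range(size):
            rs = gs = bs = 0
            for y in range(by * block, (by + 1) * block):
                for x in range(bx * block, (bx + 1) * block):
                    r, g, b = frame[y][x]
                    rs += r; gs += g; bs += b
            n = block * block
            row.append((rs // n, gs // n, bs // n))
        out.append(row)
    return out

def quantize_3bit(frame):
    # Snap each channel to 0 or 255 -- the light's on/off diodes.
    return [[tuple(255 if c >= 128 else 0 for c in px) for px in row]
            for row in frame]

def upscale_nearest(frame, factor=15):
    # Nearest neighbor: each pixel becomes a factor-by-factor block
    # of identically colored pixels, with no interpolation.
    out = []
    for row in frame:
        big_row = [px for px in row for _ in range(factor)]
        out.extend(list(big_row) for _ in range(factor))
    return out
```

Running a 180 x 180 frame through `downscale`, `quantize_3bit`, and `upscale_nearest` in that order reproduces the chunky 12 x 12 mosaic, minus the round-lens mask.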
The Better-Than-Expected Result
As you can see, the video signal would be reduced to a predictably unrecognizable sequence of flashing colored lights. But stand back a few feet from your screen and the two human figures — the talking man and the closeup of the woman’s face — become almost apparent. Rather than disappointing me, actually seeing how this display would teeter on the edge of pure abstraction has me rather excited. Forget trade shows; I now want to build this as public art. Imagine this mounted on the side of a building or in a subway station with a spy cam mounted in the frame, perhaps flipping the image to display a real-time mirror image.
It would be fascinating to see just how long it would take people to discover that they were actually looking at themselves, and what sort of things they’d then do to control the display with their movements.