A 4-Color Dynamic E-Paper Badge

This is the project I worked on in ENGN1650, which is a required capstone class for all students in Computer Engineering. I was one of four members on this project.

The design process for the badge is ongoing – it’s a two-semester class! – but I’m going to throw some notes in here as we create things. By the end of the academic year I hope to have everything (circuit diagram, PCB design, etc.) documented.

Overall Concept

This is a small, lightweight e-paper device designed as an interactive/updatable name or conference badge. Users can cycle between multiple badge displays with a button press, transmit data to other badges or mobile devices with a “tap”, and receive data through the same mechanism. The badge display has four colors: white, black, red, and yellow.

For now, this is mainly a discussion of image processing, since that is what I am working on currently.

Image Rendering

Dithering

We have a rather unusual display. It is, strictly speaking, 2-bit. The only colors we have available are black, white, pure red, and pure yellow. How can we render convincing images with just these pixel values?
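Two bits per pixel means four pixels fit in a byte. As a hedged sketch of what a framebuffer might look like (the specific 2-bit code assignment here is an assumption for illustration, not necessarily the panel controller's actual format):

```python
# Hypothetical 2-bit codes for the four colors. The real panel's
# controller may use a different encoding (e.g. separate bit planes).
BLACK, WHITE, RED, YELLOW = 0b00, 0b01, 0b10, 0b11

def pack_pixels(codes):
    """Pack a list of 2-bit color codes into bytes, 4 pixels per byte,
    most significant pixel first."""
    out = bytearray()
    for i in range(0, len(codes), 4):
        chunk = codes[i:i + 4]
        chunk = chunk + [BLACK] * (4 - len(chunk))  # pad the last byte
        byte = 0
        for code in chunk:
            byte = (byte << 2) | code
        out.append(byte)
    return bytes(out)
```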

Say we’re rendering an image on our display. Here’s a nice test image:

A Creative Commons-licensed image of the Golden Gate Bridge.

It has some regions of red, which is easy, but there are large swathes of blue and some areas of green. In an ideal world, we want this to look nice even though we can’t render it precisely.

A naive approach might be to (after rescaling and cropping the image) quantize each pixel to the "nearest" allowed value, using Euclidean distance in RGB space: each pixel becomes red, yellow, black, or white, whichever is closest to its current value:
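A sketch of this naive quantizer in Python with NumPy (the RGB values assigned to the palette are assumptions; the real display's inks will differ):

```python
import numpy as np

# The four colors the panel can show, as RGB triples.
# (Idealized values; tune to the display's actual inks.)
PALETTE = np.array([
    [0,   0,   0],    # black
    [255, 255, 255],  # white
    [255, 0,   0],    # red
    [255, 255, 0],    # yellow
], dtype=float)

def nearest_color(pixel):
    """Return the palette entry with the smallest Euclidean distance."""
    dists = np.sum((PALETTE - pixel) ** 2, axis=1)
    return PALETTE[np.argmin(dists)]

def quantize(image):
    """Naively quantize an HxWx3 float array to the 4-color palette."""
    out = np.empty_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = nearest_color(image[y, x])
    return out
```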

A naive quantization of our image.

It looks bad. There’s nasty image artifacting, we’ve lost a lot of detail, and the contrast is way too high. This is a form of quantization error: because each pixel is being heavily quantized, the overall error of the image is large.

To combat this, we’re going to use a dithering process. There are many dithering algorithms, with different goals, but the Floyd-Steinberg algorithm is the king among error-diffusion dithers and it’s what we’ll use here. The idea is that each time we quantize a pixel, it introduces some error; so if we want to reduce the overall error in the image, we need to propagate that error into neighboring pixels. (So if one pixel is cast into red, then the pixels around it are less likely to be sent to red.) We go row-by-row, column-by-column, propagating error with the following matrix:

The Floyd-Steinberg dither matrix.
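In code, the scan and the matrix weights might look like this (a self-contained sketch; the palette RGB values are assumptions, and the `strength` parameter scales the diffused error, with 1.0 being the classic algorithm):

```python
import numpy as np

# Display palette as RGB (idealized values; adjust for the real panel).
PALETTE = np.array([
    [0, 0, 0], [255, 255, 255], [255, 0, 0], [255, 255, 0]
], dtype=float)

def fs_dither(image, strength=1.0):
    """Floyd-Steinberg dither an HxWx3 float image to the palette.

    `strength` scales the diffused error: 1.0 is classic
    Floyd-Steinberg; smaller values damp the diffusion.
    """
    img = image.astype(float).copy()
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x].copy()
            # Snap to the nearest palette color (Euclidean distance).
            new = PALETTE[np.argmin(np.sum((PALETTE - old) ** 2, axis=1))]
            img[y, x] = new
            err = (old - new) * strength
            # Push the quantization error onto unvisited neighbors
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```

Since each pixel is snapped before its error is pushed forward, the output contains only palette colors, while local averages track the original image.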

This gives us the following result.

A Floyd-Steinberg dithered image.

Ah, the diffused error seems to be dominating the actual pixel colors. Let’s scale the diffused error by a coefficient (0.5 turns out to be best):

A Floyd-Steinberg dithered image, with reduced error diffusion.

Much better. This actually looks rather nice, modulo some experimenting with the error coefficient. There’s no good way to get that blue back, but grayscale works, and green can be fairly well approximated with a yellow/black mixture. The human brain also adjusts to the palette, making it look better than it actually is.

A nice aspect of our color palette is that we can fairly effectively mimic human skin tones, since they tend to be warm. Let’s try this on some different skin tones and see how they look:

Ken Silverman. James Earl Jones. Barack Obama.
Mahershala Ali. Michelle Yeoh. Jason Momoa.
Elmo and Rosita. Gamora (Zoe Saldana). Nebula (Karen Gillan).

Pretty decent! Some skin tones get cast a little more ruddy than they are in real life, though the effect isn’t too bad. Blues are not handled accurately at all, but they get sent rather tastefully into grayscale. (Though this might be a product dealbreaker if, say, your company has a blue color palette, or you’re a cosplayer who primarily wears blue.)

The more noticeable problem is that some people get badly bleached out. We’re not accounting for something that’s fairly important to the way humans perceive images: gamma correction!

Gamma Correction

[coming soon]