Driving NeoPixels With Elixir and Nerves
Before starting this project, I was looking for a project I could use to learn Elixir. I’d been hearing great things about it, played with it a little on Exercism, and went to a local Elixir meetup a couple of times.
Then I saw the talk “Embedded Elixir in Action” by Garth Hitchens about using Nerves to develop real-world Elixir-based embedded systems. I’d had an original Raspberry Pi B sitting on a shelf for a few years, so this seemed like it would be a good opportunity to learn Elixir and finally use that hardware for something. I also had a bunch of WS2812B “NeoPixel” RGB LEDs that I was itching for an excuse to use for something. (Caution: If you browse the available NeoPixel products on Adafruit, you may not be able to resist buying something).
At this point, I was unable to resist hitting these three birds with one stone!
A Brief Introduction
I started this project pretty new to Elixir, having just begun to read “Programming Elixir” by Dave Thomas. I also wasn’t familiar with embedded systems development beyond writing some hobbyist C code for AVR microcontrollers in the distant past. I want to take this opportunity to chime in with the other voices saying that you don’t have to understand all of these things deeply in order to get started and build a useful project.
The journey was a lot of fun for me and I hope to share some of my excitement with you!
Elixir, Erlang, and Processes
Elixir is a functional programming language that runs on the Erlang virtual machine, called BEAM. This is just like how the Java Virtual Machine (JVM) was designed for running Java code, but can also host code written in other languages, like Clojure. In both cases, a brand-new language (i.e. Elixir or Clojure) was born with an already-robust, production-ready run-time environment instead of starting from scratch.
Besides being a functional language, Elixir has a few other key concepts to understand when you’re getting started. In Elixir, code is arranged as functions that are grouped into Modules, which normally run many Processes in order to accomplish their purpose. Elixir code is essentially designed around each Process being a tiny microservice that sends and responds to messages from other Processes. If this concept just blew your mind, check out Chris Nelson’s talk from CodeMash 2016: Low Ceremony Microservices with Elixir.
These Processes in the Erlang VM are much more lightweight than an Operating System process, so it’s not unusual to have many thousands of them running at any time, much as you might have many Objects instantiated in a Ruby-based system. Also similar to Ruby’s Objects, a Process in Elixir is how state is stored and accessed, by passing messages between Processes.
Speaking of state, it’s also worth mentioning that the data structures in Elixir are all immutable. When the state of a Process needs to be changed, it is accomplished by replacing its state with a new state rather than modifying parts of the state in-place. This is probably confusing at first for a new Elixir programmer, but it quickly becomes natural and allows Elixir to do some great things under the hood to achieve efficient concurrency and garbage collection.
OTP and Applications
OTP is a really cool framework that Elixir inherits from its Erlang heritage.
One thing that wasn’t obvious to me at first is that what a developer might normally call an ‘application’ is, in OTP terms, a collection of interacting OTP Applications.
For example, if you want to log things to stdout or stderr, you might include the Logger Application in your mix.exs file.
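A minimal mix.exs along those lines might look something like this (MyProject here is a stand-in for your own project’s module name):

```elixir
defmodule MyProject.Mixfile do
  use Mix.Project

  def project do
    [app: :my_project,
     version: "0.0.1",
     deps: []]
  end

  def application do
    # mod: the Module that starts this project's supervision tree
    # applications: additional OTP Applications to start before ours
    [mod: {MyProject, []},
     applications: [:logger]]
  end
end
```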
Then, when you run your project, the mod option says that the MyProject Module will start the supervision tree for the top-level Application. The applications option says to also start additional OTP Applications, their Processes waiting to receive messages from other Processes.
There is plenty more to learn about OTP, but you’ll probably learn best by just reading up on it and trying it out for yourself. The official Elixir Getting-Started Introduction to Mix and OTP is, unsurprisingly, a great place to start.
Onward and upward, to the main point!
Getting Started with Nerves
Having established how fun and exciting Elixir is, let’s get some Elixir code running on a Raspberry Pi (or similar Linux-based embedded development platform). The tool to accomplish this is called Nerves. In a nutshell, Nerves wraps up the Buildroot tool, making it trivially easy to cross-compile a stripped-down Linux image for your target platform of choice.
As of this writing, there’s a brand-new tool called Bakeware that can be used to simplify the process I’m going to describe below for many common use-cases. Wendy Smoak recently wrote a blog post explaining how to use Bakeware, so check that out if you’re interested. Bakeware will probably be the preferred method of interacting with Nerves for most people going forward. Since it didn’t exist when I started this project and you might be curious what Bakeware does behind the scenes, I’ll explain the lower-level method I’ve been using.
Setting up a Nerves Build Environment
Much has been written about how to get Nerves running on Mac OSX, so I’m going to describe how to do it using a Windows machine with access to a Linux VM. On the Windows side, I’m using Windows 10, but that probably isn’t relevant because Windows isn’t doing anything special here. For Linux, I’m using Ubuntu 14.04 since the Nerves website describes how to get started on Ubuntu. If you want to try it on another distribution, you’ll have to figure out which equivalent packages to use.
From a fresh install of Ubuntu, you’ll first need to install a handful of prerequisite packages to get started.
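Per the nerves-system-br README of the time, the list on Ubuntu 14.04 was roughly the standard Buildroot build dependencies (the exact package names may differ on newer releases):

```shell
sudo apt-get update
sudo apt-get install git g++ libssl-dev libncurses5-dev bc m4 make unzip
```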
Then, check out the main Nerves system build repository, nerves-system-br.
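Cloning it from GitHub over HTTPS looks like:

```shell
git clone https://github.com/nerves-project/nerves-system-br.git
cd nerves-system-br
```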
After that finishes, run make help to get a list of supported platforms.
Since my target platform is an original Raspberry Pi Model B, I then ran the corresponding defconfig target from that list.
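In my case, that was something like the following; the exact target name is whatever make help lists for your board, and may have changed since this was written:

```shell
make nerves_rpi_defconfig
```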
This sets up the configuration options to build a Linux image that will boot into Elixir’s iex shell using the Raspberry Pi’s HDMI monitor output. It’s possible to re-configure the terminal to use the UART pins on the board, so that you could connect to it using PuTTY over an FTDI cable; check out the how-to in the nerves-system-br README if you want to do that. Instead, I just plugged a monitor into the HDMI port and a USB keyboard into one of the USB ports.
Once the default configuration is done, just run make to build the default system image. This will probably take a really long time, depending on the speed of your computer, hard drive, Internet connection, etc.
This is where Bakeware is a win for most use-cases. It pre-builds these base system images and toolchains ahead of time, so you can simply download and use them right away instead of building them yourself. Bakeware also eliminates the dependency on Linux, because the images were already “baked” on Linux with the appropriate kernels, headers, and other assorted voodoo available.
When the build completes, the base firmware image will be written to buildroot/output/images/nerves-rpi-base.img.
At this point, I confirmed that things were working properly by copying this .img file to my Windows host machine using WinSCP, then burning it to an SD card using Win32DiskImager. Yes, I downloaded and ran a Windows executable (as an Administrator!) from SourceForge. I feel bad about it, but apparently that’s just how people get the functionality of dd on Windows. Let’s be honest, it’s about the same as the sudo-curl-bash installers that have become so common.
After the SD card was done being written, I put it in my Raspberry Pi and booted it up. Lo and behold (after only four seconds!), I was greeted by an iex prompt.
Driving NeoPixels from a Raspberry Pi
With all that out of the way, we’re ready for the actual blinkenlights. Well, almost.
First, we need a way to drive the 5V NeoPixel data input using the Raspberry Pi’s 3.3V outputs. One easy way to accomplish this is with a 74AHCT125 level-shifter chip, but I didn’t have one lying around and didn’t want to order one. What I did have lying around was an SN74ALS1035N Non-Inverting Buffer. The trick is that its inputs happen to accept a high-level input voltage of only 2V, while its outputs are 5V nominal. Since the outputs are open-collector, I had to use a 1 kΩ pull-up resistor from the output to VCC.
The other issue is that I’m planning to drive a long strip of WS2812B NeoPixels, which draws far more power than the Raspberry Pi can supply. I cut the end off an old 5V cell phone charger and put a header on it to simplify bread-boarding, then added a 3300uF 6.3V electrolytic capacitor that I had lying around. The purpose of the capacitor is to stabilize the voltage being supplied to the strip when the current draw changes suddenly (e.g. when the lights are blinking on and off).
Here’s a crude schematic of the circuit, and a picture of the dead-bug sculpture in all its majesty.
With the hardware interface figured out, I needed a way to generate the required Pulse-Width-Modulation (PWM) pattern to control the LED colors. I did a lot of research about how this works and was considering doing it with a low-cost AVR microcontroller that would interface with the Raspberry Pi. If you’re interested in the details, you should check out the NeoPixel posts on josh.com. This guy did some truly amazing work documenting how the WS2812B “NeoPixel” works, and how to interface efficiently with it.
In the end, it wasn’t necessary, because the rpi_ws281x project makes it possible to directly generate the required patterns using the Raspberry Pi’s hardware PWM and Direct Memory Access (DMA) capabilities.
Building the Elixir Project and Interfacing with C code
This is the part of the project where I learned a lot about the basics of Elixir projects and a few more obscure details about building native C code as part of a project.
To integrate the rpi_ws281x C library with my Elixir code, I chose to write a little C wrapper around it so that it would accept binary pixel data on STDIN, taking some configuration parameters on the command-line. From there, I used a Port in Elixir to ‘safely’ talk to this C code without risking the catastrophic failure of the whole Erlang VM that could result from loading the C code directly using the NIF method.
Note the use of :code.priv_dir/1, which takes the name of an Application and returns the file system path to the priv/ directory within that packaged Application.
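Here’s a sketch of the idea; the function and option names are illustrative rather than the library’s exact code:

```elixir
defmodule Nerves.IO.Neopixel do
  # Spawn the C wrapper as an external OS process, connected via a Port.
  # If the C program crashes, only the Port goes down -- not the whole VM.
  def setup(opts) do
    executable = :code.priv_dir(:nerves_io_neopixel) ++ '/rpi_ws281x'

    Port.open({:spawn_executable, executable},
              [{:args, ["#{opts[:pin]}", "#{opts[:count]}"]}, :binary])
  end

  # Write raw RGB pixel data to the C wrapper's STDIN through the Port.
  def render(port, pixel_data) do
    Port.command(port, pixel_data)
  end
end
```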
The other secret sauce was something I adapted from Frank Hunleth’s elixir_ale project, which also uses some C code for low-level hardware interfacing.
In your mix.exs, you just have to define a special Compile task (mine is called Ws281x), then add it to the compilers list further down.
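A sketch of that arrangement, adapted from the elixir_ale approach (details simplified here):

```elixir
defmodule Mix.Tasks.Compile.Ws281x do
  @shortdoc "Compiles the rpi_ws281x C wrapper"

  # Shell out to make so the C wrapper lands in priv/ before the
  # rest of the project compiles.
  def run(_args) do
    {result, error_code} = System.cmd("make", [], stderr_to_stdout: true)
    IO.binwrite(result)
    if error_code != 0, do: Mix.raise "make returned #{error_code}"
    :ok
  end
end

defmodule Nerves.IO.Neopixel.Mixfile do
  use Mix.Project

  def project do
    [app: :nerves_io_neopixel,
     version: "0.0.1",
     compilers: [:Ws281x] ++ Mix.compilers,  # run our task first
     deps: []]
  end
end
```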
This tells mix to shell out to make to build priv/rpi_ws281x, which is the target of the C code I wrapped around the rpi_ws281x library. The priv directory in your project folder ends up getting packaged into the Erlang Release that is burned onto the SD card, and is accessible using the :code.priv_dir/1 function mentioned earlier.
The Makefile that makes this work is only a few lines long.
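It amounts to a cross-compilation rule along these lines (the source file name here is illustrative):

```make
# CC and CFLAGS come from the Nerves cross-compilation environment
# (nerves-env.sh), so this builds for the target rather than the host.
all: priv/rpi_ws281x

priv/rpi_ws281x: src/rpi_ws281x_wrapper.c
	mkdir -p priv
	$(CC) $(CFLAGS) -o $@ $<
```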
Controlling NeoPixels from Elixir
If you want to dive in and try running the code, download the nerves_io_neopixel repository.
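Cloning it from GitHub looks like this:

```shell
git clone https://github.com/GregMefford/nerves_io_neopixel.git
cd nerves_io_neopixel
```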
Now, assuming that you checked out nerves-system-br to your home directory and did the make step earlier, you can source the environment script to set up the cross-compilers, then build the project.
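Concretely, that looks something like this (the project’s Makefile wraps the usual mix commands; the exact steps are in the repo’s README):

```shell
source ~/nerves-system-br/nerves-env.sh
make
```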
If you’re doing this on a Linux VM with a Windows host, you also need to take one more step to generate the .img file that you need to burn to the SD card.
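The fwup tool handles this; applying its “complete” task writes out a bootable image (the file names here are illustrative):

```shell
fwup -a -i _images/nerves_io_neopixel.fw -d nerves_io_neopixel.img -t complete
```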
This takes the efficiently-packed “firmware” file with a .fw extension and formats it into a much larger file with the appropriate blank-space offsets so that it can be booted on the target.
From there, you can copy the .img file to the Windows host using WinSCP, burn it to an SD card using Win32DiskImager, and boot the Raspberry Pi from it. Once the Pi boots, it loads iex but doesn’t do anything with the LEDs. To make something display, you have to setup which I/O pin to use and how many LEDs are chained together, then render something to them.
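In the iex session on the Pi, that looks something like the following sketch (the option names are illustrative; GPIO 18 is the hardware-PWM pin commonly used for NeoPixels):

```elixir
pid = Nerves.IO.Neopixel.setup(pin: 18, count: 3)

# Three concatenated 3-byte binaries: one red, one green, one blue LED
Nerves.IO.Neopixel.render(pid, <<255, 0, 0>> <> <<0, 255, 0>> <> <<0, 0, 255>>)
```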
The second argument to the render function is a binary representing the RGB values of each LED. I concatenated three 3-byte binaries so it’s easier to see where each LED’s configuration begins and ends. It’s not pretty, and it would be tedious to do anything very complicated with just this interface, but it works!
To demonstrate something a bit more fun, I made a small demo to show how you might use the Nerves.IO.Neopixel library in a project. It implements a scan function that uses the render interface to draw a single red light sweeping back and forth across the strip, Battlestar Galactica style. It initializes a strip with 72 LEDs (since I happened to have a half-meter strip of the 144-LED-per-meter variety) and scans across them at 10 milliseconds per frame. You can check out the code for the scanner app in my nerves_neopixel_examples repo on GitHub.