Before starting this project, I was looking for a project I could use to learn Elixir. I’d been hearing great things about it, played with it a little on Exercism, and went to a local Elixir meetup a couple of times.

Then I saw the talk “Embedded Elixir in Action” by Garth Hitchens about using Nerves to develop real-world Elixir-based embedded systems. I’d had an original Raspberry Pi B sitting on a shelf for a few years, so this seemed like it would be a good opportunity to learn Elixir and finally use that hardware for something. I also had a bunch of WS2812B “NeoPixel” RGB LEDs that I was itching for an excuse to use for something. (Caution: If you browse the available NeoPixel products on Adafruit, you may not be able to resist buying something).

At this point, I was unable to resist hitting these three birds with one stone!

A Brief Introduction

I started this project pretty new to Elixir, having just begun to read “Programming Elixir” by Dave Thomas. I also wasn’t familiar with embedded systems development beyond writing some hobbyist C code for AVR microcontrollers in the distant past. I want to take this opportunity to chime in with the other voices saying that you don’t really have to understand all of these things deeply in order to get started and build a useful project.

The journey was a lot of fun for me and I hope to share some of my excitement with you!

Elixir, Erlang, and Processes

Elixir is a functional programming language that runs on the Erlang virtual machine, called BEAM. This is just like how the Java Virtual Machine (JVM) was designed for running Java code, but can also host code written in other languages, like Clojure. In both cases, a brand-new language (i.e. Elixir or Clojure) was born with an already-robust, production-ready run-time environment instead of starting from scratch.

Besides being a functional language, Elixir has a few other key concepts to understand when you’re getting started. In Elixir, code is arranged as functions grouped into Modules, and a running system normally consists of many Processes working together to accomplish its purpose. Elixir code is basically designed around each Process being a tiny microservice that sends and responds to messages from other Processes. If this concept just blew your mind, check out Chris Nelson’s talk from CodeMash 2016: Low Ceremony Microservices with Elixir.

These Processes in the Erlang VM are much more lightweight than Operating System processes, so it’s not unusual to have many thousands of them running at any time, much as you might have many Objects instantiated in a Ruby-based system. Also similar to Ruby’s Objects, Processes are where Elixir programs store state, which is accessed by passing messages between Processes.
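As a minimal sketch of this message-passing style (my own generic example, not code from this project), here’s a Process that holds a counter as its state using nothing but spawn, send, and receive:

```elixir
defmodule Counter do
  # Spawn a process whose only "state" is the argument to its recursive loop.
  def start(initial), do: spawn(fn -> loop(initial) end)

  defp loop(count) do
    receive do
      {:increment, by} ->
        # The "new" state is just the argument to the next recursive call.
        loop(count + by)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

pid = Counter.start(0)
send(pid, {:increment, 5})
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")  # prints "count is 5"
end
```

In real code you would usually reach for GenServer instead of hand-rolling this loop, but the underlying mechanism is the same.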

Speaking of state, it’s also worth mentioning that the data structures in Elixir are all immutable. When the state of a Process needs to be changed, it is accomplished by replacing its state with a new state rather than modifying parts of the state in-place. This is probably confusing at first for a new Elixir programmer, but it quickly becomes natural and enables Elixir to do some great things under the hood to enable efficient concurrency and garbage-collection.
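For instance (a generic illustration, not from this project), “changing” a map in Elixir produces a new map and leaves the original untouched:

```elixir
state = %{color: :red, brightness: 128}

# Map.put returns a brand-new map; `state` itself is never modified.
new_state = Map.put(state, :color, :green)

IO.inspect(state.color)      # :red (the original is unchanged)
IO.inspect(new_state.color)  # :green
```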

OTP and Applications

OTP is a really cool feature that Elixir inherits from its Erlang heritage. One thing that wasn’t obvious to me at first is that what a developer might normally call an ‘application’ is, in OTP terms, a collection of interacting OTP Applications. For example, if you want to log things to stdout or stderr, you might include the Logger Application in your mix.exs file, like this:

mix.exs
defmodule MyProject.Mixfile do
  use Mix.Project

  # ...

  def application do
    [
      applications: [:logger],
      mod: {MyProject, []}
    ]
  end

end

Then, when you run your project, the mod option says that the MyProject Module will start the supervision tree for your top-level Application. The applications option says to also start the listed OTP Applications, each with its own Processes waiting to receive messages from other Processes.
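For reference, the MyProject module named in that mod option needs a start/2 callback that starts the supervision tree. Here’s a rough sketch (the Worker module is hypothetical, and I’m using present-day child-spec syntax rather than what this era of Elixir used):

```elixir
defmodule MyProject.Worker do
  use GenServer

  # A trivial worker process, just to give the supervisor something to run.
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)
  def init(:ok), do: {:ok, %{}}
end

defmodule MyProject do
  use Application  # makes this module usable as a `mod:` entry point

  # Called by OTP when the Application starts; must return {:ok, pid}.
  def start(_type, _args) do
    children = [
      %{id: MyProject.Worker, start: {MyProject.Worker, :start_link, [[]]}}
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
```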

There is plenty more to learn about OTP, but you’ll probably learn best by just reading up on it and trying it out for yourself. The official Elixir Getting-Started Introduction to Mix and OTP is, unsurprisingly, a great place to start.

Onward and upward, to the main point!

Getting Started with Nerves

Having established how fun and exciting Elixir is, let’s get some Elixir code running on a Raspberry Pi (or similar Linux-based embedded development platform). The tool to accomplish this is called Nerves. In a nutshell, Nerves wraps up the Buildroot tool, making it trivially easy to cross-compile a stripped-down Linux image for your target platform of choice.

As of this writing, there’s a brand-new tool called Bakeware that can be used to simplify the process I’m going to describe below for many common use-cases. Wendy Smoak recently wrote a blog post explaining how to use Bakeware, so check that out if you’re interested. Bakeware will probably be the preferred method of interacting with Nerves for most people going forward. Since it didn’t exist when I started this project and you might be curious what Bakeware does behind the scenes, I’ll explain the lower-level method I’ve been using.

Setting up a Nerves Build Environment

Much has been written about how to get Nerves running on Mac OS X, so I’m going to describe how to do it using a Windows machine with access to a Linux VM. On the Windows side, I’m using Windows 10, but that probably isn’t relevant because Windows isn’t doing anything special here. For Linux, I’m using Ubuntu 14.04, since the Nerves website describes how to get started on Ubuntu. If you want to try it on another distribution, you’ll have to figure out which equivalent packages to use.

From a fresh install of Ubuntu, you’ll need to install the following packages to get started:

bash
sudo apt-get install git g++ libssl-dev libncurses5-dev bc m4 make unzip

Then, check out the main Nerves repository:

bash
git clone git@github.com:nerves-project/nerves-system-br.git

After that finishes, run make help to get a list of supported platforms:

bash
Nerves System Help
------------------

Targets:
  all                           - Build the current configuration
  burn                          - Burn the most recent build to an SDCard (requires sudo)
  system                        - Build a system image for use with bake
  clean                         - Clean everything - run make xyz_defconfig after this

Configuration:
  menuconfig                    - Run Buildroot's menuconfig
  linux-menuconfig              - Run menuconfig on the Linux kernel

Nerves built-in configs:
  bbb_linux_defconfig           - Build for bbb_linux
  nerves_bbb_elixir_defconfig   - Build for nerves_bbb_elixir
  nerves_bbb_erlang_defconfig   - Build for nerves_bbb_erlang
  nerves_galileo_elixir_defconfig - Build for nerves_galileo_elixir
  nerves_rpi2_elixir_defconfig  - Build for nerves_rpi2_elixir
  nerves_rpi2_erlang_defconfig  - Build for nerves_rpi2_erlang
  nerves_rpi_elixir_defconfig   - Build for nerves_rpi_elixir
  nerves_rpi_erlang_defconfig   - Build for nerves_rpi_erlang
  nerves_rpi_lfe_defconfig      - Build for nerves_rpi_lfe

Since my target platform is an original Raspberry Pi Model B, I then ran

bash
make nerves_rpi_elixir_defconfig

This sets up the configuration options to build a Linux image that will boot into Elixir’s iex shell on the Raspberry Pi’s HDMI monitor output. It’s possible to re-configure the terminal to use the UART pins on the board so that you can connect to it using PuTTY over an FTDI cable; check out the how-to in the nerves-system-br README if you want to do that. Instead, I just plugged a monitor into the HDMI port and a USB keyboard into one of the USB ports.

Once the default configuration is done, just run make to build the default system image. This will probably take a really long time, depending on the speed of your computer, hard drive, Internet connection, etc.

This is where Bakeware is a win for most use-cases. It pre-builds these base system images and toolchains ahead of time, so you can simply download and use them right away instead of building them yourself. Bakeware also eliminates the dependency on Linux, because the images were already “baked” on Linux with the appropriate kernels, headers, and other assorted voodoo available.

bash
make

This will result in the base firmware image being written to buildroot/output/images/nerves-rpi-base.img. At this point, I confirmed that things were working properly by copying this .img file to my Windows host machine using WinSCP, then burning it to an SD card using Win32DiskImager.

Yes, I downloaded and ran a Windows executable (as an Administrator!) from SourceForge. I feel bad about it, but apparently that’s just how people get the functionality of dd on Windows. Let’s be honest: it’s about the same thing as the sudo-curl-bash installers that have become so common.

After the SD card was written, I put it in my Raspberry Pi and booted it up. Lo and behold (after only four seconds!), I was greeted by an iex prompt.

Driving NeoPixels from a Raspberry Pi

With all that out of the way, we’re ready for the actual blinkenlights. Well, almost.

First, we need a way to drive the 5V NeoPixel data input using the Raspberry Pi’s 3.3V outputs. One easy way to accomplish this is with a 74AHCT125 level-shifter chip, but I didn’t have one lying around and didn’t want to order one. What I did have lying around was an SN74ALS1035N Non-Inverting Buffer. The trick is that its inputs happen to accept a high-level input voltage of only 2V, while its outputs are 5V nominal. Since the outputs are open-collector, I had to use a 1k-ohm pull-up resistor from the output to VCC.

The other issue is that I’m planning to drive a long strip of WS2812B NeoPixels, which draws far more power than the Raspberry Pi can supply. I cut the end off an old 5V cell phone charger and put a header on it to simplify bread-boarding, then added a 3300uF 6.3V electrolytic capacitor that I had lying around. The purpose of the capacitor is to stabilize the voltage being supplied to the strip when the current draw changes suddenly (e.g. when the lights are blinking on and off).

Here’s a crude schematic of the circuit, and a picture of the dead-bug sculpture in all its majesty.

With the hardware interface figured out, I needed a way to generate the required Pulse-Width-Modulation (PWM) pattern to control the LED colors. I did a lot of research about how this works and was considering doing it with a low-cost AVR microcontroller that would interface with the Raspberry Pi. If you’re interested in the details, you should check out the NeoPixel posts on josh.com. This guy did some truly amazing work documenting how the WS2812B “NeoPixel” works, and how to interface efficiently with it.

In the end, it wasn’t necessary, because the rpi_ws281x project makes it possible to directly generate the required patterns using the Raspberry Pi’s hardware PWM and Direct Memory Access (DMA) capabilities.

Building the Elixir Project and Interfacing with C code

This is the part of the project where I learned a lot about the basics of Elixir projects and a few more obscure details about building native C code as part of a project.

To integrate the rpi_ws281x C library with my Elixir code, I chose to write a little C wrapper around it so that it would accept binary pixel data on STDIN, taking some configuration parameters on the command line. From there, I used a Port in Elixir to ‘safely’ talk to this C code, rather than risking a catastrophic failure of the whole Erlang VM by loading the C code directly using the NIF method.

Here’s how that looks. Also note the use of :code.priv_dir/1, which takes the name of an Application and returns the file system path to the priv/ directory within your packaged Application.

nerves_io_neopixel_driver.ex
defmodule Nerves.IO.Neopixel.Driver do

  use GenServer
  require Logger

  def start_link(settings, opts) do
    Logger.debug "#{__MODULE__} Starting"
    GenServer.start_link(__MODULE__, settings, opts)
  end

  def init(settings) do
    Logger.debug "#{__MODULE__} initializing: #{inspect settings}"
    pin   = settings[:pin]
    count = settings[:count]

    cmd = "#{:code.priv_dir(:nerves_io_neopixel)}/rpi_ws281x #{pin} #{count}"
    port = Port.open({:spawn, cmd}, [:binary])
    {:ok, port}
  end

  def handle_call({:render, pixel_data}, _from, port) do
    Logger.debug "#{__MODULE__} rendering: #{inspect pixel_data}"
    Port.command(port, pixel_data)
    {:reply, :ok, port}
  end

end

The other secret sauce was something I adapted from Frank Hunleth’s elixir_ale project, which also uses some C code for low-level hardware interfacing. In your mix.exs, you just have to define a special Compile task (mine is called Ws281x), then add it to the compilers list further down.

mix.exs
defmodule Mix.Tasks.Compile.Ws281x do
  def run(_) do
    0 = Mix.Shell.IO.cmd("make priv/rpi_ws281x")
    Mix.Project.build_structure
    :ok
  end
end

defmodule Nerves.IO.Neopixel.Mixfile do
  use Mix.Project

  def project, do: [
    # ...
    compilers: [:Ws281x, :elixir, :app],
  ]
  # ...
end

This tells mix to shell out to make to build priv/rpi_ws281x, which is the target of the C code I wrapped around the rpi_ws281x library. The priv directory in your project folder ends up getting packaged into the Erlang Release that is burned onto the SD card, and is accessible using the :code.priv_dir/1 function mentioned earlier. Here is what I have in the Makefile to make that work:

Makefile
include $(NERVES_ROOT)/scripts/nerves-elixir.mk

priv/rpi_ws281x: src/dma.c src/pwm.c src/ws2811.c src/main.c
  @mkdir -p priv
  $(CC) $(CFLAGS) -o $@ $^

Controlling NeoPixels from Elixir

If you want to dive in and try running the code, download the nerves_io_neopixel repository:

bash
mkdir ~/projects
cd ~/projects
git clone git@github.com:GregMefford/nerves_io_neopixel.git
cd nerves_io_neopixel

Now, assuming that you checked out nerves-system-br to your home directory and did the make step earlier, you can source the environment script to set up the cross-compilers, then build the project.

bash
source ~/nerves-system-br/nerves-env.sh
make

If you’re doing this on a Linux VM with a Windows host, you also need to take one more step to generate the .img file that you need to burn to the SD card:

bash
fwup -a -i _images/nerves_io_neopixel.fw -d _images/nerves_io_neopixel.img -t complete

This takes the efficiently-packed “firmware” file with a .fw extension and formats it into a much larger file with the appropriate blank-space offsets so that it can be booted on the target.

bash
 18M nerves_io_neopixel.fw
329M nerves_io_neopixel.img

From there, you can copy the .img file to the Windows host using WinSCP, burn it to an SD card using Win32DiskImager, and boot from it on the Raspberry Pi.

Once the Pi boots, it loads iex but doesn’t do anything with the LEDs. To make something display, you have to set up which I/O pin to use and how many LEDs are chained together, then render something to them:

iex
alias Nerves.IO.Neopixel
{:ok, pid} = Neopixel.setup pin: 18, count: 3
Neopixel.render(pid, <<255, 0, 0>> <> <<0, 255, 0>> <> <<0, 0, 255>>)

The second argument to the render function is a binary representing the RGB values of each LED. I have just concatenated three 3-byte binaries here so it’s easier to see where each LED’s configuration begins and ends. It’s not pretty, and it would be tedious to do anything very complicated with just this interface, but it works!
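To make that less tedious, a small helper can pack a list of {r, g, b} tuples into the binary that render expects (this helper is my own sketch, not part of the library):

```elixir
defmodule Pixels do
  # Pack a list of {r, g, b} tuples into one flat binary, 3 bytes per LED.
  def pack(colors) do
    for {r, g, b} <- colors, into: <<>>, do: <<r, g, b>>
  end
end

Pixels.pack([{255, 0, 0}, {0, 255, 0}, {0, 0, 255}])
# => <<255, 0, 0, 0, 255, 0, 0, 0, 255>>
```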

To demonstrate something a bit more fun, I made a small demo to show how you might use the Nerves.IO.Neopixel library in a project. The project implements a scan function that uses the render interface to draw a single red light sweeping back and forth across the strip, Battlestar Galactica style. It initializes a strip with 72 LEDs (since I happened to have a half-meter strip of the 144-LED-per-meter variety) and scans across them at 10 milliseconds per frame.
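The heart of such a scanner might look something like this (a simplified sketch of the idea, assuming the pid and render interface shown above, not the actual code from the demo):

```elixir
defmodule Scanner do
  @count 72  # LEDs on the strip
  @delay 10  # milliseconds per frame

  # Build one frame: a single red pixel at `pos`, everything else dark.
  def frame(pos) do
    for i <- 0..(@count - 1), into: <<>> do
      if i == pos, do: <<255, 0, 0>>, else: <<0, 0, 0>>
    end
  end

  # Sweep the red pixel back and forth forever, one frame per @delay.
  def scan(pid, pos \\ 0, step \\ 1) do
    Neopixel.render(pid, frame(pos))
    :timer.sleep(@delay)
    # Reverse direction when the next position would run off either end.
    step = if (pos + step) in 0..(@count - 1), do: step, else: -step
    scan(pid, pos + step, step)
  end
end
```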

You can check out the code for the scanner app in my nerves_neopixel_examples repo on GitHub.