Uncategorized

(Another take on) Slope-Aspect Shading in QGIS

I’ve seen a few blog posts recently describing methods of creating slope-aspect maps in both ArcGIS and QGIS. Both implement the combined slope and aspect shading described by Brewer and Marlow. Here’s my take on it…

The technique as originally described uses HSV representation of colour to simultaneously show both the slope and aspect of terrain.

A colour is represented in HSV by its Hue, Saturation, and Value (brightness relative to black) parameters. These parameters describe a colour in terms better suited to its perceptual appearance: we can adjust values on each of those axes and broadly perceive the changes, independently, as variations in the corresponding perceptual quality.

This scheme encodes the aspect and slope:

  • Perceptually constant brightness
  • Hue mapped to aspect (8 points around the compass/colour wheel)
  • Saturation mapped to slope (grey = flat, saturated = steep)
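As a sketch of the encoding in code (this is not Brewer’s exact palette: the 45-degree full-saturation cutoff is my assumption for illustration, and the grey level matches the RGB 161,161,161 used below):

import colorsys

def slope_aspect_colour(aspect_deg, slope_deg):
    """Hue from aspect, saturation from slope, constant brightness."""
    h = (aspect_deg % 360) / 360.0   # compass direction onto the colour wheel
    s = min(slope_deg / 45.0, 1.0)   # flat = grey, steep = fully saturated
    v = 161 / 255.0                  # constant value
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(slope_aspect_colour(0, 0))   # flat: grey (161, 161, 161) at any aspect
print(slope_aspect_colour(0, 45))  # steep north face: a strong hue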

In the original paper and the other implementations I’ve seen, slope and aspect are uniquely encoded into an integer range and a pre-calculated colour palette is applied. This is a perfectly valid solution, and it was appropriate given the limitations of typical computer hardware when the paper was written. However, it’s quite brittle, and experimentation with the colour and slope mappings is difficult.

Layer Compositing Technique

Alternatively, we can composite two separate layers describing the slope and aspect separately, and rely on alpha blending of the slope layer over the aspect to desaturate flatter areas. The final appearance will be much the same. The computational requirements are slightly higher but we can easily experiment with different colour mappings if desired.

Hue (Aspect)

A discrete colour gradient is applied to aspect values (0-359 degrees). The angle ranges and RGB values for the steepest slopes from the Brewer paper were used. Style file available here.

Saturation (Slope)

A discrete colour gradient is applied to slope values (0-90 degrees). Flat areas use the same grey value (RGB 161,161,161) as Brewer, with an opacity of 1.0 (255). The opacity of each subsequent step is reduced by 1/3. Style file available here.

Compositing

Rendering the slope layer over the aspect layer results in flatter sections being rendered grey, with steeply sloped sections retaining their hue. The results differ slightly from the original implementation: some hand-tweaking of the colour mapping table was performed in the original, and those tweaks are not preserved in this method.
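Per channel, the blend is the standard ‘over’ operation: the slope layer’s grey, weighted by its opacity, pulls the aspect hue toward grey. A minimal sketch (the hue value is illustrative):

def over(top, alpha, bottom):
    """'Normal' alpha blending of a single channel: top over bottom."""
    return round(alpha * top + (1 - alpha) * bottom)

aspect_rgb = (161, 0, 0)  # an illustrative fully saturated aspect hue
grey = 161                # the flat-area grey from the slope layer

print(tuple(over(grey, 1.0, c) for c in aspect_rgb))  # flat: pure grey
print(tuple(over(grey, 0.0, c) for c in aspect_rgb))  # steep: hue intact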

Alternative Styles

It’s quite easy with this method to experiment with different rendering techniques.

Linear Gradients

The original implementation discretises the aspect/slope gradients. We can also render either or both with linear interpolation between steps. This preserves subtle topographic features, at the expense of increased clutter and of making specific values harder to read off the image.

Discrete intervals
Linear slope/opacity curve, interpolated aspect gradient.

These interpolations are performed in the RGB colour space, and it may be possible to create pleasing results by blending in the HSV or CIE colour spaces.
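For instance, an HSV-space blend between two gradient stops might look like this (a naive sketch: real hue interpolation should also handle wrap-around at the ends of the hue circle):

import colorsys

def lerp_hsv(rgb1, rgb2, t):
    """Interpolate two RGB colours through HSV space, t in [0, 1]."""
    hsv1 = colorsys.rgb_to_hsv(*[c / 255 for c in rgb1])
    hsv2 = colorsys.rgb_to_hsv(*[c / 255 for c in rgb2])
    h, s, v = [a + (b - a) * t for a, b in zip(hsv1, hsv2)]
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(lerp_hsv((161, 0, 0), (161, 161, 161), 0.5))  # half-desaturated red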

Art Projects

Stellarium

Temporary sculptural installation, mixing geometry of platonic solids with light and motion
Rob Jones, Saba Komarzynski
Rainbow Serpent Festival, January 2019

This was an installation piece built for Rainbow Serpent Festival. After many years of procrastination, my co-conspirator Saba and I finally decided we’d like to have a crack at doing a temporary outdoor installation art piece. Much to our surprise, the festival organisers liked our proposal and asked us to build it.

Art Wank

Designed specifically for the festival and site, it was not intended primarily as a piece to be considered or interpreted in its own right, but more as a facilitator of reflection, creating and holding space within a calm and sheltered area of the site, its slowly evolving shapes and colours dialled for quieter moments of contemplation and introspection.

Design

  • Small stellated dodecahedron with most faces cut away, creating a framework exposing the many axes of symmetry, and allowing light to propagate.
  • Interior perspex dodecahedron, with light filtering and reflective films applied.
  • Interior element rotates, reflecting incident light onto the ground and into trees around the structure.

Tech

  • 12 zoom-focus 300W RGBW LED fixtures
  • Interior piece rotation driven by custom DMX->stepper driver interface running on a humble AVR 8-bit micro.
  • Show control/lighting program built in TouchDesigner.

Construction

  • Modelled and shop drawings created in Fusion360
  • 18/12mm exterior grade ply, CNC cut and drilled, waterproof sealant
  • Built Ikea ‘flat pack’ style, allowing primary construction to be quickly performed on-site, and minimising wastage of standard 2400×1200 ply panels
  • 7mm clear acrylic central element, chemically welded, CMY+mirror films applied
  • Hundreds of bolts and screws

GIS, Open-Source

Propeller AeroPoints – Offline Processing

Retrieve the raw Ublox log files and manually PPK post-process them using RTKLIB.

TLDR

  • AeroPoints upload raw ubx-format log data over clear HTTP to AWS S3.
  • Intercept using mitmproxy/wireshark/tcpdump/whatever
  • Extract binary payload
  • Post-process ubx data in RTKLIB; PPK or PPP mode.
  • Enjoy accurate Ground Control Points.

Propeller AeroPoints are sold as a convenient way of placing Ground Control Points for an aerial imagery survey. You lay them out, turn them on, acquire your imagery, and after collecting them turn them on again in the presence of WiFi. Your data is uploaded to the cloud, and by magic the locations of your control points come back accurate to, they claim, 5cm. $6000 for a set of 10, plus $600/year to keep your subscription to their cloud processing service active.

If Propeller stop paying their AWS bill, or you stop paying your subscription, these things become doorstops.

Long story short, I ended up with a set that someone was too cheap to keep up the subscription on, and I had some survey points I wanted to get at. In a commercial setting I’d pony up, but, $600 to get 10 control points was more than I could justify for a home project. And, philosophically, it’s my data. So, let’s have at it.

Reverse Engineering

So what’s actually going on inside these? The claim of kinematic GNSS accuracy at $600 a unit, together with the cloud processing platform, makes it almost certain that low-cost components are being used. But which?

GNSS Receiver

The ubiquitous Ublox M8T module is a likely candidate, though I haven’t taken the foam off the back to find out for sure, as ripping off the foam backing looks like a destructive process. The M8T doesn’t compute real-time RTK corrections by itself, but that doesn’t matter: since position solutions are post-processed on Propeller’s cloud service, only raw log data from the receiver is required.

Logging and WiFi

The front panel of the AeroPoint tells us that it contains a part with FCC ID 2AHMR-ESP12S: an ESP12. This is an FCC-certified/SMT version of the popular ESP8266 WiFi SoC, host of a thousand low-power embedded projects due to it being available retail for a couple of bucks.

This tells me a few things:

  • They’re using external storage for the 4GB of log space. My guess is an SD card reader connected over SPI.
  • They’re probably not securing the data upload to the cloud with TLS. HTTPS support is known to be flaky on the ESP8266: it doesn’t really have enough memory to hold the certificates needed to validate the certificate chain alongside your application code.
  • Even if they did get TLS working, they’d also need a full over-the-air upgrade stack, because what happens when a certificate expires or is revoked? Which means they’d also need a means of securing the OTA process.

My money is on this thing having baked-in firmware and communication in the clear.

And this is the problem with a lot of “IoT” devices. Poor or no security,  unmaintainable software, consigned to rapid obsolescence.

Admittedly, these are essentially GNSS data loggers that are only network-connected for minutes at a time, infrequently. Outside of crypto-nerd territory, if ‘the man’ is your adversary, exploiting a buffer overflow via a man-in-the-middle attack on your data logger’s HTTP client, then you already know you need to not be running devices like this on your network. Or at least you should.

For everyone else, the real security risk is academic. Rant over. Moving on.

Extracting GNSS log data

The AeroPoint uploads data to an AWS instance over unsecured HTTP. Intercepting this is trivial. My approach in this instance was to use mitmproxy; see the mitmproxy documentation for a tutorial.

Before attempting this process, positively verify that traffic flows are being saved and that you can extract binary payload data from traffic to other hosts. Once the data is uploaded, it’s deleted from the AeroPoint.

Set up your Access Point with the WPA settings Propeller requires, and power up the first AeroPoint. You should see some back and forth to:

  • groundcontrol.propelleraero.com
  • propeller-data-us-east-1.s3-accelerate.amazonaws.com

The initial API calls acquire an access key for the S3 data upload. You want the multi-megabyte payload of the POST calls to S3. The log data is sent as binary, in raw ubx format, no additional compression or packing.
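If you’d rather script the payload extraction than pick flows out by hand, a mitmproxy addon along these lines should work (a sketch: the hostname match, size threshold and file naming are my assumptions). Run it with mitmdump -s dump_ubx.py:

import time
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Save the raw ubx body of any large POST headed for the S3 upload host
    if (flow.request.method == "POST"
            and "s3-accelerate.amazonaws.com" in flow.request.pretty_host
            and len(flow.request.content or b"") > 100000):
        fname = "aeropoint_{}.ubx".format(int(time.time()))
        with open(fname, "wb") as f:
            f.write(flow.request.content)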

Now, repeat for as many of the AeroPoints as you have. Refer to the mitmproxy documentation if you’re unsure how to do this.

Post Processing in RTKLIB

As with traffic interception, there’s more than one way to achieve the next step and I’m just going to refer you to the documentation.

In my case I loaded the ubx data into RTKLIB, and used PPK processing with a local CORS station acting as the base.

The folks at Emlid have some good instructions on PPK techniques using RTKLIB. They’ve also done a fair amount of work to improve the performance of RTKLIB on low-end hardware; check out their RTKLIB fork and their hardware. Beyond retrieving the log data from their receiver, their process is applicable here.

In my case, I used RINEX logs from the AUSCORS archive (I’m in Australia) for a local CORS station. Fortunately I had a short enough baseline to a public CORS site for high-accuracy solutions.

One valuable thing a Propeller subscription buys is access to several commercial  CORS networks.

For additional accuracy, I waited a couple of weeks for the final GPS/GLONASS orbital data to be published and used the following data from IGS:

  • Final SP3 ephemeris. File name format: WWWW/igsWWWWD.sp3.Z (https://cddis.nasa.gov/Data_and_Derived_Products/GNSS/orbit_products.html)
  • Final clock data (30 second). File name format: WWWW/igsWWWWd.clk_30.Z (https://cddis.nasa.gov/Data_and_Derived_Products/GNSS/clock_products.html)

When converting between calendar day/week numbers and GPS almanac weeks, these sites are useful.
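Alternatively, the conversion is easy to script. A small sketch of the week/day arithmetic, assuming the WWWW/D convention in the file names above (GPS weeks count from 6 January 1980, and the trailing digit is the day of week with 0 = Sunday):

from datetime import date

GPS_EPOCH = date(1980, 1, 6)  # start of GPS week 0 (a Sunday)

def igs_product_names(d):
    """Build the IGS final ephemeris/clock file names for a calendar date."""
    days = (d - GPS_EPOCH).days
    week, dow = divmod(days, 7)  # GPS week number and day-of-week digit
    return ("{0:04d}/igs{0:04d}{1}.sp3.Z".format(week, dow),
            "{0:04d}/igs{0:04d}{1}.clk_30.Z".format(week, dow))

print(igs_product_names(date(2019, 3, 17)))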

I hope that was useful to you. Don’t get me wrong: I don’t mean to put down Propeller or their cloud processing product. It looks like a nicely integrated service, though one I’ve not been able to try. For commercial use their system is likely to save a heap of time compared to this approach.

A note on accuracy

Propeller don’t mention where the phase centre of the AeroPoint’s antenna is. I have assumed that it’s under the centre of the marker: if the antenna were offset, there is no IMU/compass/gyro data present to provide the panel’s orientation, which we would need in order to correct for an off-centre antenna. I also assume that it sits directly below the surface of the panel; it may be that a few millimetres of vertical offset are required. If anyone has actually torn one down and can confirm, please let me know.

Cartography, GIS, QGIS

Alpha-Channel Hillshading in QGIS

TLDR:

Rather than using a white-black colour gradient map for a hillshade layer, use a black@100% alpha to black@0% alpha gradient map. The effect is similar to the multiply blend mode, and the response curve can be adjusted from a straight line to a nonlinear one to adjust contrast.

I’ve been trying to find a way to create a hillshading layer in QGIS that is visually pleasing but avoids using QGIS’ advanced layer compositing modes.

While the advanced layer compositing modes (multiply/burn/dodge etc.) look great on screen or when outputting a raster, my workflow involves generating a PDF that retains a maximal amount of vector content. This keeps down file size, and retains the ability to do post-production in say, Illustrator with a view to producing high quality prints.

The effect we want is a decrease in the luminosity of the layers underneath the hillshade layer, emphasising the relief in the terrain while leaving flat areas unchanged. The hillshade layer is a single-channel, 8-bit raster. The default behaviour in QGIS is to linearly map it to a white-black gradient (equal RGB values), so flat areas are white (luminosity = R = G = B = 255) and very hilly areas can reach black (luminosity = R = G = B = 0).

The hillshade layer itself is a raster, which has been generated from a DEM using gdaldem:

gdaldem hillshade dem/vmelev_dtm20m/dtm20m/prj.adf dem/dtm20m_hillshade.tif -compute_edges -combined

The combined mode is ‘a combination of slope and oblique shading’. It has the benefit of applying minimal shading to flat areas.

Screen Shot 2017-06-16 at 15.20.10
Hillshade layer produced with gdaldem

In isolation this looks fine as a canvas layer, but we get into trouble when compositing multiple layers. We need to composite it with the layers underneath, which involves introducing some element of transparency.

The obvious solution is to make the hillshade partially transparent, as was done in the middle of the 3 images below. The problem, as you can see, is that the result has become desaturated. Why? Because in the ‘normal’ alpha blending mode the result of composition is the hillshade layer’s colour multiplied by its opacity, added to the colour of the layer below multiplied by the remaining opacity. As the flat areas are white, the visual effect is to lighten and desaturate the lower layers.

Screen Shot 2017-06-16 at 15.29.03

There are a couple of ways around this. First, we could use a different blending mode, such as ‘multiply’, with no transparency. This multiplies the luminosity of the layer with that of the layer underneath: white (1) = multiply by 1 = no change.

But this is no good here as QGIS doesn’t support the mixing of vector layers and advanced blending modes when writing a PDF. No dice.

How else can we get the same effect?

Create a gradient mapping for the hillshade layer that creates the same effect as layer multiplication blending

We change from a luminosity gradient to an opacity gradient: the RGB values remain constant (black) and only the alpha changes. The net mathematical effect is the same as a white-black gradient map with a multiply blend, as the sketch after the list below shows.

  • White -> 100% transparent black
  • Black -> 0% transparent black
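To see that the two are identical, composite black (value 0) at opacity a = 1 - hs/255 over a channel value c: the result is (1 - a) * c = (hs/255) * c, which is exactly the multiply blend. A few lines of Python to check:

def multiply_blend(c, hs):
    # multiply blend of an 8-bit hillshade value hs over channel value c
    return c * hs / 255

def black_alpha_over(c, hs):
    # black at opacity (1 - hs/255) composited 'over' channel value c
    a = 1 - hs / 255
    return a * 0 + (1 - a) * c

for c, hs in [(200, 255), (200, 128), (90, 0)]:
    assert abs(multiply_blend(c, hs) - black_alpha_over(c, hs)) < 1e-9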

The desired effect is visible in the last of the 3 images above. The opacity of the hillshade layer was also reduced to give a more subtle effect.

In the QGIS style for the hillshade layer, the settings to achieve this look like those below:

Screen Shot 2017-06-16 at 15.20.34
Alpha channel gradient map

We can also go one stage further, and use a customised response curve in the alpha channel to adjust the contrast/gamma of the hillshade. Below, some transparency is retained even for the darkest values to reduce the strength of the effect.

Screen Shot 2017-06-16 at 15.23.56
Default linear response curve

Other vector layers can be stacked above the hillshade producing the desired combined effect.

Screen Shot 2017-06-16 at 15.16.55
Complete composite map with vector tree coverage layer, alpha-blended hillshade raster and additional vector layers

Uncategorized

API Access to NBNCo Rollout Data

EDIT: It’s been brought to my attention that this no longer works. I’m leaving it here to gather dust rather than removing it.

NBNCo have an address search tool on their site to look up service information for an address. The API it uses actually exposes more data than they show on the website. The API is not documented, but it’s fairly self-explanatory. Just be sure to add the correct referer header.

Unfortunately it probably won’t tell you anything you want to know, such as the availability of FTTP at your home address. Ahem.

Politics aside, I found that supplying only the lat and lng for an address is sufficient for a lookup, though I get some odd results; it’s not clear how and when they’re doing geocoding from the parameters you supply.

In curl-land:

curl -X GET \
'http://www.nbnco.com.au/api/map/search.html?\
lat={decimal latitude}&\
lng={decimal longitude}&\
streetNumber={street number}&\
street={street}&\
postCode={postCode}&\
state={VIC/NSW/TAS/QLD/WA/NT/SA}' \
-H 'referer: http://www.nbnco.com.au/connect-home-or-business/check-your-address.html'

If you use Postman (and you should) for API testing, I’ve made a collection that supports this.
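And if you’d rather script it, here’s a rough Python equivalent of the curl call above (the parameter values are illustrative placeholders, and per the edit at the top, the endpoint reportedly no longer works):

import requests

params = {
    "lat": -37.8136,          # decimal latitude
    "lng": 144.9631,          # decimal longitude
    "streetNumber": "1",
    "street": "Example Street",
    "postCode": "3000",
    "state": "VIC",
}
headers = {
    "referer": "http://www.nbnco.com.au/connect-home-or-business/check-your-address.html",
}
r = requests.get("http://www.nbnco.com.au/api/map/search.html",
                 params=params, headers=headers)
print(r.json())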

You’ll get a JSON response that follows this template:

{
  "serviceAvailableAddress": false,
  "servingArea": {
    "isDisconnectionDatePassed": true,
    "techTypeMapLabel": "nbn™ Fibre to the premises (FTTP)",
    "techTypeDescription": "An nbn™ Fibre to the premises connection (FTTP) is used in circumstances where an optic fibre line will be run from the nearest available fibre node, to your premises.",
    "rfsMessage": "",
    "csaId": "CSA300000010862",
    "addressStatus": "0",
    "serviceType": "fibre",
    "id": "fibre:3BRU-A0106",
    "serviceStatus": "available",
    "disconnectionDate": "02/01/2015",
    "description": "XDA",
    "serviceCategory": "brownfields",
    "techTypeLabel": "Fibre to the premises (FTTP)"
  },
  "fsams": []
}

GIS, Open-Source

Corner UTM Grid Labels in QGIS Print Composer

In this post I’ll demonstrate generating UTM grid labels in the QGIS print composer, with formatting that promotes the relevant information in a consistent way. Printing the full 6/7-figure reference at each grid interval is unnecessary and frequently unhelpful, as often we need to read off or locate a 3-figure reference.

Abbreviated UTM Grid References?

When reading a UTM grid reference on a typical 1:25000-50000 scale topographic map featuring a 1km grid interval, a 6-figure grid reference (3 figures for Eastings and 3 for Northings) describes a location to 100m precision and is typically sufficient for locating a position on the map.

A full UTM grid reference describes a position to 1m precision. Therefore, we typically highlight the significant figures on the map for ease of using a 6-figure reference with it. For example, easting 345000 / northing 5812000 reads as the 6-figure reference 450 120.

Of course we might still want to know the full reference, so some full grid references should be given. For consistency, these are often placed in the corners of the map. To find the full reference for a shortened reference, go to the left/bottom edge of the map and count up.

Here’s an example from a local state-issued topographic map:

Screen Shot 2017-05-10 at 12.02.15

QGIS print composer has the ability to draw and label a UTM grid. However, its styling options are somewhat limited out-of-the-box, restricted to a full reference at each interval.

Rendering sub/superscript labels

Most Unicode-compatible fonts contain glyphs for superscript and subscript representations of the digits:

0123456789 ⁰¹²³⁴⁵⁶⁷⁸⁹ ₀₁₂₃₄₅₆₇₈₉

QGIS does not handle superscript/subscript formatting internally, but we can achieve the same effect by replacing the numbers with their Unicode sub/superscript equivalent characters. Note that this technique is limited to the common Arabic numerals 0-9.

In the Python code below the function UTMFullLabel performs two operations:

  • Determine the non-significant figures of the reference to conditionally format. The index of those figures in the reference depends on whether the label is an Easting (6 digits) or a Northing (7 digits).
  • Transpose those figures to subscript

The function UTMMinorLabel merely returns the significant figures. There are two functions because two separate grids are defined in the print composer, and using two formatting functions avoids also having to handle grid-interval logic in Python.


from qgis.utils import qgsfunction
from qgis.gui import *

@qgsfunction(args="auto", group='Custom')
def UTMMinorLabel(grid_ref, feature, parent):
    """Return the two significant (10km and 1km) figures of the reference."""
    return "{:0.0f}".format(grid_ref)[-5:-3]

@qgsfunction(args="auto", group='Custom')
def UTMFullLabel(grid_ref, axis, feature, parent):
    """Return the full reference, with the non-significant figures subscripted."""
    gstring = "{:0.0f}".format(grid_ref)
    rstr = gstring[-3:]    # last 3 characters (100m/10m/1m digits)
    mstr = gstring[-5:-3]  # the 10km and 1km digits
    # The leading figures: 1 digit for a 6-digit Easting,
    # 2 digits for a 7-digit Northing
    if len(gstring) == 6:
        lstr = gstring[0]
    elif len(gstring) == 7:
        lstr = gstring[0:2]
    else:
        return str(len(gstring))  # unexpected length: return it for debugging
    return "{0}{1}{2}m{3}".format(sub_scr_num(lstr), mstr, sub_scr_num(rstr),
                                  'E' if axis == 'x' else 'N')

def sub_scr_num(inputText):
    """Converts any digits in the input text into their Unicode subscript
    equivalent. Expects a single string argument, returns a string."""
    subScr = (u'\u2080', u'\u2081', u'\u2082', u'\u2083', u'\u2084',
              u'\u2085', u'\u2086', u'\u2087', u'\u2088', u'\u2089')
    outputText = ''
    for char in inputText:
        charPos = ord(char) - 48
        if charPos < 0 or charPos > 9:
            outputText += char
        else:
            outputText += subScr[charPos]
    return outputText

For superscript formatting the transposition is:

supScr = (u'\u2070',u'\u00B9',u'\u00B2',u'\u00B3',u'\u2074',u'\u2075',u'\u2076',u'\u2077',u'\u2078',u'\u2079')

Now that we can generate long and short labels they need to be placed appropriately on the grid:

  • Full labels for the grid crossings closest to the corners of the map
  • Abbreviated labels for all other grid positions

This logic is handled in QGIS’ expression language, though it could be handled in Python too by passing in the map extent. In English, the procedure is basically:

  • Get the extent (x/y minimum and maximum values) of the map.
  • Test which is closer to the map boundary
    • The axis minimum/maximum plus/minus the grid interval
    • The supplied grid index being labelled
    • Neither – they’re coincident (i.e. the first label is at the axis)

If the grid number supplied is closer to the map boundary, or they’re coincident, then this must be the first label, and it should be rendered as a full reference.

CASE
  WHEN  @grid_axis = 'x' THEN
    CASE
      WHEN x_min(map_get(item_variables('main'),'map_extent')) + 1000 >= @grid_number THEN  UTMFullLabel( @grid_number, @grid_axis)
      WHEN x_max(map_get(item_variables('main'),'map_extent')) - 1000 <= @grid_number THEN  UTMFullLabel( @grid_number, @grid_axis)
      ELSE UTMMinorLabel(@grid_number)
    END
  WHEN @grid_axis = 'y' THEN
    CASE
      WHEN y_min(map_get(item_variables('main'),'map_extent')) + 1000 >= @grid_number THEN UTMFullLabel( @grid_number, @grid_axis)
      WHEN y_max(map_get(item_variables('main'),'map_extent')) - 1000 <= @grid_number THEN UTMFullLabel( @grid_number, @grid_axis)
      ELSE UTMMinorLabel(@grid_number)
    END
END

Finally to tie all of this together, in the Print Composer three grids are defined:

  • 1000m UTM grid for lines and labels
  • 1000m UTM grid for external tick marks
  • 1 arc-second interval secondary graticule

Two UTM grids are required because QGIS will draw either grid lines or external ticks for a given grid, and both were desired in this example.

Screen Shot 2017-05-10 at 12.59.16
Define two 1000m-interval UTM grids and an arc second-interval Lat/Lon grid

Screen Shot 2017-05-10 at 13.00.36

For the grid rendering the labels, set the interval to 1000m, custom formatting as described above and only label latitudes on the left/right and longitudes on the top/bottom.

The formatting of the labels should look something like the screenshot above.

The only thing I haven’t covered here in the interest of clarity is the selective rotation of the Y-axis labels as seen in the state-issued topo map. This could be achieved by using an additional grid and setting the rotation value appropriately.

GIS, Open-Source

Generative Pseudo-Random Polygon Fill Patterns in QGIS

QGIS doesn’t support pseudo-random fill patterns out-of-the-box. However, using the Geometry Generator we can achieve the same effect.

Random fill pattern? Here’s a polygon with such a fill. These are useful if, say, we want to represent an area of intermittent water.

Screen Shot 2017-05-05 at 19.53.11
Clipped Grid with 100% randomness

A typical way to achieve this effect would be to generate a random fill texture and use a pattern fill. That is less computationally intensive to render; however, QGIS cannot render a texture fill as vector content even in vector output formats, which means the fill would be rasterised. If that means nothing to you, don’t worry.

The typical way to generate a randomised fill pattern is to first draw a bounding box around the feature of interest, create a grid of points that covers the feature, then retain only those points that also intersect the feature. In effect, the feature geometry is used as a clipping mask for a uniform grid.

For the points that remain after clipping, we can optionally add an amount of randomness, between zero and the size of a grid element, to the X/Y value of each grid intersection. With no randomness, of course, we see the grid pattern.

The QGIS Geometry Generator comes in useful here: it feeds the geometry of a feature, in this case a polygon, to a PyQGIS script, which returns a multipoint geometry that is then symbolised.

from qgis.core import *
from qgis.gui import *
from qgis.utils import qgsfunction
import math
import random

"""
Define a grid based on the interval and the bounding box of the feature.
The grid will minimally cover the feature and be centre-aligned.
Create a multi-point geometry at the grid intersections where the grid
is enclosed by the feature - i.e. apply a clipping mask.
The rand value (0-1) determines how much randomness in X/Y, within its
grid square, a particular point is allowed to have.
"""
@qgsfunction(args='auto', group='Custom')
def fillGrid(xInterval, yInterval, rand, feature, parent):
    box = feature.geometry().boundingBox()
    # Create a grid that minimally covers the bounding box
    # using the supplied intervals, and centre it
    countX = math.ceil(box.width() / xInterval)
    countY = math.ceil(box.height() / yInterval)
    # Align the grid
    gridX = countX * xInterval
    gridY = countY * yInterval
    dX = gridX - box.width()
    dY = gridY - box.height()
    xMin = box.xMinimum() - (dX / 2)
    yMin = box.yMinimum() - (dY / 2)
    points = []
    # +1 to draw a symbol on the (n+1)th grid element
    for xOff in range(countX + 1):
        for yOff in range(countY + 1):
            ptX = xMin + xOff * xInterval + rand * random.uniform(0, xInterval)
            ptY = yMin + yOff * yInterval + rand * random.uniform(0, yInterval)
            pt = QgsPointXY(ptX, ptY)
            point = QgsGeometry.fromPointXY(pt)
            # Clip: keep only intersections that fall inside the feature
            if feature.geometry().contains(point):
                points.append(pt)
    return QgsGeometry.fromMultiPointXY(points)

Finally, in the symbology options for the layer, select ‘Geometry Generator’ as the fill for the layer, and call the Python function with values for X/Y intervals and the proportion (0-1) of randomness to add.
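For example, with a source layer whose units are metres, the generator expression might look like the following (the interval and randomness values are arbitrary; with args='auto', QGIS supplies the feature and parent arguments automatically):

fillGrid(100, 100, 0.8)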

Screen Shot 2017-05-05 at 19.40.31

Note that the multipoint geometry returned is independent of the zoom level and uses the units of the source layer. The eagle-eyed may have noticed in the screenshots above that a projection appears to have been applied. Indeed it has: the source layer uses WGS-84 (degrees), but was projected to Transverse Mercator (metres) for rendering.

Also, the fill is not cached: every redraw will trigger a recalculation of the fill pattern, which is rather inefficient.

Tackling scale and CRS dependence are topics for a later post.

Edit: 07 Oct 19 – Updated code to use QgsPointXY and edited for clarity.

Electronics, LED, Open-Source

Building an LED Techno Hat

Pam Rocking The Finished Hat

Rainbow Serpent Festival has become my annual ritualistic ‘turn the phone off, dance till my legs hurt, catch up with old friends, meet new ones, reflect and re-focus’ 21st century pilgrimage. It’s also a great place to check out fantastic visual art, and even if you’re not part of the official program there’s plenty of opportunity to roll your own. As the photographer Bill Cunningham once said:

The best fashion show is definitely on the street. Always has been, and always will be

Apparently this is the era of wearable tech, and with addressable colour LEDs coming down in price and up in brightness, density and packaging, let’s make a sound-reactive LED techno hat. You can never have too much eye candy on the dance floor.

The job was threefold:

  1. Get the Raspberry Pi driving the LED array
  2. Battery power everything from a small backpack
  3. Make it remote-controllable, preferably from a phone or other WiFi device

Pictures during and after construction

Fadecandy powered up on the bench

WS2811/SMD5050 Strip Cutting

Electronics all packed up

Components on the bench

Labelled Components


Videos

 

Ingredients

  • One (awful) hat. This one cost me about AUD$15 on eBay and is as horrendously purple as it looks
  • Lots of epoxy adhesive. It’s the only stuff I could get to stick the LED silicone tubes.
  • Usual craft and electronics bench equipment

Electronics Components

Raspberry Pi

The Pi is easily available and cheap. I wanted to use the Beaglebone, but the order didn’t arrive in time. Once overclocked to 1GHz, performance ended up being adequate.

The Pi boots into Debian from an SD card. The TP-Link dongle runs as an access point using hostapd + udhcpd; there are plenty of guides on the web about getting this working. I went with the TP-Link because, unlike other adaptors, it works with the stock Debian hostapd binary; I gave up rebuilding the drivers/hostapd after running into a pile of kernel header issues.

All that remained was the installation of the GCC toolchain (build-essential) for building Fadecandy, and the JVM for PixelController. I hijacked an existing init.d script to start both of these at boot.

Pi Audio Input

The Pi doesn’t have an analog audio input. The C-Media USB dongle is USB audio class-compliant and worked fine. I had to bastardise the one you can see in the picture, soldering a 100uF electrolytic capacitor across the input pins and borrowing +5V from the USB bus; this was due to the DC offset of the AGC output and its power requirements. The Adafruit product page for this device discusses this. I set the AGC to 40dB and left it there. Input gain in ALSA was set to ~75.

Software:

Fadecandy Open Pixel Control Server

This communicates with the Fadecandy over USB and exposes an Open Pixel Control socket to which we send the LED data. It’s also capable of re-mapping the pixel array to compensate for snaked or other non-sequential wiring. This wasn’t needed, but I did take advantage of the colour and gamma correction for the array.

I could have driven the LEDs directly from the Pi via the DMA hack, or added a DMX512 (RS422) output and used an off-the-shelf driver board, but the Fadecandy does so much more. With temporal dithering, gamma correction and keyframe interpolation in 16bpp colour, it handles low brightness levels, fades and transitions in a much more subtle and effective way than bit-banging a 24-bit colour buffer straight into the LEDs.
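As an aside, OPC itself is a trivially simple protocol. A minimal client sketch in Python, assuming fcserver is listening on its default port 7890: each message is a 4-byte header (channel, command 0 for ‘set pixel colours’, 16-bit big-endian payload length) followed by 3 bytes of RGB per pixel.

import socket

def send_frame(pixels, host="127.0.0.1", port=7890, channel=0):
    """Send one frame of (r, g, b) tuples to an OPC server such as fcserver."""
    data = bytearray()
    for r, g, b in pixels:
        data += bytes((r, g, b))
    # OPC header: channel, command, 16-bit big-endian payload length
    header = bytes((channel, 0, len(data) >> 8, len(data) & 0xFF))
    with socket.create_connection((host, port)) as s:
        s.sendall(header + data)

# e.g. set all 512 LEDs to a dim white
send_frame([(32, 32, 32)] * 512)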

PixelController LED matrix software

This Java application generates the eye candy. It has a few different visual generators, effects and mixers, and it’s cross-platform thanks to the Java codebase. Whether the Pi would be up to the task was questionable, but given that the Pi now sports official support from Oracle in the form of an ARM hard-float JVM (there’s a lot of floating-point code in PixelController), I was prepared to give it a shot.

I had grand plans to roll my own effects engine, but time was against me. PixelController had most of what I needed; flexible effects generation/preset control, headless ‘console’ mode running on the Pi and OSC remote control.

PixelController doesn’t talk OPC out-of-the-box, nor can it talk natively to the Fadecandy via libusb, so I wrote a patch to get it talking to the Fadecandy OPC server over TCP. At the time of writing the patches hadn’t been merged, but it’s here on Github if you want to try it. (Edit: it was merged.) There’s no multi-panel support or gamma/colour-correction offload support, but it’s good enough for this scenario.

TouchOSC Android

TouchOSC was used to remote-control PixelController; PixelController even contains a TouchOSC layout file. I tweaked the layout a bit to my own preference, but this part was a lot easier than expected!

TouchOSC/PixelController find each other using mDNS; no manual config was required.

LED Array Construction

The Fadecandy has 8 outputs, each of which can drive 64 LEDs.

I received 4 strips of 144 LEDs. They look like this with the silicone shielding removed.

A bit of simple maths, cutting and soldering transforms these into 8 strips of 64 LEDs, with 8 strips of 8 left over; those will find a home on another project.

Power Considerations

Each WS2812 LED supposedly draws 60mA @ 5V at full brightness, but I don’t trust this given they’re Alibaba specials. Firing up some test code and a current meter, I got these values:

Nominal Value:

100% on 64 LEDs -> (60/1000) * 64 = 3.84A @5V

Tested Values:

50% on 64 LEDS -> 1.7A @ 5V

100% on 64 LEDS -> 3.3A @ 5V

They’re pulling a bit less current than expected, and the power curve appears fairly linear.

Scaling this up:

512 LEDS (8*64) -> 26.4A @ 5V (132W) @ 100% brightness

Wow, that’s pulling quite a lot of power, and this thing is going to be running off batteries.

Fortunately, a 12V lead-acid battery will be used with a DC-DC step-down converter, for which we get the more conservative figure of 11A @ 12V. A lead-acid battery was chosen for its high output-current handling capability; it was not chosen for its low weight! A few Li-Po packs were investigated, but getting high output current at a sensible price wasn’t proving fruitful.
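Pulling the arithmetic above together in one place (the numbers come from my bench measurements, so treat it as a ballpark):

# measured: one 64-LED string at 100% draws ~3.3A at 5V
strings = 8
amps_5v = 3.3 * strings   # 26.4A at 5V, worst case
watts = amps_5v * 5       # 132W
amps_12v = watts / 12     # 11A drawn from the 12V battery
print(amps_5v, watts, amps_12v)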

We also need to take into account that this peak load will very rarely be reached. In practice the LEDs are extremely bright, and running them at full power was unnecessary most of the time. Unfortunately this meant losing dynamic range in the LED output signal by scaling it to 50% or 75% most of the time.

The step-down converter I used (not the 100W one that I ordered and didn’t receive) was rated to 25W RMS. Peak wasn’t quoted, but we can assume it’s 25/0.707 ≈ 35W. As I was running at 50% max brightness most of the time, and given non-white output, I didn’t expect to experience (nor did I experience) any problems running with this.

Wiring for power

The 18AWG power wire from the LEDs is rated to 16A (typically up to 300V). Two of these runs in parallel from the converter rate us to 32A @ 5V, which is more than plenty.

In the end I skimped on this part of the construction, given the under-speccing of the DC-DC converter.

Room for improvement?

Pi -> Beaglebone

Performance from the Pi was OK, but it felt underpowered; the effect frame rate was lower than I was hoping for. I expect moving to the Beaglebone, with its ARMv7 core with VFP + NEON support in addition to the higher clock rate, would yield better results. I’m sure further code optimisation and making use of the GPU in PixelController would help too.

Power

The 100W 24V-5V step-down converter didn’t arrive in time. Using it would have required two 12V batteries wired in series and a beefing-up of the 5V cabling.

Remote Display

The wearer can’t see what the damn thing is doing. Either get an extrovert friend to wear it while you press the buttons, or the remote app needs a remote view.

Actually, the whole thing would run just fine on a phone: Android supports USB OTG host mode, so a phone could drive the Fadecandy directly.