Cartography, GIS, QGIS

Alpha-Channel Hillshading in QGIS

TLDR:

Rather than using a white-black colour gradient map for a hillshade layer, use a black@100% alpha to black@0% alpha gradient map. The effect is similar to the multiply blend mode, and the response curve can be adjusted from a straight line to a nonlinear one to adjust contrast.

I’ve been trying to find a way to create a hillshading layer in QGIS that is visually pleasing but avoids using QGIS’ advanced layer compositing modes.

While the advanced layer compositing modes (multiply/burn/dodge etc.) look great on screen or when outputting a raster, my workflow involves generating a PDF that retains a maximal amount of vector content. This keeps down file size, and retains the ability to do post-production in say, Illustrator with a view to producing high quality prints.

The effect we want is a decrease in the luminosity of the layers underneath the hillshade layer, emphasising the relief in the terrain while leaving flat areas unchanged. The hillshade layer is a single-channel, 8-bit raster. The default behaviour in QGIS is to linearly map it to a white-black gradient (equal RGB values), so flat areas are white (luminosity=R=G=B=255) and very hilly areas can approach black (luminosity=R=G=B=0).

The hillshade layer itself is a raster, which has been generated from a DEM using gdaldem:

gdaldem hillshade dem/vmelev_dtm20m/dtm20m/prj.adf dem/dtm20m_hillshade.tif -compute_edges -combined

The combined mode is ‘a combination of slope and oblique shading’. It has the benefit of producing minimal values for flat areas.

Screen Shot 2017-06-16 at 15.20.10
Hillshade layer produced with gdaldem

In isolation this looks fine as a canvas layer, but we get into trouble when compositing multiple layers. We need to composite it with the layers underneath, which means introducing some element of transparency.

The obvious solution is to make the hillshade partially transparent, as was done in the middle of the 3 images below. The problem, as you can see, is that the result has become desaturated. Why? Because in the ‘normal’ alpha-channel blending mode the result of composition is the hillshade layer’s colour multiplied by its opacity, added to the colour of the layer below. As the flat areas are white, the visual effect is to lighten and desaturate the lower layers.

Screen Shot 2017-06-16 at 15.29.03

There are a couple of ways around this. First, we could use a different blending mode, such as ‘multiply’, with no transparency. This multiplies the luminosity of the layer with that of the layer underneath: white (1) = multiply by 1 = no change.

But this is no good here as QGIS doesn’t support the mixing of vector layers and advanced blending modes when writing a PDF. No dice.

How else can we get the same effect?

Create a gradient mapping for the hillshade layer that creates the same effect as layer multiplication blending

We change from a luminosity gradient to an opacity gradient: the RGB values remain constant and instead the alpha varies. The net mathematical effect is the same as a white-black gradient map with a multiply blend.

  • White -> 100% Transparent black
  • Black ->  0% Transparent black
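The equivalence can be checked numerically. In ‘normal’ compositing the result is src × a + dst × (1 − a); with a black source (src = 0) this reduces to dst × (1 − a), which matches the multiply result dst × L whenever a = 1 − L. A quick sketch in plain Python, with values normalised to 0-1:

```python
def alpha_over(src, dst, a):
    # 'Normal' blending: source weighted by its alpha, destination by (1 - alpha)
    return src * a + dst * (1 - a)

def multiply(hill, dst):
    # Multiply blending: destination scaled by the hillshade luminosity
    return hill * dst

# Map hillshade luminosity L to the alpha of a black overlay: a = 1 - L,
# so white (L = 1) becomes fully transparent black
for L in (0.0, 0.25, 0.5, 1.0):
    for dst in (0.2, 0.6, 1.0):
        assert abs(alpha_over(0.0, dst, 1 - L) - multiply(L, dst)) < 1e-9
```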

The desired effect is visible in the last of the 3 images above. The opacity of the hillshade layer was also reduced to give a more subtle effect.

In the QGIS style for the hillshade layer, the settings to achieve this look like those below:

Screen Shot 2017-06-16 at 15.20.34
Alpha channel gradient map

We can also go one stage further, and use a customised response curve in the alpha channel to adjust the contrast/gamma of the hillshade. Below, some transparency is retained even for the darkest values to reduce the strength of the effect.

Screen Shot 2017-06-16 at 15.23.56
Default linear response curve
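To sketch what such a curve does (the function and parameter values here are my own illustration, not QGIS settings): a gamma term bends the luminosity-to-alpha response, and a ceiling on the alpha retains some transparency even for the darkest values:

```python
def alpha_curve(luminosity, gamma=0.7, max_alpha=0.8):
    """Map hillshade luminosity (0 = black, 1 = white) to the opacity of a
    black overlay. gamma bends the response (1.0 is linear); max_alpha keeps
    some transparency for the darkest values, softening the effect."""
    return min((1.0 - luminosity) ** gamma, max_alpha)

assert alpha_curve(1.0) == 0.0   # flat (white) areas stay untouched
assert alpha_curve(0.0) == 0.8   # the darkest values are capped
```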

Other vector layers can be stacked above the hillshade producing the desired combined effect.

Screen Shot 2017-06-16 at 15.16.55
Complete composite map with vector tree coverage layer, alpha-blended hillshade raster and additional vector layers
Uncategorized

API Access to NBNCo Rollout Data

NBNCo have an address search tool on their site to look up service information for an address. The API it uses actually exposes more data than is shown on the website. The API is not documented, but it’s fairly self-explanatory. Just be sure to add the correct referer header.

Unfortunately it probably won’t tell you anything you want to know, such as the availability of FTTP at your home address. Ahem.

Politics aside, I found that supplying only the lat and lng for an address is sufficient for a lookup, though I get some odd results; it’s not clear how and when they’re doing geocoding from the parameters you supply.

In curl-land:

curl -X GET \
  'http://www.nbnco.com.au/api/map/search.html?lat={decimal latitude}&lng={decimal longitude}&streetNumber={street number}&street={street}&postCode={postCode}&state={VIC/NSW/TAS/QLD/WA/NT/SA}' \
  -H 'referer: http://www.nbnco.com.au/connect-home-or-business/check-your-address.html'

If you use Postman (and you should) for API testing, I’ve made a collection that supports this.

You’ll get a JSON response that follows this template:

{
"serviceAvailableAddress": false,
"servingArea": {
"isDisconnectionDatePassed": true,
"techTypeMapLabel": "nbn™ Fibre to the premises (FTTP)",
"techTypeDescription": "An nbn™ Fibre to the premises connection (FTTP) is used in circumstances where an optic fibre line will be run from the nearest available fibre node, to your premises.",
"rfsMessage": "",
"csaId": "CSA300000010862",
"addressStatus": "0",
"serviceType": "fibre",
"id": "fibre:3BRU-A0106",
"serviceStatus": "available",
"disconnectionDate": "02/01/2015",
"description": "XDA",
"serviceCategory": "brownfields",
"techTypeLabel": "Fibre to the premises (FTTP)"
},
"fsams": []
}
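For scripted lookups the response is easy to pick apart. A minimal Python sketch, parsing a trimmed copy of the sample response offline (the summarise helper and the field subset are my own; field names come from the template above):

```python
import json

# Trimmed copy of the response template shown above
sample = '''{
  "serviceAvailableAddress": false,
  "servingArea": {
    "techTypeLabel": "Fibre to the premises (FTTP)",
    "serviceType": "fibre",
    "serviceStatus": "available",
    "csaId": "CSA300000010862"
  },
  "fsams": []
}'''

def summarise(payload):
    # Pull the fields most people care about out of the servingArea block
    area = json.loads(payload).get("servingArea", {})
    return area.get("techTypeLabel"), area.get("serviceStatus")

assert summarise(sample) == ("Fibre to the premises (FTTP)", "available")
```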

GIS, Open-Source

Corner UTM Grid Labels in QGIS Print Composer

In this post I’ll demonstrate generating UTM grid labels in the QGIS print composer with formatting that promotes the relevant information in a consistent way. Printing the full 6/7-figure reference at each grid interval is unnecessary and frequently unhelpful, as we most often need to read off or locate a 3-figure reference.

Abbreviated UTM Grid References?

When reading a UTM grid reference on a typical 1:25000-50000 scale topographic map featuring a 1km grid interval, a 6-figure grid reference (3 figures for Eastings and 3 for Northings) describes a location to 100m precision and is typically sufficient for locating a position on the map.

A full UTM grid reference describes a position to 1m precision. Therefore, we typically highlight the significant figures on the map for ease of using a 6 figure reference with the map.

Of course we might still want to know the full reference, so there should be some full grid references given. For consistency, these are often placed in the corners of the map. To find the full reference for a shortened reference, go to the left/bottom of the map and count up.

Here’s an example from a local state-issued topographic map:

Screen Shot 2017-05-10 at 12.02.15

QGIS print composer has the ability to draw and label a UTM grid. However, its styling options are somewhat limited out of the box, restricted to a full reference at each interval.

Rendering sub/superscript labels

Most unicode-compatible fonts contain glyphs for superscript and subscript representation of ordinals:

0123456789 ⁰¹²³⁴⁵⁶⁷⁸⁹ ₀₁₂₃₄₅₆₇₈₉

QGIS does not handle superscript/subscript formatting internally, but we can achieve the same effect by transposing the numbers with their unicode sub/super-script equivalent characters. Note that this technique is limited to the common Arabic numerals 0-9.

In the Python code below the function UTMFullLabel performs two operations:

  • Determine the non-significant figures of the reference to conditionally format. The position of those figures depends on whether the label is an Easting (6-digit reference) or a Northing (7-digit reference).
  • Transpose those figures to subscript

The function UTMMinorLabel merely returns the significant figures. There are two functions as two separate grids are defined in the print composer, and using two formatting functions avoids also handling grid interval logic in Python.


from qgis.utils import qgsfunction
from qgis.gui import *

@qgsfunction(args="auto", group='Custom')
def UTMMinorLabel(grid_ref, feature, parent):
    return "{:0.0f}".format(grid_ref)[-5:-3]

@qgsfunction(args="auto", group='Custom')
def UTMFullLabel(grid_ref, axis, feature, parent):
    gstring = "{:0.0f}".format(grid_ref)
    rstr = gstring[-3:]    # last 3 characters (metre-level figures)
    mstr = gstring[-5:-3]  # the two figures used for a 6-figure reference
    # Either 1 or 2 leading digits depending on whether there are 6 or 7 digits
    if len(gstring) == 6:
        lstr = gstring[0]    # first digit
    elif len(gstring) == 7:
        lstr = gstring[0:2]  # first 2 digits
    else:
        return str(len(gstring))
    return "{0}{1}{2}m{3}".format(sub_scr_num(lstr), mstr, sub_scr_num(rstr),
                                  'E' if axis == 'x' else 'N')

def sub_scr_num(inputText):
    """Converts any digits in the input text into their Unicode subscript
    equivalent. Expects a single string argument, returns a string."""
    subScr = (u'\u2080', u'\u2081', u'\u2082', u'\u2083', u'\u2084',
              u'\u2085', u'\u2086', u'\u2087', u'\u2088', u'\u2089')
    outputText = ''
    for char in inputText:
        charPos = ord(char) - 48
        if charPos < 0 or charPos > 9:
            outputText += char
        else:
            outputText += subScr[charPos]
    return outputText

For superscript formatting the transposition is:

supScr = (u'\u2070',u'\u00B9',u'\u00B2',u'\u00B3',u'\u2074',u'\u2075',u'\u2076',u'\u2077',u'\u2078',u'\u2079')
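The formatting logic can be exercised outside QGIS by dropping the @qgsfunction decorator. A simplified standalone equivalent (my own function names; references assumed to be 6 or 7 digits, so the leading part is everything before the last 5):

```python
SUBSCRIPT = dict(zip("0123456789", "₀₁₂₃₄₅₆₇₈₉"))

def sub_scr(text):
    # Transpose Arabic numerals to their Unicode subscript equivalents
    return "".join(SUBSCRIPT.get(c, c) for c in text)

def full_label(grid_ref, axis):
    g = "{:0.0f}".format(grid_ref)
    lead = g[:-5]   # most significant figure(s): 1 digit (Easting) or 2 (Northing)
    mid = g[-5:-3]  # the figures read off for a 6-figure reference
    tail = g[-3:]   # metre-level figures
    return "{0}{1}{2}m{3}".format(sub_scr(lead), mid, sub_scr(tail),
                                  'E' if axis == 'x' else 'N')

assert sub_scr("705") == "₇₀₅"
assert full_label(705000.0, 'x') == "₇05₀₀₀mE"
assert full_label(5812000.0, 'y') == "₅₈12₀₀₀mN"
```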

Now that we can generate long and short labels they need to be placed appropriately on the grid:

  • Full labels for the grid crossings closest to the corners of the map
  • Abbreviated labels for all other grid positions

This logic is handled in QGIS’ expression language. It could be handled in Python too by passing in the map extent. In English, the procedure is:

  • Get the extent (x/y minimum and maximum values) of the map.
  • Test which is closer to the map boundary
    • The axis minimum/maximum plus/minus the grid interval
    • The supplied grid index being labelled
    • Neither – they’re coincident (i.e. the first label is at the axis)

If the supplied value is closer to the map boundary, or the two are coincident, then this must be the first label and it should be rendered as a full reference.

CASE
  WHEN  @grid_axis = 'x' THEN
    CASE
      WHEN x_min(map_get(item_variables('main'),'map_extent')) + 1000 >= @grid_number THEN  UTMFullLabel( @grid_number, @grid_axis)
      WHEN x_max(map_get(item_variables('main'),'map_extent')) - 1000 <= @grid_number THEN  UTMFullLabel( @grid_number, @grid_axis)
      ELSE UTMMinorLabel(@grid_number)
    END
  WHEN @grid_axis = 'y' THEN
    CASE
      WHEN y_min(map_get(item_variables('main'),'map_extent')) + 1000 >= @grid_number THEN UTMFullLabel( @grid_number, @grid_axis)
      WHEN y_max(map_get(item_variables('main'),'map_extent')) - 1000 <= @grid_number THEN UTMFullLabel( @grid_number, @grid_axis)
      ELSE UTMMinorLabel(@grid_number)
    END
END

Finally to tie all of this together, in the Print Composer three grids are defined:

  • 1000m UTM grid for lines and labels
  • 1000m UTM grid for external tick marks
  • 1 arc-second interval secondary graticule

Two UTM grids are required as QGIS can either draw grid lines or external ticks. Both were desired in this example.

Screen Shot 2017-05-10 at 12.59.16
Define two 1000m-interval UTM grids and an arc second-interval Lat/Lon grid

Screen Shot 2017-05-10 at 13.00.36

For the grid rendering the labels, set the interval to 1000m, custom formatting as described above and only label latitudes on the left/right and longitudes on the top/bottom.

The formatting of the labels should look something like the screenshot above.

The only thing I haven’t covered here in the interest of clarity is the selective rotation of the Y-axis labels as seen in the state-issued topo map. This could be achieved by using an additional grid and setting the rotation value appropriately.

GIS, Open-Source

Generative Pseudo-Random Polygon Fill Patterns in QGIS

QGIS doesn’t support pseudo-random fill patterns out-of-the-box. However, using the Geometry Generator we can achieve the same effect.

Random fill pattern? Here’s a polygon with such a fill. These are useful if, say, we want to represent an area of intermittent water.

Screen Shot 2017-05-05 at 19.53.11
Clipped Grid with 100% randomness

A typical way to achieve this effect would be to generate a random fill texture and use a pattern fill. That is less computationally intensive to render; however, QGIS cannot render a texture fill as a vector even in vector output formats, meaning all pattern fills will be rasterised. If that means nothing to you, don’t worry.

The typical way to generate a randomised fill pattern is firstly to draw a bounding box around the feature of interest, create a grid of points that covers the feature, then only retain those points that also intersect the feature of interest. In effect, the feature geometry is used as a clipping mask for a uniform grid.

For the points that remain after clipping, we can optionally add an amount of randomness to the X,Y value of each grid intersection between zero and the size of a grid element. With no randomness, of course we see the grid pattern.

The QGIS geometry generator comes in useful here: it passes the geometry of a feature, in this case a polygon, to a PyQGIS script, which returns a multipoint geometry that is then symbolised.

The Python code is below:

from qgis.core import *
from qgis.gui import *
from qgis.utils import qgsfunction  # provides the @qgsfunction decorator
import math
import random

"""
Define a grid based on the interval and the bounding box of
the feature. Grid will minimally cover the feature and be centre aligned

Create a multi-point geometry at the grid intersections where
the grid is enclosed by the feature - i.e. apply a clipping mask

Random value determines amount of randomness in X/Y within its
grid square a particular feature is allowed to have
"""
@qgsfunction(args='auto', group='Custom')
def fillGrid(xInterval, yInterval, rand, feature, parent):
  box = feature.geometry().boundingBox()

  #Create a grid that minimally covers the boundary
  #using the supplied intervals and centre it
  countX = math.ceil(box.width() / xInterval)
  countY = math.ceil(box.height() / yInterval)

  #Align the grid
  gridX = countX * xInterval
  gridY = countY * yInterval
  dX= gridX - box.width()
  dY= gridY - box.height()
  xMin = box.xMinimum() - (dX/2)
  yMin = box.yMinimum() - (dY/2)

  points = []
  #+1 to draw a symbol on the n+1th grid element
  for xOff in range(countX+1):
    for yOff in range(countY+1):

      ptX = xMin + xOff*(xInterval) + rand * random.uniform(0, xInterval)
      ptY = yMin + yOff*(yInterval) + rand * random.uniform(0, yInterval)

      pt = QgsPoint(ptX,ptY)
      point = QgsGeometry.fromPoint(pt)
      if feature.geometry().contains(point):
        points.append(pt)

  return QgsGeometry.fromMultiPoint(points)
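The grid-alignment arithmetic in fillGrid can be sanity-checked without QGIS. A small sketch with made-up numbers (the helper name is mine):

```python
import math

def centred_grid_origin(x_min, width, interval):
    """Origin of a grid that minimally covers [x_min, x_min + width] at the
    given interval, centred on the extent, as fillGrid does per axis."""
    count = math.ceil(width / interval)   # cells needed to cover the extent
    overhang = count * interval - width   # excess grid width
    return x_min - overhang / 2.0, count  # split the excess evenly

# A 25-unit extent needs 3 cells of 10; the 5-unit overhang is split 2.5/2.5
origin, count = centred_grid_origin(100.0, 25.0, 10.0)
assert count == 3
assert origin == 97.5
```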

Finally, in the symbology options for the layer, select ‘Geometry Generator’ as the fill for the layer, and call the Python function with values for X/Y intervals and the proportion (0-1) of randomness to add.

Screen Shot 2017-05-05 at 19.40.31

Note that the multipoint geometry returned is independent of the zoom level and also uses the co-ordinate reference system of the source layer, not the display layer. The eagle-eyed may have noticed in the screenshots above that a transformation appears to have been applied to the grid.

Indeed it has. The source layer is in WGS-84, while the map has a UTM projection.

Tackling scale and CRS dependence are topics for a later post.

Electronics, LED, Open-Source

Building an LED Techno Hat

Pam Rocking The Finished Hat

Rainbow Serpent Festival has become my annual ritualistic ‘turn the phone off, dance till my legs hurt, catch up with old friends, meet new ones, reflect and re-focus’ 21st century pilgrimage. It’s also a great place to check out fantastic visual art, and even if you’re not part of the official program there’s plenty of opportunity to roll your own. As the photographer Bill Cunningham once said:

The best fashion show is definitely on the street. Always has been, and always will be

Apparently this is the era of wearable tech, and with addressable colour LEDs coming down in price and up in brightness, density and packaging, let’s make a sound-reactive LED techno hat. You can never have too much eye candy on the dance floor.

The job was threefold:

  1. Get the raspberry pi driving the LED array
  2. Battery power everything from a small backpack
  3. Make it remote-controllable, preferably from a phone or other WiFi device

Pictures during and after construction

Fadecandy powered up on the bench
WS2811/SMD5050 Strip Cutting
Electronics all packed up
Components on the bench
Labelled Components


Videos

 

Ingredients

  • One (awful) hat. This one cost me about AUD$15 on eBay and is as horrendously purple as it looks
  • Lots of epoxy adhesive. It’s the only stuff I could get to stick the LED silicone tubes.
  • Usual craft and electronics bench equipment

Electronics Components

Raspberry Pi

The Pi is easily available and cheap. I wanted to use the Beaglebone, but the order didn’t arrive in time. Once overclocked to 1GHz, performance ended up being adequate.

The Pi boots into Debian from an SD card. The TP-Link dongle is run as an access point using hostapd + udhcpd. There are plenty of guides on the web about getting this working. I went with the TP-Link because, unlike with other adaptors, the Debian hostapd binary is compatible; I gave up rebuilding the drivers/hostapd after running into a pile of kernel header issues.

All that remained was the installation of the GCC toolchain (build-essential) for building Fadecandy and the JVM for PixelController. I hijacked an existing init.d script to start both of these at boot.

Pi Audio Input

The Pi doesn’t have an analog audio input. The C-Media USB dongle is USB audio class-compliant and worked fine. I had to bastardise the one you can see in the picture, soldering a 100uF electrolytic capacitor across the input pins and borrowing +5V from the USB bus. This was due to the DC offset of the AGC output and its power requirements. The Adafruit product page for this device discusses this. I set the AGC to 40dB and left it there. Input gain in ALSA was set to ~75.

Software:

Fadecandy Open Pixel Control Server

This communicates with the Fadecandy over USB and exposes an Open Pixel Control socket to which we will send the LED data. It’s also capable of re-mapping the pixel array to compensate for snaked or other non-sequential wiring. This wasn’t needed, but I did take advantage of the colour and gamma correction for the array.
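For reference, the OPC wire format is tiny: a channel byte, a command byte (0 = set 8-bit pixel colours), a 16-bit big-endian payload length, then one RGB triple per pixel; fcserver listens on TCP port 7890 by default. A minimal client-side sketch (the function name is my own):

```python
import struct

def opc_set_pixels(pixels, channel=0):
    """Build an Open Pixel Control 'set pixel colours' message:
    channel, command 0, big-endian payload length, then RGB triples."""
    data = bytes(bytearray(c for rgb in pixels for c in rgb))
    return struct.pack(">BBH", channel, 0, len(data)) + data

# 64 red pixels for one Fadecandy output; sending this over a TCP socket
# to the fcserver port lights the strip
msg = opc_set_pixels([(255, 0, 0)] * 64)
assert len(msg) == 4 + 64 * 3
```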

I could have driven the LEDs directly from the Pi via the DMA hack, or added a DMX512 (RS-485) output and used an off-the-shelf driver board, but the Fadecandy does so much more. With temporal dithering, gamma correction and keyframe interpolation done in 16bpp colour, I’ve found it handles low brightness levels, fades and transitions far more subtly and effectively than bit-banging a 24-bit colour buffer straight into the LEDs.

PixelController LED matrix software

This Java application generates the eye candy. It has a few different visual generators, effects and mixers. It’s also cross-platform due to a Java codebase. Whether the Pi would be up to the task was questionable, but given that the Pi now sports official support from Oracle for the platform in the form of an ARM hard-float JVM (there’s a lot of floating point code in PixelController) I was prepared to give it a shot.

I had grand plans to roll my own effects engine, but time was against me. PixelController had most of what I needed; flexible effects generation/preset control, headless ‘console’ mode running on the Pi and OSC remote control.

PixelController doesn’t talk OPC out-of-the-box nor can it talk natively to the FadeCandy via libUSB, so I wrote a patch to get it to talk to the FadeCandy OPC server over TCP. At the time of writing the patches haven’t been merged, but it’s here on Github if you want to try it.  (Edit: It was merged). There’s no multi-panel support or gamma/colour-correction offload support, but it’s good enough for this scenario.

TouchOSC Android

TouchOSC was used to remote-control PixelController. PixelController even contains a layout file. I tweaked the layout a bit to my own preference, but this part was a lot easier than expected!

TouchOSC/PixelController find each other using mDNS; no manual config was required.

LED Array Construction

The Fadecandy has 8 outputs, each of which can drive 64 LEDs.

I received 4 strips of 144 LEDs. They look like this with the silicone shielding removed.

A bit of simple math, cutting and soldering transforms these into 8 strips of 64, with 8 strips of 8 left over. These will find a home in another project.

Power Considerations

Each WS2812 LED nominally draws 60mA @ 5V at full brightness, but I don’t trust this given they’re Alibaba specials. Firing up some test code and a current meter, I got these values:

Nominal Value:

100% on 64 LEDs -> (60/1000) * 64 = 3.84A @5V

Tested Values:

50% on 64 LEDs -> 1.7A @ 5V

100% on 64 LEDs -> 3.3A @ 5V

They’re pulling a bit less current than expected, and the power curve appears fairly linear.

Scaling this up:

512 LEDs (8×64) -> 26.4A @ 5V (132W) @ 100% brightness

Wow, that’s pulling quite a lot of power, and this thing is going to be running off batteries.

Fortunately, a 12V lead-acid battery will be used with a DC-DC step-down converter, for which we get the more conservative 11A @ 12V. A lead-acid battery was chosen for its high output current handling capability. It was not chosen for its low weight! A few Li-Po packs were investigated, but getting high output current at a sensible price wasn’t proving fruitful.

We also need to take into account that this peak load will very rarely be reached in practice. In practice the LEDs turned out to be extremely bright, and running them at full power was unnecessary most of the time. Unfortunately this meant losing dynamic range in the LED output signal by scaling to 50% or 75% most of the time.

The step-down converter I used (not the 100W one that I ordered and didn’t receive) was rated to 25W RMS. Peak wasn’t quoted, but we can assume it’s 25/0.707 ≈ 35W. As I was running at 50% max brightness most of the time, and given non-white output, I didn’t expect to experience (nor did I experience) any problems running with this.
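The power budget above can be tallied in a few lines, using the measured bench figures:

```python
# Measured: 3.3 A per 64-LED strip at 5 V and 100% brightness
amps_per_strip = 3.3
strips = 8

total_amps_5v = amps_per_strip * strips  # 26.4 A at 5 V
total_watts = total_amps_5v * 5          # 132 W worst case
amps_at_12v = total_watts / 12           # ~11 A drawn from the 12 V battery

assert abs(total_watts - 132.0) < 1e-6
assert abs(amps_at_12v - 11.0) < 1e-6
```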

Wiring for power

18AWG power wire from LEDs is rated to 16A (up to 300V typically)

Two of these run in parallel from the converter rate us to 32A @ 5V, which is plenty.

In the end I skimped on this part of the construction, given the under-speccing of the DC-DC converter.

Room for improvement?

Pi -> Beaglebone

Performance from the Pi was OK, but it felt underpowered; effect frame rate was lower than I was hoping for. I expect moving to the Beaglebone, with its ARMv7 core with VFP + NEON support in addition to the higher clock rate, would yield better results. I’m sure further code optimisation and making use of the GPU in PixelController would have helped too.

Power

The 100W 24V-5V step-down converter didn’t arrive in time. Using it would have required two 12V batteries wired in series and a beefing-up of the 5V cabling.

Remote Display

The wearer can’t see what the damn thing is doing. Either get an extrovert friend to wear it while you press the buttons, or the remote app needs a remote view.

Actually, the whole thing would run just fine on a phone. Android supports USB OTG host mode, so a phone could drive the Fadecandy directly.

Uncategorized

Redirecting stdio to the Android Debug Log from a static/shared library

This is quite a technical post, but this issue seemed to take up most of an afternoon, so perhaps my notes will be useful to someone else…
I have a static library (in this case an audio codec) that is being integrated into an Android OMX/Stagefright audio decoder. There are some issues and it is crashing on certain input streams. This library has some debug (write to stderr) functionality already, but I can’t see it on the Android host as it’s going to stdio. So… how to get this into the Android debug log?
The OMX wrapper is built with the Android AOSP toolchain, but the library isn’t Android specific, and due to the use of assembler in its source and the differences between ARM assembler and GNU assembler, the library needs to be built with RVCT (RVDS/DS-5) and not the Android NDK.
Although Android is using bionic rather than libc, I have managed to get away with using the ARM arm_linux stdc headers without problem, i.e.:
 LDINCLUDE = /usr/local/DS-5/include/
The output static library from RVDS is placed somewhere in the AOSP/NDK build tree, and the library is linked against during the build with a few additions to the module’s Android.mk:

LOCAL_PATH := $(call my-dir)
MYLOCAL_PATH := $(LOCAL_PATH)

include $(CLEAR_VARS)

# Tell the build system about the existing (pre-built) library.
# Note: The library name appears here and in LOCAL_STATIC_LIBRARIES below.
# Also, the path is relative to the path of this makefile
LOCAL_PREBUILT_LIBS += prebuilt/my_lib.a

include $(BUILD_MULTI_PREBUILT)

LOCAL_PATH := $(MYLOCAL_PATH)
include $(CLEAR_VARS)

LOCAL_SRC_FILES := myDecoder.cpp

LOCAL_C_INCLUDES := \
        frameworks/av/media/libstagefright/include \
        frameworks/av/include/media/stagefright \
        frameworks/native/include/media/openmax \
        $(LOCAL_PATH)/src \
        $(LOCAL_PATH)/include

LOCAL_CFLAGS := -DOSCL_UNUSED_ARG= -DOSCL_IMPORT_REF=

LOCAL_STATIC_LIBRARIES := my_lib

LOCAL_SHARED_LIBRARIES := libstagefright libstagefright_omx libstagefright_foundation libutils libcutils

LOCAL_MODULE := libstagefright_soft_mydec
LOCAL_MODULE_TAGS := optional

include $(BUILD_SHARED_LIBRARY)
I have only experienced one problem with doing this, which is that if building with the RVCT linux headers, calling fprintf(stderr, "msg") produces a linkage error against __aeabi_stderr when attempting to link the static library into the Android library:
/media/Android_Build/build/prebuilts/gcc/linux-x86/arm/arm-linux-androideabi-4.6/bin/../lib/gcc/arm-linux-androideabi/4.6.x-google/../../../../arm-linux-androideabi/bin/ld: out/target/product/toro/obj/STATIC_LIBRARIES/dec_lib_intermediates/my_lib.a(fmi_api.o): in function ddpi_fmi_checkframe:sub_dec/fmi_api.c(.text+0x1e8): error: undefined reference to '__aeabi_stderr'
This isn’t unexpected, as I’m building with the RVCT libc rather than bionic. To work around this, link against the NDK/AOSP c headers. In my case they’re at:
/home/rob/android_ndk/android-ndk-r7/platforms/android-14/arch-arm/usr/include
I still had one error at this point, as RVCT was looking for linux_rvct.h. Including this in the path solved the last build error, and the library now links during the AOSP build.
Let’s look at the library contents with nm. The library with the problem:
axdd.o:
00000000 t $a
000001dc t $a
00000178 t $d
         U __aeabi_stderr
00000000 T xdd_init
000001dc T xdd_seek
0000001c T xdd_unp
         U bso_init
         U bso_rewind
         U fprintf
And now:
axdd.o:
00000000 t $a
000001d8 t $a
00000174 t $d
         U __sF
00000000 T xdd_init
000001d8 T xdd_seek
0000001c T xdd_unp
         U bso_init
         U bso_rewind
         U fprintf
__aeabi_stderr is now __sF.
Bionic defines __sF:
#define stderr (&__sF[2])
All looking good!
The last job is to get dalvik to pipe stderr to the system log. This is covered at: http://developer.android.com/tools/debugging/debugging-log.html
In this case, /data/local.prop did not exist and I had to add the property to build.prop via adb:
echo "log.redirect-stdio=true" >> /system/build.prop
This persists after a reboot, so should have been picked up by dalvik when it started.
Outside the scope of this document, the decoder shared library was then pushed into the phone’s filesystem. However there still wasn’t anything in the log.
Searching around, it looks as though the stderr output isn’t being piped into the log, as redirect-stdio only refers to the Dalvik VM. What is needed is to include the Android native logging API and use the __android_log_write() method.
Rebuild the library and here we go… log data appears in the adb log:
01-02 01:25:30.734: E/mydec_subdec(129): FATAL ERROR:  auxdatal > frmsz
01-02 01:25:30.734: E/mydec_subdec(129): Error occurred in:
01-02 01:25:30.734: E/mydec_subdec(129): my_dec/xdd.c (line 88)

In this case a macro was being used:
#define         ERR_PRINTERRMSG(a)               __android_log_print(ANDROID_LOG_ERROR,"ddp_subdec","\n\nFATAL ERROR:  %s\n\nError occurred in:\n%s (line %d)\n\n", (a),__FILE__,__LINE__)
In the specific case of this library, some aux data is out of bounds and the assert on it was crashing, in case you were interested:
01-02 01:55:08.328: V/subdec_axdd(2841): auxdatal=4868
01-02 01:55:08.804: V/subdec_axdd(2841): auxdatal=6870
01-02 01:55:10.023: V/subdec_axdd(2841): auxdatal=6951
01-02 01:55:10.453: V/subdec_axdd(2841): auxdatal=8404
01-02 01:55:12.398: V/subdec_axdd(2841): auxdatal=2260
01-02 01:55:13.453: V/subdec_axdd(2841): auxdatal=3476
01-02 01:55:13.757: V/subdec_axdd(2841): auxdatal=6356
01-02 01:55:14.171: V/subdec_axdd(2841): auxdatal=11460
01-02 01:55:15.757: V/subdec_axdd(2841): auxdatal=6916
01-02 01:55:16.320: V/subdec_axdd(2841): auxdatal=9332
01-02 01:55:16.398: V/subdec_axdd(2841): auxdatal=10950
01-02 01:55:16.421: V/subdec_axdd(2841): auxdatal=8532
01-02 01:55:16.632: V/subdec_axdd(2841): auxdatal=11476
01-02 01:55:16.781: V/subdec_axdd(2841): auxdatal=13015
01-02 01:55:17.500: V/subdec_axdd(2841): auxdatal=6870
01-02 01:55:17.929: V/subdec_axdd(2841): auxdatal=3940
The only real downside to this method is that the source of the static library is needed. If you have the source, then this is a useful tool for debugging the library on an Android host. If you don’t, this won’t work, and a more generic way of intercepting stdio output would be required in the operating system itself.
Uncategorized

NSW Topo Maps on your iOS/Android device

In this post I looked at accessing raster topographic data through LPI’s ArcGIS server, and constructing an offline atlas that could be used on a mobile device. The next logical step was to see if any of the mapping applications for Android/iOS were capable of viewing live data from the server. I looked at a couple of applications that advertise support for the necessary feature, specifically compatibility with the TMS API, which the ArcGIS MapServer handles correctly, one must assume at least semi-intentionally.

First up, the ArcGIS application itself on iOS (free). This works just fine, and the map is accessed by importing the required map from the LPI server into ArcGIS Online. This can be done by passing the URL of the map in question to ArcGIS.com, then ‘saving’ the map to an ArcGIS Online account. After logging into the same account in the ArcGIS app, the saved map can be accessed and viewed. Presumably the ArcGIS REST API is being used behind the scenes. The issue that rules out this app for remote-area use is the lack of offline caching. To be fair, that’s not what this app is intended for.

Second up, also in iOS, Galileo. This app supports more-or-less the same schema XML as MOBAC. I imported the customMapSource I used in MOBAC without any problem, and caching works as expected. I haven’t tested it for navigation beyond basic GPS positioning.

The next app I looked at was AlpineQuest, on Android. This also supports TMS custom online maps, but uses its own XML schema. Generating this was fairly painless. I’ve created a suitable map source that includes the old/new NSW topo maps and the ‘web’ topographic map. You can download it here. Caching works just fine. I personally found this the most usable of all the mobile apps, but YMMV.

I discussed this with a friend whilst out canyoning at the weekend and decided to paste in the same map area from various map sources. Several people have described the trails marked on local topo maps as being accurate in the same way that CityRail timetables aren’t. The ‘official’ NSW maps, Google, and Openstreetmap sources are shown here. Make your own mind up….

The last two images are the result of an experiment to add hill-shading to the NSW topo maps. AlpineQuest allows for multiple layers with transparency. I couldn’t get the OSM hillshade layers to render, but the Google Terrain overlay adds some depth to the image, though the artefacts from extra data in the terrain layer were, I thought, off-putting.

LPI Topo Map
NSW Topo Series 2
Hike Bike Map (OSM?) 
Google Maps
NSW Topo Series 1
Cycle/Hike Map
Google Terrain
NSW Topo + Google Terrain
NSW Topo + (30%) Google Terrain