Detecting the new Amazon Dash buttons

There’s been discussion[1][2][3] about how the new Dash Buttons (model #JK291P) have been difficult to detect on the network. In theory, the buttons (including the older model #JK76PL) broadcast ARP probes to the local network each time the button is pressed. The issue is that the published filters from FamiLAB, this Medium post, this repo, and several others all filter for ARP packets, and then also for a source IP of (the all-zero source that ARP probes use), the idea being that you get exactly one of these packets per button press. With the new buttons, however, you don’t always get one of these packets! On my network, after the first button press, the button broadcasts a DHCP request first, and the subsequent ARP broadcast has a source corresponding to the IP address the button just acquired. Rather than trying to be clever about what the buttons will send and narrowing down to a single packet per press, we can simply filter by time. After all, it takes at least 5 seconds to press a button again, and all the packets from a single press arrive in far less than 5 seconds. So we can simply ignore any further packets from the same button received within 5 seconds.
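Stripped of the scapy and notification plumbing, the time-based dedup amounts to the following (a stdlib-only sketch; should_fire is a name I made up for illustration):

```python
import time

MIN_INTERVAL = 5  # seconds; packets from one press all arrive well within this window

last_seen = {}

def should_fire(mac, now=None):
    """Return True only for the first packet from `mac` in any MIN_INTERVAL window."""
    if now is None:
        now = time.time()
    last = last_seen.get(mac)
    if last is not None and now - last < MIN_INTERVAL:
        return False  # another packet from the same press; ignore it
    last_seen[mac] = now
    return True
```

Note that the timestamp is only updated when we fire, so a long stream of chatter from one press can't keep pushing the window forward.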

I’ve also modified the script to filter the sniffing by MAC address, which is slightly more efficient than filtering by ARP packets, especially if there are a lot of devices on your network.

It turns out that there’s a nice API for WhatsApp. I took the easy way out and simply call the yowsup demo app (yowsup-cli) from the script to send messages.

# derived from FamiLAB
import logging # for the following line
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) # suppress IPV6 warning on startup
from scapy.all import * # for sniffing for the ARP packets
import requests # for posting to the IFTTT Maker Channel
import time
import subprocess # for calling yowsup-cli to send WhatsApp messages
print('Init done.')

lasttime = {}
buttons = { 'goldfish': '11:22:33:44:55:66',
            'ziploc': 'aa:bb:cc:dd:ee:ff'}

def arp_display(pkt):
    wa_login = '15555552121'
    wa_password = '[somepassword]'
    wa_dest = '15555551212'
    wa_msg = '"Button was pressed!"'
    cmd = 'yowsup-cli demos -l '+wa_login+':'+wa_password+' -s '+wa_dest+' '+wa_msg
    if not pkt.haslayer(Ether):
        return
    print pkt[Ether].src, pkt.summary()

    # ignore additional packets received within min_interval seconds
    mac = pkt[Ether].src
    min_interval = 5    # seconds
    global lasttime
    if mac not in lasttime:
        interval = min_interval + 1     # we haven't seen this button before; fire
    else:
        interval = time.time() - lasttime[mac]

    if interval >= min_interval:
        lasttime[mac] = time.time()
        if mac == buttons['goldfish']:
            # post to the IFTTT Maker Channel
            # (the URL was elided in the original; the Maker Channel
            # format is{event}/with/key/{key})
            requests.post('[yourkey]')
        elif mac == buttons['ziploc']:
            # send a WhatsApp message
            subprocess.Popen(cmd, shell=True)
        else:
            print 'We got a stray packet from an unknown device. Probably just Skynet.'

f = " or ".join(["ether host " + buttons[button] for button in buttons])
print f
print(sniff(iface="en1", prn=arp_display, filter=f, store=0))
Posted in Pub | Leave a comment

Scapy on Mac OS X 10.11 and beyond

Scapy is a Python library that does low-level packet manipulation. It’s also useful in playing with Amazon Dash Buttons. There are plenty of older instructions on getting it running on Mac OS X, but many of these don’t work on Mac OS X 10.11/10.12 where System Integrity Protection (SIP) is enabled by default. Here’s a script to do it for you:


echo "We need root permissions to install libdnet"
sudo echo "OK"
curl -O ""                              # URL elided in the original
python --user
$PIP_BIN install scapy --user           # $PIP_BIN: path to the user-installed pip
$PIP_BIN install pypcap --user
curl -L -o libdnet.tar.gz               # libdnet tarball URL elided; output name is a placeholder
tar xzf libdnet.tar.gz
cd libdnet-libdnet-1.12
./configure
sudo make install
cd python
python install --user

A few notes:

  • You will need the Command Line Tools package to compile libdnet. If you don’t have them installed already, you should be automagically prompted by the OS.
  • We use the --user flag to install locally in our home directory, avoiding problems with SIP (which protects system files, including parts of the shared Python directories).
  • We use pypcap rather than pcapy, because the latter does not work
  • The latest version of libdnet is 1.12 (contrary to the SourceForge page, which indicates that the latest is 1.11), and can be found on GitHub
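The --user installs land in the per-user site-packages directory, which lives under your home directory and is therefore outside SIP's protected paths. You can confirm where that is with a quick stdlib-only check:

```python
import os
import site

# pip --user installs packages into the per-user site-packages directory,
# which is under your home directory and not protected by SIP
user_site = site.getusersitepackages()
print(user_site)
print(user_site.startswith(os.path.expanduser("~")))
```

On Mac OS X this is typically somewhere under ~/Library/Python.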

Getting a Myanmar visa in Chiang Mai

This information is current as of April 2016.

Getting a tourist visa to Myanmar in Chiang Mai is quick and straightforward. There is a conveniently located consulate with very friendly staff, and no appointment is required. Here’s what you’ll need:

  • Your passport
  • A photocopy of your passport photo page
  • Two passport photos
  • 800 THB cash (or 1600 THB for rush service – I’m not sure how fast this is)
  • Travel dates
  • Address in Myanmar
  • Your previous employment information

If you really want to print your own form and fill it out ahead of time, you can find one on the embassy’s website. Otherwise, you can get the forms and fill them out at the Chiang Mai consulate; they even provide pens and glue sticks to attach a photo to your application. After submitting your documents, they will review them (~10 minutes), ask you to make any necessary changes, and then you will pay the 800 THB application fee (cash only). The consulate will then give you a receipt and let you know when to return for your passport.

  • Normal service time is two days (e.g. drop off Tuesday, pick up Thursday).
  • Visa application hours are 09:00-12:00 Monday – Friday. Pick up hours are 15:30-16:30 (normally two days later).
  • Although there is a sign at the entrance to the embassy saying something about needing bus/air tickets and a hotel booking, this is not requested when submitting the application.
  • Despite Google Maps indicating that the consulate is closed, it’s really open (since July 2015).
  • It is less expensive to obtain a visa ahead of time rather than getting a visa on arrival (800 THB ≈ $22.75 vs. $50 for eVisa on Arrival)

Astro Teller on moonshot thinking outside Google[x]

“I believe everyone here is a moonshot thinker, in your hearts[…] I believe that most of us are not in a context where we can be as open minded, as honest, as dispassionate when appropriate, as authentic as we want to be, as our natural selves would be. And I think you need to ask your context for that opportunity. And if it’s really not going to give it to you, be humble, try a few times, and if it doesn’t work, go find a new context. You all deserve to be able to let the best part of yourselves out.”


A Chromebook… for $199

I picked up an Acer C720 Chromebook a few days ago to see what all the rage was about. After all, I spend most of my time in the browser already, where I create, read, update, and delete things. Do I really need all the excess of a full-fledged operating system? Can I get by with a $199 laptop that is cheap enough and light enough that I can take it traveling everywhere? I won’t go into details on the hardware – there are plenty of reviews already – but I did want to lay down some thoughts on the experience. I used Code Jam as a litmus test for Chromebooks as a simple development machine, and this is how it turned out:

  • This thing is fast. When I hear “Celeron”, I think of dog-slow machines made by Compaq or eMachines, sitting next to CRTs, that will probably break in 3 months. This Celeron, however, is Haswell microarchitecture, and with the minimal requirements of Chrome OS, the machine feels fast. It also helps that Chromebooks come with flash storage. This means excellent boot times (far faster than any Mac/Windows machine) and general snappiness.
  • This thing is light. It only has an 11″ screen, but compared to my unibody Macbook Pro 15″, it’s a much better travel companion. And the battery really does last all day (it’s rated for 8.5 hours, and that’s pretty much what I get).
  • The screen sucks. The TN panel has terrible viewing angles. The HP Chromebook 11 is the only budget Chromebook with an IPS display right now, but its processor is roughly half as fast.
  • Some things that are trivial on conventional OSes are really hard:
    • Try remapping the caps lock key to control on an external keyboard. In Chrome OS, you can easily do that with the internal keyboard, but not for external keyboards. It took me about two hours of reading through bug reports to find a hack.
    • Unzipping files – Chrome OS has built-in functionality that mounts zip files as ejectable drives – but it doesn’t work with all zip formats. For example, Code Jam solutions. The workaround for this was downloading and unzipping on my server and using a remote text editor to read the file.
    • Transferring files to/from remote servers – you can’t use TeamViewer (there’s no app), you can’t use SFTP (there’s an app but it costs money). In Code Jam, my workaround for the input/output files was copy and paste. Less than ideal.
  • Some issues are unique to the architecture of Chrome OS:
    • Flaky internet makes Chrome OS suck – you’re in some purgatory between good internet and no internet at all, which means that many apps are spinning around trying to connect rather than going into their (reduced functionality) offline modes.
    • The NaCl SSH client is sometimes insufficient. For example, Koding requires an SSH proxy server, and Chrome OS can’t do that. The workaround is to use Koding’s web terminal interface.
  • And some issues are just because Chrome OS is younger and not as polished:
    • I had frequent bluetooth disconnects for my headset and mouse. For my headset, the only solution was often to reboot (resetting the bluetooth power didn’t help).
    • While you can set “natural scrolling” for the trackpad, you can’t for an external mouse.
    • Scrolling with a bluetooth mouse was often quite jumpy, moving pages at a time, even at the lowest sensitivity settings.
  • But it does work. My Code Jam “stack” consisted of a Koding VM and a Zed text editor client on Chrome set up to work with remote files on the VM. Pretty simple, and it worked pretty well (aside from the occasional Zed save hiccups that didn’t push the last character I typed).

Ultimately, this Chromebook is not something that I’d want as a development replacement machine. Chrome OS is just a wee bit too limiting. I sorely miss some of the niceties I have on Mac OS X – TotalTerminal, Alfred, Transmit, and Adium to name a few. Maybe in a few months we’ll see Haswell+ Chromebooks with an IPS display, and I’ll have a nice travel companion (assuming where I travel to has good internet).


Compiling rippled on Ubuntu

There are instructions for compiling rippled on Ubuntu, but they’re not entirely clear. Sometimes you just want a script to do everything for you. Here’s something I cobbled together for Ubuntu 12.04 (you may want to modify the tag for whatever the current release of rippled is):

set -e
sudo apt-get update; sudo apt-get -y upgrade
sudo apt-get install -y git scons ctags pkg-config protobuf-compiler libprotobuf-dev libssl-dev python-software-properties
sudo apt-add-repository -y ppa:boost-latest/ppa
sudo apt-get update
sudo apt-get install -y libboost1.55-all-dev
git clone
cd rippled
git checkout tags/0.23.0
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-4.8 g++-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.6
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
sudo update-alternatives --set gcc /usr/bin/gcc-4.8  # pin gcc-4.8 (rippled needs its C++11 support)
scons  # build rippled

Alternatively, you can just pull down the binary I compiled and install the associated dependencies:

set -e
sudo apt-get update; sudo apt-get -y upgrade
sudo apt-get install -y protobuf-compiler libprotobuf7 python-software-properties
sudo apt-add-repository -y ppa:boost-latest/ppa
sudo apt-get update
sudo apt-get install -y libboost1.55-all
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-4.8 g++-4.8
curl -O ""
chmod +x rippled
./rippled -u

Someone also submitted a package to Debian, but that will take a while to trickle downstream.


Seagate 2TB Backup Plus Portable Drive disassembly

Seagate has developed a new 2.5″ laptop drive with a 2 TB capacity. Previous 2 TB laptop drives were 15 mm thick, making them unsuitable for placement in most laptops; in contrast, the new Seagate/Samsung Spinpoint M9T ST2000LM003 (HN-M201RAD/AVN) is only 9.5 mm thick. This drive is not yet being sold through normal consumer sales channels, but it is being sold as part of the Seagate Backup Plus line, inside a nice little USB 3.0 enclosure. These Backup Plus drives are also much cheaper (about 50% less) than the bare drives on eBay (step 3: profit?). Note, however, that you get no warranty (Seagate warranty validation: “The product you identified was sold as a system component”). The only tricky part is actually getting the drive out of the enclosure, especially if you’re trying not to damage the case.

The top of the case is made of stamped metal, and is held on by a combination of double sided tape and small clips.

Top of Seagate Backup Plus case

Because there is more tape near the USB plug, it is easiest to begin prying the case off from the other end. Start by jamming something thin under the edge of the top of the case (see photo below). To avoid damage, you’ll probably want to use a nylon spudger (screwdrivers work fine, but they will probably cause some cosmetic damage to the case). You can slide the spudger around the edge of the case, releasing the two clips at the end.

Spudger in Seagate Backup Plus Portable Drive

I ended up jamming the spudger further into the top of the case (on top of the hard drive) and prying gently, which also released two of the side clips (being careful not to deform the top too much).

Seagate Backup Plus - clips released

After that, you just have to work the spudger around releasing the rest of the clips, and the top will come off. The drive itself is held into the bottom part of the case by two plastic pegs that protrude slightly from the case into the screw holes of the drive on the side away from the USB plug. They can be easily released with a little gentle prying.

Seagate Backup Plus - bottom case

Success! A 2TB internal laptop drive at last!


Backporting packages – bandwidthd on Precise

bandwidthd is a great tool to graph traffic on your network interfaces, especially when you’re acting as an ISP for the better part of the city. Unfortunately, bandwidthd on Ubuntu Precise Pangolin (12.04) stopped getting updated at 2.0.1+cvs20090917-4.1. This package contains a pretty serious bug that causes bandwidthd to lose all data every 6 hours or so. Fortunately, there’s a more recent package (2.0.1+cvs20090917-5) with the fix committed, but it’s only available for Ubuntu 12.10+. (NB: 2.0.1+cvs20090917-7 has a dependency on a higher version of Apache than 12.04 has, so we’re ignoring it.) Solution: backport! (Here’s a dpkg that you can just dpkg -i if you don’t want to do it yourself.)

It’s actually not very difficult to do a backport, but it’s not trivial to find the correct instructions.

  • Grab all sources from LaunchPad: .orig.tar.gz, .debian.tar.gz, .dsc
  • dpkg-source -x *.dsc
    cd bandwidthd-2.0.1+cvs20090917
    dpkg-buildpackage -rfakeroot -uc -b
    cd ..
    dpkg -i bandwidthd_2.0.1+cvs20090917-5_amd64.deb
  • Done!

You will probably also need to install build dependencies, but just apt-get whatever you need.

This could potentially all be automated with backportpackage, but I haven’t investigated yet.


Getting Kali Linux on a Cubieboard

Here’s an abbreviated tutorial for getting Kali Linux (1.0ish) running on a Cubieboard. (In case you’re wondering, Kali is the successor to BackTrack.) I use Berryboot as the bootloader, which allows us to multiboot and use compressed file systems (>50% compression, saving ~2 GB for a full install of Kali), and also makes it easy to swap around and play with different operating systems.

Installing the quick and dirty way

Do step 1 below.

Download one of these:

Now skip to step 10 below.

Installing the dirtier way

What you’ll need: a USB flash drive (~4 GB), Debian/Ubuntu Linux (e.g. Kali on VMware works great).

  1. Begin by installing Berryboot onto a microSD card. The easiest way is to empty the microSD slot, boot into the built-in Android system on the Cubieboard NAND, download the Berryboot APK, and follow the instructions.
  2. Install some tools on your Linux box:
    apt-get -y install squashfs-tools kpartx
  3. Now we need to build a squashfs image of the rootfs that Berryboot can boot from. We’ll do this the easy way: by modifying the Kali Raspberry Pi image. Download the image onto your Linux box (you do have a Linux box, don’t you?):
    curl -O ""
  4. Unzip the Kali Raspberry Pi image:
    gunzip kali-linux-1.0-armel-raspberrypi.img.gz
  5. Mount the image:
    kpartx -av kali-linux-1.0-armel-raspberrypi.img
    mount /dev/mapper/loop0p2 /mnt
  6. Comment out the /dev/mmcblk mount entries in the image’s fstab (Berryboot handles mounting itself):
    sed -i 's/^\/dev\/mmcblk/#&/g' /mnt/etc/fstab
  7. We need to configure the kernel modules that will be loaded. Note that Berryboot ships its own kernel modules, so we don’t need the actual blobs themselves. Create or overwrite these files (these are borrowed from the Cubieboard hwpack):

    # /etc/modules: kernel modules to load at boot time.
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.
    #For SATA Support
    #Display and GPU


    # /etc/modprobe.d/8192cu.conf (the second file; the path is my assumption)
    # Workaround for dropping connections because of power save
    options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
  8. Now we need to package up the rootfs we customized into a squashfs image:
    mksquashfs /mnt ~/kali-on-cubie.img -comp lzo -e lib/modules
  9. Clean up:
    umount /mnt
    kpartx -d kali-linux-1.0-armel-raspberrypi.img
  10. Now we need to get the image onto the microSD card so that Berryboot can use it. Copy your new image (kali-on-cubie.img) to a USB flash drive.
  11. Boot your Cubieboard with Berryboot (i.e. the microSD card inserted). Click “Edit Menu”. Click (and hold) “Add OS”, and you’ll get a dropdown menu; select “Install from USB stick”, and choose the kali-on-cubie.img we just created.
  12. Close the menu editor. Boot Kali.
  13. Eat cake to celebrate.
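As an aside, the sed one-liner in step 6 just comments out the image's /dev/mmcblk mount entries. Here's the same edit expressed in Python (a sketch operating on the fstab text; comment_out_mmcblk is a name of my own invention):

```python
import re

def comment_out_mmcblk(fstab_text):
    # Berryboot mounts the filesystems itself, so the image's own
    # /dev/mmcblk entries in /etc/fstab must be disabled
    return re.sub(r'(?m)^(/dev/mmcblk)', r'#\1', fstab_text)

fstab = "/dev/mmcblk0p2 / ext4 defaults 0 1\nproc /proc proc defaults 0 0\n"
print(comment_out_mmcblk(fstab))
```

Only lines that start with /dev/mmcblk are touched; everything else in fstab passes through unchanged.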

Installing the unicycle caving way

Instead of using the prebuilt ARM image for Raspberry Pi, we’ll build the rootfs “from scratch” (kind of). This lets us use the armhf architecture, which Google tells me is better than armel (Cubieboard has a more modern processor that supports the faster armhf instructions, whereas Raspberry Pi only supports armel). Also, we can customize the image, leaving out packages which we might not need in a memory/storage constrained environment.

Basically, all we need to do is to build the rootfs following the instructions that are conveniently laid out in the Kali docs. For posterity, I preserve the scripts below.

After building the rootfs, we resume at step 6 above, of course substituting the actual path of our new rootfs for /mnt (which is the path of the rootfs that would have been mounted from the prebuilt image).

apt-get install debootstrap qemu-user-static

# define which packages you want here. If you want everything, add "kali-linux-full". See also this kali.list.chroot for ideas.
export packages="xfce4 kali-menu wpasupplicant kali-defaults initramfs-tools uboot-mkimage nmap openssh-server"
export architecture="armhf"

cd ~
mkdir -p arm-stuff
cd arm-stuff/
mkdir -p kernel
mkdir -p rootfs
cd rootfs

debootstrap --foreign --arch $architecture kali kali-$architecture
cp /usr/bin/qemu-arm-static kali-$architecture/usr/bin/

cd ~/arm-stuff/rootfs
LANG=C chroot kali-$architecture /debootstrap/debootstrap --second-stage

cat << EOF > kali-$architecture/etc/apt/sources.list
deb kali main contrib non-free
deb kali/updates main contrib non-free
EOF

echo "kali" > kali-$architecture/etc/hostname

cat << EOF > kali-$architecture/etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF

cat << EOF > kali-$architecture/etc/resolv.conf
EOF

export MALLOC_CHECK_=0 # workaround for LP: #520465
export LC_ALL=C
export DEBIAN_FRONTEND=noninteractive

mount -t proc proc kali-$architecture/proc
mount -o bind /dev/ kali-$architecture/dev/
mount -o bind /dev/pts kali-$architecture/dev/pts

cat << EOF > kali-$architecture/debconf.set
console-common console-data/keymap/policy select Select keymap from full list
console-common console-data/keymap/full select en-latin1-nodeadkeys
EOF

cat << EOF > kali-$architecture/third-stage
dpkg-divert --add --local --divert /usr/sbin/invoke-rc.d.chroot --rename /usr/sbin/invoke-rc.d
cp /bin/true /usr/sbin/invoke-rc.d

apt-get update
apt-get install locales-all
#locale-gen en_US.UTF-8

debconf-set-selections /debconf.set
rm -f /debconf.set
apt-get update
apt-get -y install git-core binutils ca-certificates initramfs-tools uboot-mkimage
apt-get -y install locales console-common less nano git
echo "root:toor" | chpasswd
sed -i -e 's/KERNEL\!=\"eth\*|/KERNEL\!=\"/' /lib/udev/rules.d/75-persistent-net-generator.rules
rm -f /etc/udev/rules.d/70-persistent-net.rules
apt-get --yes --force-yes install $packages

rm -f /usr/sbin/invoke-rc.d
dpkg-divert --remove --rename /usr/sbin/invoke-rc.d

rm -f /third-stage
EOF

chmod +x kali-$architecture/third-stage
LANG=C chroot kali-$architecture /third-stage

cat << EOF > kali-$architecture/cleanup
rm -rf /root/.bash_history
apt-get update
apt-get clean
rm -f cleanup
EOF

chmod +x kali-$architecture/cleanup
LANG=C chroot kali-$architecture /cleanup

umount kali-$architecture/proc
umount kali-$architecture/dev/pts
umount kali-$architecture/dev/

cd ..

These instructions were adapted from BerryBoot’s docs.


MagTool is now open source!

MagTool, my erstwhile magnetic stripe project, is now open source. Go forth and fork. =)
