Ahoy!

A few weeks ago I decided that it was time to take the plunge and move over to NBNco’s new FTTB offering. Sadly this meant leaving behind my native dual-stack with Internode, but the potential to turn my 6/0.7Mbit ADSL2+ into 100/40Mbit was too appealing.

So here we are: as promised, my link syncs at 107/44Mbit. Yeehah! On the downside:

  • iiNet hands out DHCP leases for VDSL, so no more static IPs. (Yes, I locked myself out many times)
  • Bridging PPPoE to my Edgerouter is not an option, since the iiNet CPE now needs to manage the SIP ATA
  • My native dual-stack is gone, no more IPv6 in the house. Good-bye sweet DHCPv6-PD prince!

Building a tunnelbroker

Despite my 1337 Hurricane Electric IPv6 certificate I figured I still have a way to go to truly grok IPv6. So while the idea of just configuring the HE.net tunnelbroker was tempting, I, as always, made life hard for myself instead! Let’s dig in!

I use Vultr to host a KVM instance nice and close to me in Australia, since international transit tends to suck. Incidentally, their IPv6 network has left beta and has been maturing over the last 12 months. Great! I’ll just route some of that address space from the instance to my home router (a Ubiquiti Edgerouter Lite) and I will be back on the IPv6 Internet.

The tunnel

                        .----------------------------.                        .----------------------------.
                        |           syd01            |                        |           erl01            |
                        |----------------------------|                        |----------------------------|
                        | eth0:   198.51.100.56/23   |     OpenVPN - vtun0    | eth1:   203.0.113.1/24     |
  .----------.          |         2001:DB8:5400::/64 |.----------------------.|         2001:DB8:2000::/67 |
  | Internet |----------|                            ||       SIT sit1       ||                            |
  '----------'          | tun0:   192.0.2.1          |'----------------------'| tun0:   192.0.2.2          |
                        |                            |                        |                            |
                        | sit1:   2001:DB8::1/66     |                        | sit1:   2001:DB8::2/66     |
                        '----------------------------'                        '----------------------------'

Let’s talk about this diagram quickly. syd01 is the Vultr instance and erl01 is my home router. Not in the picture is the upstream hardware doing IPv4 NAT on eth0 for my FTTB connection.

In order to get IPv6 routes back into my home network I needed a 6in4 tunnel using SIT (RFC 4213); however, there are no mechanisms in the SIT protocol to provide privacy or authentication. So we turn to our old friend OpenVPN, configure a layer 3 site-to-site link (tun0), and then route the SIT traffic across it.
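Before baking this into config files, the same stack can be sketched by hand with iproute2. This is a sketch only, using the documentation addresses from the diagram, and assumes the OpenVPN link is already up:

```shell
# On syd01: build the 6in4 (SIT) tunnel over the OpenVPN endpoints
ip tunnel add sit1 mode sit local 192.0.2.1 remote 192.0.2.2 ttl 64
ip link set sit1 up mtu 1280
ip -6 addr add 2001:DB8::1/66 dev sit1
# Send the home /67 towards the Edgerouter end of the tunnel
ip -6 route add 2001:DB8:2000::/67 via 2001:DB8::2 dev sit1
```

The Edgerouter side mirrors this with the local/remote addresses swapped.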

At this point it seems pretty simple right?

The details

Wrong. Or at least I was. There are a bunch of dirty hacks involved in this. Let’s just jot them down quickly before we get into configurations.

  1. Splitting a /64 is considered bad practice. This breaks SLAAC and forces stateful addressing (DHCPv6).

    From my basic understanding, the reasoning here is that IPv6 addresses are 128 bits, with the most significant 64 bits making up the network address and the least significant 64 bits making up the host address. Therefore protocols like SLAAC can safely assume that the smallest IPv6 subnet is a /64 and that any address space further down obviously belongs to that network.

  2. CentOS 6, and specifically the Linux kernel 2.6.32.*, is way too old; just hurry up and deploy CentOS 7.
  3. Why CentOS 7? The Vultr instance gets its eth0 addresses via SLAAC, and the kernel needs to support accepting Router Advertisements while IPv6 forwarding is enabled. (Broken prior to 2.6.37)
  4. Because we split the /64, addresses behind the tunnel are no longer on-link from the upstream’s point of view, so we implement a real hack on syd01: proxying the Neighbour Discovery Protocol (think IPv4 ARP, but via ICMPv6). This method has been panned as “the new NAT”, but since we need to split the /64 there are no other options left. More on this later.

Ok, with that list of pain out of the way, let’s actually unpack what this solution looks like.

What did we actually deliver?

Not much, really. The tunnel’s performance is pretty bad: I can only get about 10Mbit via IPv6, but as a proof of concept and a learning exercise I am considering that a first-pass success. Hosts on my local home network have global IPv6 routing from 2001:DB8:2000::/67 as well as existing IPv4 connectivity.

SYD01 configuration

A few moving parts to capture here on CentOS 7.

Kernel Configuration

  /etc/sysctl.conf
  ...
  # Enable global forwarding
  net.ipv6.conf.all.forwarding = 1
  # Accept IPv6 RA, despite forwarding enabled
  net.ipv6.conf.all.accept_ra = 2
  # Proxy neighbour discovery protocol to downstream routers
  net.ipv6.conf.all.proxy_ndp = 1
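With those lines in place, the settings can be applied and checked without a reboot:

```shell
# Reload /etc/sysctl.conf, then confirm each knob took effect
sysctl -p
sysctl net.ipv6.conf.all.forwarding
sysctl net.ipv6.conf.all.accept_ra
sysctl net.ipv6.conf.all.proxy_ndp
```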

Network Configuration

  /etc/sysconfig/network-scripts/ifcfg-eth0
  TYPE="Ethernet"
  BOOTPROTO="dhcp"
  DEFROUTE="yes"
  PEERDNS="yes"
  PEERROUTES="yes"
  IPV4_FAILURE_FATAL="no"
  IPV6INIT="yes"
  IPV6_AUTOCONF="yes"
  IPV6_DEFROUTE="yes"
  IPV6_PEERDNS="yes"
  IPV6_PEERROUTES="yes"
  IPV6_FAILURE_FATAL="no"
  NAME="eth0"
  DEVICE="eth0"
  ONBOOT="yes"
  
  /etc/sysconfig/network-scripts/ifcfg-sit1
  DEVICE=sit1
  BOOTPROTO=none
  ONBOOT=yes
  IPV6INIT=yes
  IPV6_TUNNELNAME=sit1
  IPV6TUNNELIPV4=192.0.2.2
  IPV6TUNNELIPV4LOCAL=192.0.2.1
  IPV6ADDR=2001:DB8::1/66
  IPV6_MTU=1280
  TYPE=sit
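Once both files are in place, bring the tunnel up and sanity-check it. The ping will only succeed after the erl01 side is configured:

```shell
# Start the SIT tunnel and inspect its addressing and MTU
ifup sit1
ip -6 addr show dev sit1
# Ping the far end of the tunnel
ping6 -c 3 2001:DB8::2
```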

OpenVPN

Nothing too complicated here. A site-to-site UDP tunnel using a static key (no PFS for you).

  mode p2p
  rport 1194
  lport 1194
  remote home.example.org
  proto udp
  dev-type tun
  dev vtun0
  secret /etc/openvpn/home.key
  persist-key
  persist-tun
  ifconfig 192.0.2.1 192.0.2.2
  float
  script-security 2
  status /var/log/openvpn_status_home.log
  log-append /var/log/openvpn_home.log
  keepalive 10 60
  cipher AES-128-CBC
  auth SHA256
  user nobody
  group nobody
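The static key referenced by the secret directive is generated with OpenVPN itself and copied to the Edgerouter, which expects it at /config/auth/home.key. A sketch, assuming the default EdgeOS login and the hostname from the config above:

```shell
# Generate the OpenVPN static key on syd01 and lock down permissions
openvpn --genkey --secret /etc/openvpn/home.key
chmod 600 /etc/openvpn/home.key
# Copy it to the Edgerouter over SSH (default "ubnt" user assumed)
scp /etc/openvpn/home.key ubnt@home.example.org:/config/auth/home.key
```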

FirewallD

There is a bit of hackery here because FirewallD doesn’t really support complex (?) use cases like routing. Good thing it’s still just iptables underneath.

  firewall-cmd --direct --permanent --add-rule ipv6 filter FORWARD 0 -i sit1 -j ACCEPT
  firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 -i tun0 -p 41 -j ACCEPT
  firewall-cmd --complete-reload

This does still need to be cleaned up a little bit but should give you the right direction.
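To confirm the direct rules survived the reload you can list them back, or inspect the underlying iptables chains:

```shell
# Show all FirewallD direct rules currently active
firewall-cmd --direct --get-all-rules
# The same rules as the kernel sees them
ip6tables -L FORWARD -n -v
iptables -L INPUT -n -v
```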

ndppd

Here are the real hacks / magic, depending on your perspective.

Led down the right path by Sean Groarke and his write-up at https://www.ipsidixit.net/2010/03/24/239/, I got to the point where all my routing was working. I could ping interface-to-interface, but when I tried to get from my home LAN out to the Internet? Dropped by the remote end of sit1. What was going on?

  ip -6 neigh add proxy 2001:DB8::3 dev eth0

And suddenly my ping is working? Thanks Sean!

As it turns out, the root cause here is the SLAAC addressing being used between the Vultr upstream and the actual VM instance. When the upstream router sends a neighbour discovery packet seeking 2001:DB8::3, there is no way to find that address on the L2 link on eth0. So we invoke the Linux NDP proxy for 2001:DB8::3, and now for each incoming discovery of 2001:DB8::3 on eth0 we will answer and forward the packet.
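The proxy entries can be listed, and the neighbour solicitations themselves watched, which makes this failure mode easy to spot:

```shell
# List the NDP proxy entries installed on eth0
ip -6 neigh show proxy
# Watch incoming neighbour solicitations (ICMPv6 type 135) on eth0
tcpdump -i eth0 'icmp6 and ip6[40] == 135'
```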

This solution works ok, but how do I deal with all the other clients in the house?

There are a couple of daemons that will manage this for you, and this is more or less the part where “we re-invented NAT”. Now there is a daemon spinning away, watching for NS messages and adding the appropriate NDP proxy entries. I do feel like this is a valid use case, but it is also easy to see how, no, actually a /64 really is the smallest IPv6 subnet.

Somehow I ended up building ndppd. I have opened some work I want to do (get it into EPEL and support systemd) but for now I have just started the daemon manually.

  /etc/ndppd.conf
  route-ttl 30000
  proxy eth0 {
     router yes
     timeout 500
     ttl 30000
     rule 2001:DB8::/64 {
        static
     }
  }
  
  $ ndppd -d

Great! If you run a ping now from your home network out to an Internet address like ipv6.google.com, chances are it should be working. If not, tcpdump -i sit1 icmp6 is your friend!

ERL01 configuration

On the Ubiquiti side things are pretty standard. These boxes rock!

Interface Configuration

Even though we are using dhcpv6-server, the Edgerouter must still run the router advertisement function. The thing to watch for here is setting managed-flag, other-config-flag and autonomous-flag so that clients make use of dhcpv6-server’s direction.

Is now a good time to bring up that your Android device won’t work? See http://www.techrepublic.com/article/androids-lack-of-dhcpv6-support-poses-security-and-ipv6-deployment-issues/

  ethernet eth1 {
      address 192.168.1.1/24
      address 2001:DB8:2000::1/67
      duplex auto
      ipv6 {
          router-advert {
              managed-flag true
              other-config-flag true
              prefix 2001:DB8:2000::/67 {
                  autonomous-flag false
              }
              send-advert true
          }
      }
      speed auto
  }
  openvpn vtun0 {
      description syd1.example.org
      encryption aes128
      hash sha256
      local-address 192.0.2.2 {
      }
      local-port 1194
      mode site-to-site
      protocol udp
      remote-address 192.0.2.1
      remote-host syd1.example.org
      remote-port 1194
      shared-secret-key-file /config/auth/home.key
  }
  tunnel tun0 {
      address 2001:DB8::2/66
      description "IPv6 Tunnel"
      encapsulation sit
      local-ip 192.0.2.2
      mtu 1280
      remote-ip 192.0.2.1
  }
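Once committed, both tunnels can be sanity-checked from the Edgerouter’s operational mode. A sketch; command availability may vary slightly between EdgeOS versions:

```shell
# Check interface state for the OpenVPN (vtun0) and SIT (tun0) tunnels
show interfaces
# Ping the far end of each tunnel
ping 192.0.2.1
ping6 2001:DB8::1
```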

Service Configuration

Since we have no SLAAC here, the stateful dhcpv6-server needs to hand out name-servers and manage IP assignment within the allocated range 2001:DB8:2000::2 -> 2001:DB8:2000::1999.

  dhcpv6-server {
      shared-network-name lan {
          subnet 2001:DB8:2000::/67 {
              address-range {
                  prefix 2001:DB8:2000::/67 {
                  }
                  start 2001:DB8:2000::2 {
                      stop 2001:DB8:2000::1999
                  }
              }
              name-server 2001:4860:4860::8888
            name-server 2001:4860:4860::8844
          }
      }
  }
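For reference, the same tree can be entered from configuration mode roughly like this (a sketch of the set commands; exact syntax may vary between EdgeOS releases):

```shell
configure
set service dhcpv6-server shared-network-name lan subnet 2001:DB8:2000::/67 address-range start 2001:DB8:2000::2 stop 2001:DB8:2000::1999
set service dhcpv6-server shared-network-name lan subnet 2001:DB8:2000::/67 name-server 2001:4860:4860::8888
commit
save
exit
```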

Closing thoughts

  • If you’re thinking “hey, this would be sweet too, I don’t want to use SixXS either!” then ping Vultr. I raised support case ZXQ-36YYP asking for an additional /64 so I could just run radvd. If there is enough interest hopefully we can see this become more of a reality.
  • I really miss a decent Australian ISP. This exercise has really made me appreciate the elegance of a DHCPv6-PD tail assigning me a /56.
  • Network operators need to give up on the idea that IP addresses are the “Golden Goose”. Drip-feeding out little /120 subnets to networks is incredibly frustrating. PLEASE consider monetizing transit properly by enabling DHCPv6-PD rather than thinking your $1-per-IPv4 strategy makes sense for IPv6. (Props to Linode who anecdotally are on board with /48s if you ask for it)
  • Continuing on network operators: if you haven’t seen it yet, go watch Owen DeLong talk about IPv6 enterprise addressing, then please come back and defend handing out really, really tiny subnets.

Ahoy!

Today I spent some time hacking on AWS after facing issues with VPC private connectivity in my day job.

It’s only a quick post today, but the tl;dr is that, yes, using either Amazon Direct Connect or the AWS VPN service will enable you to shift outbound connectivity away from AWS itself and back to your corporate routers. The code to have a play with this approach yourself is at https://github.com/Kahn/aws-outbound-bgp

When you’re considering this there are a few quick pros and cons.

Pros:

  • Failover is incredibly quick compared to solutions based on auto-scaling groups. BGP is configured with 30 second timers by default by Amazon, and a dead route will be removed even quicker.
  • You can work around the single route only options presented to you via the AWS console.
  • You can scale the solution out past N+1 if you’re so inclined
  • Your VPC stays more secure by sending your Internet-bound traffic back to your existing corporate environment’s ISPs (you’re watching those links, right?) rather than the nearest IGW

Cons:

  • You lose literally all of the network elasticity that using AWS affords you by congesting your existing links with AWS-originated traffic
  • You’re now going to have to get some more of that PI space to address these hosts, as you’re not going to be able to have Amazon EIPs easily either.

In short, I think this solution will appeal to bigger enterprises who can easily handle the sort of transit bandwidth they require. Being able to simplify routing across their AWS footprint (think multiple accounts per organization) makes the job of a security team and network team much easier.

Meanwhile, I am sure a small startup just wanting to avoid getting their office 4G link DOS’ed by server traffic will really not care at all about these options and just stick with NAT or HTTP Proxies.

That is all I have time for now, but I hope this can help next time you’re considering how to get traffic out of your private VPCs. If you have any questions please feel free to use the comments or hit me up online.

Ahoy!

It’s been quite some time since I set out to complete part two of my home lab write-up. Naturally, a lot has changed!

Physical changes

In part one I discussed my physical layer; this has changed slightly with the addition of my FTTH connection and the deprecation of the old crappy FritzBox and related ADSL hardware.

Along with the FTTH connection I switched to using a Juniper SRX210 for CPE after having issues with the Ubiquiti ERLs and DHCPv6-PD. These days I am up and running with native dual-stack thanks to Internode.

I also persisted for a while with the Rocket M2s for the PPPoE backhaul before giving in and running cable to make the most of my 100/40 connection. The 2.4GHz link could only manage ~110Mbps simplex, and I configured QoS to ensure only 35Mbps was used in upload. In practice, while the radio link was fast enough, I was too purist to keep it.

Here is how it looked earlier this year;

You will see the most exciting addition for me was to lash out and replace the Dovado GO with an Opengear ACM5504-5-LR-I. A huge thanks to the guys at Opengear for all the work they have put into their ecosystem and their products. When I deploy one of these boxes it does everything I can possibly ask of it. Want to use PCRE to parse incoming SMS? They have done that. Triggers on environmental alarms? Done that. Automatic multi-homed failover? Tick. SSH to a serial port shell? Yup! Great work guys, please keep it up! (Plus we all know what kind of franken raspi + 4x USB serial adaptors rig it would have been otherwise.) If anyone is interested in a more detailed write-up on how I am using the ACM5504 please feel free to comment with your questions.

The Network

As promised, I have some old diagrams which capture layer 1 and a hybrid layer 2 / layer 3 view. At the moment I have not done a complete layer-3-only routing diagram since I am busily changing things with my DN42 infrastructure. You will get a pretty good idea of my internal OSPF network as well as the different segments and routes anyway.

Layer 1

Nothing unexpected here; astute readers will note I stopped using LACP LAGs and moved back to single gigabit links. The reasoning behind this change is that the Ubiquiti ERLs cannot use hardware offloading on GNU/Linux bonded interfaces, so in reality you’re able to get better VLAN routing and throughput on a single link than on a bonded one. That being said, if you want redundancy at L1 then go ahead and take the hit.

Layer1

Layer 2 / 3

Here is the really fun part. To decipher the diagram I used a basic key;

  • Ovals are subnets
  • Rectangles are physical routers
  • Circles are destination networks

Layer 2/3

As you can see, this is a fairly crude campus-style L3 design. RTR01 acts as the core while there are two distinct physical networks: the Out Of Band (OOB) network and the campus network.

The campus network is actually the wrong way around, and the diagram incorrectly shows a 100Mbps link between the two ERLs. The reason for this is simply the 3-port count on an ERL. Getting 1Gbps to the servers from both guest/wifi and the desktop network meant using 4 wired ports, so obviously a transit link needed to exist between the two. As it turns out this was a fun OSPF implementation anyway.

That is all I have time for now but I hope this can inspire you to pursue your own lab environment. If you have any questions please feel free to use the comments or hit me up online.

TLDR

I am updating my GPG keys.

Process

I am cleaning up my GPG keys along the way as part of my DR plan. At the moment I have a mess of keys everywhere! Most legacy keys have expired, been revoked, or ultimately been lost through mismanagement. My aim is to arrive at a point where I can carry my identity with me for both SSH and GPG.

I am generating a day-to-day GPG key to use via my Yubikey NEO. I will also create a backup GPG smart card using a spare German Privacy Foundation Cryptostick v1.2. Finally, the master copy remains on a physically secured USB key(s).

What follows below are notes on recreating this whole thing from scratch.

Trusted Live Environment

To generate new keys I first boot an ancient laptop using an isostick and a copy of the Fedora 21 Workstation image. After verifying the sha256sum I copy the ISO to the isostick and can boot via the virtual CD-ROM.
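The verify-and-copy step looks roughly like this (filenames and mount point are illustrative, not the exact Fedora 21 artifact names):

```shell
# Verify the ISO against the published checksum file (names illustrative)
sha256sum -c Fedora-Workstation-21-CHECKSUM
# Copy the ISO onto the isostick's storage and flush buffers
cp Fedora-Live-Workstation-x86_64-21.iso /run/media/liveuser/ISOSTICK/
sync
```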

Once booted and presented with the installer’s GRUB menu, select Troubleshooting, then Test this media & start Fedora Live. At this stage I also press Tab and remove the quiet rhgb arguments so that I get feedback on the boot process.

At the GNOME prompt click Try Fedora to continue into the live OS.

Extending the Live Environment

The first time you boot you will need to connect your environment to the Internet and grab some packages for offline use later.

sudo yum install yum-utils
yumdownloader --resolve ykpers-devel libyubikey-devel libusb-devel autoconf gnupg gnupg2-smime pcsc-lite pcsc-lite-libs
yumdownloader --resolve gnupg-agent libpth20 pinentry-curses libccid pcscd scdaemon libksba8

Full deps in case --resolve does not work: it seems that pre-installed packages are not pulled into the download directory, leaving you with broken deps.

autoconf-2.69-16.fc21.noarch.rpm
elfutils-0.161-2.fc21.x86_64.rpm
elfutils-libelf-0.161-2.fc21.x86_64.rpm
elfutils-libs-0.161-2.fc21.x86_64.rpm
glibc-2.20-7.fc21.x86_64.rpm
glibc-common-2.20-7.fc21.x86_64.rpm
gnupg-1.4.18-4.fc21.x86_64.rpm
gnupg2-smime-2.0.25-2.fc21.x86_64.rpm
libgudev1-216-17.fc21.x86_64.rpm
libusb-devel-0.1.5-5.fc21.x86_64.rpm
libusbx-devel-1.0.19-2.fc21.x86_64.rpm
libyubikey-1.11-3.fc21.x86_64.rpm
libyubikey-devel-1.11-3.fc21.x86_64.rpm
m4-1.4.17-6.fc21.x86_64.rpm
nss-softokn-freebl-3.17.4-1.fc21.x86_64.rpm
pcre-8.35-8.fc21.x86_64.rpm
pcsc-lite-1.8.13-1.fc21.x86_64.rpm
pcsc-lite-ccid-1.4.18-1.fc21.x86_64.rpm
pcsc-lite-libs-1.8.13-1.fc21.x86_64.rpm
systemd-216-17.fc21.x86_64.rpm
systemd-compat-libs-216-17.fc21.x86_64.rpm
systemd-libs-216-17.fc21.x86_64.rpm
systemd-python-216-17.fc21.x86_64.rpm
systemd-python3-216-17.fc21.x86_64.rpm
ykpers-1.16.1-1.fc21.x86_64.rpm
ykpers-devel-1.16.1-1.fc21.x86_64.rpm

After yum has downloaded the packages to your working directory copy them to media you can attach to your offline machine.

Reboot to start a clean instance and mount your storage containing the downloaded packages.

yum localinstall *

Now we can get started generating keys!

RAID

While some people might trust their cold storage, since I am using what are actually very crappy USB keys I thought I might RAID them just in case one decides to flip a bit on me.

Creating the array

First let’s test both disks are actually OK

dd if=/dev/zero of=/dev/disk/by-id/YOUR-DISK

Create the RAID partitions

fdisk /dev/disk/by-id/YOUR-DISK
n
p
1
enter
enter
t
fd
w

Create the array

mdadm --create /dev/md0 -n 2 -l 1 /dev/sdc1 /dev/sdd1

Let’s format it!

mkfs.ext4 /dev/md0
mkdir /tmp/raid
mount /dev/md0 /tmp/raid
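Before trusting the mirror it is worth confirming both members are active and the initial sync has finished:

```shell
# Quick view of array state and resync progress
cat /proc/mdstat
# Detailed per-member status for the mirror
mdadm --detail /dev/md0
```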

Further use

Once we are done we will stop the array to unplug the USB keys

umount /dev/md0
mdadm -S md0

To re-use this array next time we scan for existing RAID with both disks plugged in

mdadm --assemble -s
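After the scan finds the members, the array comes back under the same name and can be remounted:

```shell
# Confirm the assembled array is clean, then remount it
mdadm --detail /dev/md0
mount /dev/md0 /tmp/raid
```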

Generate the master key

After opening your terminal window, you need to update the environment variables for gnupg.

export GNUPGHOME=/tmp/raid/gnupg
mkdir $GNUPGHOME

Note: You cannot mount this on a vfat volume as gpg-agent will not be able to open a unix socket.

Now we update our config

cat > $GNUPGHOME/gpg.conf
personal-cipher-preferences AES256 AES192 AES
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed
use-agent

Note: You can’t opt out of 3DES and SHA with this configuration. gnupg will automatically add them to the trailing end of your preferences.

After the configuration is done we generate the master key to be used only for signing operations.

    gpg2 --gen-key
    Your Selection? 4
    4096
    2y
    y
    Sam Wilson
    [email protected]
    O

Now we can add our extra UIDs

    gpg2 --edit-key <YOUR KEY ID>
    adduid
    Sam Wilson
    [email protected]
    O
    uid 1
    primary
    save

Generate a revocation certificate

    gpg2 --output $GNUPGHOME/../revocation-certificate.txt --gen-revoke <YOUR KEY ID>
    1
    Created during key creation, emergency use only.

Back up the private keys to ASCII

gpg2 -a --export-secret-keys <YOUR KEY ID> > $GNUPGHOME/../masterkey.txt

Generate sub keys

First we generate separate 2048 bit RSA keys for signing, authentication and encryption.

    gpg2 --expert --edit-key <YOUR KEY ID>
    addkey
    4
    2048
    2y
    y
    y
    addkey
    6
    2048
    2y
    y
    y
    addkey
    8
    s
    e
    a
    q
    2048
    2y
    y
    y
    save

Let’s back up our sub keys.

gpg2 -a --export-secret-keys <YOUR KEY ID> > $GNUPGHOME/../mastersubkeys.txt
gpg2 -a --export-secret-subkeys <YOUR KEY ID> > $GNUPGHOME/../subkeys.txt

Also backup the $GNUPGHOME binary content in case we need to roll back GPG during later steps.

You can now print the text files we created to hard copy.

Configure Yubikey NEO

I found starting up gpg2 --card-edit as liveuser failed to open the smartcard. Running as root resolves the issue.

Let’s configure the Yubikey NEO!

    gpg2 --card-edit
    admin
    passwd
    3
    12345678 # Default admin passwd
    <NEWPIN>
    <NEWPIN>
    1
    123456 # Default user passwd
    <NEWPIN>
    <NEWPIN>
    q
    name
    Wilson
    Sam
    lang
    en
    url
    https://www.cycloptivity.net/KEYID.txt
    sex
    m
    quit

Note: If you’re looking for a random PIN generator try < /dev/urandom tr -cd 0-9 | head -c 6.

Now we start to move our sub keys to hardware. This is a one way operation and will leave the backups we took earlier as the only copy of your sub keys (except for the smart card of course!).

    gpg2 --edit-key <YOUR KEY ID>
    toggle
    key 1
    keytocard
    1
    key 1
    key 2
    keytocard
    2
    key 2
    key 3
    keytocard
    3
    save

Note: At this point I actually hit this bug; after raising a case with Yubico to get a new unit, I got started again.

Public key

You need to generate a copy of your master public key to share with the world. Let’s make that available quickly.

gpg2 --export -a <KEY ID> >> $GNUPGHOME/../pub.asc

Take a copy of the pub.asc file to your daily laptop along with your Yubikey.

Setup day to day laptop

Desktop Environments

While technically I should have been able to configure XFCE on my laptop to disable ssh-agent I found this had no effect on Fedora 21.

xfconf-query -c xfce4-session -p /startup/ssh-agent/enabled -n -t bool -s false

Instead it’s much easier to just hack on this via the trusty ~/.bashrc

killall ssh-agent gpg-agent > /dev/null 2>&1
eval $(gpg-agent --daemon --enable-ssh-support)

GPG

On your daily machine we can now publish your primary pub.key as well as import the smartcard for daily use.

To generate the private key stubs and inform your daily GPG of the smartcard run

gpg2 --card-status

After which you should see your smartcard and offline master listed in

gpg2 --list-secret-keys

Other pain points

Smartcard fetch

There is an open issue here where running

gpg2 --card-edit
fetch

actually fails for me: GPG searches for the smartcard’s signing key ID while actually getting the master offline key ID. Thus the operation fails.

As a workaround you can totally pull the key with curl =\

curl https://www.cycloptivity.net/6E03FC34.txt | gpg2 --import

Missing Serial Numbers

In Donncha O’Cearbhaill’s very helpful post I found the key to swapping smartcards to avoid the “Please insert Serial Number X” error.

When changing cards first drop your private key

rm ~/.gnupg/private-keys-v1.d/<PRIVATE KEY ID>.key

Then, after a reboot import your smartcard details

gpg2 --card-status
ssh-add -l

References

These posts helped me out a lot when writing this! YMMV

https://help.riseup.net/en/security/message-security/openpgp/best-practices

http://blog.josefsson.org/2014/06/23/offline-gnupg-master-key-and-subkeys-on-yubikey-neo-smartcard/

https://github.com/herlo/ssh-gpg-smartcard-config/blob/master/YubiKey_NEO.rst

http://www.bradfordembedded.com/2013/12/yubikey-smartcard/

http://ekaia.org/blog/2009/05/10/creating-new-gpgkey/

https://www.debian-administration.org/users/dkg/weblog/97

https://developers.yubico.com/ykneo-openpgp/ResetApplet.html

https://josefsson.org/key-transition-2014-06-22.txt

http://www.bootc.net/archives/2013/06/09/my-perfect-gnupg-ssh-agent-setup/

http://dst.lbl.gov/ksblog/2011/05/xfce-without-gpg-agent/

http://docs.xfce.org/xfce/xfce4-session/advanced

http://donncha.is/2014/07/problems-using-an-openpgp-smartcard-for-ssh-with-gpg-agent/

Ahoy!

It’s been a while and a lot has happened over the last few months. Of particular note is that the Cloudrouter.org team at IIX launched their public beta on the 31st of March!

Head over to https://cloudrouter.org/ to check out the latest on the project.

For those of you this side of the ditch, I have created a quick script to drag a copy of the repository to Brisbane. Your mirror link is http://repo.cycloptivity.net/cloudrouter.org/.

Personally, I plan in letting a few of these routers loose in DN42 over the coming months!