Ahoy!

It has been a bit quiet around here for a while. I’ve been busy at work helping to build a new logging platform on top of Splunk, and recently attended Kiwicon X, which was fricken amazing. Having met lots of interesting folk during the con, and with a few super relevant presentations (shout out to Karit for his SDR + GPS hacking talk), I figured it was time to dust off the old SDR that has been dumping 1090 for ADS-B since god knows when and actually start to listen to the radio. The astute among you will see where this is going …

Yup, I am studying for my Amateur Operator’s Certificate of Proficiency (Standard) (AOCP(S)). It feels long overdue to stop just playing with the ISM microwave bands and get a bit more understanding of all of the various digital modes we take for granted when passing packets around.

Some history though! Many years ago during my NTFreenet days I ran into some members of VK8DA and actually attended a meeting at their Fannie Bay site. But back then it didn’t stick. The SDR revolution (if you can call it that) wouldn’t start for a couple more years, and cheap, readily available SoCs like the Raspberry Pi were just not a thing.

Fast forward to now and I’ve started participating in the Waverley Amateur Radio Society VK2BV. Listening in with an RTL-SDR to their FM nets has been a quaint reminder of the simple but effective technology that we have taken for granted and, I feel, probably neglected in the light of streaming services like Soundcloud (trust me, it still blows my mind streaming HBR1 via 4G LTE while walking into work with nothing noticeable at all - https://www.youtube.com/watch?v=0kzjqBacF1k).

I am hoping to do a bunch of interesting things with WICEN and emergency response packet radio. The things you can achieve these days with 5 volt equipment and emerging technology like LoRa and Outernet are super, super exciting.

So uh, stay tuned!

Ahoy!

This instalment of my blog is as usual a bit of a rant and a thinly veiled “Documentation to myself when I next forget how to do this”.

So I’ll set the scene - I have been gradually migrating things back to my home based hosting in Sydney (what with stable power and not abysmal VDSL2 NBN) rather than hosting in AWS, Vultr and about fifty other random SaaS services.

Not to mention that I got very excited to get on board with Red Hat’s Developer program. So what is one to do? Install it, obviously.

Well - things have changed a bit since I did my RHCE on RHEL 5, it would seem. First let’s cover what we are trying to actually do here.

There are four 2TB SATA disks in a five bay hot swap caddy. As you can imagine I want semi-respectable performance, so I chose RAID10. Since I can’t boot from RAID10 I need to also create a RAID1 for /boot. We also want to use LVM because we are not scrubs (or on AWS).

md0 - RAID 1 /boot
  sda1
  sdb1
  sdc1
  sdd1
md1 - RAID 10 LVM Physical Volume
  sda2 
  sdb2
  sdc2
  sdd2
LVM PV
  rhel_vg
    lv_root - /
    lv_home - /home
    lv_var - /var
    lv_swap

Ok that plan looks great - install time!

Installing

Note: Total caveat here is EFI. Since my old X9SCL+ hasn’t yet shed MBR based booting I don’t need to stuff around with EFI partitions. So I have skipped them since I was beyond patience by that point. If you need EFI I am keen to hear what it takes to mirror the boot volumes.

So I apologize for what is normally a blog light on screenshots, but since this is all anaconda GUI, here we go.

  1. Select your physical disks.

    rhel7-disk-select.png

  2. Don’t click the tempting “Click here to create them automatically” button. Nooo, we click the + sign button at the bottom left to add our first boot partition.

    • Mount point = /boot
    • Desired Capacity = 500M
  3. Now we need to change the /boot partition on /dev/sda1 to a RAID1 array.

    1. Click the Device Type accordion menu then select RAID.
    2. The RAID level defaults to RAID1. Keep this for our boot volumes.
    3. Click Update Settings to save this partition’s configuration. If you clicked away already the joke is on you - start from 1. again. The number of times I fell for this is embarrassing.

    rhel7-raid-1-boot.png

  4. Ok now we assign our root mount point by clicking + again.

    • Mount point = /
    • Desired Capacity = 10G
  5. If you’re thinking “Oh yeah, we just did this, I select RAID in the Device Type here”, you’re going to hit one of my pet peeves. No, actually we Modify the Volume Group (think about it afterwards and it makes sense).

    1. Click Modify
    2. Click the RAID Level accordion menu then select your desired RAID level (We use RAID10 in this example).
    3. Click Save

    rhel7-lvm-with-RAID.png

  6. Finally we just need to add a swap volume. Clicketh the + button once more.

    • Mount point = swap
    • Desired Capacity = 1G

Believe it or not the swap volume is the least painful since it is just an additional logical volume on the existing RAID10 physical volume. If you wanted it in another volume group on a different physical volume I feel for you son.

Ok so we are installing at last! Shortly your system should boot and your layout will be something like this.

rhel7-installed.png
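
Once the system is up you can sanity check the final layout from a shell. A quick sketch, assuming the plan from the start of this post:

  # RAID members and sync status
  cat /proc/mdstat
  # Block device tree - md0 under /boot, md1 as the LVM PV
  lsblk
  # The LVM stack
  pvs && vgs && lvs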

Traps

So that all seems fairly easy? Well hopefully yes! But in case you come unstuck like I did, here are the traps I hit.

Zero any disks first

If you’re reinstalling over an old deployment you’re almost sure to hit blivet bugs during partitioning, which annoyingly only happen after you spend a silly amount of time clicking around the partitioner GUI and the installer actually starts trying to get going.

So first boot to rescue mode and create new MSDOS labels on each disk you plan to use for the install, but do not create any partitions.
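
A minimal sketch of that zeroing step from rescue mode, assuming the four disks are sda through sdd (adjust device names to suit):

  for disk in sda sdb sdc sdd; do
      # Wipe any old RAID / LVM / filesystem signatures that upset blivet
      wipefs --all /dev/$disk
      # Lay down a fresh, empty MSDOS label - no partitions
      parted --script /dev/$disk mklabel msdos
  done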

The bugs you will hit look something like this;

:The following was filed automatically by anaconda:
:anaconda 21.48.22.56-1 exception report
:Traceback (most recent call first):
:  File "/usr/lib/python2.7/site-packages/blivet/formats/__init__.py", line 405, in destroy
:    raise FormatDestroyError(msg)
:  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 651, in execute
:    self.format.destroy()
:  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 377, in processActions
:    action.execute(callbacks)
:  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 374, in doIt
:    self.devicetree.processActions(callbacks)
:  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 224, in turnOnFilesystems
:    storage.doIt(callbacks)
:  File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 186, in doInstall
:    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
:  File "/usr/lib64/python2.7/threading.py", line 764, in run
:    self.__target(*self.__args, **self.__kwargs)
:  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
:    threading.Thread.run(self, *args, **kwargs)
:FormatDestroyError: error wiping old signatures from /dev/mapper/vg01-rootfs: 1

Expanding an existing RAID’ed PV

So you noticed your disks are a little light in the way of free space, eh? Turns out that if you didn’t fully allocate your disks during partitioning, the installer no longer fills your disks for you. That physical volume is exactly large enough to fit the logical volumes you asked for and not an errant gigabyte more.

So let’s expand the disks, because actually we really did want to add more logical volumes. Just not during install time.

  1. fdisk each physical member of the RAID array

    1. Delete the partition containing the RAID array - d
    2. Recreate the partition - n
    3. Change the partition type - t, you will want fd (Linux raid autodetect) for mdadm arrays
  2. reboot so the kernel re-reads the partition tables
  3. Expand the array via mdadm

    $ mdadm --grow /dev/md1 --size=max

  4. Finally resize the PV

    $ pvresize /dev/md1
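
With the PV resized you can finally create those extra logical volumes. A quick sketch, assuming the rhel_vg volume group from the layout above and a hypothetical lv_srv for /srv:

  # Confirm the array and PV actually grew
  mdadm --detail /dev/md1 | grep 'Array Size'
  pvs /dev/md1
  # Carve out the new logical volume and format it
  lvcreate --name lv_srv --size 100G rhel_vg
  mkfs.xfs /dev/rhel_vg/lv_srv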

Ahoy!

A couple of ShipIts ago I noticed a snazzy looking Atlassian Connect plugin for Bitbucket called Aerobatic. On the tin it said;

Smart hosting for static web sites. Ideal for Jekyll sites, JavaScript
single page apps, and any HTML / CSS / JavaScript web site. Link your
Bitbucket repo, push your code, and your site is updated automatically. CDN,
SSL, custom domains, API proxy, and more.

A few months later, and having read (only a few) pages complaining about Github pages’ lack of certain features, I thought it was time to take a look at what this add-on can do. Turns out I migrated last weekend and this blog is now coming at you from Aerobatic. Woo!

Peeking under the hood

Installing Aerobatic is about as painful as logging into a new SaaS service using OAuth. The Getting Started page has literally two steps. Not very exciting - but the really exciting stuff is a little bit deeper.

Real HTTPS

In one of the articles I linked earlier, Eric Mill pointed out the issues with Github pages HTTPS support at the time - Have a read https://konklone.com/post/github-pages-now-sorta-supports-https-so-use-it.

So what is the problem?

  • Github only supports HTTPS on *.github.io domains
  • Github doesn’t support custom domains with HTTPS
  • A hack for custom domains using Cloudflare’s “Flexible SSL” doesn’t provide end-to-end encryption

How does Aerobatic help us here? By leveraging AWS Certificate Manager (ACM) to issue free TLS certificates for any domain validated by ACM, they can configure an AWS CloudFront distribution with end-to-end encryption and delegate cipher suite hardening to AWS (which gets updated regularly - just have a look at your ELB policies).

The process for ACM domain validation is no different to that of other email based certificate validation. So yes, you’re still getting certificates from vending machines. At least now you know the value of them.

After that, configuring Aerobatic to use the CNAME is trivial, though it will take a fair while for the CloudFront distribution to be configured.
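
Once DNS has propagated you can verify the cutover yourself. A sketch, assuming www.example.org is your custom domain:

  # Confirm the CNAME points at the CloudFront distribution
  dig +short CNAME www.example.org
  # Check the certificate being presented end-to-end
  openssl s_client -connect www.example.org:443 -servername www.example.org </dev/null 2>/dev/null |
    openssl x509 -noout -subject -issuer -dates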

Jekyll Plugins

Aerobatic’s docs on automated builds are quite a cool read and give you a good idea of what is going on. By passing your Jekyll payload to AWS Lambda they can provide more support for plugins than the current Github pages solution.

The really exciting part is that this architecture affords Aerobatic a path to support more static site generators like Hexo, Pelican and Hugo.

Private git repository

Don’t get me wrong - I am an open source advocate and am quite happy to contribute to interesting projects. But pushing commits that say things like “fix typo” makes me feel bad for polluting your inbox. So Bitbucket makes a really great choice here, where the HTML and user facing content (like this post) are available for all to see. Meanwhile my shameless typos and spelling issues can be fixed quietly.

The bad parts

None that I am aware of, but if any of the above concerns you, well - good luck with that.

That is all for now folks!

Ahoy!

A few weeks ago I decided that it was time to take the plunge and move over to NBNco’s new FTTB offering. Sadly this meant I had to leave behind my native dual-stack with Internode but the potential to take my 6/0.7Mbit ADSL2+ and turn it into 100/40Mbit was too appealing.

So here we are, and as promised my link syncs at 107/44Mbit, Yeehah! On the downsides;

  • iiNet hands out DHCP leases for VDSL, no more static IPs. (Yes, I locked myself out many times)
  • Bridging PPPoE to my Edgerouter is not an option since now I need the iiNet CPE to manage the SIP ATA
  • My native dual-stack is gone, no more IPv6 in the house. Good-bye sweet DHCPv6-PD prince!

Building a tunnelbroker

Despite my 1337 Hurricane Electric IPv6 certificate I figured I have a bit of a way to go to truly grok IPv6. So while the idea of just configuring the HE.net tunnelbroker was tempting, I, like always, made life hard for myself instead! Let’s dig in!

I use Vultr to host a KVM instance nice and close to me in Australia, since international transit tends to suck. Incidentally their IPv6 network has left beta and has been maturing over the last 12 months. Great! I’ll just route some of that address space from the instance to my home router (a Ubiquiti Edgerouter Lite) and I will be back on the IPv6 Internet.

The tunnel

                        .----------------------------.                        .----------------------------.
                        |           syd01            |                        |           erl01            |
                        |----------------------------|                        |----------------------------|
                        | eth0:   198.51.100.56/23   |     OpenVPN - vtun0    | eth1:   203.0.113.1/24     |
  .----------.          |         2001:DB8:5400::/64 |.----------------------.|         2001:DB8:2000::/67 |
  | Internet |----------|                            ||       SIT sit1       ||                            |
  '----------'          | tun0:   192.0.2.1          |'----------------------'| tun0:   192.0.2.2          |
                        |                            |                        |                            |
                        | sit1:   2001:DB8::1/66     |                        | sit1:   2001:DB8::2/66     |
                        '----------------------------'                        '----------------------------'

Let’s talk about this diagram quickly. syd01 is the Vultr instance and erl01 is my home router. Not in the picture is the upstream hardware doing IPv4 NAT on eth0 for my FTTB connection.

In order to get IPv6 routes back into my home network I needed a 6in4 tunnel using SIT (RFC 4213); however, there are no mechanisms in the SIT protocol to provide privacy or authentication. So we turn to our old friend OpenVPN, configure a layer 3 site-to-site link, and then route the SIT traffic across it.
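
For illustration, here is roughly what that plumbing looks like with plain iproute2 on the syd01 side (the persistent ifcfg and EdgeOS configurations are further below; addresses as per the diagram):

  # 6in4 tunnel to erl01, riding across the OpenVPN link
  ip tunnel add sit1 mode sit local 192.0.2.1 remote 192.0.2.2 ttl 64
  ip link set sit1 up mtu 1280
  ip -6 addr add 2001:DB8::1/66 dev sit1
  # Route the home LAN prefix via the far end of the tunnel
  ip -6 route add 2001:DB8:2000::/67 via 2001:DB8::2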

At this point it seems pretty simple right?

The details

Wrong. Or at least I was. There are a bunch of dirty hacks involved in this. Let’s just jot them down quickly before we get into configurations.

  1. Splitting a /64 is considered bad practice. This breaks SLAAC and requires the implementation of stateful routing.

    From my basic understanding the reasoning here is that IPv6 addresses are 128 bits, with the most significant 64 bits making up the network address and the least significant 64 bits making up the host address. Therefore protocols like SLAAC can safely assume that the smallest IPv6 subnet is a /64 and any address space further down obviously belongs to that network.

  2. CentOS 6, and specifically its 2.6.32.* Linux kernel, is way too old - just hurry up and deploy CentOS 7.
  3. Why CentOS 7? The Vultr instance is getting its eth0 addresses via SLAAC. The issue related to kernel version is support for accepting Router Advertisements at the same time as IPv6 forwarding is enabled. (Broken prior to 2.6.37)
  4. Continuing on the on-link differences, we implement a real hack on syd01 where we need to proxy the Neighbour Discovery Protocol (think IPv4 ARP, but via ICMPv6). This method has been panned for being “the new NAT”, but in this case we need to split the /64 so there are no other options left. More on this later.

Ok, with that list of pain out of the way, let’s actually unpack what this solution looked like.

What did we actually deliver?

Not much really. The tunnel’s performance is pretty bad. I can only get about 10Mbit via IPv6, but as a proof of concept and a learning exercise I am considering that a first pass success. Hosts on my local home network have global IPv6 routing from 2001:DB8:2000::/67 as well as existing IPv4 connectivity.
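
If you want to put a number on your own tunnel, a quick test with iperf3 (assuming it is installed on both ends) looks something like:

  # On syd01
  iperf3 -s
  # On a host in the home LAN
  iperf3 -6 -c 2001:DB8::1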

SYD01 configuration

A few moving parts to capture here on CentOS 7.

Kernel Configuration

  /etc/sysctl.conf
  ...
  # Enable global forwarding
  net.ipv6.conf.all.forwarding = 1
  # Accept IPv6 RA, despite forwarding enabled
  net.ipv6.conf.all.accept_ra = 2
  # Proxy neighbour discovery protocol to downstream routers
  net.ipv6.conf.all.proxy_ndp = 1
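
Load those without a reboot and confirm they stuck:

  sysctl -p /etc/sysctl.conf
  sysctl net.ipv6.conf.all.forwarding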

Network Configuration

  /etc/sysconfig/network-scripts/ifcfg-eth0
  TYPE="Ethernet"
  BOOTPROTO="dhcp"
  DEFROUTE="yes"
  PEERDNS="yes"
  PEERROUTES="yes"
  IPV4_FAILURE_FATAL="no"
  IPV6INIT="yes"
  IPV6_AUTOCONF="yes"
  IPV6_DEFROUTE="yes"
  IPV6_PEERDNS="yes"
  IPV6_PEERROUTES="yes"
  IPV6_FAILURE_FATAL="no"
  NAME="eth0"
  DEVICE="eth0"
  ONBOOT="yes"
  
  /etc/sysconfig/network-scripts/ifcfg-sit1
  DEVICE=sit1
  BOOTPROTO=none
  ONBOOT=yes
  IPV6INIT=yes
  IPV6_TUNNELNAME=sit1
  IPV6TUNNELIPV4=192.0.2.2
  IPV6TUNNELIPV4LOCAL=192.0.2.1
  IPV6ADDR=2001:DB8::1/66
  IPV6_MTU=1280
  TYPE=sit
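
With both files in place, bring the tunnel up and check the address landed:

  ifup sit1
  ip -6 addr show dev sit1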

OpenVPN

Nothing too complicated here. A site-to-site UDP tunnel using a static key (no PFS for you).

  mode p2p
  rport 1194
  lport 1194
  remote home.example.org
  proto udp
  dev-type tun
  dev vtun0
  secret /etc/openvpn/home.key
  persist-key
  persist-tun
  ifconfig 192.0.2.1 192.0.2.2
  float
  script-security 2
  status /var/log/openvpn_status_home.log
  log-append /var/log/openvpn_home.log
  keepalive 10 60
  cipher AES-128-CBC
  auth SHA256
  user nobody
  group nobody
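
The static key referenced above has to exist on both ends before the tunnel will come up. Generate it once on syd01 and copy it over (the hostname matches the config above; adjust the username to your Edgerouter login):

  # Generate the shared static key (OpenVPN 2.x)
  openvpn --genkey --secret /etc/openvpn/home.key
  # Copy it to the Edgerouter's config partition
  scp /etc/openvpn/home.key ubnt@home.example.org:/config/auth/home.key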

FirewallD

There is a bit of hackery here because FirewallD doesn’t really support complex (?) use cases like routing; good thing it’s still just iptables.

  firewall-cmd --direct --permanent --add-rule ipv6 filter FORWARD 0 -i sit1 -j ACCEPT
  firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 -i tun0 -p 41 -j ACCEPT
  firewall-cmd --complete-reload

This does still need to be cleaned up a little bit but should give you the right direction.
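
To double check what the direct interface actually installed:

  firewall-cmd --direct --get-all-rules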

ndppd

Here are the real hacks / magic, depending on your perspective.

Led down the right path by Sean Groarke and his write up at https://www.ipsidixit.net/2010/03/24/239/, I got to the point where all my routing was working. I could ping interface-to-interface, but when I tried to get from my home LAN out to the internet? Dropped by the remote end of sit1. What was going on?

  ip -6 neigh add proxy 2001:DB8::3 dev eth0

And suddenly my ping is working? Thanks Sean!

As it turns out the root cause here is the SLAAC addressing being used between the Vultr upstream and the actual VM instance. When the upstream router sends a neighbour discovery packet seeking 2001:DB8::3 there is no way to find that address on the L2 link on eth0. Invoke the Linux NDP proxy for 2001:DB8::3 and suddenly, for each incoming discovery of 2001:DB8::3 on eth0, we will answer and forward the packet.

This solution works ok, but how do I deal with all the other clients in the house?

There are a couple of daemons that will manage this for you, and this is more or less the part where “we re-invented NAT”. Now there is a spinning daemon which watches for NS messages and adds the appropriate NDP proxy entries. I do feel like this is a valid use case, but it is also easy to see how no, actually, a /64 really is the smallest IPv6 subnet.

Somehow I ended up building ndppd. I have opened some work I want to do (get it into EPEL and support systemd) but for now I have just started the daemon manually.

  /etc/ndppd.conf
  route-ttl 30000
  proxy eth0 {
     router yes
     timeout 500
     ttl 30000
     rule 2001:DB8::/64 {
        static
     }
  }
  
  $ ndppd -d

Great! If you run a ping now from your home network out to an Internet address like ipv6.google.com, chances are it should be working. If not, tcpdump is your friend!
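
Watching both ends of the path usually finds the break quickly:

  # Are neighbour solicitations for your prefix reaching, and being answered on, eth0?
  tcpdump -ni eth0 icmp6
  # Is traffic making it into the tunnel at all?
  tcpdump -ni sit1 icmp6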

ERL01 configuration

On the Ubiquiti side things are pretty standard. These boxes rock!

Interface Configuration

Even though we are using dhcpv6-server, the Edgerouter must still run the router advertisement function. The difference to watch for here is setting the managed-flag, other-config-flag and autonomous-flag to ensure clients make use of dhcpv6-server’s direction.

Is now a good time to bring up that your Android device won’t work? - See http://www.techrepublic.com/article/androids-lack-of-dhcpv6-support-poses-security-and-ipv6-deployment-issues/

  ethernet eth1 {
      address 192.168.1.1/24
      address 2001:DB8:2000::1/67
      duplex auto
      ipv6 {
          router-advert {
              managed-flag true
              other-config-flag true
              prefix 2001:DB8:2000::/67 {
                  autonomous-flag false
              }
              send-advert true
          }
      }
      speed auto
  }
  openvpn vtun0 {
      description syd1.example.org
      encryption aes128
      hash sha256
      local-address 192.0.2.2 {
      }
      local-port 1194
      mode site-to-site
      protocol udp
      remote-address 192.0.2.1
      remote-host syd1.example.org
      remote-port 1194
      shared-secret-key-file /config/auth/home.key
  }
  tunnel tun0 {
      address 2001:DB8::2/66
      description "IPv6 Tunnel"
      encapsulation sit
      local-ip 192.0.2.2
      mtu 1280
      remote-ip 192.0.2.1
  }
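
For reference, that tunnel stanza maps onto the EdgeOS CLI like this (a sketch; adjust interface names to taste):

  configure
  set interfaces tunnel tun0 encapsulation sit
  set interfaces tunnel tun0 local-ip 192.0.2.2
  set interfaces tunnel tun0 remote-ip 192.0.2.1
  set interfaces tunnel tun0 address 2001:DB8::2/66
  set interfaces tunnel tun0 mtu 1280
  commit
  save
  exit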

Service Configuration

Since we have no SLAAC here, the stateful dhcpv6-server needs to hand out name-servers and manage IP assignment within the allocated space 2001:DB8:2000::2 -> 2001:DB8:2000::1999.

  dhcpv6-server {
      shared-network-name lan {
          subnet 2001:DB8:2000::/67 {
              address-range {
                  prefix 2001:DB8:2000::/67 {
                  }
                  start 2001:DB8:2000::2 {
                      stop 2001:DB8:2000::1999
                  }
              }
              name-server 2001:4860:4860::8888
              name-server 2001:4860:4860::8844
          }
      }
  }

Closing thoughts

  • If you’re thinking “hey, this would be sweet too, I don’t want to use SixXS either!” then ping Vultr. I raised support case ZXQ-36YYP asking for an additional /64 so I could just run radvd. If there is enough interest hopefully we can see this become more of a reality.
  • I really miss a decent Australian ISP. This exercise has really made me appreciate the elegance of a DHCPv6-PD tail assigning me a /56.
  • Network operators need to give up on the idea that IP addresses are the “Golden Goose”. Drip feeding out little /120 subnets to networks is incredibly frustrating, so PLEASE instead consider monetizing transit properly by enabling DHCPv6-PD rather than thinking your $1 per IPv4 strategy makes sense for IPv6. (Props to Linode who anecdotally are on board with /48s if you ask)
  • Continuing on network operators: if you haven’t seen it yet, go watch Owen DeLong talk about IPv6 enterprise addressing, then please come back and defend handing out really, really tiny subnets.

Ahoy!

Today I spent some time hacking on AWS after having been faced with the issues of VPC private connectivity in my day job.

It’s only a quick post today, but the tl;dr is that, yes, using either Amazon Direct Connect or the AWS VPN service will enable you to shift outbound connectivity away from AWS itself and back to your corporate routers. The code to have a play with this approach yourself is at https://github.com/Kahn/aws-outbound-bgp

When you’re considering this there are a few quick pros and cons.

Pros:

  • Failover is incredibly quick compared to solutions based on auto-scaling groups. BGP is configured with 30 second timers by default by Amazon and a dead route will be removed even quicker.
  • You can work around the single-route-only options presented to you via the AWS console.
  • You can scale the solution out past N+1 if you’re so inclined
  • Your VPC stays more secure by sending your internet-bound traffic back to your existing corporate environment ISPs (you’re watching those links, right?) rather than the nearest IGW

Cons:

  • You lose literally all of the network elasticity that using AWS affords you by congesting your existing links with AWS originated traffic
  • You’re now going to have to get some more of that PI space to address these hosts, as you’re not going to be able to use Amazon EIPs easily either.

In short I think this solution will appeal to bigger enterprises who can easily handle the sort of transit bandwidth they require. Being able to simplify routing across their AWS footprint (think multiple accounts per organization) does make the job of a security team and network team much easier.

Meanwhile, I am sure a small startup just wanting to avoid getting their office 4G link DoS’ed by server traffic will really not care at all about these options and will just stick with NAT or HTTP proxies.

That is all I have time for now, but I hope this can help next time you’re considering how to get traffic out of your private VPCs. If you have any questions please feel free to use the comments or hit me up online.