Windows Subsystem for Linux / Bash on Ubuntu on Windows

Installing Bash on Ubuntu on Windows while behind a proxy server doesn’t work.

I’m reinstalling Bash on Ubuntu on Windows on my work laptop at home, where I’m not behind the work firewall and don’t need the work proxy server.
WSL happily activates and downloads Ubuntu from the Windows Store while at work, but once it fires up Ubuntu and starts running Apt to install updates, it chokes, because Ubuntu, and Apt, aren’t configured to use the proxy server. I have to cancel the install, which leaves a working system that never completes its setup. I can fire up Bash, but it always logs in as root (it never reaches the user-setup step). Once logged in I can configure Apt to use the proxy, set the proxy environment variables, and run the Apt updates, but the system still hasn’t gone through the full install process cleanly. This is a weakness in Microsoft / Canonical’s design: Ubuntu should either inherit the proxy configuration from Windows, or offer a way to configure it during setup, so it can perform a clean install.
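For the record, the manual workaround looks roughly like this. It's a sketch, not gospel: the proxy host and port are placeholders, and the config file name is just a convention.

```shell
# Hypothetical proxy -- substitute your real host:port.
PROXY="http://proxy.example.com:8080"

# apt run via sudo usually loses the user's environment, so the reliable
# fix is a conf file. Write it locally, then move it into place as root:
cat > 95proxy <<EOF
Acquire::http::Proxy "$PROXY";
Acquire::https::Proxy "$PROXY";
EOF
# sudo mv 95proxy /etc/apt/apt.conf.d/95proxy

# Most other tools (curl, wget, git) do read the environment:
export http_proxy="$PROXY" https_proxy="$PROXY"
```

After that, `sudo apt-get update && sudo apt-get upgrade` gets you close to what the installer would have done, minus the user-setup step.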
I figured I’d give it a try from home, where it doesn’t need to go through a proxy, and see if it will properly complete the install. This worked perfectly on my personal laptop.

Edited: We have success! The prompt for a username means we got past the blocker seen at work.
Bash on Ubuntu on Windows install username prompt

HTML email rant

A rant on the wrong ways to use the multipart/alternative MIME type in email.

So there are five different ways to do email with HTML in it. Only one of them is correct. Commercial entities should know better.

Either they

  1. Don’t include a text/plain part at all (Walgreens, GNS3)
  2. Include a text/plain that is just a duplicate of the text/html (IFTTT)
  3. Include a text/plain that is the text of the text/html, complete with all the href links, which are useless in a text/plain context (Zillow, Royal Caribbean)
  4. Include an actual plaintext message in the text/plain, but the content of that message is just their legalese and a message to go to a particular URL if your mail reader can’t display text/html (Chase, Adobe)
  5. Include a text/plain version that contains exactly the same text as the text/html version, but without any of the HTML markup, making it actually readable to mere human beings (Patreon).
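For reference, option 5 on the wire is a single multipart/alternative body with a genuine text/plain part first; clients render the last alternative they understand. A minimal sketch (subject, text, and boundary are all made up):

```shell
# Illustrative message only -- content and boundary string are invented.
cat > message.eml <<'EOF'
MIME-Version: 1.0
Subject: Your order has shipped
Content-Type: multipart/alternative; boundary="b1"

--b1
Content-Type: text/plain; charset=utf-8

Your order has shipped and should arrive Friday.

--b1
Content-Type: text/html; charset=utf-8

<html><body><p>Your order has <b>shipped</b> and should arrive Friday.</p></body></html>

--b1--
EOF
grep -c '^--b1' message.eml   # three boundary lines: two parts plus the terminator
```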

Bad SSL security

I see GNS3 Academy still hasn’t fixed their SSL certificate.

For a site teaching about networking, which includes network security, this is head-shakingly bad.

Minor Home Network Rewiring

After some minor home network rewiring, I’m rather pleased with my Internet performance today. The work: ran 2 additional Cat6 cables from the network rack to the desk, re-tipped all the Ethernet cables with keystone jacks installed in a 4-gang surface-mount box, and ran patch cables from the computers to the new keystone jacks. Unfortunately, one of the original cables running between the rack and my desk is only 20′, now too short for the new path, so there are only 4 live connections to the desk. I’ll replace that one later.

23 ms ping, 49.45 Mbps Download, 5.44 Mbps Upload, AT&T Internet, Keller, TX, < 50 mi

To Do: Install patch panel in the network rack, re-tip these cables into the back of the patch panel, install patch cables from panel to switch. Re-tip the cables going from the network rack to the Cisco lab bench the same way. Install some split loom or spiral conduit around these cable runs to keep them dressed neatly.

Keystone Jacks
4-gang surface mount box
25′ cat-6 Ethernet cables
5′ split loom. I should have ordered longer.

Day in the life of a Systems Administrator

Day in the life of a Unix Systems Administrator

Wow, been almost a year since I blogged anything. I’m getting lazy.

So what’s the daily life of a systems administrator like? Here was today:

The plan coming into this morning: begin the quarterly “Vulnerability Audit Report”.

What did I do?
Windows server starts alerting on CPU at midnight, again. We fixed the problem on Tuesday. Why is it alerting again? Of course it corrects itself before I can get logged in, and doesn’t go off again all day. Send an email to the person responsible for the application on that server, asking if the app was running any unusually CPU-intensive jobs, along with a screenshot showing the times the CPU alerts went off. Get a response of “nothing unusual”. As usual.

We updated the root password on all Unix servers last week. Get a list of 44 systems from a coworker that still have the old root password.
Check the list, confirm all still have the old root password.
Check the list against systems that were updated via Ansible. All on the Ansible list. No failures when running the Ansible playbook to update the root password. All spot-checks that the new root password was in effect at the time showed task was working as expected.
Begin investigating why these systems still have the old root password.
Speculation during team scrum that Puppet might be resetting the root password.
Begin testing a hypothesis that root password was, in fact, changed, but something else is re-setting it back to the old password.
Manually update root password on one host. Monitor /etc/shadow to see if it changes again after setting the password. (watch -d ls -l /etc/shadow)
Wait some more.
Wait 27 minutes, BOOM! /etc/shadow gets touched.
Investigate to see if Puppet is the culprit. I know nothing about Puppet. I’m an Ansible guy. The Puppet guy (who knows just enough to have set up the server, built some manifests, and gotten Puppet to update root the last time the root password was changed, before I started working here) is out today.
Look at log files in /var/log. Look at files in /etc/puppet on puppet server. Try to find anything that mentions “passw(or)?d&&root” (did I mention I’m not a puppet guy?). Find a manifest that says something about setting the root password, but it references a variable. Can’t find where the value of that variable is set.
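What I was fumbling toward was roughly the search below. On the real box the tree lives under /etc/puppet; the demo manifest here is invented so the commands have something to chew on.

```shell
# Stand-in for the Puppet tree I was searching:
mkdir -p manifests
cat > manifests/users.pp <<'EOF'
user { 'root':
  ensure   => present,
  password => $root_password,   # the variable whose value I couldn't find
}
EOF

# Find every manifest mentioning a password, then narrow to ones touching root:
grep -rliE 'passw(or)?d' manifests --include='*.pp' | xargs grep -l "'root'"
```

On a real Puppet server, the value of a variable like that often lives outside the manifests entirely (Hiera data files, an ENC, or a site.pp), which is exactly why a naive grep of the manifests dead-ends.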
Look some more at the target host. See in log files that it’s failing to talk to the Puppet server, so continuing to enforce the last set of configuration stuff it got. Great, fixing this on the Puppet server won’t necessarily fix all the clients that have been allowed to lose connectivity that no one noticed (entropy can be a bitch.)
Begin looking at what to change on the client (other than just “shut down the Puppet service” and “kill it with fire!”). Realize it’s much faster to surf all the files and directories involved with “mc”.
Midnight Commander not installed. Simple enough, “yum install mc”.
Yum: “What, you want to install something in the base RHEL repo? HAH! Entropy, baby! I have no idea what’s in the base repo.”.
Me: “Hold my beer.” (This is Texas, y’all.)
(No, not really. CTO frowns on drinking during work hours or drinking while logged into production systems. Or just drinking while logged in…)
OK, so more like:
“Hold my Diet Coke.”
Yum: “Red Hat repos? We don’t need no steeeenking Red Hat repos!”

Start updating Yum repo cache. Run out of space in /var. Discover when this server was built, it was built with much too small a /var. Start looking at what to clean up.
Fix logrotate to compress log files when it rotates them, manually compress old log files.
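The logrotate half of that cleanup, sketched against a local copy of the config (on the real box it's /etc/logrotate.conf, and the stray logs live under /var/log):

```shell
# Fake copy of a stock config, for illustration:
cat > logrotate.conf <<'EOF'
weekly
rotate 4
#compress
EOF

# Uncomment "compress" so future rotations get gzipped:
sed -i 's/^#compress/compress/' logrotate.conf

# And squeeze the already-rotated logs taking up space right now, e.g.:
# find /var/log -name '*.[0-9]' -exec gzip -9 {} +
```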
/var/lib/clamav is one of the larger directories. Oh, look, several failed DB updates that never got cleaned up.
Clean up the directory, run freshclam. Gee, ClamAV DB downloads sure are taking a long time given that it’s got a GigE connection to the local DatabaseMirror. Check the freshclam config. Yup, the local mirror is configured… an external mirror is ALSO configured. Dang it. Fix that. ClamAV DB updates are now much faster.
Run yum repo cache update again. Run out of disk space again. Wait… why didn’t Nagios alert that /var was full?
Oh, look, when /var was made a separate partition, no one updated Nagios to monitor it.
Log into Nagios server to update config file for this host. Check changes into Git. Discover there have been a number of other Nagios changes lately that haven’t been checked into Git. Spend half an hour running git status / diff / add / delete / commit / push to get all changes checked into Git repo.
Restart the Nagios server (it doesn’t like reloads; every once in a while it goes bonkers and sends out “The sky is falling! ALL services on ALL servers are down! Run for your lives! The End is nigh!” if you try a simple reload).
Hmm… if Nagios is out of date for this host, is Cacti…
Update yum cache again. Run out of disk space again.
Good thing this is a VM, with LVM. Add another drive in vSphere, pvcreate, swing your partner, vgextend, lvresize -r, do-si-do!
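Spelled out, the dance goes something like the script below. I’m assuming the new vSphere disk shows up as /dev/sdb and /var lives on a vg_sys/lv_var volume; all of those names are placeholders, so write it to a file and eyeball it before running it as root.

```shell
cat > grow_var.sh <<'EOF'
#!/bin/sh -e
pvcreate /dev/sdb                          # label the new disk for LVM
vgextend vg_sys /dev/sdb                   # swing your partner: add it to the volume group
lvresize -r -L +10G /dev/vg_sys/lv_var     # grow the LV; -r resizes the filesystem too
EOF
chmod +x grow_var.sh
```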
yum repo cache update… FINALLY!
What was I doing again? Oh, right, install Midnight Commander…
Why? Oh yeah, searching for a Puppet file for….?
Right, root password override.

Every time I log into a server it seems like I find a half dozen things that need fixing. Makes you not want to log into anything, so you can actually get some work done. Oh, right, entropy…

LoneStar Overnight’s broken web page

I told them I’d give them 24 hours to respond. I actually gave them over a month. The only response was the automated “We’ve received your tech support email.” They have, to date, done jack all to fix any of the problems I alerted them to. So I’m going public.

To whom it may concern,

There are serious issues with the security of your public web site.

A search on Google and Duck-Duck-Go for “Lonestar Overnight” or “Lone Star Overnight” results in a list of links, the first of which points to The rest point to various pages below, all of which appear to be on the same server. However the GoDaddy-signed SSL / TLS certificate installed on that server contains only the name “*” and the SANs (“Subject Alternative Names”) “*” and “”. There is no SAN for any hostname.

A potential customer or user clicking on any of the links would receive a message from their browser that the certificate does not match the hostname. This can cost you business, as customers look elsewhere, to a competitor whose web server is perceived to be “trustable”.

You’ve already lost business from anyone looking for “”. BTW, your web page does not contain the words “Lone Star Overnight” anywhere. Obviously, like IBM, SGI, AT&T and other companies of decades past, you have pivoted to branding yourselves as “LSO”. You might communicate this to Amazon, in order to properly identify your service when they select you to perform delivery of their packages. At this time Amazon states that the courier is “Lone Star Overnight”.

The last is purely a marketing glitch, however, there are far more serious problems with your web site’s security.

  • You are using very weak, obsolete encryption. This leaves your customers vulnerable to man-in-the-middle attacks.
  • You are still offering 512-bit “export” grade ciphers, and are potentially vulnerable to the FREAK attack.
  • You ARE vulnerable to the POODLE attack.
  • You are still offering SSL 3.0, TLSv1.0 and TLSv1.1, all of which are considered obsolete.
  • You do NOT support the current TLSv1.2 standard.
  • You are still offering the old, vulnerable RC4 ciphers.
  • You do not support secure renegotiation.
  • You do not support Forward Secrecy.
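Several of those items can be spot-checked from any Linux box with openssl. A sketch of the kind of script I mean follows; the hostname is a parameter because the site’s name is omitted here, and the exact s_client flags vary by OpenSSL version (newer builds drop -ssl3 entirely). A handshake that succeeds with -ssl3 demonstrates the POODLE exposure; one that fails with -tls1_2 demonstrates the missing TLSv1.2 support.

```shell
cat > ssl_spotcheck.sh <<'EOF'
#!/bin/sh
H="$1"   # e.g. www.example.com
# SSLv3 accepted? Any cipher line other than "(NONE)" means yes -- bad.
echo | openssl s_client -connect "$H:443" -ssl3 2>/dev/null | grep 'Cipher is'
# TLSv1.2 supported? No "Protocol" line here reproduces the complaint above.
echo | openssl s_client -connect "$H:443" -tls1_2 2>/dev/null | grep 'Protocol'
EOF
chmod +x ssl_spotcheck.sh   # run as: ./ssl_spotcheck.sh www.example.com
```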

I am a highly skilled Unix Systems Administrator for a Fortune 500 company. These are BASIC items. I identified most of them using the well-regarded, and free, SSL security scanning tool from Qualys Labs ( to review your site, the results of which can be viewed here and here. To be clear: this is NOT an attempt to extort money from you or solicit my skills to you. IT Security is a highly specialized area of IT administration and I do not possess the knowledge, skills or interest to do it justice. I am contacting you as a concerned “customer”, who regularly receives Amazon deliveries facilitated by your service. If you don’t have an IT Security expert on your staff, I would highly recommend you get in touch with an IT security consultant who can assist you with auditing your systems’ security and developing a remediation plan.

If there is no response to this message within 24 hours, I will post it publicly on a number of social media sites, to warn other potential customers that they should not use your web site.

Also, the tracking number for my recent “same day delivery” given by Amazon does not work on your site. It doesn’t even give an error that it could not be found. It simply reloads the page. Someone should probably look into that glitch on your site as well. I’m probably not the only Amazon customer awaiting a delivery who is trying to track their package.

For the record, it appears they fixed the tracking number glitch. At least as of today I am able to get the status of the package I’m waiting for. While typing this, I heard a car door outside and my delivery was on my doorstep when I checked.

Uverse speed throttling

Large uploads on Uverse kill download bandwidth.

So it turns out if you’re uploading something on a Uverse connection, they kill your download bandwidth.

I had been poking around in iTunes, looking at the section of the store that shows what other members of your Apple Family have “purchased” (in quotes, because even “free” apps and music show up as “purchases”). The Wife had purchased several albums (or at least songs) that I would not have purchased myself, but wouldn’t mind having a copy, since we’d already paid for it. I clicked to download them (mostly songs from our high school days), then went to watch some YouTube videos. Normally we have enough bandwidth to handle this just fine, but the video kept stuttering (play for two seconds, pause for four seconds to download the next two seconds worth of video, play for two, pause for four for the next two seconds of playback download). I switched back to iTunes and saw that what should have taken about 3 seconds per song was predicting six MINUTES or more.
The Cisco ASA showed a lot of OUTGOING bandwidth being used, and very little incoming. Well that was odd. I wasn’t uploading anything that I knew of.

Speedtest showed my download speed to be 5Mbps and upload of about 77Kbps. WELL below normal.

So, drop to a terminal, do a tcpdump, and lo and behold, lots of packets going out to Apple IP addresses. (I’m sure I could have found this out from ASDM, but I don’t know that interface well enough yet, and I do know tcpdump.)

Turns out when I stuck the SD card from my camera into the iMac and told Photos to download 10GB worth of video I shot today, it dutifully did so, then began uploading that to iCloud. There doesn’t seem to be a setting to permit uploading photos, but not video. With a 12Mbps down / 1.5Mbps up Uverse connection, 10+GB is going to take a WHILE to upload (especially since it was only uploading at about 500Kbps).

It would seem Uverse will only let you use either upload or download at any given time, but not both. If they’re going to screw you like that, they could at least give you a reach around and let you do it at the same SPEED in either direction.

(Of course it’s possible the throttling of download speed is due to the TCP/HTTPS “ACK”s coming back from Apple signaling receipt of the upload packets and readiness for the next upload packet, but those shouldn’t take much bandwidth at all. Barely more than the TCP/IP header and a few bits of payload, I would think.)

Edit: As soon as I stopped (really, paused for one day) the “Photos” upload, my download bandwidth came roaring back: 15Mbps down (from a connection that is technically only supposed to be 12Mbps…) / 1.5Mbps up on and my iTunes downloads were completing in seconds.

How not to “describe” your products on web sites

In which I go off on people who use the same item description on multiple online sales listings, each with a variety of features.

Cisco ASA5505-UL-BUN-K9 ASA 5505 Security Appliance vs Cisco ASA5505-50-BUN-K9 Asa 5505 Security Appliance vs Cisco ASA5505-SEC-BUN-K9 ASA 5500 Series Adaptive Security Router Appliance
Yeah, because I enjoy digging through Cisco’s web site to figure out which features are activated by a “UL-BUN-K9” vs a “50-BUN-K9” vs a “SEC-BUN-K9” license. I already have to know a little bit about Cisco to identify that string of characters refers to the IOS license version in the first place.

Seriously, if you’re going to sell this stuff on Amazon, don’t use the same description (of the hardware) for every one of them. That’s like putting up 5 different Toyota Corollas on a web site, each with a different VIN and price, but the same stock photo and describing them all as “A popular compact car” and leaving it to the potential buyer to decipher the VIN to find out what options each one has. “Let’s see, a ‘C’ in the 10th digit means it’s a 2012 model year, or maybe a 1982…”

Yes, I know. Someone will probably point out that if you’re shopping for Cisco equipment, you should probably be able to decipher the Cisco IOS license codes.

When default allow rules… don’t.

Now that I have a power supply for the Cisco ASA, I’m trying to get it up and running to sit at the edge of my home network, so I can pull the router to be part of my Cisco lab and it’s driving me crazy.
Its default config, as set up by the ASDM setup wizard, is supposed to permit all traffic from the “inside” (high security zone) to the “outside” (low security zone). That’s all fine and dandy, until the default NAT/PAT config, which LOOKS like it says “NAT / PAT all traffic from ‘inside’ to the ‘outside’ IP address”, doesn’t.

I don’t want to spend a lot of time learning the intricacies of the ASA OS right now. I’d rather spend it on IOS and working toward the CCENT / CCNA…

Adding my network to Cacti

Geeking with Cacti.

So, geeking out this evening, adding my entire home network infrastructure to Cacti, to track how it’s doing.
I’d already set up all my VM’s, the Cisco router and Uverse gateway, and my two hosted servers at Rackspace and Linode months ago.
Tonight I added my ESXi server and both Cisco switches. Of course, there’s not much to see on most of the switch ports, since the only port in use on one of them is the uplink to the other switch (which means the only traffic on that port is Cacti polling its SNMP daemon). But it’s interesting, nonetheless.
I’ll probably do the same on the Cisco lab I build for CCNA study.

More Windows patching

More Win2k8 patching

Well it looks like that Win2k8 server successfully patched. Or at least got past the .NET Framework 3.5 patches that were hanging it up. Now I’m going through a series of .NET Framework 4.0 patches. Making snapshots every step of the way for quick rollback if it decides to throw a wobbly at any time.

Windows patching hell

A Unix sys admin struggling with patching Windows servers.

Never thought I’d end up babysitting MS Windows server patching, pulling my hair out as it takes an hour or more to install 100+ patches, reboots, spends 30 minutes “finalizing” the updates, declares the install “failed”, then spends 90 more minutes “reverting” it before rebooting again; wash, rinse, repeat, until you successfully tell it which patch NOT to install.
I’m a Unix admin for Pete’s sake. There’s a reason I don’t (normally) do Windows. The only time a Linux server takes so long to boot is when it’s running on bare metal that takes 30 minutes to POST and/or it has lots of LUNs assigned and it takes a while to sort them all out.
I was hoping to have this 2008 server to a state that I could start installing the software it needs by the end of the day.

FINALLY it finished reverting and rebooting. Luckily it didn’t back out 100+ updates. The only one left to install is the one troublesome update that should be done last, because it causes this problem if you don’t.

Nope, spoke too soon. Had it re-check for updates and it now says ALL of the updates from the last go round still need to be installed. But now I see there’s a second update that partners with the known one, so hopefully de-selecting that one as well will fix the issue.

(And another in the list of firsts that came with this job: never thought I’d be adding a new “Windows” sub-category under the System Administration category of this blog.)

The Fort Worth Botanic Gardens, Memorial Day, 2015

Fort Worth Botanic Gardens, Memorial Day weekend, 2015, with Kem, Martin, Mandie and Lyla.



IPv6 has come to Uverse

More than a year after my 3800HGV-B Uverse modem actually acknowledged that such a thing as “IPv6” existed, it appears it is actually making it available for use.
Now to see if I can get my Cisco router to play nice…

Uverse modem IPv6 configuration

The Spam Folder

Server-side vs. client-side spam filtering.

After my post about what causes mail to go to the spam folder, a reader asked:

So why did I have to tell my new computer and new email system a dozen times that Facebook posts of various types were not spam before I could get it to stop throwing them all in my spam folder.

Continue reading “The Spam Folder”

Blog crashing browser

My own blog crashes my own browser.

Well that’s just lovely. My own blog is crashing my web browser. I can access the “admin” page (and thus make posts) just fine, but loading the main screen takes forever and eventually crashes that browser tab. This is a Linux box running an older Core 2 Duo CPU and 4 gigs of RAM. It should take more than a blog page to do that.
Suspicion so far falls on the cross-posting plugin that makes my blog posts appear on Facebook, Twitter, Google+, etc., and allows people visiting my blog to “share” posts on those services. Watching the Chrome developer tools while the page loads, it’s taking forever to pull up links to sites I’ve certainly never directly linked to, such as “”.
Oddly enough, some of the worst offenders are google syndication and google ad services. I don’t run google ad words or on my blog…

What causes email to go to the spam folder?

A quick guide to some of the things ISPs look for to decide if it should go to the Inbox or the spam folder.

Recently a former colleague reached out to me on Linkedin to ask:

I have a question regarding email delivery. What cause emails to go into someone’s spam email box? I understand that there maybe(sic) filters that looks at the content to make that determination. I would think there are many other factors.

I replied:

Yes, there’s quite a number of things that can cause mail to go to the spam folder. The contents of the message are a big factor. Of course every ISP applies different rules, so what causes mail to go into the spam folder of a Yahoo! mailbox will differ from what matches the rules on Gmail, or Hotmail, etc. Some ISPs will allow certain mail through, but put it in the Spam folder that other ISPs would just reject outright when the sending mail server connects to send it.

Are you having a specific problem that you’re trying to solve?

He responded:

I don’t have a specific problem. Just interested in understanding how spam filtering works. Since I know an expert, why not ask directly.

Are there headers the ISP look at to validate the email?

I wrote up a quick primer on some of the esoterica of spam filtering.
This is by no means comprehensive, and not guaranteed 100% accurate.

Continue reading “What causes email to go to the spam folder?”

Ansible and Variables

A basic explanation of Ansible and a discussion of variable usage.

I’ve been talking about Ansible on Facebook lately and the other day a friend asked me about Ansible and variables. I gave her a quick explanation, then told her I’d do a more thorough writeup that would be easier to follow than my “stream of consciousness” explanation given in FB messages.
It occurred to me that I’m planning to do a “lunch and learn” on Ansible at work soon, and I could re-use the same material, so I’ll just post this publicly. I plan for this to be the first in a series on DevOps, integration, idempotency, configuration management and Ansible. So without further ado…

For those who have not seen my posts on Facebook, Ansible is a configuration management tool for provisioning, deploying, and configuring servers and applications. It is one of a series of such tools that have come out in the last few years, such as Puppet, Chef and Saltstack. It is designed to be fast, easy to use, powerful, efficient and secure. It is serverless and agentless. It aims to be idempotent.

I can’t speak to Puppet, Chef or Saltstack as I’ve never used them.

Addressing these one at a time, not necessarily in the order presented above:

  • Secure
  • Everything is done through SSH tunnels. No passwords or configuration files are ever sent over the network in the clear. Set up your SSH keys and you don’t have to worry about typing passwords, either.
    There is no agent software running on the managed machines, so there’s nothing to hack.

  • Easy to use
  • “I wrote Ansible because none of the existing tools fit my brain. I wanted a tool that I could not use for 6 months, come back later, and still remember how it worked.”
    Michael DeHaan
    Ansible project founder

  • Efficient
  • No agents, just SSH (or PowerShell with Windows, but I won’t get into that.) The only software required on the managed machine is an SSH daemon and Python.

  • Serverless and Agentless
  • As I’ve already mentioned, there’s no agent running on the managed server. If you can ssh into it and run Python, you’re good to go.
    There is no central server, full of manifests, menus, etc. You can run it from your desktop or laptop. Again, if you have Python, you’re good to go (Ansible can use the system’s OpenSSH client, or Paramiko, a pure-Python SSH implementation). Just make sure you back up your playbooks and roles. Git is a great place for this!

  • idempotency
  • This is one of the most important! It means you should be able to run your Ansible playbook against a managed host at any time without breaking it. If anything is not configured the way it is supposed to be, Ansible will put it back the way it should be. Shell scripts, by contrast, have to be written very carefully to detect whether something doesn’t need to be done. It’s also notoriously difficult to modify files with shell scripts (unless you’re really good with tools like sed and awk, or perhaps Perl…)
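That shell-script caveat is easy to demonstrate. In this tiny sketch (the file name and setting are invented), the careless version appends a duplicate line every run, while the guarded version can be run any number of times and leaves the file the same:

```shell
printf 'proxy=http://proxy.example.com:8080\n' > yum.conf

# Careless: appends a duplicate line every time it runs.
naive()   { echo 'keepcache=1' >> yum.conf; }
# Guarded (idempotent): only appends when the exact line is missing.
careful() { grep -qx 'keepcache=1' yum.conf || echo 'keepcache=1' >> yum.conf; }

careful; careful; careful
grep -c 'keepcache=1' yum.conf    # still 1, no matter how many runs
```

Ansible modules bake that “check before changing” logic in, which is the whole point of idempotency.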

Some vocabulary before we begin:

  • playbook
  • A file defining which hosts you want to manipulate and what roles you want to apply to those hosts, as well as what tasks you want to run.

  • roles
  • A defined list of tasks to be run when the role is called, as well as any files to be installed, templates to be applied, dependency information, etc.

  • inventory
  • A file listing every server you will manage with Ansible, and what groups they belong to. A host can belong to any number of groups, including none at all, and groups can be members of other groups.

  • host_vars & group_vars
  • Directories with files containing variables specific to certain hosts (host_vars) and host groups (group_vars). These variables are used in your tasks and roles.
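To make the inventory idea concrete, here is a toy inventory in Ansible’s INI format, written via a shell heredoc. The host and group names are invented, though a couple of them deliberately match the examples used later in this post:

```shell
cat > hosts <<'EOF'
[all_hadoop]
namenode1
datanode1
datanode2

[dev_project]
devapp1

# groups can be members of other groups:
[lab:children]
all_hadoop
dev_project
EOF
# then, for example:  ansible all_hadoop -i hosts -m ping
```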

Now, on with the discussion of variables. Here was Kathryn’s original question:

How do variables work with dependencies in roles? Meaning, if a role is dependant on another, can it access the variables of the other at run time?

I started to answer with an example we use at work: we have a “common” role that sets up some users with specific UIDs that we want on all our servers, and an “apache” role that depends on that common role (e.g., it needs the www user created by common). Kathryn further asked:

Okay, say “application” depends on “common” and “common” has default variables… would “application” pick up “common”‘s defaults?

Yes! For example, we have in our “common” role, a task with a file which pushes out customized /etc/sudoers.d files, depending on what the server will do, what environment it will be in, etc. One of the tasks looks like this:

NOTE: the language used to write Ansible files, YAML, is whitespace-sensitive, but due to the limitations of HTML and my WordPress config, the whitespace in my examples may not survive intact. Do not just cut and paste and expect it to work; verify the leading spacing on all lines.

- name: Sudoers - push sudoers.d/hadoop_conf
  template: >
  when: hadoop_cluster is defined

Note the last line: “when: hadoop_cluster is defined”. “hadoop_cluster” is a variable. This variable isn’t actually defined in our role, but rather in the playbook, or in a host_var or group_var file. In this case we have a group_vars/all_hadoop file. Any task run on any server that is part of the “all_hadoop” group in the inventory will have the variables defined in this group_var file. This file contains:
# file: group_vars/all_hadoop

hadoop_cluster: true

In this case “hadoop_cluster” is defined, and has a value of “true”. Our task above doesn’t care about the value, only that the variable is defined at all. If I run the above task on the server “namenode1”, and “namenode1” is in a group called “all_hadoop” in my inventory file, it will inherit the variables in group_vars/all_hadoop, “hadoop_cluster” is defined, so the task will be run.
Another role or task, which might be part of the “common” role or in a completely different role, will be able to access the same variable and act on it. That role / task might actually care about the value of the variable, and would be able to see that value. Or it might just care that the variable is defined.

Another example: I built a role for a set of servers at work. In our development environment we wanted to allow the developers actually writing the code for the applications to run on those servers to be able to use sudo to gain root access. I added another task to the same file as our Hadoop example above:
- name: Sudoers - push sudoers.d/nova_conf
  template: >
  when: allow_project_sudo is defined

In our inventory, the development servers for this project are in a “dev_project” group, and there’s a group_vars/dev_project file that defines “allow_project_sudo”. We also have a “production_project” group in our inventory which contains the production servers for this project. The “allow_project_sudo” variable is NOT defined in group_vars/production_project, so that sudoers file is not pushed out.

Directly addressing Kathryn’s question about one role being able to call variables “defined” by another role (although I’ve already addressed the fact that roles don’t really “define” variables, they just access them), I have this task:
- name: Build ssh key files
  assemble: >
    src={{ item.user }}_ssh_keys
    dest=/home/{{ item.user }}/.ssh/authorized_keys
    owner={{ item.user }}
    group={{ }}
  with_items:
    - { user: 'projectuser', group: 'projectgroup' }
  when: allow_project_sudo is defined

Again, we look to see if “allow_project_sudo” is defined; if so, we build a .ssh/authorized_keys file for the user “projectuser”, allowing all those same devs to ssh into the server as that user. This task also includes the intriguing and useful “with_items”. This allows for a form of looping: the task is performed once for each item listed in the “with_items” block, redefining the “item.user” and “” variables used in the src, dest, owner and group lines of the task.
We actually define two variables in our “with_items”. Each line in “with_items” is an “item”. In this case each item has two key/value pairs (basically an associative array), and we can reference them by key. “item.user” has the value “projectuser”. “” has the value “projectgroup”. Thus our “assemble” becomes, on the first iteration of “with_items”:

assemble: >
  src=projectuser_ssh_keys
  dest=/home/projectuser/.ssh/authorized_keys
  owner=projectuser
  group=projectgroup
This basically says: grab all the files (presumably ssh key files) in the directory “projectuser_ssh_keys” (stored inside a directory in our role) and build, on the managed host, a file called “authorized_keys” in the directory /home/projectuser/.ssh; make that file owned by projectuser:projectgroup, with -rw------- permissions. Oh, and back up the original file first, just in case.

Testing out Windows Live Writer

Just messing around with the Windows Live blog client.

Not really a big fan of MS freebie "non-commercial" tools, but Windows Live Mail is a big step up from Outlook Express. Just kind of curious how this works.

More Geocaching

Heading out for an afternoon of geocaching with Kem.
We’re going to try to hit 10 caches in one day!


I got a new Android phone the other day (a T-Mobile Vibrant / Samsung Galaxy S) that comes pre-installed with Swype. I’m not as fast or proficient as the guy in the demo videos yet, but it’s a hell of a lot faster than tapping.
Anyone else have it, and what do you think of it?

Manipulating maildirs at the filesystem level

Let’s hear it for being able to manipulate your mail directory structure at the filesystem level and still be able to access it through Thunderbird.
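Since Maildir stores one message per file, plain filesystem tools are all you need. Here’s a minimal sketch; the directory, the “.Archive” folder name, and the message filename are all made up for illustration, and I’m using a scratch directory so it’s safe to run anywhere:

```shell
# Build a throwaway Maildir with one fake message in the inbox.
MAILDIR=$(mktemp -d)/Maildir
mkdir -p "$MAILDIR/cur" "$MAILDIR/new" "$MAILDIR/tmp"
touch "$MAILDIR/cur/1256000000.12345.host:2,S"      # a fake message file

# Create a new subfolder by hand, following the Maildir++ dot-folder
# convention (each folder needs its own cur/new/tmp).
mkdir -p "$MAILDIR/.Archive/cur" "$MAILDIR/.Archive/new" "$MAILDIR/.Archive/tmp"

# Move the message into it. mv within one filesystem is atomic, so a mail
# client or IMAP server never sees a half-written message.
mv "$MAILDIR/cur/1256000000.12345.host:2,S" "$MAILDIR/.Archive/cur/"
```

The same trick works for bulk reorganizing: create the folder skeleton, mv the message files, and the mail client just picks up the new layout.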


DJBDNS must run as two separate instances to bind to both IPv4 and IPv6 addresses.

Tip: When patching DJB’s “dnscache” for IPv6, you can’t just tell it to bind to both the IPv4 and IPv6 addresses. You will need to run two separate instances, one binding to the IPv4 address, one binding to the IPv6 address.
I haven’t checked, but I’m betting my tinydns instance is also not binding to both addresses and will have to be run as two separate instances as well.
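Under daemontools that means two service directories, one per address family. A sketch of the setup, assuming an IPv6-patched dnscache; the paths and addresses here are examples, not my actual config:

```shell
# One dnscache instance per address family (dnscache-conf takes the
# service account, the log account, a directory, and the IP to bind).
dnscache-conf dnscache dnslog /etc/dnscache-v4 192.0.2.53
dnscache-conf dnscache dnslog /etc/dnscache-v6 2001:db8::53

# Activate both under daemontools; svscan starts each one separately.
ln -s /etc/dnscache-v4 /etc/dnscache-v6 /service/
```

Each instance gets its own cache and log, which is a minor waste of memory but keeps the setup dead simple.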

The AT&T tech just finished installing the U-verse modem and I just completed the “registration”. The first thing I did, of course, was run a speed test.

Speed test

Not bad. Not bad at all, considering I was quoted “12Mbps”; 10Mbps actual is pretty good.

Fixing Vmware virtual disks

Having hosed a Gentoo guest on a VMware ESXi host by filling the partition (which VMware really doesn’t like), then attempting to fix it by mounting the partition in another guest and fsck’ing it first, I got the error message “the parent virtual disk has been modified since the child was created” when I tried to boot the original Gentoo guest.
Googling pointed me to a nice post, “Recovering VMware snapshot after parent changed”.
Step two lists the following caveat:

“Look at the size of the snapshot virtual hard disk. If it is more than 2GB and you’re running a 32-bit OS, or it is more than the amount of memory that you have available, the following method will probably not work. You’re welcome to try though.”

I found this wasn’t an issue, as it appears (at least as of ESXi 4.x) VMware has separated the vmdk “header” and “data”, putting the “header” in the “hostname.vmdk” file and the actual data in “hostname-flat.vmdk”. The original vmdk is now only a couple of hundred bytes and easily edited in vi. Grabbing the CID from Gentoo.vmdk and setting parentCID in Gentoo000001.vmdk to match had me back up and running, at least to the point that I could boot the Gentoo guest from an Ubuntu ISO, access the file system, and clean it up (I moved /home to a new partition, fixing the space issue).
Next time, I’ll just be smart and build all systems with LVM, then I can just add more physical extents when I need more space.
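The edit itself boils down to two lines of shell. Here’s a sketch against fake, minimal descriptor files (real .vmdk descriptors contain many more fields, and your snapshot file may be named differently), just to show the CID-into-parentCID copy:

```shell
# Stand-in descriptor files; real ones have many more lines.
workdir=$(mktemp -d)
printf 'CID=abcd1234\nparentCID=ffffffff\n' > "$workdir/Gentoo.vmdk"
printf 'CID=11112222\nparentCID=deadbeef\n' > "$workdir/Gentoo000001.vmdk"

# Read the CID from the parent descriptor...
cid=$(sed -n 's/^CID=//p' "$workdir/Gentoo.vmdk")

# ...and write it into the snapshot's parentCID field.
sed -i "s/^parentCID=.*/parentCID=$cid/" "$workdir/Gentoo000001.vmdk"

grep '^parentCID=' "$workdir/Gentoo000001.vmdk"    # parentCID=abcd1234
```

Same effect as editing the file in vi; back up both descriptors before touching them.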

Twitter Updates for 2009-11-22

  • Back from honeymoon, just opened wedding gifts. #

Powered by Twitter Tools

Google’s Holiday Gift: Free Wi-Fi at Airports

Cool. Now why couldn’t they have done this YESTERDAY, when it was useful to me?

Google’s Holiday Gift: Free Wi-Fi at Airports.

Twitter Updates for 2009-11-06

  • RT @feather802 Currently fulfilling my primary purpose in life: Cat Bed – We were doing that this morning, too. #

Powered by Twitter Tools

Twitter Updates for 2009-11-05

  • Rep. McCaul is an idiot making any such statements without more information. #FtHoodShootings #

Powered by Twitter Tools

Twitter Updates for 2009-11-03

  • RT @tomservo79 Marriage is a Lovecraftian ritual meant to summon Yogsothoth. Gay marriages are all about Cthulu. The divide is pretty clear. #
  • RT @choochoobear It might be a little longer before the drippy faucet in my tub is fixed. – I hear they have pills for that now. #

Powered by Twitter Tools

Twitter Updates for 2009-11-02

  • RT @nprpolitics Tech alert: The @nprtechteam is live-tweeting's switch from Oracle to MySQL tonight. #
  • RT @BrentSpiner the only rolls I regret have complex carbs. — now there's an actor who knows where his bread is buttered. #
  • RT @martinbogo @strongbow Heh, and Mr. Spiner is looking a bit like you and me these days … – now be nice! #

Powered by Twitter Tools

Twitter Updates for 2009-10-31

  • Who wants a Google Wave invite? #

Powered by Twitter Tools

Twitter Updates for 2009-10-29

  • #tmh20 OK, get on with the zombie gag already. #
  • Obviously: there's a Black Lantern controlling them. #tmh20 #
  • #tmh20 don't break the 4th wall, guys! #
  • It's a 20 yo Mercedes. Hotwire the damn thing. #tmh20 #
  • I hope you have some sanitizer. #tmh20 #
  • Zombies and "your moma" cracks? #tmh20 #
  • He's eating cat and rat brains, not lamb and human brain. #tmh20 #
  • No! Don't let him get behind you! #tmh20 #

Powered by Twitter Tools

Twitter Updates for 2009-10-23

  • reeeeeevertb! Ouch! Can barely understand what you're saying. #tmh19 #
  • I, for one, am glad zombies are replacing "vampires" as the chic horror meme #
  • That's just filled with AWESOME! RT @grantimahara YES! This is exactly the cake that I wanted!! RT @thinkgeek: #

Powered by Twitter Tools

Twitter Updates for 2009-10-22

  • Playing Farkle #
  • is seriously hooked on Facebook games. #

Powered by Twitter Tools

Twitter Updates for 2009-10-21

  • Dammit, I've been wanting to meet @choochoobear for a long time, he's going to be at a local con… ON MY WEDDING DAY! TANJ! #

Powered by Twitter Tools

Twitter Updates for 2009-10-20

  • Just finished addressing a bunch of wedding invitations. Now I know more than I want to about MS Office mail merge. #

Powered by Twitter Tools


Nifty tool I just read about that tells you what will happen next time you reboot your Windows system. The idea being when you install an app that insists you must reboot to complete the install, this tool will tell you what’s going to happen.
Read about it here:

Twitter Updates for 2009-10-13

  • Stop
    Andrew Edelstein #

Powered by Twitter Tools

Guardian blocked from reporting Parliament

Guardian newspaper gagged from reporting the proceedings of Parliament

For the first time in history, a British newspaper has been blocked from reporting the proceedings of Parliament.
A law firm, Carter-Ruck, representing the oil company Trafigura, successfully obtained a gag order preventing the Guardian from reporting that a member of Parliament has asked a question of a cabinet minister regarding the company’s dumping of toxic waste in Ivory Coast.
This is apparently possible due to the creation of the British Supreme Court earlier this month.

Twitter Updates for 2009-10-09

  • RT @supersiblings Wha… wha? Obama got a noble prize for NASA bombing the moon because Marge Simpson is in Playboy? #twittermash #
  • Anyone on Google Wave? I can haz invite? #
  • /me waves at Nija (unfortunately, not Google Wave…) #

Powered by Twitter Tools

Twitter Updates for 2009-10-08

  • Working on learning Perl once and for all. #

Powered by Twitter Tools

Twitter Updates for 2009-10-06

  • Apparently someone really wants to get into my gmail account. 10 attempts to reset my password since 3:00. #
  • I love it when customers have a "DOH!" moment and realize the error they're seeing is THEIR fault. #

Powered by Twitter Tools

Twitter Updates for 2009-09-28

  • Those of you on LJ, check your friends page for important news re: wedding. #
  • Enjoying my lovely dinner from Nikki's #
  • Dinner done (delicious Chicken Aristocrat. If you live in mid-cities, I really recommend Nikki's). Time for WoW. #

Powered by Twitter Tools

Twitter Updates for 2009-09-25

Powered by Twitter Tools

Twitter Updates for 2009-09-23

  • RT @joedecker My apologies for the next tweet, but I actually want a 7D #
  • Win a new Canon 7D (or 2500 photo scans) from @ScanCafe & Scott Bourne. Pls RT. Details here: #
  • Win a new Canon 5DMKII (or $2500 Gift Cert) from @OPGear & Scott Bourne. Pls RT. Details here: #
  • I wish Yoono worked for LiveJournal the way it works for Twitter. #
  • Yooono devs/support follows twitter. Who knew? #

Powered by Twitter Tools

Twitter Updates for 2009-09-21

Powered by Twitter Tools

Twitter Updates for 2009-09-20

  • Heading out for late dinner with Kestrel #

Powered by Twitter Tools

Twitter Updates for 2009-09-18

  • Question for the cloud: When you find something you want to blog about, but don't have time right now, how do you mark/save it? #
  • Hmm… My profile pic disappeared mysteriously… #

Powered by Twitter Tools

Question for the blogosphere

When you run into a site or blog post somewhere on the ‘Net that you want to blog about, but you don’t have time to do so right now, what tool do you use to save or mark it to come back to later, or remind yourself to write your blog post?
For example, amuse blogged about a post by Jason. Some thoughts occurred to me and I decided I wanted to write a full blog post rather than just a quick tweet, but I’m on the phone with a customer right now (luckily he’s busy adjusting his firewall). I wanted to save both links to include in my blog post, but couldn’t think of a way to do that easily. OK, I’ve just included them in THIS blog post, but it occurred to me that those of you who spend a lot of time surfing, blogging, and commenting on other blogs (via your own posts) must have some sort of tool or system to say “I want to blog about this, so let’s save it in my list of things to write about later today / this week, where it’s easy to come back to”.