Windows Subsystem for Linux / Bash on Ubuntu on Windows

Installing Bash on Ubuntu on Windows while behind a proxy server doesn’t work.

I’m reinstalling Bash on Ubuntu on Windows on my work laptop at home, where I’m not behind the work firewall and don’t need the work proxy server.
WSL happily activates and downloads Ubuntu from the Windows store while at work, but once it fires up Ubuntu and starts running Apt to install updates, it chokes, because Ubuntu, and Apt, aren’t configured to use the proxy server. This means I have to cancel the install, which leaves a system that works but never completes its setup. I can fire up Bash, but it always logs in as root (it never gets to the user setup step). Once logged in I can configure Apt to use the proxy, set the proxy environment variables, and run the apt updates, but the system still hasn’t gone through the full install process cleanly. This is a weakness in Microsoft / Canonical’s design: Ubuntu should either inherit the proxy config from Windows, or offer a way to configure it during setup, so it can perform a clean install.
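For reference, the manual workaround at work boils down to something like this (a sketch with a placeholder proxy address, not my actual config):

# Placeholder proxy address -- substitute your own.
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"

# Point Apt at the proxy so apt-get can reach the Ubuntu repos.
sudo tee /etc/apt/apt.conf.d/95proxy > /dev/null <<'EOF'
Acquire::http::Proxy "http://proxy.example.com:8080";
Acquire::https::Proxy "http://proxy.example.com:8080";
EOF

sudo apt-get update && sudo apt-get upgrade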
I figured I’d give it a try from home, where it doesn’t need to go through a proxy, and see if it will properly complete the install. This worked perfectly on my personal laptop.

Edited: We have success! The prompt for a username means we got past the blocker seen at work.
Bash on Ubuntu on Windows install username prompt

Bad SSL security

I see GNS3 Academy still hasn’t fixed their SSL certificate.

For a site teaching about networking, which includes network security, this is head-shakingly bad.

Minor Home Network Rewiring

After some minor home network rewiring (two additional Cat6 cables run from the network rack to the desk; all Ethernet cables re-tipped with keystone jacks, installed in a 4-gang surface-mount box; patch cables from the computers to the new keystone jacks; unfortunately, one of the original cables running between the rack and my desk is only 20′, which is now too short for the new path, so there are only 4 actual connections to the desk for now, and I’ll replace that later), I’m rather pleased with my Internet performance today:

23 ms ping, 49.45 Mbps Download, 5.44 Mbps Upload, AT&T Internet, Keller, TX, < 50 mi
Speedtest.net

To Do: Install patch panel in the network rack, re-tip these cables into the back of the patch panel, install patch cables from panel to switch. Re-tip the cables going from the network rack to the Cisco lab bench the same way. Install some split loom or spiral conduit around these cable runs to keep them dressed neatly.

Keystone Jacks
4-gang surface mount box
25′ cat-6 Ethernet cables
5′ split loom. I should have ordered longer.

Day in the life of a Systems Administrator

Day in the life of a Unix Systems Administrator

Wow, been almost a year since I blogged anything. I’m getting lazy.

So what’s the daily life of a systems administrator like? Here was today:

The plan coming in this morning: begin the quarterly “Vulnerability audit report”.

What did I do?
Windows server starts alerting on CPU at midnight, again. We fixed the problem on Tues. Why is it alerting again?
Of course it corrects itself before I can get logged in and doesn’t go off again all day. Send email to the person responsible for the application on that server to ask if the app was running any unusually CPU-intensive jobs, with a screenshot showing the times the CPU alerts went off. Get a response of “nothing unusual”. As usual.

We updated the root password on all Unix servers last week. Get a list of 44 systems from coworker that still have the old root password.
Check the list, confirm all still have old root password.
Check the list against systems that were updated via Ansible. All on the Ansible list. No failures when running the Ansible playbook to update the root password. All spot-checks at the time showed the new root password was in effect and the task was working as expected.
Begin investigating why these systems still have the old root password.
Speculation during team scrum that Puppet might be resetting the root password.
Begin testing hypothesis that root password was, in fact, changed, but something else is re-setting it back to the old password.
Manually update root password on one host. Monitor /etc/shadow to see if it changes again after setting password. (watch -d ls -l /etc/shadow)
Wait.
Wait.
Wait some more.
Wait 27 minutes, BOOM! /etc/shadow gets touched.
Investigate to see if Puppet is the culprit. I know nothing about Puppet. I’m an Ansible guy. The Puppet guy (who knows just enough to have set up the server, built some manifests, and gotten Puppet to update root the last time the root password was changed, before I started working here) is out today.
Look at log files in /var/log. Look at files in /etc/puppet on puppet server. Try to find anything that mentions “passw(or)?d&&root” (did I mention I’m not a puppet guy?). Find a manifest that says something about setting the root password, but it references a variable. Can’t find where the value of that variable is set.
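(For fellow non-Puppet people: brute-force grep is about all I had. Something along these lines; the paths are from memory and will vary with Puppet version and setup.)

# Search the manifests and modules on the Puppet server for anything that
# mentions both "root" and some spelling of "password". Adjust paths to taste.
grep -rniE 'passw(or)?d' /etc/puppet/manifests /etc/puppet/modules 2>/dev/null | grep -i root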
Look some more at the target host. See in the log files that it’s failing to talk to the Puppet server, so it keeps enforcing the last set of configuration it got. Great, fixing this on the Puppet server won’t necessarily fix all the clients that quietly lost connectivity without anyone noticing (entropy can be a bitch).
Begin looking at what to change on the client (other than just “shut down the Puppet service” and “kill it with fire!”). Realize it’s much faster to surf all the files and directories involved with “mc”.
Midnight Commander not installed. Simple enough, “yum install mc”.
Yum: “What, you want to install something in the base RHEL repo? HAH! Entropy, baby! I have no idea what’s in the base repo.”.
Me: “Hold my beer.” (This is Texas, y’all.)
(No, not really. CTO frowns on drinking during work hours, or drinking while logged into production systems. Or just drinking while logged in…)
OK, so more like:
Me: “Hold my Diet Coke.”
Yum: “Red Hat repos? We don’t need no steeeenking Red Hat repos!”

Start updating the Yum repo cache. Run out of space in /var. Discover that when this server was built, /var was made much too small. Start looking at what to clean up.
Fix logrotate to compress log files when it rotates them, manually compress old log files.
/var/lib/clamav is one of the larger directories. Oh, look, several failed DB updates that never got cleaned up.
Clean up the directory, run freshclam. Gee, ClamAV DB downloads sure are taking a long time given that it’s got a GigE connection to the local DatabaseMirror. Check the freshclam config. Yup, local mirror is configured… external mirror ALSO configured. Dang it. Fix that. ClamAV DB updates now much faster.
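(Spotting the stray mirror was nothing more sophisticated than grepping the config; the file may live at /etc/freshclam.conf or /etc/clamav/freshclam.conf depending on the distro.)

# Show the mirror lines currently active in freshclam's config.
grep -E '^(DatabaseMirror|PrivateMirror)' /etc/freshclam.conf

# After commenting out the external mirror, pull fresh definitions again.
freshclam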
Run yum repo cache update again. Run out of disk space again. Wait… why didn’t Nagios alert that /var was full?
Oh, look, when /var was made a separate partition, no one updated Nagios to monitor it.
Log into Nagios server to update config file for this host. Check changes into Git. Discover there have been a number of other Nagios changes lately that haven’t been checked into Git. Spend half an hour running git status / diff / add / delete / commit / push to get all changes checked into Git repo.
Restart the Nagios server (it doesn’t like reloads; every once in a while it goes bonkers and sends out “The sky is falling! ALL services on ALL servers are down! Run for your lives! The End is nigh!” if you try a simple reload).
Hmm… if Nagios is out of date for this host, is Cacti…
Update yum cache again. Run out of disk space again.
Good thing this is a VM, with LVM. Add another drive in vSphere, pvcreate, swing your partner, vgextend, lvresize -r, do-si-do!
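For anyone playing along at home, the square dance goes roughly like this. The device, volume group, and LV names are examples, and lvresize -r assumes a filesystem that can be grown in place:

# New disk presented from vSphere shows up as, say, /dev/sdb.
pvcreate /dev/sdb                     # initialize it as an LVM physical volume
vgextend rootvg /dev/sdb              # add it to the volume group that holds /var
lvresize -r -L +10G /dev/rootvg/var   # grow the LV and resize the filesystem in one go
df -h /var                            # confirm the new space is there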
yum repo cache update… FINALLY!
What was I doing again? Oh, right, install Midnight Commander…
Why? Oh yeah, searching for a Puppet file for….?
Right, root password override.

Every time I log into a server it seems like I find a half dozen things that need fixing. Makes you not want to log into anything, so you can actually get some work done. Oh, right, entropy…

LoneStar Overnight’s broken web page

I told them I’d give them 24 hours to respond. I actually gave them over a month. The only response was the automated “We’ve received your tech support email.” They have, to date, done jack all to fix any of the problems I alerted them to. So I’m going public.

To whom it may concern,

There are serious issues with the security of your public web site.

A search on Google and Duck-Duck-Go for “Lonestar Overnight” or “Lone Star Overnight” results in a list of links, the first of which points to www.lso.com. The rest point to various pages below www.lonestarovernight.com, all of which appear to be on the same server. However the GoDaddy-signed SSL / TLS certificate installed on that server contains only the name “*.lso.com” and the SANs (“Subject Alternative Names”) “*.lso.com” and “lso.com”. There is no SAN for any lonestarovernight.com hostname.

A potential customer or user clicking on any of the www.lonestarovernight.com links would receive a message from their browser that the certificate does not match the hostname. This can cost you business, as customers look elsewhere, to a competitor whose web server is perceived to be “trustable”.

You’ve already lost business from anyone looking for “lonestarovernight.com”. BTW, your web page does not contain the words “Lone Star Overnight” anywhere. Obviously, like IBM, SGI, AT&T and other companies of decades past, you have pivoted to branding yourselves as “LSO”. You might communicate this to Amazon, in order to properly identify your service when they select you to perform delivery of their packages. At this time Amazon states that the courier is “Lone Star Overnight”.

The last is purely a marketing glitch, however, there are far more serious problems with your web site’s security.

You are using very weak, obsolete encryption. This makes your customers vulnerable to man-in-the-middle attacks.
You are still using 512-bit “export” grade encryption, and are potentially vulnerable to the FREAK attack.
You ARE vulnerable to the POODLE attack.
You are still using SSL version 3.0, TLSv1.0 and TLSv1.1, all of which are considered obsolete.
You do NOT support the current TLSv1.2 standard.
You are still offering the old, vulnerable RC4 ciphers.
You do not support secure renegotiation.
You do not support Forward Secrecy.

I am a highly skilled Unix Systems Administrator for a Fortune 500 company. These are BASIC items. I identified most of them using the well-regarded, and free, SSL security scanning tool from Qualys SSL Labs (qualys.com) to review your site, the results of which can be viewed here and here. To be clear: this is NOT an attempt to extort money from you or to solicit my skills to you. IT security is a highly specialized area of IT administration and I do not possess the knowledge, skills, or interest to do it justice. I am contacting you as a concerned “customer”, who regularly receives Amazon deliveries facilitated by your service. If you don’t have an IT security expert on your staff, I would highly recommend you get in touch with an IT security consultant who can assist you with auditing your systems’ security and developing a remediation plan.

If there is no response to this message within 24 hours, I will post it publicly on a number of social media sites, to warn other potential customers that they should not use your web site.

Also, the tracking number for my recent “same day delivery” given by Amazon does not work on your site. It doesn’t even give an error that it could not be found. It simply reloads the page. Someone should probably look into that glitch on your site as well. I’m probably not the only Amazon customer awaiting a delivery who is trying to track their package.

For the record, it appears they fixed the tracking number glitch. At least as of today I am able to get the status of the package I’m waiting for. While typing this, I heard a car door outside and my delivery was on my doorstep when I checked.
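(For the technically curious: most of these checks can be reproduced from the command line instead of a web-based scanner. A rough sketch, against a placeholder hostname, and of course only probe servers you have permission to test:)

# Enumerate the SSL/TLS protocol versions and cipher suites a server will negotiate.
nmap --script ssl-enum-ciphers -p 443 www.example.com

# Older OpenSSL builds can also attempt a raw SSLv3 handshake (newer builds have
# SSLv3 compiled out, so this may simply refuse to run).
openssl s_client -connect www.example.com:443 -ssl3 < /dev/null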

Uverse speed throttling

Large uploads on Uverse kill download bandwidth.

So it turns out if you’re uploading something on a Uverse connection, they kill your download bandwidth.

I had been poking around in iTunes, looking at the section of the store that shows what other members of your Apple Family have “purchased” (in quotes, because even “free” apps and music show up as “purchases”). The Wife had purchased several albums (or at least songs) that I would not have purchased myself, but wouldn’t mind having a copy of, since we’d already paid for them. I clicked to download them (mostly songs from our high school days), then went to watch some YouTube videos. Normally we have enough bandwidth to handle this just fine, but the video kept stuttering (play for two seconds, pause for four seconds to buffer the next two seconds of video, repeat). I switched back to iTunes and saw that what should have taken about 3 seconds per song was predicting six MINUTES or more.
The Cisco ASA showed a lot of OUTGOING bandwidth being used, and very little incoming. Well that was odd. I wasn’t uploading anything that I knew of.

Speedtest showed my download speed to be 5Mbps and my upload about 77Kbps. WELL below normal.

So, drop to a terminal, do a tcpdump and, lo and behold, lots of packets going out to Apple IP addresses (I’m sure I could have found this out from ASDM, but I don’t know the interface well enough yet and I do know tcpdump).
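The capture was nothing fancy. Something along these lines, with the interface name and LAN range being whatever your machine actually uses:

# Summary-mode capture of TCP traffic headed off the local network.
# -n skips DNS lookups so the output keeps up; en0 is a Mac-style interface name,
# and 192.168.0.0/16 stands in for your own LAN range.
sudo tcpdump -ni en0 'tcp and not dst net 192.168.0.0/16'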

Turns out when I stuck the SD card from my camera into the iMac and told Photos to download 10GB worth of video I shot today, it dutifully did so, then began uploading that to iCloud. There doesn’t seem to be a setting to permit uploading photos, but not video. With a 12Mbps down / 1.5Mbps up Uverse connection, 10+GB is going to take a WHILE to upload (especially since it was only uploading at about 500Kbps).

It would seem Uverse will only let you use either upload or download at any given time, but not both. If they’re going to screw you like that, they could at least give you a reach around and let you do it at the same SPEED in either direction.

(Of course it’s possible the throttling of download speed is due to the TCP/HTTPS “ACK”s coming back from Apple signaling receipt of the upload packets and readiness for the next upload packet, but those shouldn’t take much bandwidth at all. Barely more than the TCP/IP header and a few bits of payload, I would think.)

Edit: As soon as I stopped (really, paused for one day) the “Photos” upload, my download bandwidth came roaring back: 15Mbps down (from a connection that is technically only supposed to be 12Mbps…) / 1.5Mbps up on speedtest.net and my iTunes downloads were completing in seconds.

How not to “describe” your products on web sites

In which I go off on people who use the same item description on multiple online sales listings, each with a variety of features.

Cisco ASA5505-UL-BUN-K9 ASA 5505 Security Appliance vs Cisco ASA5505-50-BUN-K9 Asa 5505 Security Appliance vs Cisco ASA5505-SEC-BUN-K9 ASA 5500 Series Adaptive Security Router Appliance
Yeah, because I enjoy digging through Cisco’s web site to figure out which features are activated by a “UL-BUN-K9” vs a “50-BUN-K9” vs a “SEC-BUN-K9” license. I already have to know a little bit about Cisco to recognize that that string of characters refers to the IOS license version in the first place.

Seriously, if you’re going to sell this stuff on Amazon, don’t use the same description (of the hardware) for every one of them. That’s like putting up 5 different Toyota Corollas on a web site, each with a different VIN and price, but the same stock photo and describing them all as “A popular compact car” and leaving it to the potential buyer to decipher the VIN to find out what options each one has. “Let’s see, a ‘C’ in the 10th digit means it’s a 2012 model year, or maybe a 1982…”

Yes, I know. Someone will probably point out that if you’re shopping for Cisco equipment, you should probably be able to decipher the Cisco IOS license codes.

When default allow rules… don’t.

Now that I have a power supply for the Cisco ASA, I’m trying to get it up and running to sit at the edge of my home network, so I can pull the router to be part of my Cisco lab, and it’s driving me crazy.
Its default config, as set up by the ASDM setup wizard, is supposed to permit all traffic from the “inside” (high security zone) to the “outside” (low security zone). That’s all fine and dandy, until the default NAT/PAT config, which LOOKS like it says “NAT / PAT all traffic from ‘inside’ to the ‘outside’ IP address”, doesn’t.

I don’t want to spend a lot of time learning the intricacies of the ASA OS right now. I’d rather spend it on IOS and working toward the CCENT / CCNA…

Adding my network to Cacti

Geeking with Cacti.

So, geeking out this evening, adding my entire home network infrastructure to Cacti, to track how it’s doing.
I’d already set up all my VM’s, the Cisco router and Uverse gateway, and my two hosted servers at Rackspace and Linode months ago.
Tonight I added my ESXi server and both Cisco switches. Of course, there’s not much to see on most of the switch ports, since the only port in use on one of them is the uplink to the other switch (which means the only traffic on that port is Cacti polling its SNMP daemon). But it’s interesting, nonetheless.
I’ll probably do the same on the Cisco lab I build for CCNA study.
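(If a device stubbornly refuses to graph, a quick sanity check that SNMP is even answering before blaming Cacti; the community string and address here are just placeholders.)

# Walk the system group via SNMP v2c with the "public" community string.
snmpwalk -v2c -c public 192.168.1.2 system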

More Windows patching

More Win2k8 patching

Well it looks like that Win2k8 server successfully patched. Or at least got past the .NET Framework 3.5 patches that were hanging it up. Now I’m going through a series of .NET Framework 4.0 patches. Making snapshots every step of the way for quick rollback if it decides to throw a wobbly at any time.

Windows patching hell

A Unix sys admin struggling with patching Windows servers.

Never thought I’d end up babysitting MS Windows server patching and pulling my hair out as it takes an hour or more to install 100+ patches, reboots, spends 30 minutes “finalizing” the updates, declares it “failed”, and spends 90 more minutes “reverting” the installs before rebooting again. Wash, rinse, repeat, until you successfully tell it which patch NOT to install.
I’m a Unix admin for Pete’s sake. There’s a reason I don’t (normally) do Windows. The only time a Linux server takes so long to boot is when it’s running on bare metal that takes 30 minutes to POST and/or it has lots of LUNs assigned and it takes a while to sort them all out.
I was hoping to have this 2008 server to a state that I could start installing the software it needs by the end of the day.

FINALLY it finished reverting and rebooting. Luckily it didn’t back out 100+ updates. The only one left to install is the one troublesome update that should be done last, because it causes this problem if you don’t.

Nope, spoke too soon. Had it re-check for updates and it now says ALL of the updates from the last go round still need to be installed. But now I see there’s a second update that partners with the known one, so hopefully de-selecting that one as well will fix the issue.

(And in another in a list of firsts that came with this job: never thought I’d be adding a new “Windows” sub-category under the System Administration category of this blog.)

IPv6 has come to Uverse

More than a year after my 3800HGV-B Uverse modem first acknowledged that such a thing as “IPv6” existed, it appears it is actually making it available for use.
Now to see if I can get my Cisco router to play nice…

Uverse modem IPv6 configuration

The Spam Folder

Server-side vs. client-side spam filtering.

After my post about what causes mail to go to the spam folder, a reader asked:

So why did I have to tell my new computer and new email system a dozen times that Facebook posts of various types were not spam before I could get it to stop throwing them all in my spam folder?

Continue reading “The Spam Folder”

What causes email to go to the spam folder?

A quick guide to some of the things ISPs look for to decide if it should go to the Inbox or the spam folder.

Recently a former colleague reached out to me on Linkedin to ask:

I have a question regarding email delivery. What cause emails to go into someone’s spam email box? I understand that there maybe(sic) filters that looks at the content to make that determination. I would think there are many other factors.

I replied:

Yes, there’s quite a number of things that can cause mail to go to the spam folder. The contents of the message are a big factor. Of course every ISP applies different rules, so what causes mail to go into the spam folder of a Yahoo! mailbox will differ from what matches the rules on Gmail, or Hotmail, etc. Some ISPs will allow certain mail through, but put it in the Spam folder that other ISPs would just reject outright when the sending mail server connects to send it.

Are you having a specific problem that you’re trying to solve?

He responded:

I don’t have a specific problem. Just interested in understanding how spam filtering works. Since I know an expert, why not ask directly.

Are there headers the ISP look at to validate the email?

I wrote up a quick primer on some of the esoterica of spam filtering.
This is by no means comprehensive, and not guaranteed 100% accurate.

Continue reading “What causes email to go to the spam folder?”

Ansible and Variables

A basic explanation of Ansible and a discussion of variable usage.

I’ve been talking about Ansible on Facebook lately and the other day a friend asked me about Ansible and variables. I gave her a quick explanation, then told her I’d do a more thorough writeup that would be easier to follow than my “stream of consciousness” explanation given in FB messages.
It occurred to me that I’m planning to do a “lunch and learn” on Ansible at work soon, and I could re-use the same material, so I’ll just post this publicly. I plan for this to be the first in a series on DevOps, integration, idempotency, configuration management and Ansible. So without further ado…

For those who have not seen my posts on Facebook, Ansible is a configuration management tool for provisioning, deploying and configuring servers and applications. It is one of a series of such tools that have come out in the last few years, such as Puppet, Chef and Saltstack. It is designed to be fast, easy to use, powerful, efficient and secure. It is serverless and agentless. It aims to be idempotent.

I can’t speak to Puppet, Chef or Saltstack as I’ve never used them.

Addressing these one at a time, not necessarily in the order presented above:

  • Secure
  • Everything is done through SSH tunnels. No passwords, no configuration files, are ever sent over the network in the clear. Set up your SSH keys and you don’t have to worry about typing passwords either.
    There is no agent software running on the managed machines, so there’s nothing to hack.

  • Easy to use
  • “I wrote Ansible because none of the existing tools fit my brain. I wanted a tool that I could not use for 6 months, come back later, and still remember how it worked.”
    Michael DeHaan
    Ansible project founder

  • Efficient
  • No agents, just SSH (or PowerShell with Windows, but I won’t get into that.) The only software required on the managed machine is an SSH daemon and Python.

  • Serverless and Agentless
  • As I’ve already mentioned, there’s no agent running on the managed server. If you can ssh into it and run Python, you’re good to go.
    There is no central server, full of manifests, menus, etc. You can run it from your desktop or laptop (there’s a quick example of what a run looks like right after this list). Again, if you have Python, you’re good to go (Python has its own SSH client implementation.) Just make sure you back up your playbook and roles. Git is a great place for this!

  • Idempotency
  • This is one of the most important! It means you should be able to run your Ansible playbook against a managed host at any time, and not break it. If anything is not configured the way it is supposed to be, the Ansible run will put it back the way it should be. Shell scripts have to be written very carefully to detect if something doesn’t need to be done. It’s also notoriously difficult to modify files with shell scripts (unless you’re really good with tools like sed and awk, or perhaps Perl…)
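To make the “no central server” point concrete, a run looks something like this. The file names are just examples, and --check does a dry run before you commit to changes:

# Dry run from your laptop against the hosts in ./inventory; -K prompts for a sudo password.
ansible-playbook -i inventory site.yml --check -K

# Then run it for real.
ansible-playbook -i inventory site.yml -K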

Some vocabulary before we begin:

  • playbook
  • A file defining which hosts you want to manipulate and what roles you want to apply to those hosts, as well as what tasks you want to run.

  • roles
  • A defined list of tasks to be run when the role is called, as well as any files to be installed, templates to be applied, dependency information, etc.

  • inventory
  • A file listing every server you will manage with Ansible, and what groups they belong to. A host can belong to any number of groups, including none at all, and groups can be members of other groups. (A minimal sketch of an inventory appears just after this list.)

  • host_vars & group_vars
  • Directories with files containing variables specific to certain hosts (host_vars) and host groups (group_vars). These variables are used in your tasks and roles.
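Since the inventory and group_vars come up again below, here is a minimal, made-up sketch of what they can look like (the group names and “namenode1” match the examples later in this post; “devweb1” is invented for illustration):

# A minimal INI-style inventory. Hosts can appear in any number of groups,
# and groups can contain other groups using the :children suffix.
cat > inventory <<'EOF'
[all_hadoop]
namenode1

[dev_project]
devweb1
EOF

# Group variables live in files named after the group.
mkdir -p group_vars
cat > group_vars/all_hadoop <<'EOF'
# file: group_vars/all_hadoop
hadoop_cluster: true
EOF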

Now, on with the discussion of variables. Here was Kathryn’s original question:

How do variables work with dependencies in roles? Meaning, if a role is dependent on another, can it access the variables of the other at run time?

I started to answer with an example we use at work: we have a “common” role that sets up some users with specific UIDs that we want on all our servers, and an “apache” role that depends on that common role (e.g.: it needs the www user created by common). Kathryn further asked:

Okay, say “application” depends on “common” and “common” has default variables… would “application” pick up “common”‘s defaults?

Yes! For example, our “common” role has a task file which pushes out customized /etc/sudoers.d files, depending on what the server will do, what environment it will be in, etc. One of the tasks looks like this:

NOTE: the language used to write Ansible files, YAML, is whitespace sensitive. Don’t just cut and paste these examples without checking that the leading indentation survived; you may need to adjust the spacing.

- name: Sudoers - push sudoers.d/hadoop_conf
  template: >
    src=sudoers_hadoop_conf.j2
    dest=/etc/sudoers.d/hadoop_conf
    owner=root
    group=root
    mode=0440
  when: hadoop_cluster is defined

Note the last line: “when: hadoop_cluster is defined”. “hadoop_cluster” is a variable. This variable isn’t actually defined in our role, but rather in the playbook, or in a host_var or group_var file. In this case we have a group_vars/all_hadoop file. Any task run on any server that is part of the “all_hadoop” group in the inventory will have the variables defined in this group_var file. This file contains:
# file: group_vars/all_hadoop

hadoop_cluster: true

In this case “hadoop_cluster” is defined, and has a value of “true”. Our task above doesn’t care about the value, only that the variable is defined at all. If I run the above task on the server “namenode1”, and “namenode1” is in a group called “all_hadoop” in my inventory file, it will inherit the variables in group_vars/all_hadoop; “hadoop_cluster” is defined, so the task will be run.
Another role or task, which might be part of the “common” role or in a completely different role, will be able to access the same variable and act on it. That role / task might actually care about the value of the variable, and would be able to see that value. Or it might just care that the variable is defined.

Another example: I built a role for a set of servers at work. In our development environment we wanted to allow the developers actually writing the code for the applications to run on those servers to be able to use sudo to gain root access. I added another task to the same file as our Hadoop example above:
- name: Sudoers - push sudoers.d/project_conf
  template: >
    src=sudoers_project_conf.j2
    dest=/etc/sudoers.d/project_conf
    owner=root
    group=root
    mode=0440
  when: allow_project_sudo is defined

In our inventory, the development servers for this project are in a “dev_project” group, and there’s a group_vars/dev_project file that defines “allow_project_sudo”. We also have a “production_project” group in our inventory which contains the production servers for this project. The “allow_project_sudo” variable is NOT defined in group_vars/production_project, so that sudoers file is not pushed out.

Directly addressing Kathryn’s question about one role being able to call variables “defined” by another role (although I’ve already addressed the fact that roles don’t really “define” variables, they just access them), I have this task:
- name: Build ssh key files
  assemble: >
    src={{ item.user }}_ssh_keys
    dest=/home/{{ item.user }}/.ssh/authorized_keys
    owner={{ item.user }}
    group={{ item.group }}
    mode=0600
    remote_src=false
    backup=yes
  with_items:
    - { user: 'projectuser', group: 'projectgroup' }
  when: allow_project_sudo is defined

Again, we look to see if “allow_project_sudo” is defined; if so, we build a .ssh/authorized_keys file for the user “projectuser”, allowing all those same devs to ssh into the server as that user. This task also includes the intriguing and useful “with_items”. This allows for a form of looping, such that it will actually perform this task for each item listed in the “with_items” block, redefining the “item.user” and “item.group” variables used in the src, dest, owner and group lines in the task.
We actually define two variables in our “with_items”. Each line in “with_items” is an “item”. In this case we have two variables (basically an associative array), and we can reference the key/value pairs in the array. “item.user” has the value “projectuser”. “item.group” has the value “projectgroup”. Thus our “assemble” becomes, on the first iteration of “with_items”:

assemble: >
  src=projectuser_ssh_keys
  dest=/home/projectuser/.ssh/authorized_keys
  owner=projectuser
  group=projectgroup
  mode=0600
  remote_src=false
  backup=yes

This basically says “grab all the files (presumably ssh key files) in the directory projectuser_ssh_keys (stored inside a directory in our role) and build, on the managed host, a file called authorized_keys in the directory /home/projectuser/.ssh, and make that file owned by projectuser:projectgroup, with -rw------- permissions.” Oh, and back up the original file first, just in case.
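Conceptually, and glossing over the fact that the source key files actually live inside the role on the control machine, the assemble task is doing something like this on the managed host:

# Rough shell equivalent of the assemble task above (illustration only).
cat projectuser_ssh_keys/* > /home/projectuser/.ssh/authorized_keys
chown projectuser:projectgroup /home/projectuser/.ssh/authorized_keys
chmod 0600 /home/projectuser/.ssh/authorized_keys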

Manipulating maildirs at the filesystem level

Let’s hear it for being able to manipulate your mail directory structure at the filesystem level and still be able to access it through Thunderbird.

DJBDNS and IPv6

DJBDNS must run as two separate instances to bind to both an IPv4 and an IPv6 address.

Tip: When patching DJB’s “dnscache” for IPv6, you can’t just tell it to bind to both the IPv4 and IPv6 addresses. You will need to run two separate instances, one binding to the IPv4 address, one binding to the IPv6 address.
I haven’t checked, but I’m betting my tinydns instance is also not binding to both addresses and will have to be run as two separate instances as well.
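For my own notes, the second instance is set up the same way as the first, just in its own service directory. A sketch from memory, assuming the usual daemontools layout and that the IPv6 patch accepts the address in the same spot dnscache-conf normally takes an IPv4 one (account names will vary with how you installed djbdns):

# One service directory per address; the addresses here are documentation examples.
dnscache-conf dnscache dnslog /etc/dnscache-v4 192.0.2.53
dnscache-conf dnscache dnslog /etc/dnscache-v6 2001:db8::53

# Hand both to daemontools' svscan.
ln -s /etc/dnscache-v4 /etc/dnscache-v6 /service/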

I’ve been a busy little geek

So far this week I’ve:
Finally gotten a working Xen system that will boot a Debian guest.
Successfully installed ispCP on the Debian guest.
Built another Debian guest to be an OpenVPN server.
Successfully built an OpenVPN server and got two clients to connect from outside the network, through the DSL modem/router.
Correctly configured the VPN server to give the client access to the full network via IP masquerading (next trick: get the network to simply route the packets instead of having to use masq; see the sketch after this list).
Got ddclient working on the VPN server to keep dyndns updated so I don’t have to hard code an IP address in my VPN clients and check various server log files to see if it changed.
Fixed ddclient, when it failed to update dyndns with new IP address after my DSL provider mysteriously issued a new one, not 3 hours after setting up ddclient in the first place.
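The masquerading piece, for the record, boils down to enabling forwarding plus one iptables rule on the VPN server. Roughly this, with the interface name and VPN subnet as placeholders (10.8.0.0/24 is OpenVPN's default tunnel network):

# Let the kernel forward packets between the tun interface and the LAN.
echo 1 > /proc/sys/net/ipv4/ip_forward

# NAT traffic arriving from the VPN subnet out the LAN-facing interface.
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

The “just route it” alternative is a static route on the DSL modem/router pointing 10.8.0.0/24 at the OpenVPN server's LAN address, which makes the MASQUERADE rule unnecessary.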

I can now log into my ispcp box from my desk at work, as though it was on the same network. I can now proceed with trying to get Mailman to play nice with ispCP when it’s slow at work.

I get productive when I ignore my games.

Getting hpasm installed on Ubuntu server

While installing Ubuntu Server 8.04 beta on an HP DL-320, I discovered I had some trouble getting HP’s “Proliant value added software” (hpasm) package installed. This package contains their system health check and control software which, among other things, switches the fans from “full-time full speed” (which is quite noisy) to temperature-controlled speed (e.g., normal (read: quiet) fan speed when the system temperature is normal).
The problem with installing and running this software stems from the fact that Ubuntu, for some reason, links /bin/sh to dash instead of bash. Dash is another Bourne shell clone, but it doesn’t understand Bash (Bourne-again shell) specific syntax.
Re-linking /bin/sh to bash instead of dash solved the problem and the server is now humming (quietly) along.
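The fix itself is a one-liner. On Ubuntu releases where the dash package exposes the debconf question, dpkg-reconfigure is the tidier route; otherwise the blunt approach works (the hpasm scripts just need /bin/sh to actually be bash):

# Tidier: let dpkg manage the /bin/sh symlink (answer "No" to using dash as /bin/sh).
sudo dpkg-reconfigure dash

# Blunt: point /bin/sh at bash directly.
sudo ln -sf bash /bin/sh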