Aug 31, 2011

Backups, Store More On Less For Free

Not sure if you’ve seen it, but I recently posted an article on creating your own redundant SAN using Ubuntu, GlusterFS, and ZFS. That solution provides a lot of cost-effective storage with cool features like deduplication, compression, and thin provisioning, all for half the cost of the big boys.

What if you don’t have a storage solution with compression or dedupe though? Or what if you do, but you want to maximize what you store on it as far as backups? What if you are like me, and you want to pack as much data onto an LTO3 tape as you possibly can? Well, let me introduce you to one of my favorite backup solutions, CrashPlan!
CrashPlan comes in a few different flavors: Free, CrashPlan+, CrashPlan Pro, and CrashPlan Pro Enterprise. If you want to take advantage of CrashPlan’s cloud solution, you must pay for CrashPlan+ or better.

If you are a business user and want to use their cloud solution, you must purchase a CrashPlan Pro plan or better. If you are just backing up locally, the Free version will do just fine. It’s also legit. I received this email from CrashPlan’s support team about using the free version for business use:
Good day Paul,
You are more than welcome to use the Free version of the CrashPlan software to backup those business files as long as you are not backing them up to our servers!
Have a great day,
Anyway, one of the cool things about CrashPlan, even in the free version, is that it protects your data with 256-bit encryption and performs block-level deduplication, so it saves a ton of space! If you have a dedupe SAN and you save your CrashPlan backups to it, it can dedupe the dedupe, saving even more space!

Here’s why that matters to me: I inherited a single LTO3 tape drive from my predecessors when I took over my current job. LTO3 tapes will only hold 800GB of compressed data. All the other shops I worked at had tape autoloaders with multiple drives, so space wasn’t an issue. Now that I have to take a trip to the data center every week to swap out one lousy tape, I want to make sure I’m getting more bang for my buck!

So here is what I am doing. I have installed the free CrashPlan software on all the servers I need to back up. I don’t have Microsoft Exchange to worry about, but I do have Microsoft SQL. For the SQL servers I have maintenance plans that back up databases to local storage, then with CrashPlan I back up those files to an iSCSI LUN attached to my backup server. On web and file servers I use CrashPlan to back up to the iSCSI LUN on my backup server as well. Then from there I back up the CrashPlan backup directory to tape!

For tape backup I am using Microsoft Data Protection Manager because my backup server is running Windows 2008 R2. If your backup server is running Windows 2003, though, NTBackup is built in and can write to tape! Mmmmm, smells like free!
I am now able to pack over 2TB of data into a little over 500GB of space! I can now fit my entire environment onto a single LTO3 tape! Boom!
If I ever have to restore, I can restore from disk fairly quickly. Way quicker than tape, that is for damned sure. However, if I do have to restore from tape, CrashPlan can mount a previously made backup archive.
If you have the budget for it, I would highly recommend purchasing their pro products as well. In fact, for home use I recommend their CrashPlan+ Unlimited for about $10 per month! I switched from Mozy to CrashPlan+ earlier this year and never looked back. I have found their backups to be highly reliable, and well worth it! Plus, the ability to have multiple backup destinations for free is huge!

Enhanced by Zemanta

Aug 30, 2011

How To Backup VMware Data Recovery To Tape in Windows

A lot of you out there are VMware admins like me, right? How many of you know about VMware's really awesome backup appliance called VMware Data Recovery? How many of you know that it backs up full VMs directly from within vSphere, and stores them using deduplication? Now answer me this: how many of you don't use it because it only backs up to disk, and not to tape?

I'm sure there are quite a few of you out there still using VCB, or something similar, to back up full VMs and send them offsite on tape, because you can't get that with VMware Data Recovery. Hell, some of you probably use both: VMware Data Recovery for fast restores, and VCB to get offsite. Well, now you can stop using both, and just use VMware Data Recovery.

Using some Linux scripting skills and some WinSCP scripting skills in Windows, we can easily snap a copy of your VMware Data Recovery backups and whisk them off to tape.

Here's what you do:

  • On your VMware Data Recovery server, create a new /etc/crontab file using your favorite text editor. I like nano. For example:

                #nano /etc/crontab

  • Paste in the following:

                # run-parts
               01 * * * * root run-parts /etc/cron.hourly
               02 4 * * * root run-parts /etc/cron.daily
               22 4 * * 0 root run-parts /etc/cron.weekly
               42 4 1 * * root run-parts /etc/cron.monthly

               # Weekly VMware Tarball
               0 5 * * 1 root  /root/vmwaredr

  • Restart the Cron service:

                #/etc/init.d/crond restart

  • Create a script in /root called vmwaredr
  • Paste in the following:

               #!/bin/sh
               cd /SCSI-0\:1/
               tar -czvf VMwareDR-$(date +%m%d%y).tgz VMwareDataRecovery/

  • Make the script executable:

               #chmod +x /root/vmwaredr

The above creates a weekly tarball of the /SCSI-0:1/VMwareDataRecovery directory, with the date appended to the filename, every Monday at 5:00am. I'll let you Google how to change the date and time options for Cron on your own. I do mine on Monday because VMware Data Recovery runs all weekend, and I want to get the weekend's backups offsite by Wednesday.

Now that that's done, download and install WinSCP on your Windows backup server.

Hint: NTBackup in Windows 2003 and below has the ability to back up to tape. If all you need to do is back up your VMware environment, this may be a completely free solution for you.

  • Once WinSCP is installed, create a folder for your scripts and a place to store the tarball we created in the previous steps. I put mine in E:\vmware. To keep things simple, I copied my WinSCP.exe program from C:\Program Files (x86)\WinSCP to E:\vmware.
  • Create two batch files. One called VMware.cmd and the other called WinSCP.cmd.
  • In VMware.cmd paste the following:

               option batch on
               option confirm off
               open root:password@IP_OF_VMDR
               cd /SCSI-0:1/
               option transfer binary
               get *.tgz
               rm *.tgz

  • In WinSCP.cmd paste the following:

               cd e:\vmware
               winscp.exe /console /script=e:\vmware\vmware.cmd

  • Now add a scheduled task in Windows to run your WinSCP.cmd script once a week, one day after the Cron job on the VMware Data Recovery server runs. It will automatically go out, grab the tarball to e:\vmware, then delete it from the server.

Once your tarball is on your Windows backup server, you can then back it up to tape! To restore it later, all you have to do is set up a new VMware Data Recovery server. Create a LUN on your SAN, format it with ext3, then copy your tarball to the LUN. Extract the tarball with the following:

tar -xzvf *.tgz

That should create a VMwareDataRecovery folder at the root of the LUN. Now mount that LUN to VMware Data Recovery. VMware Data Recovery should rescan it and be able to see your old backups.
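As a sketch, the whole restore flow looks like this. The device name /dev/sdc and the tarball name are placeholders, and the mkfs/mount steps need root and a real LUN; the tar extraction is what actually brings the backups back:

```shell
# Format and mount the new LUN (placeholder device; run as root):
#   mkfs.ext3 /dev/sdc
#   mount /dev/sdc /SCSI-0:1
# Then copy the tarball to the root of the LUN and extract it:
cd /SCSI-0:1
tar -xzvf VMwareDR-082911.tgz   # hypothetical name from the weekly cron job
# The VMwareDataRecovery directory should now be back at the LUN root.
```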

If you're a corporate user that's interested in backing up an entire network, you can take a look at some of the solutions for VMware that include data center services.

Did this work for you? Did it not? Do you do it a different way? Let us know in the comments!




Aug 29, 2011

Finally Acer Free, Plus I Saved $870!

About two weeks ago the motherboard in my Acer Aspire M5100 desktop computer died. If you have been a long-time follower of the blog, you know I am not a fan of Acer, mainly because of this computer, and because of a problem I had with it a few years ago that forced me to deal with Acer's shoddy tech support and customer service. Well, thanks to hardware failure, I am finally Acer free!

So the motherboard died, and I looked at getting a replacement board for it online. A refurbished motherboard for the Aspire M5100 goes for around $50. I decided that I was done with Acer, though, and found a brand new ATX motherboard at Tiger Direct that would support the AMD Phenom 9500 for around $10 more, so I got that board. Since the Acer case only supported Micro-ATX boards, however, that meant I had to get another case.

I went with the Cooler Master Elite 360 because it was a little less than $30 at Frys. The board I got was an ECS IC780M-A that can support up to 32GB of RAM! Awesome, right?! Everything else I cannibalized from the Aspire.

When I powered it on, it ran great except the power supply made some strange noises that drove my wife crazy. I decided that meant another trip to Frys, and I picked up a Cooler Master Extreme Power 500W PSU for $34.99 after mail in rebate. That was super quiet, and made my wife super happy! Everything was running like a champ!

Before buying this stuff, though, I looked at just purchasing a new computer, or even a barebones kit, to replace the broken Acer. I was looking at between $500 and $1,000. Just replacing a few of the parts only cost me around $130! The CPU, RAM, and video card were still fairly decent, so why the hell not?

Got a similar story? Did you recently rebuild your home PC? What parts did you go with? Got a picture of it? Share it with us in the comments!

Aug 25, 2011

Roll Your Own Fail-Over SAN Cluster With Thin Provisioning, Deduplication and/or Compression Using Ubuntu


I think my last post was rather negative. I was getting discouraged with setting up a redundant failover SAN cluster because I found that DRBD was just too flaky in the setup I wanted. The problem was that almost all homegrown cluster tutorials on the web use DRBD with either Heartbeat or Corosync. It is almost impossible to find a different, let alone better, solution... until now, of course.

I can often be a bloodhound when something is not working quite right and I really want it to. I will forgo sleep to work on projects, even in the lab, to make things work right. I just don't like computers telling me I can't do something. I TELL YOU WHAT TO DO, COMPUTERS! NOT THE OTHER WAY AROUND!

So since DRBD doesn't work worth a good God damn, I decided to look elsewhere. Enter GlusterFS! Never heard of it? Here is what Wikipedia says:

GlusterFS is a scale-out NAS file system developed by Gluster. It aggregates various storage servers over Ethernet or Infiniband RDMA interconnect into one large parallel network file system. GlusterFS is based on a stackable user space design without compromising performance. It has found a variety of applications including cloud computing, biomedical sciences and archival storage. GlusterFS is free software, licensed under GNU AGPL v3 license.

I like it way better than DRBD because it doesn't require metadata to sync before it will work. It just works! In fact, once it is set up, it writes like a true RAID 1: if you put a file in your synced directory, it automagically shows up on the other node in the cluster in real time!

Ok, so I figured out how to make clustering work with GlusterFS and Heartbeat. What's this about deduplication and thin provisioning? Yes! I got that working as well. In fact, not only can we do deduplication, we can also do compression if we want. How? It's all thanks to the miracle that is ZFS! What's ZFS? According to Wikipedia:

In computing, ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include data integrity verification against data corruption modes (like bit rot), support for high storage capacities, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs. ZFS is implemented as open-source software, licensed under the Common Development and Distribution License (CDDL). The ZFS name is a trademark of Oracle.

Although ZFS does deduplication, I am not enabling it in my setup yet because of the high RAM requirements (the alternative is to throw in a large SSD for caching). Wikipedia says this about dedupe with ZFS:

Effective use of deduplication requires additional hardware. ZFS designers recommend 2 GiB of RAM for every 1 TiB of storage. Example: at least 32 GiB of memory is recommended for 20 TiB of storage. [31] If RAM is lacking, consider adding an SSD as a cache, which will automatically handle the large de-dupe tables. This can speed up de-dupe performance 8x or more. Insufficient physical memory or lack of ZFS cache results in virtual memory thrashing, which lowers performance significantly.

If you have the hardware already though, I'll show you how to enable dedup and/or compression later in this post.

Let's get started then. Here is how my hardware is set up; make changes as necessary to fit your own.


Server1: SuperMicro SC826TQ with 4 NICs, 2 quad core CPUs and 4GB of RAM, 3Ware 9750-4i RAID Controller, and twelve 2TB SATA Drives

Server2: SuperMicro SC826TQ with 4 NICs, 2 quad core CPUs and 4GB of RAM, 3Ware 9750-4i RAID Controller, and twelve 2TB SATA Drives

In the 3ware management console, I configured everything in RAID 5, carving out a 10GB volume for the OS. I then partitioned my disks as follows on both servers:


Device     Mount Point  Filesystem  Size
/dev/sda1  /            ext4        9GB
/dev/sda2  none         swap        1GB
/dev/sdb   none         raw         19TB

Also on both servers I have the NICs configured as follows:

Interface Purpose IP Address Server1 IP Address Server2 Network
bond0 Team iSCSI
bond1 Team Heartbeat
eth0 Slave to bond0 N/A N/A iSCSI
eth1 Slave to bond1 N/A N/A Heartbeat
eth2 Slave to bond0 N/A N/A iSCSI
eth3 Slave to bond1 N/A N/A Heartbeat

I also have a virtual IP address that will be controlled by Heartbeat. The bond1 connection links one server to the other using crossover cables.

We are leaving /dev/sdb raw for now so we can use it for ZFS later. If you want to skip the whole ZFS thing, you can just partition /dev/sdb with ext4 as well. You just won’t get dedup or compression.

I also installed Ubuntu 10.10 on these servers because that is the latest version that the 3Ware 9750-4i supports. If you have the same card you can download the driver here: (Ubuntu 3Ware driver)

Once you have Ubuntu installed and partitioned the way you want, we first need some packages. One of the packages comes from a special repository. If you have Ubuntu 10.10, run the following to get the add-apt-repository command. Please note that, unless I specify otherwise, all commands need to be run on both servers:

# sudo apt-get install python-software-properties

Now add the ZFS repository:

#sudo add-apt-repository ppa:dajhorn/zfs

Now let's update apt:

#sudo apt-get update

Next we install all of our necessary packages:

#sudo apt-get install -y snmpd ifenslave iscsitarget glusterfs-server glusterfs-client heartbeat ubuntu-zfs sysstat

If you only have two NICs, you don’t need ifenslave; you only need that to team your NICs. I will assume you have four NICs like me for the purposes of this post, though. I also added snmpd so I could monitor my SANs with Zenoss, and sysstat so I can check I/O performance using the iostat command.

After that, let's configure our NIC teams. Edit /etc/modprobe.d/aliases.conf with your favorite text editor. I like nano, for example:

#sudo nano /etc/modprobe.d/aliases.conf

Add the following:

alias bond0 bonding
options bond0 mode=0 miimon=100 downdelay=200 updelay=200 max_bonds=2

alias bond1 bonding
options bond1 mode=0 miimon=100 downdelay=200 updelay=200

Now edit /etc/network/interfaces and replace the contents with the following:

# The loopback network interface
auto lo
iface lo inet loopback

# The interfaces that will be bonded
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

# The target-accessible network interface
auto bond0
iface bond0 inet static
address #(.13 for Server2)
mtu 9000
up /sbin/ifenslave bond0 eth0
up /sbin/ifenslave bond0 eth2

# The isolated network interface
auto bond1
iface bond1 inet static
address #(.13 for Server2)
mtu 9000
up /sbin/ifenslave bond1 eth1
up /sbin/ifenslave bond1 eth3

If your network is not configured for jumbo frames, remove the mtu 9000 option. Also, make sure to change the IP information to match both your environment and your hosts. See the table above for IP assignments.

At this point since my iSCSI network has no Internet access I rebooted both servers and plugged them into the iSCSI switch.

When both hosts come back up, edit /etc/hosts on both servers and add entries mapping each server's Heartbeat (bond1) IP address to its hostname, Server1 and Server2.

This is so the servers can communicate by name over the Heartbeat connection. Once that is ready, run the following on each server:

#sudo touch /root/.ssh/authorized_keys
#sudo ssh-keygen -t dsa (just press Enter for everything)
#sudo scp /root/.ssh/id_dsa.pub root@Server2:/root/.ssh/authorized_keys (point the scp at Server1 instead when running this from Server2)

This will allow you to copy files back and forth without having to enter a password each time.
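Spelled out, the key exchange looks like the sketch below, run from Server1 (swap the hostnames when running it from Server2). Note I'm using an RSA key here since current OpenSSH releases reject DSA keys; on 2011-era Ubuntu the dsa type from the post works the same way:

```shell
# Generate a passwordless key for root without any prompts.
ssh-keygen -q -t rsa -N '' -f /root/.ssh/id_rsa
# Append the public key to the other node's authorized_keys.
# Appending is safer than scp, which would overwrite any existing keys.
cat /root/.ssh/id_rsa.pub | \
    ssh root@Server2 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'
```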

Now we configure our ZFS storage. The table above shows that it lives on the raw /dev/sdb. To set it up, run the following:

#zpool create data /dev/sdb

This mounts to /data automatically. According to the Ubuntu-ZFS documentation, ZFS remounts it automatically at reboot; I found that not to be the case. Since it didn’t mount correctly, my GlusterFS setup failed after rebooting. To fix that, I created a startup script in /etc/init.d called zfsmount with the following:

#!/bin/sh
zfs mount data
glusterfs -f /etc/glusterfs/glusterfs.vol /iscsi
/etc/init.d/glusterfs-server start

I made the script executable by running:

#sudo chmod +x /etc/init.d/zfsmount

I then copied that file over to the other server:

#sudo scp /etc/init.d/zfsmount root@Server2:/etc/init.d/

That script mounts the ZFS volume to /data, mounts the GlusterFS client volume to /iscsi (we’ll get there), then starts the glusterfs-server daemon at boot. Because we want the zfsmount script to start the GlusterFS service, I also had to remove GlusterFS from rc.d by running the following:

#sudo update-rc.d -f glusterfs-server remove

We also want to make zfsmount run at boot, so we will add it to rc.d:

#sudo update-rc.d zfsmount defaults

At this point you can enable dedup or compression if you have the right hardware for it by running the following:

#sudo zfs set dedup=on data


#sudo zfs set compression=on data

Now that our storage is ready, let's configure GlusterFS! Edit /etc/glusterfs/glusterfsd.vol, clear the contents, and add this:

volume posix
  type storage/posix
  option directory /data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option bind-address (or on Server2)
  option auth.addr.brick.allow 172.16.18.*
  subvolumes brick
end-volume

Copy /etc/glusterfs/glusterfsd.vol to Server2:

#sudo scp /etc/glusterfs/glusterfsd.vol root@Server2:/etc/glusterfs/

Now start the GlusterFS Server service on both servers by running:

#sudo service glusterfs-server start

Now let's make our GlusterFS client directory by running the following:

#sudo mkdir /iscsi

Now let's edit /etc/glusterfs/glusterfs.vol, clear everything, and add:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Now you can mount the GlusterFS client volume to /iscsi by running the following:

#sudo glusterfs -f /etc/glusterfs/glusterfs.vol /iscsi

This will happen automatically at reboot now, thanks to our handy zfsmount script. Next we need to make two directories in /iscsi: one for our iscsitarget configs, and the other for our LUNs. Now that GlusterFS is replicating, we only need to do this on Server1.

#sudo mkdir /iscsi/iet
#sudo mkdir /iscsi/storage

Now let's move our iscsitarget configs to /iscsi/iet:

#sudo mv /etc/iet/* /iscsi/iet/

Now we will create links to those files:

#sudo ln -s /iscsi/iet/* /etc/iet/

On Server2 run the following:

#sudo rm /etc/iet/*
#sudo ln -s /iscsi/iet/* /etc/iet/

Now our iscsitarget configs only need to be changed in one spot, and they're automatically replicated to the other node. Now it’s time to configure Heartbeat, which will manage iscsitarget as well as our virtual IP address.

On Server1 you will need to edit three files in /etc/heartbeat. File one is ha.cf; edit it as follows:

logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast bond0
bcast bond1
node Server1
node Server2
auto_failback no

Next edit authkeys with the following:

auth 2
2 crc

Now set permissions on authkeys by running:

#sudo chmod 600 /etc/heartbeat/authkeys

Finally we edit haresources and add the following:

IPaddr:: iscsitarget

Now copy those files over to Server2:

#sudo scp /etc/heartbeat/ha.cf root@Server2:/etc/heartbeat
#sudo scp /etc/heartbeat/authkeys root@Server2:/etc/heartbeat
#sudo scp /etc/heartbeat/haresources root@Server2:/etc/heartbeat

Finally, we are ready to configure some storage. I will show you how to create a LUN using either thin or thick provisioning. We will do this with the dd command.

Change into your /iscsi/storage directory. To create a 1TB thin provisioned LUN called LUN0 you would run the following:

#sudo dd if=/dev/zero of=LUN0 bs=1 count=0 seek=1T

To create the same LUN, but thick provisioned run:

#sudo dd if=/dev/zero of=LUN0 bs=1M count=1048576

Thin provisioning allows us to overcommit our storage, but thick provisioning is easier to maintain: you don’t have to worry about running out of space later, because once everything is provisioned, that’s all you get!
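You can see the difference on any Linux box by comparing a file's apparent size with the blocks actually allocated on disk. A quick demonstration (the file names and the 100MB size are just for illustration):

```shell
# Thin: seek past 100MB without writing a single block -- instant, sparse.
dd if=/dev/zero of=thin.img bs=1 count=0 seek=100M
# Thick: actually write out all 100MB of zeros.
dd if=/dev/zero of=thick.img bs=1M count=100
# Both report the same apparent size (%s), but only thick.img has
# real blocks allocated (%b).
stat -c '%n apparent=%s allocated_blocks=%b' thin.img thick.img
```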

Okay, so now we have our first LUN, called LUN0, in /iscsi/storage, and we need to configure iscsitarget to serve it up. First, since we want Heartbeat to manage iscsitarget, let's remove iscsitarget from rc.d by running the following on both servers:

#sudo update-rc.d -f iscsitarget remove

Now, on Server1 only, edit /iscsi/iet/ietd.conf with the following:

Target iqn.2011-08.BAUER-POWER:iscsi.LUN0
    Lun 0 Path=/iscsi/storage/LUN0,Type=fileio,ScsiSN=random-0001
    Alias LUN0

If you want to add CHAP authentication, you can do that in ietd.conf, but I’ll let you Google that yourself. I prefer to lock down my LUNs to either IP addresses or iSCSI initiators, and in iscsitarget it’s easier (for me) to filter by IP. To do that, add the following line to /iscsi/iet/initiators.allow:


The above example restricts access to LUN0 to a single host. If you want multiple hosts to access a LUN (like with VMware), you can add more IPs separated by commas.
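For reference, the initiators.allow format is one target per line followed by the addresses allowed to connect. With the target name from above and placeholder IPs, a two-host entry would look something like:

```
iqn.2011-08.BAUER-POWER:iscsi.LUN0 192.168.10.21, 192.168.10.22
```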

Now we’re ready to rock and roll. Reboot both nodes, and start pinging your virtual IP address. When that comes up, try connecting to the virtual IP address using your iSCSI initiator.

It took a long time to write this up, but it’s actually fairly easy to get going. This solution gave my company 20TB of cheap, redundant SATA storage at half the cost of an 8TB NetApp!

If you have any questions, or are confused by my directions, hit me up in the comments.

Aug 23, 2011

Failover For Homegrown SANs is Not Ready For Prime Time

I have spent the better part of the last few months testing out some homegrown SAN options, and I have been writing about them here almost exclusively during this time. I originally started off with Openfiler 2.99 using DRBD and Corosync; that blew up in my face with some really sweet kernel panics. I thought it might be a problem with Openfiler, so I built my own SAN using Ubuntu and iSCSI Enterprise Target, and set up failover using DRBD and Heartbeat. That didn't kernel panic on me, but it was really slow and completely unusable with VMware.

Since DRBD seems to be the gold standard for open source failover, I would say this pretty much means that an HA setup with a homegrown SAN is not possible at the time of this writing. I decided to abandon the HA setup with my homegrown SAN, and guess what? By itself, without DRBD, Ubuntu with iSCSI Enterprise Target really does work flawlessly. Even with SATA disks, in VMware, and with 5TB of storage allocated for backups from Microsoft Data Protection Manager, I never see I/O wait times greater than 1%. With DRBD, I/O wait was constantly anywhere from 15% to 30%. I think the proof is in the pudding.

Reading forums, wikis, and blogs about these issues, it seems that the only people experiencing similar problems are those trying to set up a failover cluster using DRBD. Almost all the documentation on configuring Openfiler or iSCSI Enterprise Target for failover includes using DRBD and LVM, and it turns out that LVM over DRBD just doesn't work all that well. In fact, I believe it was DRBD causing the kernel panics in my Openfiler setup.

If you are hoping to set up your own failover cluster storage server, you might want to wait a while. It's just not there yet. If you just need a lot of cheap storage, then any of the current technologies are pretty good: Openfiler, FreeNAS, and iSCSI Enterprise Target will all suit you just fine. In fact, I would say that iSCSI Enterprise Target on Ubuntu works better than Openfiler and FreeNAS, and has less overhead, especially if you follow these best practice suggestions: (IET VMware Best Practice)

Do you know of a better way to set up failover in Linux other than DRBD? Is it easy? Can you point us to some documentation? Let us know in the comments!

Aug 16, 2011

Monitor Progress When Using DD in Linux

For those following me on Twitter and Facebook, you already know that I built a redundant SAN using Openfiler. Well, last week that son of a bitch gave me kernel panics five times in a row, and the only thing I could do was reboot it. I'm not sure why; it wasn't under that heavy a load. It had about 8 VMs running on it, plus some backups, and not even 1TB of space was being used, but out of nowhere it would choke and die on me. I spent all night last Thursday until about 4:30am babysitting it while our dev guys did a deployment. Well, that's all I can take of Openfiler.

I was talking with a former boss of mine, the owner of a very successful managed services/consulting firm here in San Diego. He was telling me that at his colo facility he hosts his clients on homegrown iSCSI SANs. The difference? They run on Gentoo Linux using iSCSI Enterprise Target. Not only that, but in his very own words, it works "flawlessly". Sign me up for that! Well, all except the Gentoo part. I mean come on, who are we kidding here? I'm not a sadomasochist. Nope. Ubuntu is my distro of choice for this.

Anyway, it turns out you can configure a failover cluster similar to my original Openfiler setup using Ubuntu, iSCSI Enterprise Target, DRBD, and Heartbeat. I figured, why the hell not? I just needed to do something about the existing data on my running Openfiler.

What I did was take the offline Openfiler node down and install Ubuntu Server edition along with the above programs. I then created a 5TB LUN on my new Ubuntu SAN and began a DD copy over SSH from my old online Openfiler LUN to my Ubuntu LUN. Nice and all, but if you've ever used DD, you know it's not really good at letting you know its progress. There is a cool trick to work around that, though. Here's what you do:

  • Open an SSH session on the server you are copying to.
  • Open an SSH session on the server you are copying from.
  • On the server you are copying from, run your DD command. This is what I ran:

    dd if=/dev/data/LUN01 | ssh root@ dd of=/dev/VG1/LUN01

  • On the server you are copying to run the following to get the PID number of the DD process:

    pgrep -l '^dd$'

  • On the server you are copying to run the following (Change the below number to match the number you got from the above command):

    watch -n 10 kill -USR1 8789

  • Now, on the server you are copying from, you should see a status update on the DD copy every 10 seconds!

Nice right? Now I don't have to guess when my DD copy will be done. I can go make some coffee, then check back periodically to see how it's doing! 
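You can also try the trick locally without two servers; GNU dd prints its I/O statistics to stderr whenever it receives SIGUSR1. A self-contained sketch (the count is just big enough to keep dd busy for a few seconds):

```shell
# Start a long-running dd in the background, with stderr going to a log.
dd if=/dev/zero of=/dev/null bs=1M count=200000 2>progress.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID" 2>/dev/null   # ask dd for a progress report
sleep 1
kill "$DD_PID" 2>/dev/null         # clean up
grep 'records in' progress.log     # shows how far dd had gotten
```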

Do you know of a better way to clone disks over the network that shows better progress? Let us know in the comments!

[Via Linux Commando]

Aug 10, 2011

Create Your Own Multi-Boot Linux USB Swiss Army Knife of Epic Pwnage!

I, like a lot of you, have a few USB keys laying around. Oftentimes we have one or two in our pockets. Sometimes they carry random bits of data or portable programs; others contain live Linux operating systems. I usually have a USB flash drive with Bauer-Puntu Linux on it in case I have to troubleshoot a Windows PC or reset a password. I'm sure you probably carry one of your favorites as well. Knoppix or BackTrack, perhaps?

Maybe you have multiple USB thumb drives in your pocket with different distros on each. Wouldn't it be nice to be able to store all of your favorite live distros on one USB key? It turns out you can, and it's really easy to do. Allow me to introduce you to YUMI (Your Universal Multiboot Installer) from Pendrive Linux.

Here is how it works from their page:

[YUMI] enables each user to create their own custom Multiboot UFD containing only the distributions they want, in the order by which they are installed. A new distribution can be added to the UFD each time the tool is run.

Pretty simple, right? You can do the above in four easy steps:

  1. Run YUMI following the onscreen instructions
  2. Run the tool again to Add More ISOs/Distributions to your Drive
  3. Restart your PC setting it to boot from the USB device
  4. Select a distribution to Boot from the Menu and enjoy!

Pretty cool, right? What Linux distros are you going to put on your ultimate Linux Swiss Army Knife of Pwnage? Let us know in the comments!

Aug 9, 2011

Awesome Free DNS Hosting Service with API, Custom TTL and Redirects

How many of you out there have registered your domain names through GoDaddy? I have too. In fact, all my domains are registered through GoDaddy; they are the biggest registrar out there. Why is it, then, that their built-in default DNS sucks so bad? Ok, maybe it doesn't suck per se. It is reliable, and it does give you basic DNS functionality like A records and CNAME records. It doesn't, however, let you set a custom time to live, or set up redirects at the DNS level.

If you want something like that, more often than not you have to pay for a premium DNS service through someone like Dyn, DNS Made Easy, or EasyDNS, and some of those can get fairly costly. I was using Dyn for a long time with a domain that right now doesn't do anything. I kept it on Dyn, though, because I could set up redirects with it: I would point subdomains at it using CNAME records in my GoDaddy DNS, and have them redirect to things like my blog post about Bauer-Puntu Linux. It worked, but it wasn't efficient, and it cost me about $25 per year.

While looking for a better DNS service for my company, I came across a really cool free DNS host that gave me everything I was using with Dyn, but is completely free for up to 10 domains! It's called PointHQ and here are some of their features:

  • Manage as many domains as you want (within reason, of course) from one friendly and easy to use interface
  • Add unlimited A, AAAA, CNAME, MX, SRV or TXT records to your zones/domains.
  • Set up permanent HTTP redirects directly from the web interface.
  • Use our easy to use SPF record setup wizard/tool.
  • Easily add MX & SRV records for Google® mail & talk.
  • Distributed nameservers across the UK and United States.
  • Group your domains into manageable chunks and change domain TTL.
  • A full API is available so you can integrate DNS management directly into your own apps and systems.
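To give you an idea of what working with that API might look like, here's a rough sketch using curl. The endpoint URLs, the HTTP basic auth scheme (account email plus API key), the zone id, and the JSON field names are all my assumptions based on typical REST APIs, not something pulled from their docs, so verify against PointHQ's actual API documentation before relying on any of it:

```shell
# List the zones on your PointHQ account (endpoint and auth are assumptions)
curl -s -u "you@example.com:YOUR_API_KEY" \
    -H "Accept: application/json" \
    https://pointhq.com/zones

# Add an A record to a zone (the zone id 1234 is a placeholder)
curl -s -u "you@example.com:YOUR_API_KEY" \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -X POST \
    -d '{"zone_record":{"name":"www","record_type":"A","data":"203.0.113.10","ttl":3600}}' \
    https://pointhq.com/zones/1234/records
```

The nice part about having an API at all is that record changes like this can go into your own deployment scripts instead of being click-work in a web interface.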

Now I no longer need to set up redirects using my Dyn account; I can set them up directly with my free PointHQ DNS account! Hell, I can probably even let lapse now, which will save me a few extra bucks per year that could be better used at the strip club...errr... I mean on booze....errr... I mean on something nice for my wife.

What DNS service do you use? Is it free? What features does it have that you like? Maybe you hate it, and are looking to switch. Let us know why in the comments!


Aug 8, 2011

Remove "Remote Tech Support Mode..." Message From ESXi 4.1 Servers

I mentioned previously that I recently upgraded my VMware environment from 4.0 to 4.1. One of the cool things about ESXi 4.1 versus ESX 4.0 is that it's easier to enable Remote Tech Support Mode. If you don't know what that means, it basically means enabling SSH access to your ESXi server. In ESXi 4.0, in order to enable SSH you had to log into Tech Support Mode from the console, then edit /etc/inetd.conf using vi. In 4.1, you can just enable it through the administration console.
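For reference, the old ESXi 4.0 procedure went something like this from Tech Support Mode at the console. Treat it as a sketch; the exact contents of the ssh line in inetd.conf vary by build:

```shell
# ESXi 4.0 only: at the console, press Alt+F1 and log in to Tech Support Mode,
# then open the inetd config:
vi /etc/inetd.conf
# In vi, delete the leading '#' from the line that starts with "#ssh" and save.
# Then restart inetd so it picks up the change:
ps | grep inetd      # note the inetd PID from this output
kill -HUP <PID>      # replace <PID> with the PID you noted
```

Compared to a checkbox in the 4.1 administration console, you can see why this got old fast.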

The only problem with enabling SSH in 4.1 is that when you open up vSphere, you will see a warning saying:

"Remote Tech Support Mode (SSH) for the host <hostname> has been enabled"

And you will see a yellow exclamation point on your ESXi hosts in your cluster. If you are like me, you only want to see warnings like that when there is actually something wrong.

It turns out this little message goes away after a reboot, but who the hell can afford to reboot anything? Most of the time, we can't, right? At least not during business hours, and if you want to do it after business hours you get an earful from the wife for working too much. Am I right? It turns out that you don't have to reboot to make the message go away though.

All you need to do to make the message go away is to restart the hostd service. Since you have SSH enabled you can SSH in, and run the following:

# /etc/init.d/hostd restart

After that, the message will go away, and you can use SSH on your ESXi servers without hassle.

Aug 2, 2011

How To Upgrade To ESX 4.1 or ESXi 4.1 Using VMware Update Manager

I just successfully completed a data center move over the weekend. We moved from a crappy data center in one part of town to an awesome new data center in another part of town. What makes the new data center so awesome you ask? Well, besides newer and better power generators, they also have a pool table and foosball table in the break room! In fact, it happens to be the same data center that we used at my last gig, so I get to see all the bright smiley faces of my former co-workers when I go there!

Anyhoo, since I moved data centers this gave me an unusual opportunity to revamp the network to my liking. During the move I reconfigured our core switches, and iSCSI networks to make them more efficient. I also was able to re-do all the cabling and implement some color coding to organize the cables better. I was also able to implement the new SAN I've been talking about, and finally I decided it was time to upgrade my VMware environment to 4.1 from 4.0.

I decided to build a new vCenter server, migrate my hosts to it, then upgrade each host. After I installed vCenter 4.1 and the vSphere client, I realized that the Host Update Utility was no longer available. That sucks, because I am used to using that. That's okay though, because VMware wants you to use the VMware Update Manager instead, which is a plug-in for your vSphere client. It turns out upgrading ESX or ESXi to version 4.1 using the VMware Update Manager is pretty easy. Check out this video:

Pretty easy right? The VMware Update Manager is better in my opinion because it does way more than just upgrades; you can also use it to patch your VM hosts.

Which do you like better? This way? The old way using the Host Update Utility? How about using the CLI? Let us know in the comments.
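Since the CLI came up: the vSphere CLI route for this upgrade uses vihostupdate with the offline upgrade bundle. This is only a sketch; the bundle filename and bulletin ID depend on the exact 4.1 build you download, and esxi-host-01 is a placeholder hostname, so double-check everything against VMware's upgrade guide:

```shell
# Run from a machine with the vSphere CLI installed; put the host in
# maintenance mode first.
vihostupdate.pl --server esxi-host-01 --username root \
    -i -b upgrade-from-ESXi4.0-to-4.1.0-0.0.260247-release.zip \
    -B ESXi410-GA
# Reboot the host afterwards to complete the upgrade.
```

Handy if you have a pile of hosts and would rather script the upgrade than click through Update Manager for each one.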
