Using Linux to "ghost" image or file back up a partition or system.

======================================================================

Clean Up the Operating System Install

Quick M$ Steps:
    virus scan
    spy-ware scan
    set scandisk to run on reboot, and reboot
    defrag drives
    run eraser on drives
    set the next reboot for scandisk, then boot to linux for backups

Delete trash files:
    swap file
    trash/recycle
    user.dmp
    drwtsn32.log
    scandisk.log
    temp / tmp / Temporary Internet Files
    Favorites, Downloaded Program Files, History, Recent
    Internet cache, Cookies, Offline Web Pages
    win/Profiles
    "Disk Cleanup" from the partition tool box
also:
    web browser caches
    vdub batch queues
    nntp caches
    other temp dirs

Defragment the disk.  Run check disk.

If eraser won't run, manually zero out the free space first to clean
up the file system image:

mount /dev/... /mnt/...
cd /mnt/...
cat /dev/zero | split --bytes=30m - zerowindoze.
sync
rm zerowindoze.*
sync

If it's a non-mountable file system:
Run Eraser and write 0's to the disk (eraser.sf.net).
Be sure to use the "erase swap on shutdown" option.

-----

This can be really dangerous... TOTALLY wipe the disk to a "new"
state (as in no going back):

hdparm -d1 /dev/hda
dd if=/dev/zero of=/dev/hda bs=1024k

======================================================================

Using a remote file system to dump to instead of SSH.

smbmount the remote file server (the -o option list is all one
argument: no spaces after the commas, no line breaks):

mkdir -p /mnt/smbserver-dump
smbmount //smbserver/dump /mnt/smbserver-dump -o username=user%password,netbiosname=smbclient,ip=192.168.1.101

======================================================================

Generating SSH keys to avoid login prompts.

Client computer, as root:

ssh-keygen -b 2048 -t dsa
scp /root/.ssh/id_dsa.pub user@server:.ssh/pub.someuser_host

Backup server, as the common user:

cd ~/.ssh
cat pub.someuser_host >>authorized_keys2

Warning: this can allow a "back door" from one system into another if
you aren't careful.

======================================================================

ssh tricked out disk image way: copy sda of this computer to sdb of
computer 2.  Partitions must be the same size on each machine.

dd if=/dev/sda | ssh root@machine "dd of=/dev/sdb"

- rsync --delete --one-file-system -avH source_dir/ dest_dir/
  (source and dest are local mount points)

======================================================================

Backup Notes

-----

Boot block / disk partition table:

dd if=/dev/hda of=/path/to/disk-boot-block.img bs=512 count=1

Sometimes the boot block seems bigger than 512 bytes.  It's ok to
keep a bigger version IF you know how to use it in time of need.

dd if=/dev/hda of=/path/to/disk-boot-block.img bs=1024k count=1

It's usually a good idea to keep a text partition list just for
safety and future reference.

fdisk -l /dev/hda > /path/to/hda-fdisk.txt
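
How to put the saved boot block back in time of need (a rough sketch;
the image path is a placeholder and the target device needs to be
double checked before running this):

#restore the MBR + partition table from the 512 byte copy
dd if=/path/to/disk-boot-block.img of=/dev/hda bs=512 count=1
sync
#make the kernel re-read the partition table (or just reboot)
blockdev --rereadpt /dev/hda

Note: replaying the bigger 1024k grab the same way also rewrites
whatever lives in the first meg of the disk, so the plain 512 byte
copy is the safer one to put back.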

-----

Double compression tests.

Note that bzip2 straight is really better than doing a double.  Use
smaller compression numbers for faster speed (but larger files).
Note gzip is far faster than bzip2 but produces larger files.

Tip: If the disk is mostly empty with a lot of blank space, gzip
won't compress it as efficiently.  It leaves about 1meg per 1gig of
empty drive space.  That can be further compressed to about 150k per
1meg of gzip:

cat /dev/hda1 | gzip -c -9 | gzip -c -9 > partition.img.gz.gz

Note: bzip2 doesn't seem to have this problem, although it is far
slower.  Doing a second bzip2 on the bz2 file will usually make it a
little smaller, but not by a lot (as in not worth it).

In general, double compression is overly time consuming and isn't
worth it.

-----

cat may have trouble at 2g or 4g boundaries.  This is a rare problem
now.  If so, use dd instead:

dd if=/dev/hda1 bs=1024k | bzip2 -c -1 > partition.img.bz2

dd can be a little obnoxious: during extraction or a test it may
report an "ends unexpectedly" error at the end of the archive.  This
should be ignored.  The bzip2 return code on this is usually "2".

-----

Splitting the compressed file is usually a good idea if it will be
overly large.  Smaller files are easier to handle than one huge one.
If you split on the 50meg boundary, you can back up to either a
650meg or 700meg CDR (or both) easily.

-----

If the data is critical and/or on questionable media, generate par2
files for recovery (parchive.sf.net).  The command line version could
be added into a script for automation (a rough sketch follows below).
Keep in mind par2 is kinda slow, but it can really save your ass in
times of desperation.

# 5% redundancy with standard block size options.
par2 create -r5 file.img.bz2.aa

# repair a busted file using the extra par2 files.
par2 repair file.img.bz2.aa.par2 file.img.bz2.aa.*par2
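
A rough automation sketch (it assumes the split volumes are named
like file.img.bz2.aa as in the example above, and that par2 is on the
PATH):

#!/bin/sh
#generate 5% recovery data for each split volume in the current dir.
for sFile in file.img.bz2.* ; do
    test -f "${sFile}" || continue
    case "${sFile}" in
        *.par2) continue ;;    #don't par2 the par2 files
    esac
    nice par2 create -r5 "${sFile}"
done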

======================================================================

Image Backup

Single Volume / Simple:

Create:
cat /dev/hda1 | bzip2 -c -9 > partitionname.img.bz2
  or
bzip2 -c -9 /dev/hda1 > partitionname.img.bz2

Extract:
bzip2 -cd partitionname.img.bz2 > /dev/hda1

-----

Multiple Volumes:

Create:
cat /dev/hda1 | bzip2 -c -9 | split --bytes=50m - partitionname.img.bz2.

Test:
find . -maxdepth 1 -type f | grep partitionname | sort | xargs -l1 cat | bzip2 -t ; echo $?

Extract:
find . -maxdepth 1 -type f | grep partitionname | sort | xargs -l1 cat | bzip2 -cd > /dev/hda1

-----

Multiple Volumes Remote:

Create:
cat /dev/hda1 | bzip2 -c -9 | ssh user@server "split --bytes=50m - /path/to/partitionname.img.bz2."

Test: (command run on the server, 0=good)
ssh user@server "find /path/to/ -maxdepth 1 -type f | grep partitionname | sort | xargs -l1 cat | bzip2 -t" ; echo $?

Extract:
ssh user@server "find /path/to/ -maxdepth 1 -type f | grep partitionname | sort | xargs -l1 cat" | bzip2 -cd > /dev/hda1

======================================================================

#!/bin/sh
#back up a partition by files (not an image) using ssh over a network.
#fat32 windorks in this case...

sSSHUserServer="user@server"
sDate="`date +%Y%m%d.%H%M%S`"
sBakFile="/scratch/bak/computer-${sDate}_running.tar.gz"
sBakDir="/mnt/win"
sPartition="/dev/hda1"
sLogFile="/scratch/backup-runs.txt"
sThrottleShort="sleep 1s"
sThrottleLong="sleep 1m"

#---

echo "Backup Start: `date`" >>${sLogFile}

#mount the disk
mkdir ${sBakDir} 2>/dev/null
mount ${sPartition} ${sBakDir} 2>/dev/null
${sThrottleShort}

#backup the files.
echo "Backing up..."
cd ${sBakDir}
cd ..
#last component of the mount point, e.g. "win"
sBakDirCut="`echo "${sBakDir}" | tr '/' '\n' | tail -n1 | tr -d '\r\n\t'`"
tar zcvf - ${sBakDirCut} | ssh ${sSSHUserServer} "cat > ${sBakFile}"
echo "Finished backup"

echo "Backup Finish: `date`" >>${sLogFile}

umount ${sBakDir}

#automate going straight back to windoze.
#sync
#shutdown -r 1

======================================================================

#!/bin/sh
#restore a partition by files (not an image) using ssh over a network.
#fat32 windorks in this case...

sSSHUserServer="user@server"
sBakFile="/scratch/bak/computer-06_running.tar.gz"
sBakDir="/mnt/win"
sPartition="/dev/hda1"
sLogFile="/scratch/backup-runs.txt"
sThrottleShort="sleep 1s"
sThrottleLong="sleep 1m"

#---

if test -z "${sBakDir}" ; then
    echo "sBakDir cannot be empty"
    exit 1
fi

echo "Restore Start: `date`" >>${sLogFile}

#mount the disk
mkdir ${sBakDir} 2>/dev/null
mount ${sPartition} ${sBakDir} 2>/dev/null
${sThrottleShort}

#delete the old files
echo "Deleting old files on partition..."
rm -r ${sBakDir}/* 2>/dev/null
sync
${sThrottleShort}

#paranoia: zero out the partition's free space
echo "Paranoia: zero'ing out the disk"
nice cat /dev/zero | nice split --bytes=30m - ${sBakDir}/zero. 2>/dev/null
sync
${sThrottleShort}

#sBakDir has already been tested to not be empty.
rm -r ${sBakDir}/* 2>/dev/null
sync
${sThrottleShort}

#restore the files.
echo "Restoring backup..."
cd ${sBakDir}
cd ..
ssh ${sSSHUserServer} "cat ${sBakFile}" | tar zxvf -
sync
${sThrottleLong}
sync
echo "Finished restoring"

echo "Restore Finish: `date`" >>${sLogFile}

umount ${sBakDir}

#automate going straight back to windoze.
#sync
#shutdown -r
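
Optional quick check after the restore finishes (a sketch that reuses
the variables from the restore script above): compare the number of
files on the partition against the number of files in the archive.
On a fat32 restore the two counts should match.

mount ${sPartition} ${sBakDir} 2>/dev/null
#files that landed on the partition
find ${sBakDir} -type f | wc -l
#files in the archive (directory entries end in "/", so drop them)
ssh ${sSSHUserServer} "cat ${sBakFile}" | tar ztf - | grep -v "/$" | wc -l
umount ${sBakDir}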

======================================================================

lotus notes...

#!/bin/sh
#Backs up to the backup server using ssh/scp.

unset LANG

#Variables
ThisServer="notes.k90.com"
sShort="k90"
NumPreviousKeep="1"
BUServer="172.16.1.100"
BUUser="backup"
BURootDir="/mnt/svr/backup"
sSplitSize="50m"

###===--- To make the keys.
#on the system to be backed up:
#ssh-keygen -b 1024 -t dsa
#scp /root/.ssh/id_dsa.pub ${BUUser}@${BUServer}:.ssh/pub.someuser_host
#on the backup server:
#cd ~/.ssh
#cat pub.someuser_host >>authorized_keys2

###===--- Don't touch this section.

#Generate the rest
Date="`date +%Y%m%d | tr -d '\n' | tr -d '\r' | tr -d '\t' | tr -d ' '`"
RunIt="ssh -l ${BUUser} ${BUServer}"
CopyIt="scp -r -p"
RmtDir="${BURootDir}/${ThisServer}-${Date}"
BUDest="${BUUser}@${BUServer}:${RmtDir}"

#Evaluate out the numbers.
NumCurr="`${RunIt} find ${BURootDir} -maxdepth 1 | grep -v "^\.$" | grep -v "^${BURootDir}$" | grep ${ThisServer} | wc -l | tr -d '\n' | tr -d '\r' | tr -d '\t' | tr -d ' '`"
NumReal="$(( ${NumCurr} - ${NumPreviousKeep} ))"

#If too many are backed up, delete the oldest.
if test ${NumReal} -gt 0 ; then
    ${RunIt} "find ${BURootDir} -maxdepth 1 | grep -v \"^\.$\" | grep -v \"^${BURootDir}$\" | grep ${ThisServer} | sort | head -n ${NumReal} | xargs -l1 nice chmod -R 700 2>/dev/null"
    ${RunIt} "find ${BURootDir} -maxdepth 1 | grep -v \"^\.$\" | grep -v \"^${BURootDir}$\" | grep ${ThisServer} | sort | head -n ${NumReal} | xargs -l1 nice rm -rf 2>/dev/null"
fi

#Set the stage.
${RunIt} mkdir -p ${RmtDir}

###===--- Perform backup commands and copy out files.

#split the backup into multiple files so it doesn't break file size limits.
/etc/rc.d/init.d/notes-${sShort} stop    #this fails half the time
/etc/rc.d/init.d/notes-${sShort} backup - | ssh ${BUUser}@${BUServer} -c none "split --bytes ${sSplitSize} - ${RmtDir}/${ThisServer}-${Date}.tar.gz."
${RunIt} sync

#stupid notes...it usually won't restart.
date
shutdown -r 1

#To extract:
#find . -maxdepth 1 -type f | grep filenameprefix | sort | xargs -l1 cat | tar ztvf -
#over the network:
#cd /
#ssh backup@172.16.1.211 "find /svr/backup/ford.k90.com-20030720 -maxdepth 1 -type f | grep ford | sort | xargs -l1 cat" | tar zxvf -

###--- old ways.

#old way: do a single file backup.
#/etc/rc.d/init.d/notes-${sShort} backup - | ssh ${BUUser}@${BUServer} "dd of=${RmtDir}/${ThisServer}-${Date}.tar.gz"
#${RunIt} sync

#old way: create the archive locally then move it, fine for smaller archives.
#mkdir -p /home/bak-$$ 2>/dev/null
#/etc/rc.d/init.d/notes backup /home/bak-$$/${ThisServer}-${Date}.tar.gz
#${CopyIt} /home/bak-$$/* ${BUDest}
#rm -rf /home/bak-$$

======================================================================