Adding a Disk to a Linux System using Command Line Tools

I was faced with installing a new disk on a home server, and while I’ve done this a hundred times, it seemed difficult enough to deserve yet another tutorial, if only for my own reference. It’s not hard, but until the new GNOME GUI tools were developed, it was a rather complex process. (It’s complex on all Unixes, for some reason. BSD is even more difficult than Linux.)

Typically, I use the graphical gparted disk tool, gnome-disk-utility, or some other GUI tool to add disks to my computer, but if I have to add them to a server or a remote computer, I have to use command line tools. This bird’s-eye howto explains how to do this, in a lot of detail, using many tools that help you inspect the disks. It’s beginner-level stuff, but you will need to use the man pages and search for docs to understand the whole process.

The rest of the tutorial is in preformatted monospace, because I wrote this all in a text editor, for myself, and figured it could be useful to everyone.

1. Plug the disk in.  (Oy, this could be a long topic, so let's skip it.  Just get it in there.)

2. Create a directory where you want to mount the disk.  I used /backup.

3. Locate the disk.  I usually do this:

   ls /dev/sd*
   If you know your disks, you'll spot the new one.  If you are unfamiliar, run
   sudo fdisk -l
   That lists all the disks, with some partition information.
   You can also run lsblk to view the disk tree.
   If that's not enough information, try 
   cat /proc/scsi/scsi
   It'll display all the devices on the SCSI bus.  It won't display the sd* 
   devices, so that's going to be tricky, but at least you can verify the model
   of the disk.  Also, note that this isn't a real scsi bus.  The ATA devices
   are treated like SCSI devices.
   Then, use parted to examine each disk:
   sudo parted /dev/sdb print
   That will print the hardware info for that disk.
   If that's not enough, look at all the mounted disks:
   sudo mount
   Now, see which disks are not listed.  If there's only one disk not listed,
   it must be the new disk.
   If more than one disk is not listed, and you cannot distinguish them by
   the partition information or the manufacturer information, you must have
   identical disks in the system.  You can still try to distinguish between them
   by looking at the SMART data inside the disk.
   You may not have the tools installed.  On an Ubuntu system, you can install 
   them thus:
   sudo apt-get install smartmontools
   Then view each disk like this:
   sudo smartctl -a /dev/sdb
   That will print a lot of info, and in there you'll find a value like
   Power_On_Hours, which will show you how long the disk's been spinning.
   I don't really trust the numbers, but it's all you've got.
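
   The "compare all disks against mounted disks" idea above can be sketched
   as a tiny script.  The device lists here are hypothetical, so it runs
   anywhere; on a real system you would generate them from ls /dev/sd? and
   the output of mount:

```shell
# Hypothetical device lists (assumption: three disks, two of them mounted).
printf '/dev/sda\n/dev/sdb\n/dev/sdc\n' | sort > /tmp/all_disks
printf '/dev/sda\n/dev/sdc\n'           | sort > /tmp/mounted_disks
# comm -23 prints lines unique to the first file: the candidate new disk.
comm -23 /tmp/all_disks /tmp/mounted_disks
```

   With these sample lists, that prints /dev/sdb.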

4. Partition the disk.

   parted is considered the cool tool, but I'm still using fdisk out of habit.

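
   A hedged sketch of this step with a scripted parted, assuming the new
   disk is /dev/sdb (double-check first; this erases the disk).  Note that
   the fstab line in step 6 expects ext4, so the new partition also needs a
   filesystem before it will report a UUID:

```shell
# DANGER: destroys everything on /dev/sdb -- the device name is an
# assumption; verify yours with the tools from step 3.
sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
# Create the ext4 filesystem that steps 5 and 6 assume:
sudo mkfs.ext4 /dev/sdb1
```
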
5. Find out the UUID of the new partition.
   sudo blkid /dev/sdb1
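
   The UUID= value that blkid prints is exactly what goes into /etc/fstab
   in the next step.  A sketch of pulling just the UUID out of a
   blkid-style line (the sample line is made up, reusing the UUID from the
   fstab example below):

```shell
# Hypothetical blkid output; on a real system: sudo blkid /dev/sdb1
line='/dev/sdb1: UUID="ac1508cf-7d69-4da4-8e18-4cd8f9405fda" TYPE="ext4"'
# Strip everything but the quoted UUID value.
printf '%s\n' "$line" | sed 's/.*UUID="\([^"]*\)".*/\1/'
```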

6. Make a new line in /etc/fstab to tell the system how to mount
   this partition.
   sudo nano /etc/fstab

   add a line like this:
UUID=ac1508cf-7d69-4da4-8e18-4cd8f9405fda /backup ext4 errors=remount-ro 0 2

   lines in /etc/fstab have 6 fields:
   1 : device UUID or device node in /dev
   2 : mount point
   3 : file system type
   4 : comma separated list of options
   5 : usually set to 0, indicates if a filesystem needs to be dumped
   6 : fs_passno, used by fsck to determine order of filesystems to check
       at boot time.  Set this to 2.
You might want to add a comment for yourself:

# /dev/sdb1
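
   Before rebooting, you can ask the system to parse /etc/fstab and report
   mistakes.  A sketch, assuming util-linux 2.29 or newer (the version
   that added the --verify option):

```shell
# Check /etc/fstab for syntax and consistency problems without
# actually mounting anything.
sudo findmnt --verify
```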

7. Mount the disk.  You should run the command and specify the mount point
   rather than the disk device, to test that you have set it up right in
   /etc/fstab:
   mount /backup

8. The disk I installed was pretty old, so I ran a simple command that writes
   a huge file to the disk.  Writing exercises every free sector, and if the
   drive finds an error along the way, it will automatically remap the bad
   sector.  It's better to suffer this now (and find out via the smartctl
   tool if sectors were remapped) than when the disk is in production.
   The command (run from inside the new mount point, e.g. /backup) is:
   cat /dev/zero > zero &
   I've since learned there's a feature in e2fsck that does bad block scanning
   with a non-destructive read/write test.  That sounds better!

   (Before running e2fsck, kill the job that's writing the zero file. Then,
   delete the zero file.)
   Unmount your disk, then run the e2fsck command:
   umount /backup
   e2fsck -c -c -y /dev/sdb1
   This command takes a long time to run, like hours and hours.
   You might want to use the screen command to allow it to run while you're
   logged out.
   To start screen: screen
   To detach from a running shell, Control-A Control-D.
   To re-attach to that shell: screen -r
   If you don't like screen, you can use the old-school nohup command.
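
   The nohup variant, as a sketch; /dev/sdb1 is the example partition from
   the steps above:

```shell
# Run the surface check immune to hangups, log the output, and
# background it so you can log out.
sudo nohup e2fsck -c -c -y /dev/sdb1 > fsck.log 2>&1 &
```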

9. Reboot to check that it mounts on boot.

FYI, for the boot disk, I would suggest using a tool like SpinRite to scan the disk surface. SpinRite will do the read/write scan. It boots pretty quickly, and runs with a nice graphical display that makes you feel like it’s doing very important work.