Add a hard drive in Linux with LVM

I posted last time about modifying the swap space on Linux with LVM, and also introduced expanding a file system to match a new, larger partition. Next up is adding another drive for data storage. I'm only adding one drive because it is actually a virtual drive through VMware, backed by a RAID5 group on our SAN, so redundancy is not really an issue. This example is from a Red Hat Enterprise Linux 5 (RHEL5) clone and uses Logical Volume Management (LVM), but it should be usable in other Linux distributions as well.

Since this is a VM, I simply attached a new blank hard drive through VirtualCenter. Of course, I could have rebooted the server to pick up the new drive, but what fun is that? So the question became: how do you detect a new hard drive in Linux without rebooting? Currently, it looks like this:

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02

In Solaris, you simply run devfsadm. In Linux, you just need to have the OS rescan the SCSI bus, which we do through the /proc file system. Determine the parameters of the new SCSI drive and then run the following command:

# echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi

The four numbers, in order, are host, channel, id, and LUN. Now, when we check, we see the new drive:
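If you script this, it helps to build the command string from its four parameters first; here is a minimal sketch using the values from this example (host 0, channel 0, id 1, LUN 0). Newer kernels also accept a wildcard rescan via sysfs (echo "- - -" > /sys/class/scsi_host/host0/scan), though the /proc interface shown above is the classic approach.

```shell
# Compose the add-single-device string from its four parameters:
# host, channel, id, and LUN (values for the new drive in this example)
host=0; channel=0; id=1; lun=0
cmd="scsi add-single-device $host $channel $id $lun"
echo "$cmd"
# To apply it (as root): echo "$cmd" > /proc/scsi/scsi
```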

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02

At the same time, you will see the following lines in /var/log/messages:

kernel: Vendor: VMware Model: Virtual disk Rev: 1.0
kernel: Type: Direct-Access ANSI SCSI revision: 02
kernel: target0:0:1: Beginning Domain Validation
kernel: target0:0:1: Domain Validation skipping write tests
kernel: target0:0:1: Ending Domain Validation
kernel: target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU RDSTRM RTI WRFLOW PCOMP (6.25 ns, offset 127)
kernel: SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
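The kernel log also lets you sanity-check the size: 104857600 sectors of 512 bytes each works out to the 53687 MB reported.

```shell
# 104857600 sectors x 512 bytes/sector = 53687091200 bytes (~53687 MB)
bytes=$((104857600 * 512))
echo "$bytes bytes, $((bytes / 1000000)) MB"
```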

Since this is a brand new drive, we need to set up whatever partitions we would like on it. If you're not sure what the drive's device name is, you can always run fdisk -l first:

# fdisk -l
Disk /dev/sda: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1566 12474472+ 8e Linux LVM

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn’t contain a valid partition table

Go ahead and set up the partitions; I just need one:

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-6527, default 6527):
Using default value 6527

Command (m for help): p

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 6527 52428096 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now you can see the partitions are defined:

# fdisk -l
Disk /dev/sda: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1566 12474472+ 8e Linux LVM

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 6527 52428096 83 Linux

Now, if you don't have LVM, you would simply make the new file system directly on the partition. (If you are using LVM, don't do this; skip to the next step.)

# mkfs.ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
6553600 inodes, 13107024 blocks
655351 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Since we have LVM, we will be using that instead. Using lvmdiskscan, we can see that the new partition is available:

# lvmdiskscan
…skip…
/dev/sdb [ 50.00 GB]
/dev/sdb1 [ 50.00 GB]

First step is to turn it into a new physical volume:

# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

Next, we create a volume group and add the physical volume to it:

# vgcreate VolGroup01 /dev/sdb1
Volume group "VolGroup01" successfully created

And then create the logical volume (using all of the available space on the volume group):

# lvcreate -l 100%FREE -n LogVol00 VolGroup01
Logical volume "LogVol00" created
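LVM carves the volume group into physical extents, and -l 100%FREE allocates all of them. As a rough check, assuming the default 4 MB extent size, the 52428096 KB partition from the fdisk output yields about 12799 extents, i.e. just under 50 GB:

```shell
# Assuming the default 4 MB physical extent size, estimate how many
# extents /dev/sdb1 (52428096 KB per fdisk) contributes to the volume group
part_kb=52428096
pe_kb=$((4 * 1024))
echo "$((part_kb / pe_kb)) extents"
```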

Now we make the file system on the logical volume:

# mkfs.ext3 /dev/VolGroup01/LogVol00
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
6553600 inodes, 13106176 blocks
655308 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
400 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Add the appropriate line to /etc/fstab:

/dev/VolGroup01/LogVol00 /data ext3 defaults 1 1

Create the mount point and mount the new file system:

# mkdir /data
# mount /data

See that it’s now available to the OS:

# df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00
50G 180M 47G 1% /data

All done!

Modify swap space in Linux (with LVM)

I posted a while back about how to modify your swap space in Solaris 10; now I will show you how to do it in Linux. This example is from a Red Hat Enterprise Linux 5 (RHEL5) clone and uses Logical Volume Management (LVM), but it should be usable in other Linux distributions, and you could use fdisk to resize the partition instead of using LVM. I will specifically show how to reduce swap space here, but the process is applicable to enlarging it as well.

To see which file system is being used for your current swap space, check /proc:

# cat /proc/swaps
Filename Type Size Used Priority
/dev/mapper/VolGroup00-LogVol01 partition 4194296 0 -1

And then to see how much swap is in use, run the free command:

# free
total used free shared buffers cached
Mem: 385560 76388 309172 0 11328 33788
-/+ buffers/cache: 31272 354288
Swap: 4194296 0 4194296

If your swap space is in use, you will need to free it first: reboot into single-user mode, shut some applications down until it is free, or temporarily add a separate swap drive or file. Once your swap space is free to be modified, turn off the swap space that you want to modify:

# swapoff -v /dev/VolGroup00/LogVol01
swapoff on /dev/VolGroup00/LogVol01

You can now verify that the swap space is no longer in use:

# free
total used free shared buffers cached
Mem: 385560 74404 311156 0 11340 33780
-/+ buffers/cache: 29284 356276
Swap: 0 0 0
# cat /proc/swaps

Now that the file system is not in use, we are free to modify it. In this case, I am reducing the 4 GB set aside for swap to 1 GB so that I can reuse the space for my root volume. Here's the layout of my current LVM volumes:

# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID jzjdTT-Md9K-iP52-3kv4-OqSL-2Y0c-yxUq7o
LV Write Access read/write
LV Status available
# open 1
LV Size 7.88 GB
Current LE 252
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID ixEabw-7lMg-ho6h-GcVq-pEOE-rHPb-X3HY3e
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 128
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
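The LE counts reveal a 32 MB extent size on this volume group: 128 extents back the 4.00 GB swap volume, and 252 back the 7.88 GB root volume.

```shell
# Current LE x extent size = LV size; the extent size here is 32 MB
echo "$((128 * 32)) MB"   # LogVol01: 4096 MB = 4.00 GB
echo "$((252 * 32)) MB"   # LogVol00: 8064 MB = 7.88 GB
```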

Now to shrink the swap logical volume. The fact that shrinking it is destructive doesn't matter, since it is just swap space anyway:

# lvm lvreduce /dev/VolGroup00/LogVol01 -L -3G
WARNING: Reducing active logical volume to 1.00 GB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol01? [y/n]: y
Reducing logical volume LogVol01 to 1.00 GB
Logical volume LogVol01 successfully resized
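The -L -3G option removes 3 GB worth of extents; at this volume group's 32 MB extent size (4.00 GB / 128 LE), that is 96 extents, leaving 32 extents, or 1 GB, on LogVol01:

```shell
# 3 GB at 32 MB per extent = 96 extents removed; 128 - 96 = 32 remain
removed=$((3 * 1024 / 32))
left=$((128 - removed))
echo "$left extents left ($((left * 32)) MB)"
```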

And now to extend the root volume. (You can increase the size while the volume is in use, and it is non-destructive.):

# lvm lvextend -l +100%FREE /dev/VolGroup00/LogVol00
Extending logical volume LogVol00 to 10.88 GB
Logical volume LogVol00 successfully resized

Let’s take a look at the new partition sizes:

# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID jzjdTT-Md9K-iP52-3kv4-OqSL-2Y0c-yxUq7o
LV Write Access read/write
LV Status available
# open 1
LV Size 10.88 GB
Current LE 348
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID ixEabw-7lMg-ho6h-GcVq-pEOE-rHPb-X3HY3e
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GB
Current LE 32
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

Now to remake the smaller logical volume into usable swap space:

# mkswap /dev/VolGroup00/LogVol01
Setting up swapspace version 1, size = 1073737 kB
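The odd-looking size is most likely the 1 GB volume minus one 4 KB page reserved for the swap header, printed in 1000-byte units by this vintage of mkswap (an assumption about its reporting, but the arithmetic matches):

```shell
# 1 GB minus one 4 KB header page, in 1000-byte units,
# matches the reported 1073737 kB
gb=$((1024 * 1024 * 1024))
echo "$(( (gb - 4096) / 1000 )) kB"
```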

And then add it back to the OS as usable swap space:

# swapon -va
swapon on /dev/VolGroup00/LogVol01

Verify that the swap space is back:

# cat /proc/swaps
Filename Type Size Used Priority
/dev/mapper/VolGroup00-LogVol01 partition 1048568 0 -2

# free
total used free shared buffers cached
Mem: 385560 76156 309404 0 11668 35036
-/+ buffers/cache: 29452 356108
Swap: 1048568 0 1048568

Let's go ahead and resize the actual file system on the root volume to take advantage of the newly available space. First, a before snapshot:

# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
7.7G 764M 6.5G 11% /

Now to increase it. Note that if you do not specify a new size, resize2fs will automatically grow the file system to fill the underlying volume:

# resize2fs -p /dev/mapper/VolGroup00-LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/mapper/VolGroup00-LogVol00 to 2850816 (4k) blocks.
The filesystem on /dev/mapper/VolGroup00-LogVol00 is now 2850816 blocks long.
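The block count checks out against the new volume size: 2850816 blocks of 4 KB is 11136 MB, i.e. the file system now fully fills the 10.88 GB LogVol00.

```shell
# 2850816 x 4 KB blocks = 11136 MB (~10.88 GB), the full size of LogVol00
echo "$((2850816 * 4 / 1024)) MB"
```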

Verify the new size:

# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
11G 766M 9.3G 8% /

Next post I’ll explain how to add a second drive to this system.