Tuesday, September 8, 2009

Software RAID vs. LVM: Quick Speed Test

Introduction

Currently, I have a fileserver that is set up this way:

Filesystem
      ^
Logical Volume Manager
      ^
Software RAID Arrays
      ^
Physical Disks

In my case, LVM is an extra layer that isn't useful, since only one physical entity belongs to the Volume Group: a single RAID5 array.
You can put your filesystem on top of a Logical Volume, or directly on the RAID array device; it depends on how you want to manage your data and devices.
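
For reference, the difference between the two layouts is just a couple of commands. This is a rough sketch, not my exact setup commands: it assumes all eight disks go into one RAID5 array, reuses the 256kB chunk size and the arrays/storage names that show up later in this post, and picks XFS as the filesystem.

# Option 1: filesystem -> LVM -> RAID (my current setup)
mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=256 /dev/sd[a-h]1
pvcreate /dev/md0                        # make the array an LVM physical volume
vgcreate arrays /dev/md0                 # volume group "arrays"
lvcreate -l 100%FREE -n storage arrays   # logical volume "storage"
mkfs.xfs /dev/mapper/arrays-storage

# Option 2: filesystem directly on the RAID device
mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=256 /dev/sd[a-h]1
mkfs.xfs /dev/md0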

So, is this hampering performance? The tables below will do the talking, but first: the setup.

System Setup

Processor: Intel Pentium Dual CPU E2160 @ 1.80GHz
Motherboard: MSI (MS-7514) P43 Neo3-F
    North Bridge: Intel P43
    South Bridge: Intel ICH10
SATA Controller 1: JMicron 20360/20363 AHCI Controller
    AHCI Mode: Enabled
    Ports: 6-7
SATA Controller 2: 82801JI (ICH10 Family) SATA AHCI Controller
    Ports: 0-5
RAM: 1GB @ CL 5
Video Card: GeForce 7300 GS
Disk sda: WDC WD10EACS-00D6B1
Disk sdb: WDC WD10EACS-00D6B1
Disk sdc: WDC WD10EACS-00ZJB0
Disk sdd: WDC WD10EADS-65L5B1
Disk sde: WDC WD10EADS-65L5B1
Disk sdf: MAXTOR STM31000340AS
Disk sdg: WDC WD10EACS-00ZJB0
Disk sdh: WDC WD10EADS-00L5B1
Disk sdi: Hitachi HDS721680PLAT80 (OS)
Chunk size: 256kB
LVM Physical Extent Size: 1GB
LVM Read-ahead sectors: Auto (set to 256)

Speed Test Methods

A quick and easy way to run a speed test is with two tools: hdparm and dd.
Note that neither utility takes filesystem performance into account, as both read directly from the device rather than from a file. That doesn't matter here, since the point of these comparisons is the magnitude of the speed difference, not exact numbers ;)

hdparm

hdparm -tT /dev/xxx
-t: Perform timings of device reads for benchmark and comparison purposes.
Displays the speed of reading through the buffer cache to the disk without any prior caching of data.
This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead.

-T: Perform timings of cache reads for benchmark and comparison purposes.
This displays the speed of reading directly from the Linux buffer cache without disk access.
This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

dd

dd if=/dev/xxx of=/dev/null bs=10M count=400
This reads from the device and dumps the data to a null device (reading only). The block size is 10 megabytes (10 × 2^20 bytes), so with count=400 it reads ~4GB of data.
I specified 4GB to make sure it surpasses the RAM size (1GB), so caching can't inflate the numbers.

Before running dd, I flushed the read cache by entering: hdparm -f /dev/sd[a-h], which flushes the cache of all RAID disks.
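
Putting the flush and the read together, each run looked roughly like this (a sketch of the procedure, not a verbatim transcript):

# Flush the buffer cache of all RAID member disks, then do a raw sequential read.
hdparm -f /dev/sd[a-h]
dd if=/dev/md0 of=/dev/null bs=10M count=400                    # bare array
hdparm -f /dev/sd[a-h]
dd if=/dev/mapper/arrays-storage of=/dev/null bs=10M count=400  # through LVM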

Speed Test #1: RAID vs. LVM

LVM
root@Adam:~/mdadm-3.0# dd if=/dev/mapper/arrays-storage of=/dev/null bs=10M count=400
2097152000 bytes (2.1 GB) copied, 41.1147 s, 43.0 MB/s


root@Adam:~/mdadm-3.0# hdparm -tT /dev/mapper/arrays-storage
 Timing cached reads:   1926 MB in  2.00 seconds = 962.65 MB/sec
 Timing buffered disk reads:  146 MB in  3.00 seconds =  48.62 MB/sec
 
 
RAID
root@Adam:~/mdadm-3.0# dd if=/dev/md0 of=/dev/null bs=10M count=400
2097152000 bytes (2.1 GB) copied, 10.9341 s, 125 MB/s


root@Adam:~/mdadm-3.0# hdparm -tT /dev/md0
Timing cached reads:   1998 MB in  2.00 seconds = 998.73 MB/sec
Timing buffered disk reads:  538 MB in  3.01 seconds = 178.98 MB/sec

The above numbers are the average of 3 runs.
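
If you'd rather not average by hand, a loop like this does it (a sketch; it assumes dd prints the speed as the second-to-last field of its summary line, which GNU dd does):

# Run the dd test 3 times and average the MB/s figures.
for run in 1 2 3; do
    hdparm -f /dev/sd[a-h]
    dd if=/dev/md0 of=/dev/null bs=10M count=400 2>&1 | tail -n 1
done | awk '{ sum += $(NF-1) } END { print "average:", sum/NR, "MB/s" }'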

Speed Test #2: Disks Separately

root@Adam:~# for i in {a,b,c,d,e,f,g,h}; do dd if=/dev/sd"$i"1 of=/dev/null bs=10M count=400; done
root@Adam:~# for i in {a,b,c,d,e,f,g,h}; do hdparm -I /dev/sd"$i" | grep Firmware; done
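
Those are two passes over the disks; you can get model, firmware, and speed in one go with something like this (a sketch; the "Model Number" and "Firmware Revision" labels are what hdparm -I prints on my system, but may vary by version):

for i in a b c d e f g h; do
    model=$(hdparm -I /dev/sd$i | awk -F: '/Model Number/ { gsub(/^ */, "", $2); print $2 }')
    fw=$(hdparm -I /dev/sd$i | awk -F: '/Firmware Revision/ { gsub(/^ */, "", $2); print $2 }')
    speed=$(dd if=/dev/sd${i}1 of=/dev/null bs=10M count=400 2>&1 | tail -n 1)
    echo "sd$i | $model | $fw | $speed"
done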

Disk | Model                | Firmware | Speed Test Result
sda  | WDC WD10EACS-00D6B1  | 01.01A01 | 46.3106 s, 90.6 MB/s
sdb  | WDC WD10EACS-00D6B1  | 01.01A01 | 48.6391 s, 86.2 MB/s
sdc  | WDC WD10EACS-00ZJB0  | 01.01B01 | 70.8184 s, 59.2 MB/s
sdd  | WDC WD10EADS-65L5B1  | 01.01A01 | 46.9733 s, 89.3 MB/s
sde  | WDC WD10EADS-65L5B1  | 01.01A01 | 44.2861 s, 94.7 MB/s
sdf  | MAXTOR STM31000340AS | MX15     | 77.1797 s, 54.3 MB/s
sdg  | WDC WD10EACS-00ZJB0  | 01.01B01 | 50.5498 s, 83.0 MB/s
sdh  | WDC WD10EADS-00L5B1  | 01.01A01 | 46.747 s, 89.7 MB/s

As you can see, even though sdc & sdg have the same model and firmware, their speeds differ! I have no clue why. I searched Western Digital's website for firmware to download, but the site leads nowhere near an actual firmware download link.

The Maxtor disk has a newer firmware release. I'll check out its changelog before installing it. Also, as a precaution, I'll clone the Maxtor disk to sdg, since it's not being used right now, just in case the new firmware doesn't play nice!
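
The clone itself is just a raw block copy, something like this (double-check the device letters first, since it overwrites everything on the target):

# Copy the Maxtor (sdf) onto sdg, block for block.
# conv=noerror,sync keeps going past read errors and pads with zeros.
dd if=/dev/sdf of=/dev/sdg bs=10M conv=noerror,sync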

Conclusion

From the above numbers, it's clear that LVM, in my setup, has crippled performance by a huge margin (~66% slower than the bare array). So for my next setup, I'm going to skip LVM and slap the filesystem directly on top of the RAID5 array.

On one of my PCs (Adrenalin), I already have an XFS filesystem running directly on top of the RAID array, with no LVM. I got roughly double the speed of a single hard disk out of the array (140 MB/s) when I tested it last year with hdparm.

I don't claim that this is a typical problem with LVM. I did a quick search and didn't find any numbers, and I'm too lazy right now to dig deeper. But I have the numbers on that crap MSI board (it caused me so many problems with the SATA ports), and I'll skip LVM on that board. If I keep the board and don't smash it to smithereens, that is.

Irrelevant note: I'm loving posting to my blog through Google Docs.

4 comments:

Unknown said...

Just repeated your experiment, and in both the hdparm and the dd tests the LVM performance was ~25% slower.

No idea why. I did look into the stripe size of the LVM on the RAID 5 at one point but went no further.

MBH said...

jonathh,
I'm sorry, it seems that I forgot to mention some details.

I've added the chunk size, PE size, and read-ahead sectors right under the disk models.

Maybe those can be of more use.

I've bought a new motherboard, and I'm going to migrate the data to a new system and free myself from LVM. Then I'll toy with some of the parameters found here.

MBH said...

jonathh,
Also, you should take note of the motherboard's type and see what kind of chipset it uses. It makes some difference!

To compare your results to mine, you need to check them against everything listed in System Setup: I/O chipset, RAM, chunk size, disk models, disk firmware, etc.

Depending on those, results will vary.

Aaron said...

Did you add striping to your logical volume? I don't think it will be as fast as RAID, but it should increase your LVM performance to an extent.