
RAIDing 8 WD LB drives

PostPosted: Sat Jun 19, 2004 2:45 pm
by Kennyshin
RAID is always a difficult decision because of cost and data safety. Even if one is 4x richer than the rest of the population, RAID 0+1 is not exactly as safe as having just one disk drive per volume, simply because one needs multiple working disks even to access the stored data normally.

I got 12 WD LB drives, 160GB each, on Friday. I bought a used 3ware 8-channel P-ATA PCI card a few weeks ago. Four LBs are in this PC, connected to a 6-channel P-ATA PCI card; the rest are connected to the 3ware.

I need to decide whether to use these as independent drives or as RAID 0. I'm not an advocate of RAID 5 or RAID 0+1. For personal use, I think such "redundancy" is too redundant to be pragmatic. If I were that affluent, I would rather buy another 12 drives to back up the first 12 manually, copying the data from PC A to PC B over GbE LAN.

This is my first experience with a 3ware card and also my first with WD LB.

PostPosted: Sat Jun 19, 2004 7:10 pm
by Boba_Fett
Unfortunately, I think about 4 of those drives are going to be enough to saturate a 32-bit/33MHz PCI bus, so any speed you get in excess of about 133MB/s will be for naught. Another caution: RAID 0 sounds cool at first, but since you are striping data across the drives (meaning you will lose ALL your data on a RAID chain if one of them goes), it really, REALLY would suck if a problem occurred. That is a lot of data you don't want to lose. If you do go ahead and decide to try this, well, I hope you've got lots of spare time and around 200 DVD-Rs for backing up ;) Man, you must have a huge case and power supply to house 12 freakin' HDDs...
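A quick back-of-the-envelope check of that bottleneck (the per-drive sustained rate below is my rough assumption for 7,200RPM drives of this era, not a measured figure):

```python
# Theoretical peak of a standard 32-bit / 33 MHz PCI bus, and roughly how
# many drives it takes to saturate it. DRIVE_MBPS is an assumed per-drive
# sustained transfer rate, not a measured one.
PCI_WIDTH_BYTES = 4      # 32-bit bus
PCI_CLOCK_MHZ = 33.33    # standard PCI clock
DRIVE_MBPS = 35          # assumed sustained rate per drive

bus_mbps = PCI_WIDTH_BYTES * PCI_CLOCK_MHZ   # ~133 MB/s theoretical peak

print(f"PCI peak: {bus_mbps:.0f} MB/s")
print(f"Drives to saturate: {bus_mbps / DRIVE_MBPS:.1f}")
```

Which lands right around the "about 4 drives" figure.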

PostPosted: Sun Jun 20, 2004 9:12 am
by Kennyshin
200 DVD-R's won't be necessary. I'd prefer 200 DVD+R's to that anyway, but I am not going to use DVD media to "back up" my 1.92TB RAID 0 volume. HDD is best for copying HDD data.

The data that will be stored will never be critical. I am using the RAID 0 itself for backup, and that is why I added the drives this week. Striped HDDs are hardly primary storage for me.

For personal use, I think such "redundancy" is too redundant to be pragmatic. If I were that affluent, I would rather buy another 12 drives to back up the first 12 manually, copying the data from PC A to PC B over GbE LAN.


As I said above, I prefer just adding more HDDs to making the array redundant with hardware-based RAID mirroring.

I don't need an especially huge case, though I have a few big tower cases at home. I use almost anything for the hard drives to sit on. I am using a 10-inch fan for HDD cooling alone. Not 10 centimeters but 10 inches. 10 or 12, I'm not sure, and I'll use even a 16-inch one if necessary.

At first, I tried 8 drives in one striped array. Next, I tried 4 drives each: one array with 64KB stripe size and the other with 256KB. Now I'm trying 2 drives with 64KB and another 2-drive array with 128KB. I have two more RAID controllers on this Gigabyte motherboard, so I can compare the performance in a single system.

The performance so far has hardly been satisfying, though the access time went down to as little as 1ms.
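For intuition on why stripe size is worth varying, here's a small sketch. It's my own simplification (stripe-aligned requests, no controller caching), not how the 3ware firmware actually schedules I/O:

```python
import math

def drives_touched(request_kb, stripe_kb, n_drives):
    """Drives hit by one stripe-aligned request in a RAID 0 array."""
    return min(n_drives, math.ceil(request_kb / stripe_kb))

# A large sequential read wants every spindle working...
for stripe in (64, 128, 256):
    print(f"{stripe}KB stripe: 512KB read touches "
          f"{drives_touched(512, stripe, 4)} of 4 drives")

# ...while a small random read should ideally stay on one drive.
print(f"64KB stripe: 4KB read touches {drives_touched(4, 64, 4)} drive(s)")
```

So small stripes help big sequential transfers, while big stripes keep small random I/Os from tying up multiple spindles.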

PostPosted: Sun Jun 20, 2004 1:47 pm
by Kennyshin
Here are some of my first test results with this configuration.

http://www.storageinfo.co.kr/EzMODse/vi ... hp?t=18365

Test 1:

Benchmark Breakdown
Buffered Read : 73 MB/s
Sequential Read : 68 MB/s
Random Read : 29 MB/s
Buffered Write : 15 MB/s
Sequential Write : 65 MB/s
Random Write : 60 MB/s
Average Access Time : 1 ms (estimated)


Test 2:

Benchmark Breakdown
Buffered Read : 85 MB/s
Sequential Read : 99 MB/s
Random Read : 15 MB/s
Buffered Write : 15 MB/s
Sequential Write : 66 MB/s
Random Write : 27 MB/s
Average Access Time : 4 ms (estimated)


Test 3:

Benchmark Breakdown
Buffered Read : 74 MB/s
Sequential Read : 96 MB/s
Random Read : 19 MB/s
Buffered Write : 15 MB/s
Sequential Write : 66 MB/s
Random Write : 33 MB/s
Average Access Time : 3 ms (estimated)


Test 4:

Benchmark Breakdown
Buffered Read : 74 MB/s
Sequential Read : 68 MB/s
Random Read : 8 MB/s
Buffered Write : 17 MB/s
Sequential Write : 65 MB/s
Random Write : 13 MB/s
Average Access Time : 7 ms (estimated)


Test 5:

Benchmark Breakdown
Buffered Read : 74 MB/s
Sequential Read : 70 MB/s
Random Read : 10 MB/s
Buffered Write : 15 MB/s
Sequential Write : 64 MB/s
Random Write : 15 MB/s
Average Access Time : 5 ms (estimated)

PostPosted: Sun Jun 20, 2004 2:03 pm
by Kennyshin
Test environment:

Gigabyte SiS motherboard
Intel Pentium 4 Northwood 2.4GHz "B" core
HT unavailable
S-ATA not used
Built-in onboard P-ATA RAID not used
Fujitsu 2.5-inch 4,200RPM 20GB for booting
Windows XP Professional SP1
C: heavily fragmented
Sandra 2004, free version
PCI 32-bit, 33-MHz
No other PCI slot occupied
GbE port connected to 60-80Mbps router to 100Mbps internet
3ware 8-ch. PCI, 2 chips, 8 connectors
8 Western Digital 1600LB 160GB 7,200-RPM 2MB buffer drives

Current array configuration:

4 drives: RAID 0 stripe array
2 drives: RAID 0 stripe array, 64KB stripe size
2 drives: RAID 0 stripe array, 128KB stripe size

Test 1: 8 drives, RAID 0, 1MB stripe
Test 2: 4 drives, RAID 0, 64KB stripe
Test 3: 4 drives, RAID 0, 256KB stripe
Test 4: 2 drives, RAID 0, 64KB stripe
Test 5: 2 drives, RAID 0, 128KB stripe
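For reference, the usable capacity each test array works out to (assuming RAID 0 capacity is simply drive count times drive size, no parity overhead, decimal units):

```python
DRIVE_GB = 160  # WD 1600LB

tests = {  # test number -> (drive count, stripe size)
    1: (8, "1MB"),
    2: (4, "64KB"),
    3: (4, "256KB"),
    4: (2, "64KB"),
    5: (2, "128KB"),
}

for n, (drives, stripe) in tests.items():
    tb = drives * DRIVE_GB / 1000
    print(f"Test {n}: {drives} x {DRIVE_GB}GB, {stripe} stripe -> {tb:.2f}TB")
```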

I can also try other PCI slots, more RAM modules, different AGP graphics cards, a fresh OS install, SCSI for booting and the OS, a higher CPU clock, a 3-drive array, RAID 0+1, RAID 1, RAID 5, and many others. There are also a lot of programs other than Sandra to test RAID performance, but Sandra's what I had on the test PC. The PC was performing several tasks while running Sandra.

The WD 1600LB is not an especially good choice for RAID 0, but it was the cheapest 7,200-RPM drive available in Seoul. I'll hopefully try the Hitachi 400GB S-ATA next time. I have two motherboards with 64-bit 66/133MHz PCI slots, but they use Xeon processors instead of Pentium 4 Northwoods and Registered ECC DDR memory modules instead of non-ECC unbuffered ones.

PostPosted: Sun Jun 20, 2004 2:05 pm
by Kennyshin
Coming Soon! 07 June 2004

Ever since Western Digital's Raptor WD740GD was announced, oh, about 10 months ago, enthusiasts and IT professionals everywhere have been patiently (or otherwise) waiting for concrete results that demonstrate the potential benefits of the drive's tagged command queuing (TCQ).
We've been embroiled in hundreds of hours of testing using several different TCQ-enabled controllers in conjunction with arrays of 1 to 4 WD740GD drives. Unfortunately, many hours of tests remain.

However, we're eager to present some initial results. Stay tuned for the first of a three-part series that will examine what benefits TCQ confers in our relatively modest third-generation testbed in single-user and multi-user setups!


http://storagereview.net

740GD Raptor drives are great, but I'd better get used 10K RPM SCSI 320 drives since I already have SCSI 320 controllers.

PostPosted: Sun Jun 20, 2004 2:19 pm
by Boba_Fett
Kennyshin wrote:
The data that will be stored will never be critical. I am using the RAID 0 itself for backup, and that is why I added the drives this week. Striped HDDs are hardly primary storage for me.



Interesting. Who goes through all the trouble of making a 1.92TB chain of HDDs and doesn't care about the data that's on it? Oh well, to each his own. On a side note, I personally prefer SCSI, if only for integrity reasons. I have never seen a SCSI HDD die once, which is really making me reconsider looking for a new SATA drive at the moment. I guess one of the reasons I don't use SCSI is that it is simply not cost-effective at all. I mean, a 74GB 15,000RPM SCSI HDD for $600 is absoludicrous #-o

PostPosted: Sun Jun 20, 2004 2:50 pm
by Kennyshin
Boba_Fett wrote:
Kennyshin wrote:
The data that will be stored will never be critical. I am using the RAID 0 itself for backup, and that is why I added the drives this week. Striped HDDs are hardly primary storage for me.



Interesting. Who goes through all the trouble of making a 1.92TB chain of HDDs and doesn't care about the data that's on it? Oh well, to each his own. On a side note, I personally prefer SCSI, if only for integrity reasons. I have never seen a SCSI HDD die once, which is really making me reconsider looking for a new SATA drive at the moment. I guess one of the reasons I don't use SCSI is that it is simply not cost-effective at all. I mean, a 74GB 15,000RPM SCSI HDD for $600 is absoludicrous #-o


Try USED 10K RPM SCSI instead. ;) Even in South Korea, I bought my 10K.6 146GB (not 74GB) 10K RPM SCSI drives for US$300 each. To be fair, compare the cheapest IDE drives with the cheapest SCSI drives, and the fastest IDE drives with the fastest SCSI drives. The Raptor is 10K RPM, not 15K RPM, so comparing 10K RPM prices with 15K RPM prices is unrealistic as well. Raptor 74GB vs. Seagate 10K.6 is more realistic.

Browse PriceWatch and buy the cheapest 160GB hard disk drives to get 1.92TB my way, and I doubt it's that much trouble. It wasn't for me. Cars cost tens of times more, and not just in money.

I said it's BACKUP. HDDs are rewritable, and if it's just for backup, what's the risk of losing the data inside? Writing 1.92TB of data to HDD takes long, but it is much faster than trying 500 DVD discs. Just imagine it. Which would you choose: 1 RAID or 500 DVDs? Using HDDs only for primary storage is just another prejudice now that average HDDs have become much cheaper than average monitors. So I wanted to make more and more use of HDD storage technology.
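Roughly how the disc count and copy times work out. The transfer rates below are my own ballpark assumptions for the hardware of the day, and the disc count assumes ideal packing, so the thread's "200" and "500" figures are looser estimates:

```python
ARRAY_GB = 1920           # 12 x 160GB, decimal
DVD_GB = 4.7              # single-layer DVD capacity
HDD_MBPS = 50             # assumed sustained HDD-to-HDD copy rate
DVD_MBPS = 5.5            # assumed 4x DVD burn rate, ignoring disc swaps

discs = ARRAY_GB / DVD_GB
hdd_hours = ARRAY_GB * 1000 / HDD_MBPS / 3600
dvd_hours = ARRAY_GB * 1000 / DVD_MBPS / 3600

print(f"Discs needed: {discs:.0f}")
print(f"HDD copy: ~{hdd_hours:.0f} h, DVD burn: ~{dvd_hours:.0f} h")
```

Even under these generous assumptions, the DVD route takes roughly nine times as long before you count disc swapping.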

PostPosted: Fri Jul 09, 2004 10:58 pm
by integspec
Boba_Fett wrote:
Kennyshin wrote:
I personally have never seen a SCSI HDD die once


Got to agree with that. I have seen SCSI drives which refuse to die, even when you try to kill them. We once had an issue with some SCSI drives at a client site. Eventually, we found out that the RAID controller was to blame, not the drives. :(