For that many disks you need RAID 6 or RAID 10. With RAID 5 you can't afford the rebuild time: it only has single parity, and a rebuild has to read all of every disk, so it can take days, during which a second disk failure loses the lot. A RAID 10 rebuild just mirrors one disk onto its partner, so it takes about 10 minutes. RAID 6 is very slow on writes since each logical write costs six physical I/Os, but I doubt you are going to be write intensive, and the battery-backed cache speeds up writes anyway - "very slow" is relative, and the more disks you have the faster it gets.
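To put rough numbers on the write-penalty and rebuild-time argument, here's a quick sketch; the disk size and read rate are figures I've assumed for illustration, not anything from your setup:

DISK_SIZE_GB = 500        # assumed disk size
SEQ_READ_MBPS = 60        # assumed sustained per-disk read rate
WRITE_PENALTY = {"RAID 5": 4, "RAID 6": 6, "RAID 10": 2}

for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: {penalty} physical I/Os per logical write")

# A parity rebuild reads every surviving disk end to end; best case it is
# bounded by reading one disk flat out, and live traffic usually throttles
# it well below that, which is how you end up at days.
best_case_hours = (DISK_SIZE_GB * 1024) / SEQ_READ_MBPS / 3600
print(f"Best-case parity rebuild: about {best_case_hours:.1f} hours per disk")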
The stripe size is important. With a lot of disks in a multi-user environment you want each request to get as much of its data as possible from a single disk before it spills onto the next one, so since you are storing pictures I would use the largest stripe size available, unless it's larger than a single picture. The opposite applies to random access of small data from a single thread; in that case you want a small stripe size to keep all the disks working for you at once. You can migrate the stripe size on the fly to experiment, but the whole array slows down while the migration runs.
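As a rough illustration of that trade-off, here's a small sketch; the picture size and disk count are assumptions of mine, and the point is just how many disks a single request ends up touching:

import math

def disks_touched(request_kb, stripe_kb, data_disks):
    """Roughly how many data disks a single contiguous request spans."""
    return min(data_disks, math.ceil(request_kb / stripe_kb))

avg_picture_kb = 4096     # assume roughly a 4 MB picture
data_disks = 12           # assumed number of data disks in the array

for stripe_kb in (64, 256, 1024, 4096):
    n = disks_touched(avg_picture_kb, stripe_kb, data_disks)
    print(f"stripe {stripe_kb:>4} KB -> one picture touches {n} disk(s)")

With the big stripe one picture stays on one disk and the other spindles are free to serve other users; with the small stripes every request drags in most of the array.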
Personally I would not go above 1TB if it can be avoided. The defrag tool in Win2000 croaked above 1TB since it's a cut-down version of Diskeeper Standard; I'm not sure whether the 2003 version goes past 1TB or not. Windows can't go past 2TB without GPT/dynamic disks anyway, and you have to consider how long chkdsk takes to clean up the volume after a crash. The trouble with staying under 1TB is that you end up with half a dozen 1TB volumes holding the data, so you have to keep track of which volume each piece of data is on.
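Keeping track of where things live isn't painful if the application does it rather than a person. A minimal sketch, with hypothetical volume paths and file naming, of picking a volume per picture and building the path you'd store in the database:

import os
import zlib

VOLUMES = [r"E:\pics", r"F:\pics", r"G:\pics", r"H:\pics"]   # hypothetical mount points

def place_picture(picture_id):
    """Deterministically pick a volume for a new picture and build its path.
    A real system would also check free space and record the path in the
    database so lookups never have to scan every volume."""
    volume = VOLUMES[zlib.crc32(picture_id.encode()) % len(VOLUMES)]
    return os.path.join(volume, picture_id + ".jpg")

print(place_picture("IMG_000123"))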
If I were writing the software from scratch, I think I would start with 3 or 4 950GB RAID 6 logical disks across a single array; that gets rid of the single-volume problem at the start of the project, and adding another dozen disks later becomes simple.
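For what it's worth, a back-of-the-envelope version of that layout, with disk counts and sizes I've made up purely for illustration:

def raid6_usable_gb(disks, disk_gb):
    """RAID 6 gives up two disks' worth of capacity to parity."""
    return (disks - 2) * disk_gb

LUN_GB = 950
for disks in (8, 20):                 # assumed initial array, then +12 disks
    total = raid6_usable_gb(disks, disk_gb=500)
    print(f"{disks} x 500 GB RAID 6: {total} GB usable, "
          f"roughly {total // LUN_GB} logical disks of {LUN_GB} GB")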