Question : Problem: Using a dual port fibre channel card to have a high capacity throughput from Server directly to SAN

I'm trying to find a solution for a customer. We're an OEM, so we resell Intel hardware and Microsoft products; we're an Intel Channel Partner as well as a Microsoft Gold Partner. A customer was wondering if a SAN would be able to do the following, so if anyone has done this or has a better solution, feel free to express your opinion. We have an SSR212MC2 SAN running Microsoft Unified Data Storage Server 2003 R2. We would put the EXPX9502AFXSR dual-port fibre channel card in that server and use it instead of iSCSI. The customer has an email server and would like to put the mail store on the SAN for the throughput benefit: install a single-port fibre channel card in the mail server and plug it directly into one of the SAN's fibre channel ports, then take the SAN's other fibre channel port and plug it into a switch that has a fibre GBIC. That way the mail server could get roughly 10Gb of throughput between itself and the SAN, while the rest of the servers, plugged into the switch via Cat5e, would get 1Gb sharing the other fibre channel port.

This seems a little unorthodox, so I'm wondering whether it would even work; does anyone have experience with other vendors' SANs? Also, is there a better way to get high-capacity throughput to a server whose mail store lives on another machine, such as a SAN?

Answer : Problem: Using a dual port fibre channel card to have a high capacity throughput from Server directly to SAN

600 or 700 mailboxes? What are the mailbox sizes? What's the average use like? Microsoft defines VERY heavy use as:
- 30 sent / 120 received per day (50 KB messages)
- 5 MB database cache per user
- 0.48 IOPS per user
- 36 logs generated per mailbox
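Taken at face value, those per-user figures scale linearly, so you can gut-check the aggregate load. A minimal sketch; the 700-mailbox count is the upper end of the estimate above, and everything else is just arithmetic on the profile numbers:

```python
# Rough aggregate load for the very-heavy-use profile described above.
IOPS_PER_USER = 0.48
CACHE_MB_PER_USER = 5
LOGS_PER_MAILBOX = 36

def aggregate_load(users: int) -> dict:
    """Scale the per-user profile linearly to the whole mail server."""
    return {
        "iops": users * IOPS_PER_USER,
        "cache_mb": users * CACHE_MB_PER_USER,
        "logs": users * LOGS_PER_MAILBOX,
    }

# Upper end of the "600 or 700 mailboxes" estimate:
print(aggregate_load(700))  # roughly 336 IOPS, 3500 MB cache, 25200 logs
```

Even at this rough level, ~336 sustained IOPS is the number the disk layout below has to absorb.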

In addition, add 5% to the database LUN size for content indexing, add 10% for overhead / white space, and note that the default deleted-items retention period (14 days) can add up to 30% to the database LUN.
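Those overheads stack on top of the raw mailbox data. A minimal sketch of the padding; the 5%/10%/30% figures come from the text above, while treating them as additive percentages of the raw size, and the 350 GB example input, are my assumptions:

```python
# Database LUN sizing: raw data plus the overheads described above.
CONTENT_INDEXING = 0.05   # +5% for content indexing
WHITE_SPACE = 0.10        # +10% for overhead / white space
DELETED_ITEMS = 0.30      # up to +30% for 14-day deleted-items retention

def database_lun_gb(raw_gb: float) -> float:
    """Pad raw database size with the three overheads.
    Assumes the percentages apply additively against the raw size."""
    return raw_gb * (1 + CONTENT_INDEXING + WHITE_SPACE + DELETED_ITEMS)

# e.g. 700 mailboxes at a hypothetical 0.5 GB each:
print(database_lun_gb(700 * 0.5))  # 350 GB raw grows to about 507.5 GB of LUN
```

The point is that the LUN you provision ends up roughly 45% larger than the mailbox data itself.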

Further, this does not take into account whether it's Exchange 2007, at which point you'll also have to worry about things like the transport dumpster (an additional 17 IOPS per 40 KB message).
 

Microsoft has specific formulas for Exchange server sizing with regard to disk I/O (IOPS), not just disk capacity. You say "the mail store": are all those mailboxes in a single store? They'd get much better performance (and faster recovery times) if that were broken up into several smaller stores. Then you separate the log files from the database files (log files are sequential I/O, the database is random I/O) and put each store and each set of log files on its own set of drives. Three stores could easily take up 12 spindles (assuming a not-super-busy Exchange server and RAID1), and that number only increases with I/O load once you have to add spindles by switching to RAID10. And unless my google-fu deceives me, 12 spindles is the maximum that "SAN" can hold.
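To make the "add spindles" point concrete, here is a back-of-the-envelope spindle count. The per-spindle IOPS figure, the 50/50 read/write split, and the RAID1 write penalty of 2 are illustrative assumptions, not numbers from Microsoft's formulas:

```python
import math

def spindles_needed(total_iops: float,
                    read_fraction: float = 0.5,
                    spindle_iops: float = 150.0,  # assumed ~15k SAS drive
                    write_penalty: int = 2) -> int:  # RAID1: 2 disk writes per host write
    """Back-of-envelope spindle count: reads hit one disk, each host
    write costs write_penalty disk I/Os, and every spindle services
    spindle_iops operations per second."""
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction) * write_penalty
    return math.ceil((reads + writes) / spindle_iops)

# ~336 host IOPS from a 700-mailbox very-heavy-use profile:
print(spindles_needed(336))  # -> 4 data spindles, before logs are separated out
```

Raise the write penalty (RAID5/RAID10 variants) or the I/O load and the count climbs quickly toward, and past, that 12-spindle ceiling.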

Also, you don't mention whether the spindles in question are SAS or SATA. SATA doesn't offer even half the throughput of SAS, and isn't going to perform well here.

IMHO, the "SAN" you are talking about isn't really much of a SAN at all, and it's not going to hold up well to the abuse of even a moderately busy Exchange server.

Yes, FC can be shared by multiple hosts, with the use of an FC switch or a SAN director. Pricey, though: a Cisco MDS9506 with a couple of 4Gb/s line cards loaded with GBICs can set you back over $50k by itself. http://www.memory4less.com/m4l_itemdetail.asp?rid=fd_10&itemid=1441269700    Note that price is only the chassis and SUP cards, with no line cards, support, or licensing. LOL. You wouldn't need anything like that for this; probably just a nice 8- or 16-port QLogic to get started.  http://www.dealtime.com/xPO-Q-Logic-QLogic-SANbox-5602-Switch-Fibre-Channel-4-x-SFP-4-x-XPAK-empty-1-U

But again, you aren't going to get near the throughput you would with DAS, or with an upgrade to a real SAN. Enterprise-class SANs can saturate a link by combining 20 or 30 spindles into a single LUN and presenting that (or a portion of it) to a host.

Another thing to consider is the learning curve behind FC: if you use a device other than Cisco, you need to worry about configuring zones, LUN masking, and such. Why the Cisco exception? Because Cisco doesn't use zones; they use VSANs, which work like VLANs, and that makes the learning curve not quite so severe (if you're familiar with configuring VLANs, anyway). But you pay $$$$$$ for it (see above).

I am going to have to go with andy on this one and say that DAS is the way to go for this, unless they are serious about a large-ish cash outlay for this project, starting with replacing that "SAN".

I know it's not what you wanted to hear, but..

HTH,
exx