
View Full Version : OpenFiler SAN software



Joe Carney
06-30-2008, 05:42 PM
I just found out about OpenFiler, an open source, Linux-based system for designing/creating NAS and SAN setups. It's free, as in beer, but you can purchase support contracts if needed.

http://openfiler.com

Has anyone had real world experience with it? (Please, no Windows vs. Mac vs. Linux diatribes.)

It supports iSCSI, various hardware-based RAID setups, and dynamic volume management. Too many features to list here; if curious, check out the link.
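
From a quick skim of their docs, the iSCSI side appears to be the standard Linux iSCSI Enterprise Target under the hood, so exporting a LUN seems to boil down to a couple of lines of config. A rough sketch (the IQN and device path here are made-up examples):

    # /etc/ietd.conf
    Target iqn.2008-06.com.example:storage.raid0
        Lun 0 Path=/dev/vg0/video,Type=blockio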

Richard Lackey
07-01-2008, 01:23 AM
I want to give this a try, just setting up a SAN over Gigabit Ethernet to begin with, using my existing RAID in the server. I can't risk my two OS system drives as it's my main workhorse, so I need a third OS drive to install Linux on before I can give it a go. If I can make a plan, I'll install Linux, then Openfiler, see what kind of performance I get, and share the results.

Some Fibre Channel cards and a FC switch would be great additions if this works.

Joe Carney
07-01-2008, 06:52 AM
I believe OpenFiler includes its own Linux distro streamlined for NAS/SAN duties. You can download an ISO image and go from there.

Joe C.

Richard Lackey
07-01-2008, 08:19 AM
Yeah, I'm downloading it now. I need a boot drive; I don't have anything lying around, so I might have to go buy one or partition one of my Windows boot drives.

Vigen Vartanov
07-01-2008, 02:45 PM
I want to give this a try, just setting up a SAN over Gb ethernet to begin with, using my existing RAID in the server.

Good day. I have a question: you can do that kind of sharing over 1Gb Ethernet without building a SAN for it. It would be a NAS, so it would work at the same speed. Your server can manage permissions and so on. So yes, you would get some SAN functions, but HDD performance will be the same.
This software is very nice; my IT staff once showed me how it works. :)

chocblu
07-01-2008, 08:06 PM
Hmmm, thought I posted on this already, but my post doesn't seem to appear.

Anywho. There are two things with OpenFiler. One: if you're going to use iSCSI with a Mac, then beware a little. The free iSCSI initiator that you can get (there is only one that I know of) can have some adverse effects on your system; I found instability to be one of them. I don't know for sure it was the driver, but I've read of other people having the same problem. Check it on another system somewhere first.

Second, know that if you're using iSCSI, it won't do any multi-user stuff for you; it's a block-level protocol. So if you share out your RAID to two different machines and they both have write access, they can corrupt files.

As far as I know, Samba should handle some of this, as it's a network file system. Same with NFS. But both of these are supposed to have extra overhead compared to iSCSI. I didn't get that much more performance from iSCSI, but that could have had more to do with my RAID array than anything else.
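
To make the file-level vs. block-level distinction concrete, this is roughly what the two file-level exports look like on the Linux side (paths and addresses are made-up examples). With Samba or NFS the server arbitrates access between clients; with an iSCSI LUN each client owns the raw blocks, so nothing arbitrates for you:

    # Samba share (file-level, the server mediates locking)
    # /etc/samba/smb.conf
    [raid]
        path = /mnt/raid
        read only = no

    # NFS export (also file-level)
    # /etc/exports
    /mnt/raid 192.168.10.0/24(rw,async,no_subtree_check)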

Hope this helps

Cheers

Mark

Joe Carney
07-02-2008, 10:49 AM
Richard, here is a detailed install description aimed at media shops, from the Openfiler forum.
The guy asking the question is running 3D render farms and video editing.

Username tungsten2K....
>>I have direct experience with this by working for several small media producers here in SF.

My suggestion would be to buy the largest SATA performance drives you can, along with the 3Ware 9690SA-16ML maxed out on cache and the BBU, and set to the "Balanced" StorSave profile. For drives I would suggest the Seagate ST31000x4NS (http://www.storagereview.com/1000.sr). We've had remarkable performance improvement from these 1TB ES.2 units over the Hitachis in the Nexsan SATABeast for just this type of application. To compare, the old setup of 14x150GBxR6 10kRPM Ultra320 HP drives on an HP SmartArray 400x controller with 512MB+ cache set to 75W/25R and Windows 2003 Server showed minimal speed differences compared to a 14x750GBxR6 SATA in a Nexsan SATABeast with 512MB write-back cache connected to the OF system via FC. Your application is a prime candidate for a cluster file system, but the complexities are not worth it at this stage. When you have to render at constant multi-gigabyte/sec speeds, you'll have no choice.

Next, I would configure it RAID6 regardless of the write-speed implications (these recent SATA offerings die faster than batteries at an all-girl sleepover), and I know how meticulous media shops are about checking nightly backups.

Better would be RAID10 with more drives, but I doubt your budget will be amenable, esp. after having to buy the other items on this laundry list.

8 x 1TB R6 = 6TB of usable space.

During setup of the array, configure the "Boot Volume" option to, say, 16GB, and "Auto-Carving" at 2TB. This will save you from having to configure GPT disks manually (and possibly making a grave mistake). In OF, you'll just gang the carved partitions up as a JBOD. The processing overhead for this is absolutely minimal, as it is not doing any striping.

When configuring the volumes, choose the XFS file system. I am not a huge fan of this filesystem, but it has proven itself to be the most optimized solution for large file transfers using OF. JFS is a passable 2nd and solid as a rock, so go with that if you are on edge about using XFS, but know that you are hampering your throughput ceiling. Absolutely do NOT use ReiserFS (latest conviction notwithstanding) or EXT2/3, as they are abysmal for this workload compared to XFS and even JFS. (Note: Rafiu, when oh when will we have EXT4 support?)

Purchase a high quality gigabit switch. Unfortunately I would have to suggest something like an HP 2610-48 at a bare minimum, but truly stretch for the HP 2810-48 if you can afford it. The HP2848 is only really needed when using multiple independent render farms on the same switch (VLAN), because most 3DSMax (Backburner), Maya and other render managers simply cannot scale beyond approx. 20 nodes in my experience without exponentially diminishing returns. The reason for the 48 is that if you outgrow the 24 (easily done when 802.3ad comes into play), you're screwed if you have to stack switches: not only are you eating up the aggregation ports at twice the speed, but you'll never come close to the switching backplane capability of a single switch. So yes, future-proof it with a 48-porter. ProCurves have a lifetime free warranty to boot. If you have to skimp somewhere, skimp by getting the 2610, not by getting a 24-port.

Use Intel Server NICs (like the Pro/1000 PT Quad). Embedded is okay too, but make sure your PCI-X or PCIe card uses the same chipset as your embedded Intel Pro/1000; just don't skimp on the NIC, as it will cause you headaches.

Configure jumbo frames and put the whole wad on their own LAN, and you will see such radical performance that your team will be buying you beers all night.

You'll have to configure a Layer 3 router to connect the LAN to the rest of the Interweb, but you can just grab a VM like Untangle. I'd say dedicate a NIC in the OF machine to the 1500-MTU LAN and have the OF box do the routing, but that's probably asking for trouble (runs and hides from Rafiu).

Good luck!

-=dave
<<
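
For reference, the "gang the partitions up JBOD" and XFS steps dave describes look like stock Linux LVM and mkfs under OF's web GUI. Roughly, the by-hand equivalent would be something like this (device names are made-up examples; the actual carved partitions will be whatever Auto-Carving produces):

    # Concatenate the 2TB carved partitions into one linear (non-striped) volume:
    pvcreate /dev/sda2 /dev/sda3 /dev/sda4
    vgcreate media_vg /dev/sda2 /dev/sda3 /dev/sda4
    lvcreate -l 100%FREE -n scratch media_vg

    # Format it XFS and mount it:
    mkfs.xfs /dev/media_vg/scratch
    mount -o noatime /dev/media_vg/scratch /mnt/scratch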

Richard Lackey
07-02-2008, 01:11 PM
Wow, that's some really in-depth advice. I'm going to do a bit of reading up on this before I take the plunge, but I've found a 16-drive-bay chassis to build my server with, with reasonable airflow and a redundant power supply.

I'll probably go Fibre Channel rather than gigabit like this guy is suggesting. If I'm hanging Scratch from the server, I need a reliable 300MB/sec across the network. I've got to decide how many workstations are going to be connected and what kind of read and write bandwidth I need for each. If I want 3 x 300MB/sec, that's a lot to expect of the server and the network infrastructure.

I am also considering more than 16 drives, as 16 SATA drives are going to max out at 600MB/sec, whereas if they are configured on 2 x 8-port controllers, my theoretical max through the controllers is over a GB/sec, but I'd need more than one drive on each port. The enclosure gets to be a bit of a problem with over 16 drives, as I want to avoid the expense of external storage; I want the drives local to the server mainboard, with the SATA cables run straight from the controller cards, no backplane.

Anyway, more research is necessary before I order parts. I've sobered a bit on this and am realising the challenges and implications of building a long-term reliable solution that I won't regret down the line.

Joe Carney
07-02-2008, 07:37 PM
OpenFiler supports Fibre Channel and/or 10GbE, based on their docs, along with RAID-on-RAID.

chocblu
07-03-2008, 09:54 PM
Does OpenFiler support NIC bonding? If it does, you could join four GigE ports together and get a redundant 4Gb connection.

Joe Carney
07-04-2008, 01:26 PM
According to their forum and docs, yes, it supports NIC bonding.
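
It looks like the standard Linux bonding driver underneath, so on a plain Linux box the equivalent would be something like the following (interface names and the bonding mode are example choices; 802.3ad needs a switch that supports it). This is also where you'd turn on the jumbo frames dave mentioned:

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # Enslave the four GigE ports and enable jumbo frames:
    ifconfig bond0 192.168.10.1 netmask 255.255.255.0 mtu 9000 up
    ifenslave bond0 eth0 eth1 eth2 eth3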

Kris Bird
04-26-2009, 04:52 PM
Anyone done any high performance testing yet? I just stumbled upon this thread after doing some of my own testing.

I've set up OpenFiler with three spare drives (identical 500GB SATA, 18 months old). The three drives in software RAID0 got me 65MB/s over iSCSI to a WinXP box. Set up as individual drives in OF, I get 55MB/s via iSCSI.
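
For anyone wanting to replicate the RAID0 side by hand, OF's software RAID looks like plain Linux md underneath; roughly this (device names are examples):

    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd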

Connection was over a single GigE channel via integrated GigE ports: the OF box is an Intel 845G, the receiving box a recent nForce.

The same three drives configured as a software RAID0 in XP achieve 200MB/s in the Blackmagic/AJA benchmarks (when empty), but obviously XP-to-XP sharing is very slow.

It's early days, but it's already performing better than Windows XP sharing (not that that's saying much).

What are other people finding? What are the key bottlenecks? GigE itself, the GigE chipsets/boards, iSCSI vs. SMB/CIFS?
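
My plan for isolating the bottleneck is to test each layer on its own; something like this, assuming iperf is installed on both ends (the IP is an example):

    # Raw network throughput, no disks involved:
    iperf -s                       # on the OpenFiler box
    iperf -c 192.168.10.1 -t 30    # on the client

    # Raw sequential read on the filer, no network involved:
    dd if=/dev/md0 of=/dev/null bs=1M count=4096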

Let me know if you're on a similar journey and we can share info.

Kris