Re: Best practice for large storage?

On 2/14/2013 11:48 AM, Jeff Johnson wrote:
> Stable enough where it is being used at Lawrence Livermore Nat'l Labs on
> a 55PB Lustre resource.

That's a tad misleading.  LLNL's Sequoia has ZFS striped across three 8+2
hardware RAID6 arrays using 3TB drives, with Lustre then layered atop
those.  So here ZFS sits atop 72TB raw.  It is not scaling to 55PB.
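If it helps, here's a quick back-of-the-envelope sketch (in Python) of
where that 72TB figure comes from.  The names are mine, and it simply
assumes each 8+2 RAID6 array exposes its eight data drives' worth of
capacity to ZFS:

    # Capacity sketch for the layout described above
    # (assumed layout, not LLNL's exact configuration)
    DRIVE_TB = 3           # 3TB drives
    DATA_DRIVES = 8        # 8 data + 2 parity per RAID6 array
    ARRAYS_PER_POOL = 3    # ZFS striped across three such arrays

    per_array_tb = DRIVE_TB * DATA_DRIVES       # 24TB usable per array
    pool_tb = per_array_tb * ARRAYS_PER_POOL    # 72TB presented to ZFS

    print(per_array_tb, pool_tb)                # 24 72

The 55PB figure comes from Lustre aggregating many such building blocks,
not from a single ZFS pool.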

Something worth noting in this "if they use it so should you" context is
that US gov't computer labs tend to live on the bleeding edge, and have
the budget, resources, and personnel on staff to fix anything, including
rewriting Lustre and ZFS to fit their needs.

The name Donald Becker may be familiar to many here.  He wrote a good
number of the Linux Ethernet device drivers while building Beowulf
clusters at NASA.  They bought a bunch of hardware for which no Linux
drivers existed, so he wrote them to enable that hardware, and
eventually those drivers made it into mainline.

The moral of this story should be obvious.

-- 
Stan


