Also, about kernel patches, it seems there are pre-built RPMs which allow an easy installation: http://downloads.lustre.org/

In my opinion, the main benefits of Lustre, when compared to GlusterFS, are the transparent HA of data (distribution done behind the scenes), the addition and removal of nodes (I don't know if they support data rebalancing - KosmosFS does), and management. GlusterFS's major points, of course, are complete meta-data dispersion, which prevents a single point of failure, and a very easy start (just install and get it running).

Regards.

2009/1/12 Stas Oskin <stas.oskin at gmail.com>

> Hi.
>
>> It's got similar speeds compared to gluster for a few nodes but depends on
>> fiberchannel or some other shared block storage system for redundancy. We
>> immediately discarded it in favor of gluster for this reason. It was also
>> significantly more difficult to get running as it was a kernel patch.
>
> Are you sure about that?
>
> From the Lustre wiki (http://wiki.lustre.org/index.php?title=Lustre_FAQ):
>
>> Are fibrechannel switches necessary? How does HA shared storage work?
>>
>> Typically, fibrechannel switches are not necessary. Multi-port shared
>> storage for failover is normally configured to be shared between two server
>> nodes on a FC-AL. Shared SCSI and future shared SATA devices will also work.
>>
>> Backend storage is expected to be cache-coherent between multiple channels
>> reaching the devices. Servers in an OSS failover pair are normally both
>> active in the file system, and can be configured to take over partitions for
>> each other in the case of a failure. MDS failover pairs can also both be
>> active, but only if they serve multiple separate file systems.
>
> As far as I understand, Lustre is designed with the approach of most cluster
> file systems (except GlusterFS, of course :) ), meaning you have master
> servers that are responsible for storage and retrieval of the data, and
> storage nodes, which do the actual storage.
>
> Regards.