All,

I'm not sure whether what I'm about to ask is relevant to this discussion, but the brief explanation Wendy gives here (some of which I already knew) reinforced a question in my mind that I'd like to settle one way or the other.

The question is: for an application that may have just as many simultaneous reads against the same filesystem (though almost never in the same folders) by multiple nodes as it has writes, does anyone on this list think that storing the data in a database is a better solution than using a clustered filesystem such as GFS?

In short, I can either store and read these sound files on a SAN with a layer of GFS running over it (in which case performance will depend on the efficiency and nature of the GFS implementation), or I can store the same files in a database accessed over an ODBC connection. Note that the databases are enterprise grade, clustered, and attached to the SAN via high-performance HBAs.

Any comments and/or advice would be greatly appreciated.

Best Regards,
\R

--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
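
For concreteness, here is a minimal sketch (not from the original post) of the two access paths being weighed: plain file I/O against a GFS mount on the SAN, versus reading and writing the same sound data as BLOBs over an ODBC connection via pyodbc. The mount point, DSN, table, and column names below are hypothetical.

    import pyodbc

    GFS_MOUNT = "/mnt/gfs/sounds"              # hypothetical GFS mount point on the SAN
    DSN = "DSN=soundstore;UID=app;PWD=secret"  # hypothetical ODBC data source

    def read_from_gfs(name):
        # Option 1: GFS looks like any POSIX filesystem to the application;
        # cluster-wide locking happens below this open()/read().
        with open(GFS_MOUNT + "/" + name, "rb") as f:
            return f.read()

    def read_from_db(name):
        # Option 2: fetch the sound file as a BLOB from the clustered database.
        conn = pyodbc.connect(DSN)
        try:
            row = conn.cursor().execute(
                "SELECT data FROM sound_files WHERE name = ?", name
            ).fetchone()
            return bytes(row.data) if row else None
        finally:
            conn.close()

    def write_to_db(name, data):
        # Writes go through the database's own concurrency control rather
        # than the cluster filesystem's lock manager.
        conn = pyodbc.connect(DSN)
        try:
            cur = conn.cursor()
            cur.execute(
                "INSERT INTO sound_files (name, data) VALUES (?, ?)",
                name, pyodbc.Binary(data),
            )
            conn.commit()
        finally:
            conn.close()

The practical difference between the two paths is which layer arbitrates concurrent access from multiple nodes: the cluster filesystem's lock manager in the first case, the database's own locking and transaction machinery in the second.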