On Sun, Nov 14, 2010 at 15:42:29 -0800, Khawaja Shams wrote:
> Hi Greg,
> Thank you for the insightful response. We have multiple automated
> clients pushing and pulling changes from git as events occur. We have
> not hit any real performance issues just yet. Our main goal is to
> improve the availability of the repository in case the box running the
> apache server has an outage during a mission-critical period.

If you are after availability, NFS isn't an answer, because the NFS
server remains a single point of failure. There are distributed
filesystems (Gluster, Lustre, etc.) that can provide redundancy across
storage nodes too, or you could use a shared storage array with an
appropriate cluster filesystem (GlobalFS, OCFS2, etc.), but that
requires special hardware.

These will probably give you better performance too -- the git network
protocol is optimized to send minimal data over the wire, but that
often means a lot more needs to be read from the disk to compute what
to send. I don't have personal experience with them, though, so I
can't give you a more specific recommendation.

-- 
Jan 'Bulb' Hudec <bulb@xxxxxx>
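To make the trade-off concrete, here is a command sketch (hostnames
and paths are hypothetical, not from the original thread). Over a
shared filesystem git clients read objects directly, while over the
network protocol the server does extra disk work to build a minimal
pack; a plain git-level mirror, updated on every push, is another way
to get a hot standby without special hardware:

```shell
# Sketch only; all hosts and paths below are made up for illustration.

# Clients on a shared/cluster filesystem can access the repo as files:
git clone /mnt/shared/project.git

# Over the git protocol, the server computes a minimal pack to send:
git clone git://git.example.com/project.git

# Alternative redundancy at the git level: mirror to a second box on
# each push, so a standby copy exists if the primary goes down.
git remote add mirror ssh://backup.example.com/srv/git/project.git
git push --mirror mirror
```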