Sorry, I should have noted that. 380MB is both read and write combined (I
confirmed this with a developer).

We do need the NFS stack; that's how all the code and the various
instances work -- we have several "workers" that chop up video on the same
namespace. It's not efficient, but that's how it has to be for now.

Redundancy, in terms of the server? We have RAIDed volumes, if that's what
you're referring to.

Here's a basic outline of the flow (as I understand it):

1. A Video Capture Agent sends in a large video file (30GB +/-).
2. The administrative host receives it and writes it to NFS.
3. A process copies this over to another point in the namespace.
4. Another instance picks up the file, reads it, starts processing, and
   writes the result (FFmpeg is involved).

Something like that -- I may not have all the steps, but essentially
there's a ton of I/O going on. I know our code model is not efficient, but
it's complicated and can't just be changed (it's based on an open source
product and there's some code baggage).

We looked into another product that allegedly scaled out using multiple
NFS heads with massive local cache (AWS instances) sharing the same space,
but it was horrible and just didn't work for us.

Thank you.

On 7/14/15 3:06 PM, Mathieu Chateau wrote:
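[Editor's note: a rough back-of-the-envelope sketch of why the flow described in the mail above generates "a ton of I/O". The per-step read/write accounting is an assumption (the transcode output is treated as roughly input-sized); only the ~30GB capture size comes from the mail.]

```python
GB = 1024 ** 3
FILE = 30 * GB  # rough capture size mentioned in the mail

# (read_bytes, write_bytes) per step -- assumed, not measured; the
# FFmpeg output is approximated as the same size as the input.
steps = {
    "admin host writes ingest file to NFS": (0, FILE),
    "copy to another point in the namespace": (FILE, FILE),
    "worker reads, transcodes, writes result": (FILE, FILE),
}

total = sum(r + w for r, w in steps.values())
print(f"~{total / GB:.0f} GB moved through the stack per capture")
# prints: ~150 GB moved through the stack per capture
```

So under these assumptions each ~30GB capture turns into roughly 150GB of traffic through the NFS layer, which is why the multi-hop copy model hurts.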
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users