Hi all,
I usually advise clients to use the native client if at all possible, as it is very robust. But I am running into problems here.
In this case the Gluster system is used to store video streams. Basically the setup is the following:
- A Gluster cluster of 3 nodes with ample storage, exporting several volumes.
- The network is 10GbE, switched.
- A "recording server" which subscribes to multi cast video streams, and records them to disk. The recorder writes the streams in 10s blocks, so when it is for example recording 50 streams it is creating 5 files a second, each about 5M. it uses a write-then-rename process.
I simulated that with a small script that wrote 5 MB files and renamed them as fast as it could, and it could easily create around 100 files/s (which just about saturates the network). So I think the cluster is up to the task.
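
The original test was a small shell script; here is a rough Python equivalent of what it did (the mount point is an assumption):

import os
import time

MOUNT = "/mnt/gluster/test"          # assumed mount point
BLOCK = b"\0" * (5 * 1024 * 1024)    # 5 MB payload, like the recorder's blocks

count = 0
start = time.time()
while time.time() - start < 30:      # hammer the volume for 30 seconds
    tmp = os.path.join(MOUNT, "f%d.tmp" % count)
    final = os.path.join(MOUNT, "f%d" % count)
    with open(tmp, "wb") as f:
        f.write(BLOCK)
    os.rename(tmp, final)
    count += 1

elapsed = time.time() - start
print("%d files in %.1fs = %.1f files/s" % (count, elapsed, count / elapsed))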
However, when we try the actual workload we run into trouble. Running the recorder software we can gradually ramp up the number of streams it records (and thus the number of files it creates), and at around 50 streams the recorder eventually stops writing files. According to the programmers who wrote it, it appears that it can no longer get the needed locks, and as a result it just stops writing.
We decided to test using the NFS client as well, and there the problem does not exist. But again, I (and the customer) would prefer not to use NFS, but the native client instead.
So if the problem is file locking, and it exists with the native client but not with NFS, what could be the cause?
In what way does locking differ between the two clients, NFS and FUSE, and how can the programmers work around any issues the FUSE client might be causing?
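
For reference, if the recorder is taking POSIX record locks (an assumption on my part, and the path below is made up), the calls in question would look something like this; as far as I understand, this is the code path that is handled differently on a FUSE mount than on an NFS mount:

import fcntl

with open("/mnt/gluster/vol1/somefile", "wb") as f:
    try:
        # non-blocking exclusive POSIX (fcntl) record lock
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        # presumably the point at which the recorder gives up
        raise
    f.write(b"data")
    fcntl.lockf(f, fcntl.LOCK_UN)    # release the lock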
This video stream software is a bespoke solution, developed in house, so it is possible to change the way it handles files to make it work with the native client, but the programmers are looking to me for guidance.
Any suggestions?
Krist
--
Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement