Played around with /var/lib/glusterd/vols/gl_disk/gl_disk-fuse.vol using the rr and alu schedulers. This seems to be a side effect of the default DHT: in my set of files, the filenames hash to a single node and that brick fills up. The files start small and grow later, so I don't really want
many of them on a single node. I do have control over how the files are named and how the layout is structured. Reads are the ones that matter, so DHT is good to have, as opposed to sending lookups to many bricks.
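Since placement is driven purely by the filename hash, naming is the lever here. The toy model below illustrates the idea; it uses MD5 as a stand-in hash (Gluster's DHT actually uses its own 32-bit hash mapped onto per-brick hash ranges, so this is a sketch of the mechanism, not Gluster's real algorithm), and the brick count matches the 10 replica pairs in the volume below.

```python
# Toy model of DHT-style placement: the brick for a file is derived
# from a hash of its name alone. MD5 here is a stand-in for Gluster's
# real hash; the point is only that the *name* decides the brick.
import hashlib
from collections import Counter

NUM_BRICKS = 10  # 10 distribute subvolumes (replica pairs) in the volume below

def brick_for(name: str) -> int:
    """Map a filename to a brick index via its hash (deterministic)."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % NUM_BRICKS

# Varying the filenames spreads files across bricks roughly evenly:
counts = Counter(brick_for(f"file-{i:04d}.dat") for i in range(1000))
print(sorted(counts.items()))
```

With 1000 distinct names, every brick ends up with files; the spread is statistical, not strict round-robin, which is why identically structured names can still be uneven on small file counts.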
Is gaming the DHT the only option?

From: gluster-users-bounces@xxxxxxxxxxx [mailto:gluster-users-bounces@xxxxxxxxxxx]
On Behalf Of Prasad, Nirmal

Hi,

Based on the documentation, a distributed-replicated volume places files pseudo-randomly. I have 20 bricks (10x2) and want to distribute files round-robin. The files land on only one node until it fills up; I want to distribute the files evenly
(only 1 writer). I tried to find the location of the volume file for the rr scheduler, but there are a bunch of files and I'm not sure which one is correct. Any pointers?

Thanks
Regards
Nirmal

Volume Name: gl_disk
Type: Distributed-Replicate
Volume ID: ff5e70d6-c559-4c38-8b13-96e4f09825ba
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: 192.168.100.10:/mnt/disk/data
Brick2: 192.168.100.11:/mnt/disk/data
Brick3: 192.168.100.12:/mnt/disk/data
Brick4: 192.168.100.13:/mnt/disk/data
Brick5: 192.168.100.14:/mnt/disk/data
Brick6: 192.168.100.15:/mnt/disk/data
Brick7: 192.168.100.16:/mnt/disk/data
Brick8: 192.168.100.17:/mnt/disk/data
Brick9: 192.168.100.18:/mnt/disk/data
Brick10: 192.168.100.19:/mnt/disk/data
Brick11: 192.168.100.20:/mnt/disk/data
Brick12: 192.168.100.21:/mnt/disk/data
Brick13: 192.168.100.22:/mnt/disk/data
Brick14: 192.168.100.23:/mnt/disk/data
Brick15: 192.168.100.24:/mnt/disk/data
Brick16: 192.168.100.25:/mnt/disk/data
Brick17: 192.168.100.26:/mnt/disk/data
Brick18: 192.168.100.27:/mnt/disk/data
Brick19: 192.168.100.28:/mnt/disk/data
Brick20: 192.168.100.29:/mnt/disk/data
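For what it's worth, on a running volume the hash ranges DHT has assigned to each brick can be inspected directly (run as root on a brick server; /mnt/disk/data is the brick path from the volume info above, and this assumes extended attributes are readable on the brick filesystem):

getfattr -n trusted.glusterfs.dht -e hex /mnt/disk/data

And if bricks were added after the volume was created, recomputing the layout (without moving existing data) may help spread new files:

gluster volume rebalance gl_disk fix-layout start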
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users