I set up a DHT over AFR on top of 360 machines. Every 4 machines form an AFR cluster, and the DHT is defined on top of those clusters. Each machine defines a storage endpoint:

----- SERVER.CONF -----

volume posix
  type storage/posix
  option directory /glfs
end-volume

volume brick
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option listen-port 6996
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

----- CLIENT.CONF -----

volume multifeed001
  type protocol/client
  option transport-type tcp
  option remote-host multifeed001
  option remote-subvolume brick
end-volume

... (1 per host)

volume rep-0
  type cluster/afr
  subvolumes multifeed001 multifeed091 multifeed181 multifeed271
end-volume

... (1 per cluster, 90 total)

volume dht
  type cluster/dht
  subvolumes rep-0 rep-1 rep-2 rep-3 rep-4 rep-5 rep-6 rep-7 rep-8 rep-9 rep-10 rep-11 rep-12 rep-13 rep-14 rep-15 rep-16 rep-17 rep-18 rep-19 rep-20 rep-21 rep-22 rep-23 rep-24 rep-25 rep-26 rep-27 rep-28 rep-29 rep-30 rep-31 rep-32 rep-33 rep-34 rep-35 rep-36 rep-37 rep-38 rep-39 rep-40 rep-41 rep-42 rep-43 rep-44 rep-45 rep-46 rep-47 rep-48 rep-49 rep-50 rep-51 rep-52 rep-53 rep-54 rep-55 rep-56 rep-57 rep-58 rep-59 rep-60 rep-61 rep-62 rep-63 rep-64 rep-65 rep-66 rep-67 rep-68 rep-69 rep-70 rep-71 rep-72 rep-73 rep-74 rep-75 rep-76 rep-77 rep-78 rep-79 rep-80 rep-81 rep-82 rep-83 rep-84 rep-85 rep-86 rep-87 rep-88 rep-89
end-volume

I mount the filesystem with:

  glusterfs -f client.conf /mnt/glfs

I then created a bunch of test files in /mnt/glfs, and they showed up in the backend directory on every node. Am I misunderstanding how DHT should work?
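
For reference, here is the placement behavior I expected, sketched in Python. This is only an illustration of my mental model (the md5 below is a stand-in hash, not GlusterFS's actual hashing scheme): each file should hash to exactly one rep-* subvolume and therefore appear on only the 4 bricks of that one AFR cluster, not on all 360 machines.

  # Illustrative sketch of the DHT placement I expected; the md5 here is a
  # stand-in hash, not what GlusterFS actually uses internally.
  import hashlib

  NUM_REPLICA_SETS = 90   # rep-0 .. rep-89
  BRICKS_PER_SET = 4      # AFR replica count per cluster

  def expected_replica_set(filename):
      # Map a filename to exactly one replica set (stand-in hash).
      digest = hashlib.md5(filename.encode()).hexdigest()
      return int(digest, 16) % NUM_REPLICA_SETS

  for name in ("testfile1", "testfile2", "testfile3"):
      rep = expected_replica_set(name)
      print("%s -> rep-%d (%d bricks), not every node" % (name, rep, BRICKS_PER_SET))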