You can't move '.glusterfs' as it is full of hard links (thus it must stay on the same filesystem as the brick data).
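For illustration, this is roughly how you can see that relationship on a brick (the paths and the gfid value here are made up; the real gfid comes from the file's extended attributes):

    # Read the file's gfid xattr directly on the brick (as root):
    getfattr -n trusted.gfid -e hex /bricks/brick1/dir/somefile
    # trusted.gfid=0x1234abcd...   (illustrative value)

    # The same inode is hard-linked at .glusterfs/<aa>/<bb>/<full-gfid>,
    # where aa and bb are the first two byte pairs of the gfid:
    ls -i /bricks/brick1/dir/somefile /bricks/brick1/.glusterfs/12/34/1234abcd-...
    # Both paths print the same inode number, which is only possible
    # within a single filesystem.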
You can mount the bricks with noatime and ensure that the inode size is at least 512 bytes.
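For example, assuming an XFS brick on /dev/sdb1 mounted at /bricks/brick1 (adjust device and path to your layout):

    # Inode size is fixed at mkfs time; 512 bytes leaves room for
    # Gluster's extended attributes inside the inode:
    mkfs.xfs -i size=512 /dev/sdb1

    # /etc/fstab entry mounting the brick with noatime:
    /dev/sdb1  /bricks/brick1  xfs  noatime,inode64  0 0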
Also, you can use those NVMe drives as a caching layer to speed things up.
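One way to do that is LVM's dm-cache in front of the brick LV. A rough sketch (the VG/LV names are made up, and the --cachevol form needs a reasonably recent lvm2):

    # Add the NVMe to the brick's volume group:
    pvcreate /dev/nvme0n1
    vgextend vg_bricks /dev/nvme0n1

    # Carve a cache LV on the NVMe and attach it to the brick LV:
    lvcreate -L 200G -n brick1_cache vg_bricks /dev/nvme0n1
    lvconvert --type cache --cachevol brick1_cache vg_bricks/brick1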
Usually I recommend profiling your workload first and then working towards optimizations.
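Gluster has a built-in profiler for that. For example (replace myvol with your volume name):

    # Start collecting per-brick latency/fop stats, run the workload,
    # then inspect and stop:
    gluster volume profile myvol start
    gluster volume profile myvol info
    gluster volume profile myvol stop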
GlusterFS suffers from negative lookups (searching for a file that doesn't exist) and from latency. The more nodes you have, the better the situation gets.
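The negative-lookup cache helps a lot with the first problem. A sketch of enabling it (myvol is a placeholder and the values are just starting points to tune):

    # Cache "file does not exist" answers on the clients:
    gluster volume set myvol performance.nl-cache on
    gluster volume set myvol performance.nl-cache-timeout 600

    # Let more inodes stay cached so the nl-cache remains effective:
    gluster volume set myvol network.inode-lru-limit 200000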
Consider also splitting the workload into two volumes (if that is feasible), so you separate the small I/O from the large I/O, as the volume tunables for those two workloads contradict each other.
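To make the conflict concrete, the two volumes would be tuned in roughly opposite directions (smallvol and bigvol are hypothetical names, values illustrative):

    # Small-file volume: cheap lookups and metadata caching matter most:
    gluster volume set smallvol performance.nl-cache on
    gluster volume set smallvol cluster.lookup-optimize on
    gluster volume set smallvol performance.stat-prefetch on

    # Large-file volume: streaming throughput matters instead:
    gluster volume set bigvol performance.read-ahead on
    gluster volume set bigvol performance.write-behind-window-size 4MB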
Best Regards,
Strahil Nikolov
On Wed, Dec 15, 2021 at 21:06, Arman Khalatyan <arm2arm@xxxxxxxxx> wrote:
________

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users