So, I am looking to settle an extremely minor argument among my co-workers.
Since the list is quiet I figured I'd throw this out there. It's clearly
not a problem, bug, etc., so obviously ignore it if you have better things to
do <Grin>
We have lots of 2-data-host + 1-arbiter clusters around, mostly
corresponding to Libvirt pools.
So we have something akin to /POOL-GL1 available on the two data nodes
using FUSE. We also mount /POOL-GL1 on the arbiter. They correspond to
the actual volume GL1.
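For context, the volumes were created along these lines (host and brick
names here are made up, just to illustrate the replica 2 + arbiter layout):

    # two data bricks plus one arbiter brick (the arbiter holds metadata only)
    gluster volume create GL1 replica 3 arbiter 1 \
        data1:/bricks/GL1/brick data2:/bricks/GL1/brick arb1:/bricks/GL1/brick
    gluster volume start GL1

    # FUSE mount on the data nodes and on the arbiter
    mount -t glusterfs localhost:/GL1 /POOL-GL1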
Obviously we can log into the arbiter and manipulate the mounted
files just as we would on the real data nodes, and because we may run
multiple arbiters on the same kit, it can be more convenient to log in
there when moving files among clusters.
The question is:
Assuming equal hardware and network speed and similar hard disk I/O
performance, if we are transferring a large file (say a VM image),
is it more efficient to copy it into the mounted directory on one
of the real data hosts, or do you get the same efficiency just uploading
it via the arb node?
Obviously if you copy a large file into the mount on the arb it is not
actually being stored there; rather, it is being copied out to the two
data nodes which hold the real data, and only the metadata is retained
on the arb.
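You can see that on the bricks themselves (brick and file paths here are
made up):

    # on a data node's brick the file shows its full size
    ls -lh /bricks/GL1/brick/images/vm01.qcow2

    # on the arbiter's brick the same path exists but reports 0 bytes,
    # since the arbiter keeps only metadata
    ls -lh /bricks/GL1/brick/images/vm01.qcow2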
So the question is: by uploading via the arb are we doing extra work, and
is it more efficient to upload into one of the volumes on a data host,
where it only has to copy the data off to the other data volume and the
metadata onto the arb?
We ran a few tests which implied the direct-to-host option was a
'little' faster, but they were on production equipment which has varying
loads, so we couldn't compare CPU loads or come to a firm conclusion.
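For what it's worth, the tests were roughly along these lines (file and
mount names made up): time the same copy from each box and eyeball the
network traffic while it runs.

    # on one of the data hosts
    time cp /tmp/test-image.qcow2 /POOL-GL1/images/

    # on the arbiter
    time cp /tmp/test-image.qcow2 /POOL-GL1/images/

    # watch per-interface throughput on each box during the copy
    sar -n DEV 1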
It's not yet an important enough issue to build up a test bed, so we were
wondering if perhaps someone else already knows the answer based on an
understanding of the architecture, or has perhaps already done the testing?
-bill