On 2015-09-22 02:59, Krutika Dhananjay wrote:
-------------------------
FROM: hmlth@xxxxxxxxxx
Thank you, this solved the issue (after an umount/mount). The question
now is: what's the catch? Why is this not the default?
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1203122
The above link makes me think that there is a problem with "readdirp"
performance, but I'm not sure if the impact is serious or not.
That's right. Enabling the option can slow down readdirp operations,
which is why it is disabled by default.
Is there a list of options where this tradeoff is being made? Trading
consistency for performance is not what I expected as a default.
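For reference, enabling the option and making it take effect can be sketched as below; the volume name VOL and the mount point /mnt/gluster are placeholders, not values from this thread:

```shell
# Sketch: enable consistent metadata on a replicated volume.
# VOL and /mnt/gluster are placeholders for the real names.
gluster volume set VOL cluster.consistent-metadata on

# Confirm the value was applied:
gluster volume get VOL cluster.consistent-metadata

# As noted above, clients must remount for the change to take effect:
umount /mnt/gluster
mount -t glusterfs n1:/VOL /mnt/gluster
```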
Regards
Thomas HAMEL
On 2015-09-21 16:14, Krutika Dhananjay wrote:
Could you set 'cluster.consistent-metadata' to 'on' and try the test
again?
# gluster volume set <VOL> cluster.consistent-metadata on
-Krutika
-------------------------
FROM: hmlth@xxxxxxxxxx
TO: gluster-users@xxxxxxxxxxx
SENT: Monday, September 21, 2015 7:10:59 PM
SUBJECT: "file changed as we read it" in gluster 3.7.4
Hello,
I'm evaluating gluster on Debian. I installed version 3.7.4 and I see
this kind of error message when I run tar:
# tar c linux-3.16.7-ckt11/ > /dev/null
tar: linux-3.16.7-ckt11/sound/soc: file changed as we read it
tar: linux-3.16.7-ckt11/net: file changed as we read it
tar: linux-3.16.7-ckt11/Documentation/devicetree/bindings: file changed as we read it
tar: linux-3.16.7-ckt11/Documentation: file changed as we read it
tar: linux-3.16.7-ckt11/tools/perf: file changed as we read it
tar: linux-3.16.7-ckt11/include/uapi/linux: file changed as we read it
tar: linux-3.16.7-ckt11/arch/powerpc: file changed as we read it
tar: linux-3.16.7-ckt11/arch/blackfin: file changed as we read it
tar: linux-3.16.7-ckt11/arch/arm/boot/dts: file changed as we read it
tar: linux-3.16.7-ckt11/arch/arm: file changed as we read it
tar: linux-3.16.7-ckt11/drivers/media: file changed as we read it
tar: linux-3.16.7-ckt11/drivers/staging: file changed as we read it
#
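For context: tar prints this warning when a file's status (ctime/mtime) differs between the time it starts and finishes reading it. On a replicated volume, successive stat calls can be answered by different bricks whose timestamps differ slightly, which looks to tar like a change. A minimal local sketch of the check tar performs (the file name demo.txt is made up for illustration; stat -c is GNU coreutils, as on Debian):

```shell
# Sketch of tar's check: stat before and after reading a file;
# a differing mtime triggers "file changed as we read it".
f=demo.txt
echo hello > "$f"
before=$(stat -c %Y "$f")   # mtime (seconds) before the read
cat "$f" > /dev/null        # read the file, as tar would
after=$(stat -c %Y "$f")    # mtime after the read
if [ "$before" != "$after" ]; then
    echo "tar: $f: file changed as we read it"
else
    echo "$f unchanged during read"
fi
rm -f "$f"
```

On a local filesystem the two stat calls agree; the gluster case fails this check only because the replies can come from different replicas.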
I saw this problem discussed here earlier, but I was under the
impression it was resolved in the 3.5 series. Is the fix in the 3.7
branch?
My volume configuration:
# gluster volume info glustervol1
Volume Name: glustervol1
Type: Replicate
Volume ID: 71ce34f2-28da-4674-91c9-b19a2b791aef
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: n1:/glusterfs/n1-2/brick
Brick2: n2:/glusterfs/n2-2/brick
Brick3: n3:/glusterfs/n3-2/brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.server-quorum-ratio: 51
Regards
Thomas HAMEL
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users