What seems to be happening:
The interesting part of quota.t contains:
16: TEST ! dd if=/dev/urandom of=$M0/test/file1.txt bs=1024k count=12
17: TEST rm $M0/test/file1.txt
18: EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "0Bytes" usage "/test"
19: TEST dd if=/dev/urandom of=$M0/test/file2.txt bs=1024k count=8
At this point the volume is configured with a 10MB quota, so test 16
(a 12MB write) should fail.
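For anyone not familiar with the test framework: EXPECT_WITHIN polls a
command until its output matches the expected value or a timeout expires.
A minimal standalone sketch of that polling idea (simplified, not the
actual framework code; the function name and usage here are illustrative):

```shell
#!/bin/sh
# Retry a command until its output matches the expectation or the
# timeout (in seconds) expires. Simplified sketch of EXPECT_WITHIN.
expect_within() {
    timeout=$1; expected=$2; shift 2
    end=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -lt "$end" ]; do
        out=$("$@")
        [ "$out" = "$expected" ] && return 0
        sleep 1
    done
    echo "expected '$expected', got '$out'" >&2
    return 1
}

# Example: succeeds immediately because the output already matches.
expect_within 5 "0Bytes" echo "0Bytes" && echo OK
```

In quota.t this is what test 18 relies on: the marker/quota accounting is
asynchronous, so the test keeps re-checking usage until it reads "0Bytes"
or $MARKER_UPDATE_TIMEOUT runs out.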
The ec xlator receives the create and many write fops. At some point the
fops start returning EDQUOT, but many other write requests are still
being processed. It somehow seems that control returns to the user
before all write requests have completed (there seems to be an FSYNC
request, but it never reaches ec). The user then deletes the file in the
next test (17). This causes some of the pending writes to return ENOENT
(I'm not sure why, because the inode should not have been deleted).
The ENOENT error causes a segmentation fault in DHT. This segmentation
fault is asynchronous and coincides with test 19, but it is not
responsible for the failure.
A message appears on the console (probably when dd finishes and closes
the file):
perfused: perfuse_node_inactive: perfuse_node_fsync failed error = 69:
Resource temporarily unavailable
Note that errno 69 is EDQUOT on NetBSD. I'm not sure what it means or
whether it's important (oddly, the accompanying text "Resource
temporarily unavailable" is the strerror string for EAGAIN, not EDQUOT).
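As an aside, errno numbers are platform-specific (EDQUOT is 69 on NetBSD
but 122 on Linux/x86), so a raw number in a log has to be decoded against
the errno table of the host that produced it. A minimal Python sketch of
that decoding, nothing here is Gluster-specific:

```python
import errno
import os

def errno_name(num):
    """Map a numeric errno to its symbolic name on this platform."""
    return errno.errorcode.get(num, "unknown")

# Round-trip EDQUOT through its local numeric value.
print(errno_name(errno.EDQUOT))   # -> "EDQUOT"
print(os.strerror(errno.EDQUOT))  # human-readable message for EDQUOT
```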
I'm not sure what is happening here, or why some write requests are
still in flight after the command that issued them (dd) has finished.
It's probably an ec bug, but I don't see where.
If anyone has any clues, they would be appreciated. Otherwise I'll look
deeper into it tomorrow.
Xavi
On 11/17/2014 05:18 PM, Emmanuel Dreyfus wrote:
On Mon, Nov 17, 2014 at 05:08:18PM +0100, Xavier Hernandez wrote:
Ok. Let me know the login credentials and I'll try to see what's happening.
ssh jenkins@xxxxxxxxxxxxxxxxxxxxxxxxxxx with usual password
then become root using su and:
cd /autobuild/glusterfs && ./run-tests.sh -f ./tests/basic/ec/quota.t
While you're there you can also try ./tests/basic/ec/self-heal.t,
which also has a failure.
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel