Have you tried just running “sync;sync” on the originating node? Does that achieve the same thing or not? (I guess it could/should).
Jan
Thanks again. Even 'du' performance is terrible on node B (testing on a directory taken from Phoronix):
# time du -hs /storage/test9/installed-tests/pts/pgbench-1.5.1/
73M     /storage/test9/installed-tests/pts/pgbench-1.5.1/
real    0m21.044s
user    0m0.010s
sys     0m0.067s
Reading the files from node B doesn't seem to help with subsequent accesses in this case:
# time tar c /storage/test9/installed-tests/pts/pgbench-1.5.1/ >/dev/null
real    1m47.650s
user    0m0.041s
sys     0m0.212s
# time tar c /storage/test9/installed-tests/pts/pgbench-1.5.1/ >/dev/null
real    1m45.636s
user    0m0.042s
sys     0m0.214s
# time ls -laR /storage/test9/installed-tests/pts/pgbench-1.5.1 >/dev/null
real    1m43.180s
user    0m0.069s
sys     0m0.236s
Of course, once I unmount the CephFS on node A, everything gets as fast as it can be.
Am I missing something obvious here? Yes, I could drop the Linux cache as a 'fix', but that would drop the entire system's cache, which sounds a bit extreme! :P
Unless there is a way to drop the cache only for that single directory...?
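For what it's worth, there is a narrower option than `echo 3 > /proc/sys/vm/drop_caches`: walking the directory and issuing `posix_fadvise(POSIX_FADV_DONTNEED)` per file asks the kernel to evict just those files' page-cache pages. A rough, untested sketch (my own helper, not a Ceph tool) — note that with CephFS the kernel client also holds capabilities and cached metadata, so this likely only addresses cached file data, not the slow `du`/`ls` metadata path:

```python
import os

def drop_cache_for_dir(path):
    """Ask the kernel to evict page-cache pages for every regular
    file under `path` (Linux-only; hypothetical helper)."""
    for root, _dirs, files in os.walk(path):
        for name in files:
            fpath = os.path.join(root, name)
            try:
                fd = os.open(fpath, os.O_RDONLY)
            except OSError:
                continue  # skip files we cannot open
            try:
                # Flush any dirty pages first, since DONTNEED only
                # evicts clean pages.
                os.fsync(fd)
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            finally:
                os.close(fd)
```

This doesn't touch dentry/inode caches, so whether it helps in this situation would need testing.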
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com