- roughly how large is the expanded untarred folder, and roughly how many
files?
- also roughly, what cluster throughput and bandwidth do you see while
untarring the file? You could observe this from ceph status.
- is the cluster running on the same client machine? HDD/SSD?
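For the throughput question, one way to watch the cluster while the untar runs is the sketch below (assuming an admin keyring is available on the client; the pool name "rbd" is just an example, substitute your own):

```shell
# Overall cluster health and I/O; client throughput and IOPS
# show up in the "io:" section of the output
ceph status

# Per-pool read/write rates; replace "rbd" with the actual pool name
ceph osd pool stats rbd

# Refresh every 2 seconds while the tar is running
watch -n 2 ceph status
```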
/Maged
On 08/10/2021 21:21, Jorge Garcia wrote:
I was wondering about performance differences between cephfs and rbd,
so I devised this quick test. The results were pretty surprising to me.
The test: on a very idle machine, make two mounts, one cephfs and one
rbd. In each directory, copy a humongous .tgz file (1.5 TB) and untar
it in place. The untar in the cephfs directory took slightly over 2
hours, but in the rbd directory it took almost a whole day. I repeated
the test 3 times and the results were similar each time. Is there
something I'm missing? Is RBD that much slower than cephfs (or is
cephfs that much faster than RBD)? Are there any tuning options I can
try to improve RBD performance?
# df -h | grep mnt
10.1.1.150:/ 275T 1.5T 273T 1% /mnt/cephfs
/dev/rbd0 20T 1.5T 19T 8% /mnt/rbd
bash-4.4$ pwd
/mnt/cephfs/test
bash-4.4$ date; time tar xf exceRptDB_v4_EXOGenomes.tgz; date
Fri Jul 2 13:10:01 PDT 2021
real 137m22.601s
user 1m6.222s
sys 35m57.697s
Fri Jul 2 15:27:23 PDT 2021
bash-4.4$ pwd
/mnt/rbd/test
bash-4.4$ date; time tar xf exceRptDB_v4_EXOGenomes.tgz; date
Fri Jul 2 15:38:28 PDT 2021
real 1422m42.236s
user 1m34.198s
sys 38m48.761s
Sat Jul 3 15:21:10 PDT 2021
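On the tuning question, one commonly checked knob for a kernel-mapped rbd device is the block-layer readahead. This is only a sketch, not a recommendation from the thread; the device name rbd0 matches the df output above, and the 4 MiB value is an arbitrary example to experiment with:

```shell
# Current readahead for the mapped rbd device, in KiB
cat /sys/block/rbd0/queue/read_ahead_kb

# Try a larger readahead (4 MiB here) and re-run the test
echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb
```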
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx