Good morning, Jorge,

Very interesting test. Since I see you experiment a lot with Ceph, can you answer a question for me? How does Ceph calculate the available disk space? I don't quite understand why, if you have 3 servers with 5 disks in each of them, only a third of the total of all the disks remains usable. Given that it has to keep 2 servers alive, it seems more logical for it to offer two thirds of the space and keep one third for data replication.

Regards

-----Original Message-----
From: Jorge Garcia <jgarcia@xxxxxxxxxxxx>
Sent: Friday, October 8, 2021 21:21
To: ceph-users@xxxxxxx
Subject: cephfs vs rbd

I was wondering about performance differences between cephfs and rbd, so I devised this quick test. The results were pretty surprising to me.

The test: on a very idle machine, make 2 mounts. One is a cephfs mount, the other an rbd mount. In each directory, copy a humongous .tgz file (1.5 TB) and try to untar the file into the directory. The untar on the cephfs directory took slightly over 2 hours, but on the rbd directory it took almost a whole day. I repeated the test 3 times and the results were similar each time.

Is there something I'm missing? Is RBD that much slower than cephfs (or is cephfs that much faster than RBD)? Are there any tuning options I can try to improve RBD performance?
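On the capacity question: with a replicated pool, the one-third figure follows from the pool's replication factor (the default is size=3, i.e. three full copies of every object), not from the number of servers that must stay alive. Usable space is therefore roughly raw space divided by the replica count, however the copies are spread across hosts. A minimal sketch of the arithmetic, assuming an example disk size of 4 TB (not from this thread):

```python
# Hedged sketch: usable capacity of a replicated Ceph pool.
# The 4 TB per-disk size below is an assumed example value.

def usable_capacity_tb(servers, disks_per_server, disk_tb, replica_size=3):
    """Raw cluster capacity divided by the pool replication factor ('size')."""
    raw_tb = servers * disks_per_server * disk_tb
    return raw_tb / replica_size

# 3 servers x 5 disks x 4 TB = 60 TB raw
print(usable_capacity_tb(3, 5, 4.0))                  # size=3 -> 20.0 TB usable
print(usable_capacity_tb(3, 5, 4.0, replica_size=2))  # size=2 -> 30.0 TB usable
```

With size=2 you would indeed get the two-thirds ratio described above, at the cost of tolerating only a single copy being lost; an erasure-coded pool is the usual way to get better space efficiency while keeping redundancy.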
# df -h | grep mnt
10.1.1.150:/   275T  1.5T  273T   1% /mnt/cephfs
/dev/rbd0       20T  1.5T   19T   8% /mnt/rbd

bash-4.4$ pwd
/mnt/cephfs/test
bash-4.4$ date; time tar xf exceRptDB_v4_EXOGenomes.tgz; date
Fri Jul 2 13:10:01 PDT 2021
real    137m22.601s
user    1m6.222s
sys     35m57.697s
Fri Jul 2 15:27:23 PDT 2021

bash-4.4$ pwd
/mnt/rbd/test
bash-4.4$ date; time tar xf exceRptDB_v4_EXOGenomes.tgz; date
Fri Jul 2 15:38:28 PDT 2021
real    1422m42.236s
user    1m34.198s
sys     38m48.761s
Sat Jul 3 15:21:10 PDT 2021

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx