why does 3 copies take so much more time than 2?

I'm testing CephFS. I have 3 nodes, each with two hard disks and one SSD. CephFS is configured to put metadata on SSD and data on HDD.

With both pools set to size = 3, untarring a 19 GB archive containing about 90K files takes 4.5 minutes.
With size = 2, it takes 40 seconds. (The tar file itself is stored on an in-memory file system, so reading it is not the bottleneck.)
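For scale, here is the effective throughput implied by the numbers above: 19 GB in 270 s versus 19 GB in 40 s. That is about a 6.75x slowdown, well beyond the 1.5x extra write volume a third replica adds.

```shell
# Rough aggregate untar throughput from the two runs above
awk 'BEGIN { printf "size=3: %.0f MB/s\n", 19*1024/270 }'  # 4.5 min = 270 s
awk 'BEGIN { printf "size=2: %.0f MB/s\n", 19*1024/40  }'
```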

Is that expected?

This is the current Ceph release, deployed with cephadm. The only non-default setup is allocating metadata to SSD and data to HDD, via this OSD spec fragment and these CRUSH rules:

  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set cephfs.main.data crush_rule replicated_hdd
ceph osd pool set cephfs.main.meta crush_rule replicated_ssd
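In case it helps, this is how I verified the pool settings (standard `ceph osd pool get` queries; these need a running cluster, and the pool names are the ones above):

```shell
# Confirm replication size and CRUSH rule for each pool
ceph osd pool get cephfs.main.data size
ceph osd pool get cephfs.main.data crush_rule
ceph osd pool get cephfs.main.meta size
ceph osd pool get cephfs.main.meta crush_rule
```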

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


