Replica handling

Greetings,

We have been testing a full-SSD Ceph cluster for a few weeks now and are still testing.  One of the outcomes (we will post a full report on our tests soon, but for now this email is only about replicas) is that as soon as you keep more than one copy of the data in the cluster, performance drops by at least 2.5x.
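
For anyone who wants to reproduce that comparison, something along these lines is one way to do it.  This is only a rough sketch, assuming a test pool named "bench" already exists and the ceph/rados command-line tools are installed on the node; the pool name and run length are just examples, not what we necessarily used:

    #!/usr/bin/env python3
    # Rough sketch: compare write throughput at different replica counts.
    # Assumes a pool named "bench" exists and the ceph/rados CLIs are installed.
    import subprocess

    POOL = "bench"     # example pool name
    SECONDS = 60       # length of each benchmark run

    for size in (1, 2, 3):
        # change the number of copies kept for the pool
        subprocess.run(["ceph", "osd", "pool", "set", POOL, "size", str(size)],
                       check=True)
        # sequential-write benchmark against the pool
        # (rados bench cleans up its benchmark objects by default)
        print(f"--- replica count {size} ---")
        subprocess.run(["rados", "bench", "-p", POOL, str(SECONDS), "write"],
                       check=True)

In practice you would also want to let the cluster settle after changing the pool size before starting each run.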

I'm curious whether someone can confirm my theory on how replication is handled.

Here is a scenario:

3 Nodes
Each node has 1 journal (SSD) and 2 OSDs (SSD)
Replica count = 3

- A new object/file is written to node1.journal, which then writes it onto node1.osd1
- For the second copy: node1.journal writes the file to node2.journal, and node2.journal then writes it onto node2.osd1
- For the third copy: node1.journal writes the file to node3.journal, and node3.journal then writes it onto node3.osd1

Is this how Ceph would handle the replication?

P.S.  I understand that the CRUSH algorithm will probably not place the copies in exactly this order, but my question is more to confirm that, in order to replicate, the data needs to be written to the second and third journals before it can be written onto those 2nd and 3rd OSDs (see the little sketch below).
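
To make the question concrete, here is a small Python sketch of the chained write path I am imagining.  All the class and function names are made up purely for illustration; this models my hypothesis of the flow, not a statement of what Ceph actually does:

    # Toy model of the write path I am hypothesizing (replica count = 3).
    # Journal/OSD are throwaway classes just to illustrate the ordering question.

    class OSD:
        def __init__(self, name):
            self.name = name
            self.objects = {}

        def write(self, obj_id, data):
            print(f"  {self.name}: commit {obj_id} to OSD store")
            self.objects[obj_id] = data

    class Journal:
        def __init__(self, name, osd):
            self.name = name
            self.osd = osd

        def write(self, obj_id, data):
            print(f"{self.name}: journal write of {obj_id}")
            # hypothesis: data always lands in the journal first,
            # then gets flushed to that node's OSD
            self.osd.write(obj_id, data)

    # three nodes, showing one journal and one of its OSDs per node
    node1 = Journal("node1.journal", OSD("node1.osd1"))
    node2 = Journal("node2.journal", OSD("node2.osd1"))
    node3 = Journal("node3.journal", OSD("node3.osd1"))

    def write_object(obj_id, data):
        # first copy
        node1.write(obj_id, data)
        # my question: do the 2nd and 3rd copies really have to pass
        # through node2's and node3's journals before hitting their OSDs?
        node2.write(obj_id, data)
        node3.write(obj_id, data)

    write_object("obj-123", b"payload")

If the copies really are chained through every journal like this, it would go a long way toward explaining the slowdown we see when raising the replica count.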

Many thanks

Anthony


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
