Re: 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

Hello,

We'll need to see a full export of your CRUSH map rules.

It depends on what the failure domain is set to.
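For reference, the rules can be exported and inspected with something like the following (a rough sketch; the file names are arbitrary):

    # dump the compiled CRUSH map and decompile it to plain text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # or dump just the rules as JSON
    ceph osd crush rule dump

In the decompiled map, a typical default replicated rule looks roughly like this; the important line is the chooseleaf step, where "type host" means each replica is placed on a different host:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # "type host" = the failure domain is the host, not the individual OSD
        step chooseleaf firstn 0 type host
        step emit
    }

If that step says "type osd" instead, two copies of the same object can land on OSDs of the same host.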

Ash
Sent from my iPhone

On 26 Jun 2017, at 4:11 PM, Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx> wrote:

Hi,

I have this OSD tree:

root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT   TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216     host ceph-storage-rbx-1
 0  3.61739         osd.0                    up  1.00000          1.00000
 2  3.61739         osd.2                    up  1.00000          1.00000
 4  3.61739         osd.4                    up  1.00000          1.00000
-3 10.85216     host ceph-storage-rbx-2
 1  3.61739         osd.1                    up  1.00000          1.00000
 3  3.61739         osd.3                    up  1.00000          1.00000
 5  3.61739         osd.5                    up  1.00000          1.00000

with:

      osd_pool_default_size: 2
      osd_pool_default_min_size: 1
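
These are only the defaults applied to newly created pools; the values actually in effect on an existing pool can be checked, for example, with (the pool name "rbd" is just a placeholder):

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    ceph osd pool get rbd crush_ruleset   # "crush_rule" on newer releases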

Question: does Ceph always write the data to one OSD on host1 and the replica to an OSD on host2?
I fear that Ceph sometimes writes the data on osd.0 and the replica on osd.2, which are both on the same host.
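
One way to check how a specific object actually maps (the pool and object names here are only examples):

    # show the up/acting OSD set for a single object
    ceph osd map rbd some-object

    # or list the up/acting OSD sets of all placement groups
    ceph pg dump pgs_brief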

Best regards,
Stéphane
--
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
