Ching-Cheng,
Data placement is handled by CRUSH. Please examine the following:
ceph osd getcrushmap -o crushmap && \
    crushtool -d crushmap -o crushmap.txt && \
    cat crushmap.txt
That will show the topology and placement rules Ceph is using.
Pay close attention to the "step chooseleaf" lines inside the rule for
each pool. Under certain configurations, I believe the placement that
you describe is in fact the expected behavior.
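For reference, a decompiled rule in a default Dumpling-era crushmap usually
looks something like the sketch below (rule names, ids, and bucket types may
differ in your map; treat this only as an illustration of where the
"step chooseleaf" line sits):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            # take the root of the hierarchy, then pick N distinct leaves
            step take default
            # "type host" spreads replicas across hosts; "type osd" would
            # only guarantee distinct OSDs, which changes placement a lot
            step chooseleaf firstn 0 type host
            step emit
    }

The bucket type on the chooseleaf line, together with the weights in the
host/OSD buckets above it, is what determines how primaries end up spread
across your cluster.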
Thanks,
Mike Dawson
Co-Founder, Cloudapt LLC
On 10/1/2013 10:46 AM, Chen, Ching-Cheng (KFRM 1) wrote:
I found some weird (or at least weird-looking) behavior with Ceph 0.67.3.
I have 5 servers. The monitor runs on server 1, and servers 2 through 5
each run one OSD (osd.0 - osd.3).
I did a 'ceph pg dump' and can see the PGs are distributed more or less
randomly across all 4 OSDs, which is the expected behavior.
However, after I bring up a new OSD (osd.4) on the same server that runs
the monitor, it seems every PG moves its primary OSD to this new OSD: the
'ceph pg dump' command shows the acting OSDs as [4,x] for all PGs.
Is this expected behavior??
Regards,
Chen
Ching-Cheng Chen
*CREDIT SUISSE*
Information Technology | MDS - New York, KVBB 41
One Madison Avenue | 10010 New York | United States
Phone +1 212 538 8031 | Mobile +1 732 216 7939
chingcheng.chen@xxxxxxxxxxxxxxxxx | www.credit-suisse.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com