Re: Unsetting osd_crush_chooseleaf_type = 0


Yes, you will need to change osd to host in the rule, as you thought, so that
the copies are separated across hosts. You will keep hitting the problem you
are seeing until that is changed. Be aware that the change will cause data
movement.
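
If it helps, the usual way to make that change is to pull the CRUSH map out,
edit the rule, and inject it again; roughly like this (the file names are just
placeholders):

    # grab the current CRUSH map and decompile it to text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # in crushmap.txt, change the replicated rule so that
    #     step choose firstn 0 type osd
    # becomes
    #     step chooseleaf firstn 0 type host

    # recompile and inject the edited map (this is what kicks off the data movement)
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Expect backfill traffic while the second copy of each PG moves onto the other
host.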
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Thu, Jul 16, 2015 at 11:45 AM, Steve Dainard  wrote:
> I originally built a single-node cluster and added
> 'osd_crush_chooseleaf_type = 0 #0 is for one node cluster' to ceph.conf
> (which is now commented out).
>
> I've now added a second node; where can I set this value to 1? I see in the
> crush map that the OSDs are under 'host' buckets and don't see any
> reference to a leaf.
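
As far as I know, osd_crush_chooseleaf_type only influences the default rule
that is generated when the cluster is first created, so setting it to 1 now
will not rewrite the existing CRUSH map. Either edit the rule directly (see
the sketch above), or create a new rule with host as the failure domain and
point the pools at it. Roughly, where the rule name and pool name are just
examples and crush_ruleset is the Hammer-era pool setting:

    # create a replicated rule that separates copies across hosts
    ceph osd crush rule create-simple replicated_host default host

    # look up the new rule's id, then switch each pool over to it
    ceph osd crush rule dump replicated_host
    ceph osd pool set rbd crush_ruleset <rule-id>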
>
> Would the cluster automatically rebalance when the second host was added? How
> can I verify this?
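
Adding the second host will likely have moved some data around just from the
new bucket weights, but with the rule still choosing type osd there is no
guarantee that the two copies of a PG ended up on different hosts. Watching
the cluster is the easiest check:

    ceph -s            # overall health and PG states (look for remapped/backfilling)
    ceph osd tree      # confirm both hosts and all four OSDs are up and in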
>
> The issue right now is that with two hosts, copies = 2, and min copies = 1, I
> cannot access data from client machines when one of the two hosts goes down.
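
That behaviour is consistent with both copies of some PGs sitting on the same
host. One way to check is to map a few objects and compare the OSD ids against
the tree; the pool and object names here are just examples:

    rados -p rbd ls | head
    ceph osd map rbd some-object    # an acting set of [0,1] would mean both copies are on ceph1

Once the rule is chooseleaf ... type host, size 2 with min_size 1 should let
clients keep doing I/O while one host is down.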
>
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
>
> # devices
> device 0 osd.0
> device 1 osd.1
> device 2 osd.2
> device 3 osd.3
>
> # types
> type 0 osd
> type 1 host
> type 2 chassis
> type 3 rack
> type 4 row
> type 5 pdu
> type 6 pod
> type 7 room
> type 8 datacenter
> type 9 region
> type 10 root
>
> # buckets
> host ceph1 {
>         id -2           # do not change unnecessarily
>         # weight 163.350
>         alg straw
>         hash 0  # rjenkins1
>         item osd.0 weight 3.630
>         item osd.1 weight 3.630
> }
> host ceph2 {
>         id -3           # do not change unnecessarily
>         # weight 163.350
>         alg straw
>         hash 0  # rjenkins1
>         item osd.2 weight 3.630
>         item osd.3 weight 3.630
> }
> root default {
>         id -1           # do not change unnecessarily
>         # weight 326.699
>         alg straw
>         hash 0  # rjenkins1
>         item ceph1 weight 163.350
>         item ceph2 weight 163.350
> }
>
> # rules
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step choose firstn 0 type osd    <-- should this line be "step
> chooseleaf firstn 0 type host"?
>         step emit
> }
>
> # end crush map
>
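To answer the inline question above: yes, with "step choose firstn 0 type osd"
both replicas can land on OSDs in the same host, while "step chooseleaf firstn
0 type host" makes host the failure domain. Once the edited map has been
injected, the rule can be sanity-checked without decompiling it again:

    ceph osd crush rule dump replicated_ruleset    # the steps should now show chooseleaf ... type host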

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


