Re: Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

Hi Ilya,

Yes, thank you, that was the issue. I was wondering why my mons
were exchanging so much data :)

I didn't know the buckets are indexed by the actual id value; I don't
recall reading that anywhere.
One shouldn't be too imaginative with the id values then, heh :)

Thank you once again,
Ivan



On Fri, Aug 26, 2016 at 4:19 PM, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> On Wed, Aug 24, 2016 at 5:17 PM, Ivan Grcic <igrcic@xxxxxxxxx> wrote:
>> Hi Ilya,
>>
>> there you go, and thank you for your time.
>>
>> BTW, should one get a crushmap out of an osdmap by doing something
>> like this:
>>
>> osdmaptool /tmp/osdmap --export-crush /tmp/crushmap
>> crushtool -d /tmp/crushmap -o crushmap.3518
>
> Yes.  You can also use
>
>     $ ceph osd getcrushmap -o /tmp/crushmap
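>
> For the record, a minimal sketch of the full round trip (file names
> here are just examples):
>
>     $ ceph osd getcrushmap -o /tmp/crushmap
>     $ crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
>     $ # ... edit /tmp/crushmap.txt ...
>     $ crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
>     $ ceph osd setcrushmap -i /tmp/crushmap.new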
>
>>
>> Until now I was just creating/compiling crushmaps; I haven't played
>> with osdmaps yet.
>
> You've got the following buckets in your crushmap:
>
> ...
>
> host g6 {
>         id -5           # do not change unnecessarily
>         # weight 4.930
>         alg straw
>         hash 0  # rjenkins1
>         item osd.18 weight 0.600
>         item osd.19 weight 0.250
>         item osd.20 weight 1.100
>         item osd.21 weight 0.500
>         item osd.22 weight 0.080
>         item osd.23 weight 0.500
>         item osd.24 weight 0.400
>         item osd.25 weight 0.400
>         item osd.26 weight 0.400
>         item osd.27 weight 0.150
>         item osd.28 weight 0.400
>         item osd.29 weight 0.150
> }
> room kitchen {
>         id -100         # do not change unnecessarily
>         # weight 4.930
>         alg straw
>         hash 0  # rjenkins1
>         item g6 weight 4.930
> }
> room bedroom {
>         id -200         # do not change unnecessarily
>         # weight 6.920
>         alg straw
>         hash 0  # rjenkins1
>         item asus weight 2.500
>         item urs weight 2.500
>         item think weight 1.920
> }
> datacenter home {
>         id -1000                # do not change unnecessarily   <---
>         # weight 11.850
>         alg straw
>         hash 0  # rjenkins1
>         item kitchen weight 4.930
>         item bedroom weight 6.920
> }
> root sonnenbergweg {
>         id -1000000             # do not change unnecessarily   <---
>         # weight 11.850
>         alg straw
>         hash 0  # rjenkins1
>         item home weight 11.850
> }
>
> The id of a bucket isn't just an arbitrary number - it indexes the
> buckets array.  With an id of -1000000 in there, you are creating a ~4M
> crushmap (~8M for the in-memory pointers-to-buckets array), which the
> kernel fails to allocate memory for.  The failure mode could have been
> slightly better, but this is a borderline crushmap - we should probably
> add checks to "crushtool -c" for this.
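>
> As a back-of-the-envelope sketch (assuming the CRUSH convention that
> bucket id B is stored at array slot (-1 - B), 8-byte pointers, and
> roughly 4 bytes per encoded empty slot):
>
>     $ echo $(( -1 - -1000000 ))  # array slot for bucket id -1000000
>     999999
>     $ echo $(( 1000000 * 8 ))    # in-memory pointer array, bytes (~8M)
>     8000000
>     $ echo $(( 1000000 * 4 ))    # encoded crushmap, bytes (~4M)
>     4000000
>
> So the most negative id alone dictates the array size, even though the
> map contains only a handful of real buckets.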
>
> Thanks,
>
>                 Ilya


