Re: Errors when creating new pool

Re-adding the mailing list.

I've had a quick look at the code, and the logic for
expected_num_objects seems broken: it detects Filestore OSDs the wrong
way, which is why the warning shows up even on your BlueStore cluster.
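
If you want to double-check which backend your OSDs actually report,
querying the OSD metadata should show it (osd.0 below is just an
example ID, substitute one of yours):

ceph osd metadata 0 | grep osd_objectstore

A BlueStore OSD reports "osd_objectstore": "bluestore", and on those
expected_num_objects shouldn't matter at all.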

I've opened an issue: http://tracker.ceph.com/issues/37532

The new error just means your mons haven't picked up the option yet:
osd_pool_default_size is applied by the mons at pool creation time, so
setting it in ceph.conf only takes effect after they restart. The size
of 3 in the message is simply the compiled-in default, and 1024 PGs * 3
replicas = 3072, which exceeds the cap of 2000 (mon_max_pg_per_osd 250
* num_in_osds 8); with size 1 you'd be at 1024 and well under it.
Either restart the mons or inject the value at runtime:

ceph tell mon.\* injectargs '--osd_pool_default_size=1'
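
If you want to verify the mons actually picked it up, something like
this should work on a mon host (assuming the mon ID matches the short
hostname, which is the usual deployment; adjust if yours differs):

ceph daemon mon.$(hostname -s) config show | grep osd_pool_default_size

And if the pool already got created with size 3, you can drop it to 1
after the fact:

ceph osd pool set kvm size 1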



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

Orbiting Code, Inc. <support@xxxxxxxxxxxxxxxx> wrote:
>
> I'm running Ceph version 12.2.10 on my cluster nodes, and version 2.0.1 of ceph-deploy. I see the following included in the output when adding OSDs using ceph-deploy:
>
> [osd1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
>
> [osd1][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5be28a76-3e74-4bcf-bb9e-c6662b961a20/osd-block-b75f1905-fb4c-4ecc-ac3a-391a755b364a --path /var/lib/ceph/osd/ceph-4
>
> [osd1][DEBUG ] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid b75f1905-fb4c-4ecc-ac3a-391a755b364a --setuser ceph --setgroup ceph
>
> From this, I'm assuming that BlueStore is the default object store now, since I did not specify it when adding the OSDs.
>
> I ran the following command as per your recommendation:
>
> ceph osd pool create kvm 1024 1024 replicated_rule 100
>
> And, received the following error:
>
> Error ERANGE:  pg_num 1024 size 3 would mean 3072 total pgs, which exceeds max 2000 (mon_max_pg_per_osd 250 * num_in_osds 8)
>
> In my ceph.conf file, I have "osd pool default size = 1" for this test cluster, but in the error above, a size of 3 is coming from some unknown place. Also, I'm at a loss as to how I would possibly estimate expected_num_objects, so I picked an arbitrary value of 100. I also tried 0, which is the default according to the documentation.
>
> Thank you again,
> Todd
>
>
> On 12/5/18 3:17 PM, Paul Emmerich wrote:
>
> I think it's new in 12.2.10, but it should only show up when using
> Filestore OSDs. Since you mention that the cluster is new: are you not
> using Bluestore?
>
> That being said: the default crush rule name is "replicated_rule", so
> "ceph osd pool create <name> <pg> <pg> replicated_rule
> <expected_objects>" is the right way to create a pool on a filestore
> cluster now.
>
> I think there's some room for improvement from a user experience point
> of view...
>
> Paul
>
>