Re: Unexpected ceph pool creation error with Ceph Quincy

Sorry, hit send too early. It seems I could reproduce it by reducing the value to 1:

host1:~ # ceph config set mon mon_max_pool_pg_num 1
host1:~ # ceph config get mon mon_max_pool_pg_num
1
host1:~ # ceph osd pool create pool3
Error ERANGE: 'pg_num' must be greater than 0 and less than or equal to 1 (you may adjust 'mon max pool pg num' for higher values)

The default is 65536. Can you verify whether this is your issue?
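
If a lowered value turns out to be the cause, removing the override should restore the default. A quick sketch (pool3 is just an example name):

host1:~ # ceph config rm mon mon_max_pool_pg_num
host1:~ # ceph config get mon mon_max_pool_pg_num
65536
host1:~ # ceph osd pool create pool3
pool 'pool3' created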

Quoting Eugen Block <eblock@xxxxxx>:

Did you ever adjust mon_max_pool_pg_num? Can you check what your current config value is?

host1:~ # ceph config get mon mon_max_pool_pg_num
65536

Quoting Geert Kloosterman <gkloosterman@xxxxxxxxxx>:

Hi,

Thanks, Eugen, for checking this. I get the same default values as you when I remove the entries from my ceph.conf:

  [root@gjk-ceph ~]# ceph-conf -D | grep default_pg
  osd_pool_default_pg_autoscale_mode = on
  osd_pool_default_pg_num = 32
  osd_pool_default_pgp_num = 0

However, in my case, the pool creation error remains:

  [root@gjk-ceph ~]# ceph osd pool create asdf
  Error ERANGE: 'pgp_num' must be greater than 0 and lower or equal than 'pg_num', which in this case is 1

But I can create the pool when passing the same pg_num and pgp_num values explicitly:

  [root@gjk-ceph ~]# ceph osd pool create asdf 32 0
  pool 'asdf' created

Does anyone have an idea how I can debug this further?
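
So far I have only compared the mon config database against my local ceph.conf. A sketch of what I ran (mon.gjk-ceph is my guess at the daemon name based on the hostname; output will vary per cluster):

  [root@gjk-ceph ~]# ceph config dump | grep pg
  [root@gjk-ceph ~]# ceph config show mon.gjk-ceph | grep pool_default_pg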

I'm running Ceph on a virtualized Rocky 8.7 test cluster, with Ceph RPMs installed from http://download.ceph.com/rpm-quincy/el8/

Best regards,
Geert Kloosterman


On Wed, 2023-03-15 at 13:42 +0000, Eugen Block wrote:

Hi,

I could not confirm this in a virtual lab cluster, also on 17.2.5:

host1:~ # ceph osd pool create asdf
pool 'asdf' created

host1:~ # ceph-conf -D | grep 'osd_pool_default_pg'
osd_pool_default_pg_autoscale_mode = on
osd_pool_default_pg_num = 32
osd_pool_default_pgp_num = 0

So it looks quite similar, except for the pgp_num value (I can't remember having modified that). This is an upgraded Nautilus cluster.

Quoting Geert Kloosterman <gkloosterman@xxxxxxxxxx>:

Hi all,

I'm trying out Ceph Quincy (17.2.5) for the first time and I'm
running into unexpected behavior of "ceph osd pool create".

When not passing any pg_num and pgp_num values, I get the following
error with Quincy:

    [root@gjk-ceph ~]# ceph osd pool create asdf
    Error ERANGE: 'pgp_num' must be greater than 0 and lower or equal than 'pg_num', which in this case is 1

I checked with Ceph Pacific (16.2.11) and there the extra arguments
are not needed.

I expected it would use osd_pool_default_pg_num and
osd_pool_default_pgp_num as defined in my ceph.conf:

    [root@gjk-ceph ~]# ceph-conf -D | grep 'osd_pool_default_pg'
    osd_pool_default_pg_autoscale_mode = on
    osd_pool_default_pg_num = 8
    osd_pool_default_pgp_num = 8

At least, this is what appears to be used with Pacific.
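
If ceph.conf is no longer consulted here, the same defaults could presumably also be set in the mon config database instead; a sketch with the values from my ceph.conf:

    [root@gjk-ceph ~]# ceph config set global osd_pool_default_pg_num 8
    [root@gjk-ceph ~]# ceph config set global osd_pool_default_pgp_num 8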

Is this an intended change of behavior? I could not find anything related in the release notes.

Best regards,
Geert Kloosterman
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


