Re: increase pg_num error

OK, now that everything has settled, I tried changing pg_num on my pool again, and it still didn’t work:

# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd pool set rbd1 pg_num 128
# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd require-osd-release nautilus
# ceph osd pool set rbd1 pg_num 128
# ceph osd pool get rbd1 pg_num
pg_num: 100
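
One check that might be relevant here (output is illustrative; in Nautilus a pg_num increase can be applied gradually, which shows up as a pg_num_target in the pool details):

# ceph osd pool ls detail | grep rbd1
pool 1 'rbd1' replicated size 3 ... pg_num 100 pgp_num 100 pg_num_target 128 ...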


Suggestions, anybody?

Thanks!

George



On Sep 11, 2019, at 5:29 PM, Kyriazis, George <george.kyriazis@xxxxxxxxx> wrote:

No, it’s pg_num first, then pgp_num.
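
In other words (a minimal example; <pool> is a placeholder):

# ceph osd pool set <pool> pg_num 128
# ceph osd pool set <pool> pgp_num 128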

Found the problem, and still slowly working on fixing it.

I upgraded from mimic to nautilus, but forgot to restart the OSD daemons for 2 of the OSDs.  “ceph tell osd.* version” told me which OSDs were running a stale version.
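
For example (output is illustrative, showing one up-to-date and one stale OSD):

# ceph tell osd.* version
osd.0: {
    "version": "ceph version 14.2.1 (...) nautilus (stable)"
}
osd.1: {
    "version": "ceph version 13.2.6 (...) mimic (stable)"
}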

Then it was just a matter of restarting the OSD daemons to bring the versions up to date.  After I did that, “ceph -s” complained about legacy statfs records on the OSDs, which meant I had to run “ceph-bluestore-tool repair” on the OSDs in question.  That meant taking each OSD “out”, waiting for migration, marking it “down”, stopping the OSD daemon, repairing, and then reversing the process to bring it back up.
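
For anyone following along, the per-OSD sequence was roughly this (a sketch; substitute the real OSD id, and the data path assumes a default deployment):

# ceph osd out osd.2                 # then wait for migration to finish
# systemctl stop ceph-osd@2          # the OSD goes down once the daemon stops
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-2
# systemctl start ceph-osd@2         # reverse the process
# ceph osd in osd.2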

Now I am waiting for the remapping to finish, and then I’ll try changing pg_num again to see if it works.

Thanks!

George


On Sep 11, 2019, at 5:00 PM, solarflow99 <solarflow99@xxxxxxxxx> wrote:

You don't have to increase pgp_num first?


On Wed, Sep 11, 2019 at 6:23 AM Kyriazis, George <george.kyriazis@xxxxxxxxx> wrote:
I have the same problem (nautilus installed), but the proposed command gave me an error:

# ceph osd require-osd-release nautilus
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature
#

I created my cluster with mimic and then upgraded to nautilus.
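
A quick way to see what is still running pre-nautilus (output is illustrative):

# ceph versions
{
    "mon": { "ceph version 14.2.1 (...) nautilus (stable)": 3 },
    "osd": {
        "ceph version 14.2.1 (...) nautilus (stable)": 6,
        "ceph version 13.2.6 (...) mimic (stable)": 2
    },
    ...
}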

What would be my next step?

Thanks!

George


> On Jul 1, 2019, at 9:21 AM, Nathan Fish <lordcirth@xxxxxxxxx> wrote:
>
> I ran into this recently. Try running "ceph osd require-osd-release
> nautilus". This drops backwards compatibility with pre-nautilus and
> allows changing settings.
>
> On Mon, Jul 1, 2019 at 4:24 AM Sylvain PORTIER <cabeur@xxxxxxx> wrote:
>>
>> Hi all,
>>
>> I am using ceph 14.2.1 (Nautilus)
>>
>> I am unable to increase the pg_num of a pool.
>>
>> I have a pool named Backup whose current pg_num is 64: "ceph osd pool
>> get Backup pg_num" => result "pg_num: 64"
>>
>> When I try to increase it using the command
>>
>> ceph osd pool set Backup pg_num 512 => result "set pool 6 pg_num to 512"
>>
>> and then check again with the command "ceph osd pool get Backup pg_num",
>> the result is still "pg_num: 64".
>>
>> I don't know how to increase the pg_num of a pool. I also tried the
>> autoscale module, but it doesn't work (I am unable to activate
>> autoscaling; it always stays in warn mode).
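>>
>> (For reference, what I tried, assuming the Nautilus pg_autoscaler
>> module, was along these lines:
>>
>> ceph mgr module enable pg_autoscaler
>> ceph osd pool set Backup pg_autoscale_mode on
>> )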
>>
>> Thank you for your help,
>>
>>
>> Cabeur.
>>
>>
>> ---
>> This email has been checked for viruses by Avast antivirus software.
>> https://www.avast.com/antivirus
>>