Re: Impacts on doubling the size of pgs in a rbd pool?

Hi Michel,

The pool already appears to have automatic autoscaling enabled
("autoscale_mode on"). If you're worried (for example, if the platform
has trouble handling a large data shift), you can set the parameter to
warn (like the .mgr pool already is).
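
For example, if you want to go that route, something along these lines
should do it (standard Quincy commands, pool name taken from your "ceph
osd pool ls detail" output):

$ sudo ceph osd pool set rbd_backup_vms pg_autoscale_mode warn

With "warn" the autoscaler only raises a health warning when it thinks
pg_num should change, instead of changing it itself.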

If not, as Hervé says, the transition to 2048 PGs will be smoother if it
is handled automatically.
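
If you want to see what the autoscaler plans to do before it acts, the
usual way to check is something like (the exact column names can vary a
bit between releases):

$ sudo ceph osd pool autoscale-status
$ sudo ceph osd pool get rbd_backup_vms pg_num

The first one shows PG_NUM / NEW PG_NUM per pool, the second the current
value for the rbd pool.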

To answer your questions:

1/ There's not much point in doing it before adding the OSDs. Either way
there will be a significant but gradual data movement, and it's unlikely
you'll see nearfull warnings with the usage figures you've reported.
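
While the data is moving you can follow it with the usual commands and
only intervene if nearfull warnings actually appear, e.g.:

$ sudo ceph -s                             # misplaced/degraded objects, recovery rate
$ sudo ceph osd pool stats rbd_backup_vms  # per-pool recovery/client I/O
$ sudo ceph df                             # watch %USED / MAX AVAIL evolve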

2/ The recommendation would be to keep the default settings (pg
autoscaler, osd_max_backfills, recovery, ...). If there really is a
concern, then leave it at 1024 PGs and set autoscale_mode to warn.
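
If the rebalance does end up hurting client I/O, the classic knobs can be
lowered temporarily (note that Quincy's default mClock scheduler manages
recovery on its own, so these settings may have limited effect there):

$ sudo ceph config set osd osd_max_backfills 1
$ sudo ceph config set osd osd_recovery_max_active 1

And if you prefer to do the split by hand rather than via the autoscaler:

$ sudo ceph osd pool set rbd_backup_vms pg_num 2048

As far as I know, since Nautilus the pgp_num increase that actually moves
the data is ramped up gradually by the mgr after this, so the movement is
spread out rather than done in one shot.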


On Tue, 3 Oct 2023 at 17:13, Michel Jouvin <michel.jouvin@xxxxxxxxxxxxxxx>
wrote:

> Hi Herve,
>
> Why don't you use the automatic adjustment of the number of PGs? It
> makes life much easier and works well.
>
> Cheers,
>
> Michel
>
> > On 03/10/2023 at 17:06, Hervé Ballans wrote:
> > Hi all,
> >
> > Sorry for the reminder, but does anyone have any advice on how to deal
> > with this?
> >
> > Many thanks!
> > Hervé
> >
> > On 29/09/2023 at 11:34, Hervé Ballans wrote:
> >> Hi all,
> >>
> >> I have a Ceph cluster on Quincy (17.2.6), with 3 pools (1 rbd and 1
> >> CephFS volume), each configured with 3 replicas.
> >>
> >> $ sudo ceph osd pool ls detail
> >> pool 7 'cephfs_data_home' replicated size 3 min_size 2 crush_rule 1
> >> object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on
> >> last_change 6287147 lfor 0/5364613/5364611 flags hashpspool
> >> stripe_width 0 application cephfs
> >> pool 8 'cephfs_metadata_home' replicated size 3 min_size 2 crush_rule
> >> 3 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on
> >> last_change 6333341 lfor 0/6333341/6333339 flags hashpspool
> >> stripe_width 0 application cephfs
> >> pool 9 'rbd_backup_vms' replicated size 3 min_size 2 crush_rule 2
> >> object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode on
> >> last_change 6365131 lfor 0/211948/249421 flags
> >> hashpspool,selfmanaged_snaps stripe_width 0 application rbd
> >> pool 10 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash
> >> rjenkins pg_num 1 pgp_num 1 autoscale_mode warn last_change 6365131
> >> flags hashpspool stripe_width 0 pg_num_min 1 application
> >> mgr,mgr_devicehealth
> >>
> >> $ sudo ceph df
> >> --- RAW STORAGE ---
> >> CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
> >> hdd    306 TiB  186 TiB  119 TiB   119 TiB      39.00
> >> nvme   4.4 TiB  4.3 TiB  118 GiB   118 GiB       2.63
> >> TOTAL  310 TiB  191 TiB  119 TiB   119 TiB      38.49
> >>
> >> --- POOLS ---
> >> POOL                  ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> >> cephfs_data_home       7   512   12 TiB   28.86M   12 TiB  12.85     27 TiB
> >> cephfs_metadata_home   8    32   33 GiB    3.63M   33 GiB   0.79    1.3 TiB
> >> rbd_backup_vms         9  1024   24 TiB    6.42M   24 TiB  58.65    5.6 TiB
> >> .mgr                  10     1   35 MiB        9   35 MiB      0     12 TiB
> >>
> >> I am going to extend the rbd pool (rbd_backup_vms), currently used at
> >> 60%.
> >> This pool contains 60 disks, i.e. 20 disks per rack in the crushmap.
> >> This pool is used for storing VM disk images (made available to a
> >> separate ProxmoxVE cluster).
> >>
> >> For this purpose, I am going to add 42 disks of the same size as
> >> those currently in the pool, i.e. 14 additional disks on each rack.
> >>
> >> Currently, this pool is configured with 1024 pgs.
> >> Before this operation, I would like to increase the number of PGs,
> >> let's say to 2048 (i.e. double it).
> >>
> >> I wonder about the overall impact of this change on the cluster. I
> >> guess that the heavy PG data movement will have a strong impact on
> >> IOPS?
> >>
> >> I have two questions:
> >>
> >> 1) Is it useful to make this modification before adding the new OSDs?
> >> (I'm afraid of warnings about full or nearfull pgs if not)
> >>
> >> 2) Are there any configuration recommendations in order to minimize
> >> these anticipated impacts?
> >>
> >> Thank you!
> >>
> >> Cheers,
> >> Hervé
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



