Re: Large number of misplaced PGs but little backfill going on

Hi Torkil,

I take my previous response back.

You have an erasure-coded pool with nine shards but only three
datacenters. With the default placement of one shard per failure
domain, this cannot work: you need either nine datacenters or a very
custom CRUSH rule that puts several shards in each datacenter. The
second option may not be available if the current EC setup is already
incompatible, as there is no way to change the EC parameters of an
existing pool.
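For reference, the usual shape of such a custom rule is "pick three
datacenters, then three hosts in each", so the nine shards land 3+3+3.
A sketch only (rule name and id are made up; adapt to your CRUSH map):

```
rule ec_9shard_3dc {
    id 99
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    # pick 3 datacenters...
    step choose indep 3 type datacenter
    # ...then 3 distinct hosts (one shard each) inside every datacenter
    step chooseleaf indep 3 type host
    step emit
}
```

Note that this only survives the loss of one whole datacenter if m >= 3
in the EC profile, since three shards sit in each datacenter.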

It would help if you provided the output of "ceph osd pool ls detail".

On Sun, Mar 24, 2024 at 1:43 AM Alexander E. Patrakov
<patrakov@xxxxxxxxx> wrote:
>
> Hi Torkil,
>
> Unfortunately, your files contain nothing obviously bad or suspicious,
> except for two things: more PGs than usual and bad balance.
>
> What's your "mon max pg per osd" setting?
>
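>
> For what it's worth, on releases with the centralized config
> (Mimic and later) the effective value can be read with:
>
>     ceph config get mon mon_max_pg_per_osd
>
> The default in recent releases is 250; an OSD over that limit will
> refuse to activate further PGs, which can look like stalled backfill.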
> On Sun, Mar 24, 2024 at 1:08 AM Torkil Svensgaard <torkil@xxxxxxxx> wrote:
> >
> > On 2024-03-23 17:54, Kai Stian Olstad wrote:
> > > On Sat, Mar 23, 2024 at 12:09:29PM +0100, Torkil Svensgaard wrote:
> > >>
> > >> The other output is too big for pastebin and I'm not familiar with
> > >> paste services, any suggestion for a preferred way to share such
> > >> output?
> > >
> > > You can attach files to the mail here on the list.
> >
> > Doh, for some reason I was sure attachments would be stripped. Thanks,
> > attached.
> >
> > Best regards,
> >
> > Torkil
>
>
>
> --
> Alexander E. Patrakov



-- 
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



