Re: Inherited CEPH nightmare

Hi,

You also want to check disk_io_weighted via some kind of metrics system.
That will show whether any specific SSDs are hogging the system. Also
check their error counters and endurance (wear) levels.
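If you don't have a metrics stack in place yet, a quick one-off check is
possible with smartmontools. Below is a minimal sketch (not battle-tested):
it assumes smartctl >= 7.0 for JSON output, root access, and the device
list is just an illustration -- replace it with your actual OSD drives.

#!/usr/bin/env python3
"""Quick SSD health check: SMART wear and error counters per device.
Minimal sketch; assumes smartmontools >= 7.0 and root privileges."""

import json
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # illustrative; use your OSD devices

for dev in DEVICES:
    # smartctl -j emits JSON (supported since smartmontools 7.0).
    # check=False because smartctl uses nonzero exits for warnings
    # while still printing valid JSON.
    out = subprocess.run(
        ["smartctl", "-j", "-A", dev],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)

    # NVMe drives report a standardized health log; SATA SSDs expose
    # vendor-specific attributes, so fall back to scanning the table.
    nvme = data.get("nvme_smart_health_information_log")
    if nvme:
        print(f"{dev}: {nvme['percentage_used']}% used, "
              f"{nvme['media_errors']} media errors")
    else:
        for attr in data.get("ata_smart_attributes", {}).get("table", []):
            if attr["name"] in ("Wear_Leveling_Count",
                                "Media_Wearout_Indicator",
                                "Percent_Lifetime_Remain",
                                "Reported_Uncorrect"):
                print(f"{dev}: {attr['name']} = {attr['raw']['string']}")

Run it on each host and compare: one drive with far higher wear or error
counts than its peers is a good candidate for the slowdowns.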

On Fri, 7 Oct 2022 at 17:05, Stefan Kooman <stefan@xxxxxx> wrote:

> On 10/7/22 16:56, Tino Todino wrote:
> > Hi folks,
> >
> > The company I recently joined has a Proxmox cluster of 4 hosts with a
> CEPH implementation that was set up using the Proxmox GUI.  It is running
> terribly, and as a CEPH newbie I'm trying to figure out if the
> configuration is at fault.  I'd really appreciate some help and guidance on
> this please.
>
> Can you send output of these commands:
> ceph -s
> ceph osd df
>
> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


