Re: Removing pool in nautilus is incredibly slow


 



I'm using

osd_op_queue = wpq
osd_op_queue_cut_off = high

and these are the recommended settings.
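For reference, a sketch of how these could be applied via the centralized config store (available since Nautilus); note that `osd.0` is just an example OSD ID, and changing osd_op_queue requires an OSD restart to take effect:

```shell
# Set the recommended queue options for all OSDs in the cluster
# (stored in the monitors' centralized config database).
ceph config set osd osd_op_queue wpq
ceph config set osd osd_op_queue_cut_off high

# Verify what a running OSD is actually using (run on the OSD's host;
# osd.0 is an arbitrary example ID). A restart is needed before the
# new osd_op_queue value shows up here.
ceph daemon osd.0 config get osd_op_queue
ceph daemon osd.0 config get osd_op_queue_cut_off
```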

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Francois Legrand <fleg@xxxxxxxxxxxxxx>
Sent: 26 June 2020 09:44:00
To: Frank Schilder; ceph-users@xxxxxxx
Subject: Re:  Re: Removing pool in nautilus is incredibly slow

We are now using osd_op_queue = wpq. Maybe returning to prio would help?
What are you using on your mimic cluster?
F.

On 25/06/2020 at 19:28, Frank Schilder wrote:
> OK, this *does* sound bad. I would consider this a show stopper for upgrade from mimic.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Francois Legrand <fleg@xxxxxxxxxxxxxx>
> Sent: 25 June 2020 19:25:14
> To: ceph-users@xxxxxxx
> Subject:  Re: Removing pool in nautilus is incredibly slow
>
> I also saw these kinds of symptoms with nautilus.
> Replacing a failed disk (starting from a healthy cluster) generates degraded objects.
> Also, we have a Proxmox cluster accessing VM images stored in our Ceph storage via rbd.
> Every time I performed an operation on the Ceph cluster, such as adding or removing a pool, most of our Proxmox VMs lost contact with their system disk in Ceph and crashed (or remounted their system storage read-only). At first I thought it was a network problem, but now I am sure it's related to Ceph becoming unresponsive during background operations.
> For now, Proxmox cannot even access the Ceph storage using rbd (it fails with a timeout).
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



