Re: power loss -> 1 osd high load for 24h

My guess, without logs, is that the OSD was purging PGs that had been
removed previously but not fully deleted from the disk. There have been
bugs like that fixed recently, so unless you run the latest releases, PG
removal can be intense.
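
If that turns out to be the cause, newer releases can also throttle the
deletion work. Just a sketch, assuming your release has the
osd_delete_sleep options (osd.12 below stands in for the busy OSD's id):

    # sleep (seconds) between PG deletion transactions on HDD-backed OSDs
    ceph config set osd osd_delete_sleep_hdd 2
    # confirm what a given OSD is actually using
    ceph config show osd.12 osd_delete_sleep_hdd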

Next time you have an unexplainably busy OSD, inject debug_osd=10 to see
what it's doing.
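
Something like this (again, osd.12 is just a placeholder for the busy
OSD's id):

    # raise the OSD debug level on the running daemon
    ceph tell osd.12 injectargs '--debug_osd=10'
    # watch /var/log/ceph/ceph-osd.12.log for a while, then put it back
    # (1/5 is the default unless you've changed it)
    ceph tell osd.12 injectargs '--debug_osd=1/5'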

.. Dan



On Thu, 2 Sep 2021, 22:49 Marc, <Marc@xxxxxxxxxxxxxxxxx> wrote:

>
> I was told there was a power loss at the datacenter. All ceph nodes
> lost power; just turning them on was enough to get everything back online,
> no problems at all. However, I had one disk/OSD under high load for a day.
>
> I guess this must have been some kind of check by ceph? How can I see
> this? I do not see anything in the logs when I grep -i for error or warn.
> Should there not be some warning or error logged when an OSD is fully
> utilized like this? I do not think it was a normal scrub/deep-scrub.
> The number of 'rocksdb', 'bdev' and 'bluefs' lines in this OSD's log
> and in the others is roughly similar.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
