Re: Pool full but the user cleaned it up already

Hello,

Here it is. I usually set only a space quota, not an object quota.

NAME    ID     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY      READ        WRITE       RAW USED
k8s     8      N/A               200GiB          200GiB     0.22      90.5TiB       51860       51.86k     1.20MiB     18.1MiB     600GiB
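
For reference, a space-only quota like that is normally applied and checked with the pool quota commands (pool name and the 200 GiB value are taken from the output above; the byte count is simply that value expressed in bytes):

ceph osd pool set-quota k8s max_bytes 214748364800   # 200 GiB byte quota, no object quota
ceph osd pool get-quota k8s                          # show the quotas currently in effect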

-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Thursday, May 21, 2020 3:49 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Re: Pool full but the user cleaned it up already

Do you have quotas enabled on that pool?

Can you also show

ceph df detail
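
(Something like the following should also show the quota settings per pool directly:)

ceph osd pool ls detail   # quota values (max_bytes / max_objects) appear in the pool line when set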


Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

> I restarted the mgr and mon services, but nothing helped :/
>
>
> -----Original Message-----
> From: Eugen Block <eblock@xxxxxx>
> Sent: Wednesday, May 20, 2020 3:05 PM
> To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject:  Re: Pool full but the user cleaned it up already
>
> Okay, so the OSDs are in fact not full. It's strange that the pool is
> still reported as full. Maybe restart the mgr services?
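>
> (On a package-based install with systemd that would be roughly the following, run on the node hosting the active mgr; adjust to your deployment:)
>
> systemctl restart ceph-mgr.target   # restarts the mgr daemon(s) on that node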
>
>
> Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
>
>> Yeah, sorry:
>>
>> ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR  PGS
>> 12   ssd 2.29799  1.00000 2.30TiB 2.67GiB 2.30TiB 0.11 0.16  24
>> 13   ssd 2.29799  1.00000 2.30TiB 2.33GiB 2.30TiB 0.10 0.14  21
>> 14   ssd 3.49300  1.00000 3.49TiB 2.71GiB 3.49TiB 0.08 0.11  27
>> 27   ssd 2.29799  1.00000 2.30TiB 2.52GiB 2.30TiB 0.11 0.15  19
>> 28   ssd 2.29799  1.00000 2.30TiB 1.79GiB 2.30TiB 0.08 0.11  18
>> 29   ssd 3.49300  1.00000 3.49TiB 3.40GiB 3.49TiB 0.10 0.13  35
>> 38   ssd 2.29990  1.00000 2.30TiB 3.16GiB 2.30TiB 0.13 0.19  27
>> 39   ssd 2.29549  1.00000 2.30TiB 2.05GiB 2.29TiB 0.09 0.12  16
>> 40   ssd 3.49300  1.00000 3.49TiB 2.51GiB 3.49TiB 0.07 0.10  29
>>  0   hdd 9.09599  1.00000 9.10TiB 15.7GiB 9.08TiB 0.17 0.23  12
>>  1   hdd 9.09599  1.00000 9.10TiB 92.5GiB 9.01TiB 0.99 1.38  18
>>  2   hdd 9.09599  1.00000 9.10TiB 7.24GiB 9.09TiB 0.08 0.11  15
>>  3   hdd 9.09599  1.00000 9.10TiB 17.5GiB 9.08TiB 0.19 0.26  18
>>  4   hdd 9.09599  1.00000 9.10TiB 94.4GiB 9.00TiB 1.01 1.41  19
>>  5   hdd 9.09599  1.00000 9.10TiB  132GiB 8.97TiB 1.41 1.96  17
>>  6   hdd 9.09599  1.00000 9.10TiB  140GiB 8.96TiB 1.51 2.09  22
>>  7   hdd 9.09599  1.00000 9.10TiB 15.4GiB 9.08TiB 0.17 0.23  11
>>  8   hdd 9.09599  1.00000 9.10TiB 44.7GiB 9.05TiB 0.48 0.67  21
>>  9   hdd 9.09599  1.00000 9.10TiB  149GiB 8.95TiB 1.59 2.21  13
>> 10   hdd 9.09599  1.00000 9.10TiB 22.5GiB 9.07TiB 0.24 0.34  12
>> 11   hdd 9.09599  1.00000 9.10TiB 38.4GiB 9.06TiB 0.41 0.57  22
>> 15   hdd 9.09599  1.00000 9.10TiB 99.7GiB 9.00TiB 1.07 1.49  17
>> 16   hdd 9.09599  1.00000 9.10TiB  142GiB 8.96TiB 1.52 2.11  23
>> 17   hdd 9.09599  1.00000 9.10TiB 23.2GiB 9.07TiB 0.25 0.35  12
>> 18   hdd 9.09599  1.00000 9.10TiB 9.44GiB 9.09TiB 0.10 0.14  14
>> 19   hdd 9.09599  1.00000 9.10TiB  146GiB 8.95TiB 1.56 2.17  17
>> 20   hdd 9.09599  1.00000 9.10TiB 19.8GiB 9.08TiB 0.21 0.30  13
>> 21   hdd 9.09599  1.00000 9.10TiB 72.9GiB 9.02TiB 0.78 1.09  12
>> 22   hdd 9.09599  1.00000 9.10TiB 80.8GiB 9.02TiB 0.87 1.21  12
>> 23   hdd 9.09599  1.00000 9.10TiB 15.8GiB 9.08TiB 0.17 0.24  15
>> 24   hdd 9.09599  1.00000 9.10TiB 26.4GiB 9.07TiB 0.28 0.39  20
>> 25   hdd 9.09599  1.00000 9.10TiB 92.3GiB 9.01TiB 0.99 1.38  22
>> 26   hdd 9.09599  1.00000 9.10TiB 41.2GiB 9.06TiB 0.44 0.61  23
>> 30   hdd 9.09560  1.00000 9.10TiB 82.5GiB 9.02TiB 0.89 1.23  19
>> 31   hdd 9.09560  1.00000 9.10TiB  162GiB 8.94TiB 1.74 2.42  32
>> 32   hdd 9.09560  1.00000 9.10TiB 96.2GiB 9.00TiB 1.03 1.43  19
>> 33   hdd 9.09560  1.00000 9.10TiB  102GiB 9.00TiB 1.09 1.52  28
>> 34   hdd 9.09560  1.00000 9.10TiB 89.4GiB 9.01TiB 0.96 1.33  23
>> 35   hdd 9.09560  1.00000 9.10TiB 36.3GiB 9.06TiB 0.39 0.54  29
>> 36   hdd 9.09560  1.00000 9.10TiB 98.5GiB 9.00TiB 1.06 1.47  25
>> 37   hdd 9.09560  1.00000 9.10TiB 97.5GiB 9.00TiB 1.05 1.45  25
>>                      TOTAL  315TiB 2.27TiB  313TiB 0.72
>> MIN/MAX VAR: 0.10/2.42  STDDEV: 0.54
>>
>>
>> -----Original Message-----
>> From: Eugen Block <eblock@xxxxxx>
>> Sent: Wednesday, May 20, 2020 1:13 PM
>> To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
>> Cc: ceph-users@xxxxxxx
>> Subject:  Re: Pool full but the user cleaned it up already
>>
>> Please share the output of 'ceph osd df', not 'ceph df'.
>>
>>
>> Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
>>
>>> Hello,
>>>
>>> No, I haven't deleted anything; this warning has been there for quite a long time.
>>>
>>> ceph health detail
>>> HEALTH_WARN 1 pool(s) full
>>> POOL_FULL 1 pool(s) full
>>>     pool 'k8s' is full (no quota)
>>> ceph df
>>> GLOBAL:
>>>     SIZE       AVAIL      RAW USED     %RAW USED
>>>     315TiB     313TiB      2.27TiB          0.72
>>> POOLS:
>>>     NAME     ID     USED       %USED     MAX AVAIL     OBJECTS
>>>     k8s      8      200GiB     0.22      90.5TiB       51860
>>>
>>> -----Original Message-----
>>> From: Eugen Block <eblock@xxxxxx>
>>> Sent: Wednesday, May 20, 2020 12:50 PM
>>> To: ceph-users@xxxxxxx
>>> Subject:  Re: Pool full but the user cleaned it up already
>>>
>>> Hi,
>>>
>>> take a look at 'ceph osd df' (and maybe share the output) to see which
>>> OSD(s) are full; it is the OSDs that determine when a pool becomes full.
>>> Did you delete lots of objects from that pool recently? It can take
>>> some time until that space is actually reclaimed.
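>>>
>>> (To see the space actually being released you could keep an eye on the usage for a while, e.g. at an arbitrary 30-second interval:)
>>>
>>> watch -n 30 ceph df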
>>>
>>>
>>> Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
>>>
>>>> Hi,
>>>>
>>>> I have a health warning about a full pool:
>>>>
>>>>     health: HEALTH_WARN
>>>>             1 pool(s) full
>>>>
>>>> This is the pool that is complaining:
>>>> ceph df:
>>>>     NAME     ID     USED       %USED     MAX AVAIL     OBJECTS
>>>>     k8s      8      200GiB     0.22      90.5TiB       51860
>>>>
>>>> rados df:
>>>> POOL_NAME      USED     OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD      WR_OPS    WR
>>>> k8s-infraops   200GiB   51860    0       155580  0                   0        0         1256337  126GiB  19012858  944GiB
>>>>
>>>> What needs to be cleaned up to get rid of the pool-full warning?
>>>>
>>>> Ceph version: Luminous 12.2.8.
>>>>
>>>> Thank you
>>>>




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


