Re: Cluster Status: HEALTH_ERR for Full OSD


 



Quick and dirty solution: take the full OSD down to issue the deletion
command ;)
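
A rough sketch of that (assuming osd.2 is the full one, as in your output,
and a systemd-managed deployment; adjust for your setup):

    # on the host that carries osd.2
    systemctl stop ceph-osd@2
    # run the deletion, then bring the OSD back up:
    systemctl start ceph-osd@2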

Better solutions: temporarily increase the full limit (ceph osd
set-full-ratio) or reduce the OSD's reweight (ceph osd reweight)
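
For example (the values below are only illustrative, and set-full-ratio
requires Luminous or newer):

    # raise the full threshold temporarily; restore the default (0.95) afterwards
    ceph osd set-full-ratio 0.97
    # or lower the reweight of the full OSD (osd.2 in your output) so data moves off it
    ceph osd reweight 2 0.95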


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Jan 30, 2019 at 11:56 AM Fabio - NS3 srl <fabio@xxxxxx> wrote:
>
> Hello guys,
> I have a Ceph cluster serving S3 (radosgw), and one OSD is full:
>
> ~# ceph health detail
> HEALTH_ERR 1 full osd(s); 1 near full osd(s)
> osd.2 is full at 95%
> osd.5 is near full at 85%
>
>
> I want to delete some buckets, but when I try to list the buckets:
>
>
> ~# radosgw-admin bucket list
> 2019-01-30 11:41:47.933621 7f467a9d0780  0 client.3967227.objecter  FULL, paused modify 0x2aaf410 tid 8
>
> the command blocks and never returns to the prompt.
>
> Are there any solutions other than adding an OSD?
>
> Many thankssss
> --
> Fabio
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



