Re: remove S3 bucket with rados CLI


 



Quick & dirty solution if only one OSD is full (likely, as the cluster
looks very unbalanced): take the full OSD down, delete the data, then
bring it back online.
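
A rough sketch of those steps, assuming a systemd-managed OSD; the OSD
id (12) and the bucket name are placeholders, and radosgw-admin does the
actual bucket removal:

  ceph osd set noout                  # avoid rebalancing while the OSD is down
  systemctl stop ceph-osd@12          # stop the full OSD; blocked deletes should then proceed
  radosgw-admin bucket rm --bucket=<bucket> --purge-objects
  systemctl start ceph-osd@12
  ceph osd unset noout

The full_ratio route Dan mentions below would be roughly (Nautilus syntax):

  ceph osd set-full-ratio 0.96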


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Thu, Apr 9, 2020 at 3:30 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> On Thu, Apr 9, 2020 at 3:25 PM Robert Sander
> <r.sander@xxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Hi Dan,
> >
> > Am 09.04.20 um 15:08 schrieb Dan van der Ster:
> > >
> > > What do you have for full_ratio?
> >
> > The cluster is running Nautilus and the ratios should still be at their
> > default values. Currently I have no direct access to report them.
> >
> > > Maybe you can unblock by setting the full_ratio to 0.96?
> >
> > We will try that on Tuesday.
> >
> > Additionally here is the output of "ceph df":
> >
> > [root@fra1s80103 ~]# ceph df
> > RAW STORAGE:
> >     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
> >     hdd       524 TiB     101 TiB     416 TiB      423 TiB         80.74
> >     ssd        11 TiB     7.8 TiB     688 MiB      3.2 TiB         28.92
> >     TOTAL     535 TiB     109 TiB     416 TiB      426 TiB         79.68
> >
> > POOLS:
> >     POOL                           ID     STORED     OBJECTS     USED      %USED     MAX AVAIL
> >     .rgw.root                       2     1.2 KiB          4     256 KiB        0      1.4 TiB
> >     default.rgw.control             3         0 B          8         0 B        0      1.4 TiB
> >     default.rgw.meta                4     3.2 KiB         13     769 KiB        0      1.4 TiB
> >     default.rgw.log                 5      48 KiB        210      48 KiB        0      1.4 TiB
> >     default.rgw.buckets.index       6     487 GiB     21.10k     487 GiB     8.09      1.4 TiB
> >     default.rgw.buckets.data        8     186 TiB    671.88M     416 TiB   100.00          0 B
> >     default.rgw.buckets.non-ec      9         0 B          0         0 B        0          0 B
> >
> > It's a four-node cluster with the buckets.data pool erasure-coded on hdd
> > with k=2, m=2 (size=4, min_size=2), so each part lands on a different node.
> >
> > New HDDs and even new nodes are currently being ordered to expand this
> > proof of concept setup for backup storage.
>
> This looks like an unbalanced cluster.
>
> # ceph osd df  | sort -n -k17
>
> should be illuminating.
>
> -- dan
>
>
> >
> > Regards
> > --
> > Robert Sander
> > Heinlein Support GmbH
> > Schwedter Str. 8/9b, 10119 Berlin
> >
> > http://www.heinlein-support.de
> >
> > Tel: 030 / 405051-43
> > Fax: 030 / 405051-19
> >
> > Mandatory disclosures per §35a GmbHG:
> > HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> > Managing director: Peer Heinlein -- Registered office: Berlin
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



