Help

Please unsubscribe me from all email IDs.

On Tue, 29 Oct 2019 at 7:12 am, <ceph-users-request@xxxxxxx> wrote:

> Send ceph-users mailing list submissions to
>         ceph-users@xxxxxxx
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
>         ceph-users-request@xxxxxxx
>
> You can reach the person managing the list at
>         ceph-users-owner@xxxxxxx
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of ceph-users digest..."
>
> Today's Topics:
>
>    1. Help (Sumit Gaur)
>
>
> ----------------------------------------------------------------------
>
> Date: Tue, 29 Oct 2019 07:06:17 +1100
> From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
> Subject:  Help
> To: ceph-users@xxxxxxx
> Message-ID:
>         <CAH_rbop2r7d4BVMi_cxYQgOaEcnpcdyjbsKRUdQwc9wBnAh_g@xxxxxxxxxxxxxx>
>
> On Tue, 29 Oct 2019 at 1:50 am, <ceph-users-request@xxxxxxx> wrote:
>
> > Send ceph-users mailing list submissions to
> >         ceph-users@xxxxxxx
> >
> > To subscribe or unsubscribe via email, send a message with subject or
> > body 'help' to
> >         ceph-users-request@xxxxxxx
> >
> > You can reach the person managing the list at
> >         ceph-users-owner@xxxxxxx
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of ceph-users digest..."
> >
> > Today's Topics:
> >
> >    1. Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)
> >       (Lars Täuber)
> >    2. After delete 8.5M Objects in a bucket still 500K left
> >       (EDH - Manuel Rios Fernandez)
> >    3. Re: Static website hosting with RGW (Casey Bodley)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Date: Mon, 28 Oct 2019 11:24:54 +0100
> > From: Lars Täuber <taeuber@xxxxxxx>
> > Subject:  Re: subtrees have overcommitted
> >         (target_size_bytes / target_size_ratio)
> > To: ceph-users <ceph-users@xxxxxxx>
> > Message-ID: <20191028112454.0362fe66@xxxxxxx>
> >
> > Is there a way to get rid of these warnings with the autoscaler activated,
> > besides adding new OSDs?
> >
> > So far I couldn't get a satisfactory answer to the question of why this all
> > happens.
> >
> > ceph osd pool autoscale-status :
> >  POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> >  cephfs_data      122.2T                1.5        165.4T  1.1085  0.8500   1.0    1024              on
> >
> > versus
> >
> >  ceph df  :
> > RAW STORAGE:
> >     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
> >     hdd       165 TiB      41 TiB     124 TiB      124 TiB         74.95
> >
> > POOLS:
> >     POOL                ID     STORED     OBJECTS     USED        %USED     MAX AVAIL
> >     cephfs_data          1     75 TiB      49.31M     122 TiB     87.16     12 TiB
> >
> >
> > It seems that the overcommitment is wrongly calculated. Isn't the RATE
> > already used to calculate the SIZE?
> >
> > It seems USED(df) = SIZE(autoscale-status)
> > Isn't the RATE already taken into account here?
> >
> > Could someone please explain the numbers to me?
> >
> >
> > Thanks!
> > Lars
> >
> > Fri, 25 Oct 2019 07:42:58 +0200
> > Lars Täuber <taeuber@xxxxxxx> ==> Nathan Fish <lordcirth@gmail.com> :
> > > Hi Nathan,
> > >
> > > Thu, 24 Oct 2019 10:59:55 -0400
> > > Nathan Fish <lordcirth@xxxxxxxxx> ==> Lars Täuber <taeuber@bbaw.de> :
> > > > Ah, I see! The BIAS reflects the number of placement groups it should
> > > > create. Since cephfs metadata pools are usually very small, but have
> > > > many objects and high IO, the autoscaler gives them 4x the number of
> > > > placement groups that it would normally give for that amount of data.
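> > > >
> > > > (Checking that against your numbers: cephfs_metadata shows BIAS 4.0 and
> > > > PG_NUM 256, which implies it would have been given 64 PGs without the
> > > > bias.)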
> > > >
> > > ah ok, I understand.
> > >
> > > > So, your cephfs_data is set to a ratio of 0.9, and cephfs_metadata to
> > > > 0.3? Are the two pools using entirely different device classes, so
> > > > they are not sharing space?
> > >
> > > Yes, the metadata is on SSDs and the data on HDDs.
> > >
> > > > Anyway, I see that your overcommit is only "1.031x". So if you set
> > > > cephfs_data to 0.85, it should go away.
> > >
> > > This is not the case. I set the target_ratio to 0.7 and get this:
> > >
> > >  POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> > >  cephfs_metadata  15736M                3.0         2454G  0.0188  0.3000   4.0     256              on
> > >  cephfs_data      122.2T                1.5        165.4T  1.1085  0.7000   1.0    1024              on
> > >
> > > The ratio seems to have nothing to do with the target_ratio, but with the
> > > SIZE and the RAW_CAPACITY.
> > > Because the pool is still getting more data, the SIZE increases and
> > > therefore the RATIO increases.
> > > The RATIO seems to be calculated by this formula:
> > > RATIO = SIZE * RATE / RAW_CAPACITY.
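> > > (Plugging in the numbers above as a check: 122.2T * 1.5 / 165.4T = 1.108,
> > > which matches the reported RATIO of 1.1085 up to rounding of SIZE.)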
> > >
> > > This is what I don't understand. The data in the cephfs_data pool seems
> > > to need more space than the raw capacity of the cluster provides. Hence the
> > > situation is called "overcommitment".
> > >
> > > But why is this only the case when the autoscaler is active?
> > >
> > > Thanks
> > > Lars
> > >
> > > >
> > > > On Thu, Oct 24, 2019 at 10:09 AM Lars Täuber <taeuber@xxxxxxx> wrote:
> > > > >
> > > > > Thanks Nathan for your answer,
> > > > >
> > > > > but I set the Target Ratio to 0.9. It is the cephfs_data pool
> > > > > that is causing the trouble.
> > > > >
> > > > > The 4.0 is the BIAS from the cephfs_metadata pool. This "BIAS" is
> > > > > not explained on the page linked below, so I don't know its meaning.
> > > > >
> > > > > How can a pool be overcommitted when it is the only pool on a set of
> > > > > OSDs?
> > > > >
> > > > > Best regards,
> > > > > Lars
> > > > >
> > > > > Thu, 24 Oct 2019 09:39:51 -0400
> > > > > Nathan Fish <lordcirth@xxxxxxxxx> ==> Lars Täuber <taeuber@bbaw.de> :
> > > > > > The formatting is mangled on my phone, but if I am reading it correctly,
> > > > > > you have set Target Ratio to 4.0. This means you have told the balancer
> > > > > > that this pool will occupy 4x the space of your whole cluster, and to
> > > > > > optimize accordingly. This is naturally a problem. Setting it to 0 will
> > > > > > clear the setting and allow the autobalancer to work.
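> > > > > >
> > > > > > A minimal sketch of that command (assuming the Nautilus CLI and the
> > > > > > pool name from your output):
> > > > > >
> > > > > >     ceph osd pool set cephfs_data target_size_ratio 0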
> > > > > >
> > > > > > On Thu., Oct. 24, 2019, 5:18 a.m. Lars Täuber, <taeuber@bbaw.de> wrote:
> > > > > >
> > > > > > > This question is answered here:
> > > > > > > https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
> > > > > > >
> > > > > > > But it tells me that there is more data stored in the pool than the raw
> > > > > > > capacity provides (taking the replication factor RATE into account), hence
> > > > > > > the RATIO being above 1.0.
> > > > > > >
> > > > > > > How come this is the case? Is data stored outside of the pool?
> > > > > > > How come this is only the case when the autoscaler is active?
> > > > > > >
> > > > > > > Thanks
> > > > > > > Lars
> > > > > > >
> > > > > > >
> > > > > > > Thu, 24 Oct 2019 10:36:52 +0200
> > > > > > > Lars Täuber <taeuber@xxxxxxx> ==> ceph-users@xxxxxxx :
> > > > > > > > My question requires too complex an answer.
> > > > > > > > So let me ask a simple question:
> > > > > > > >
> > > > > > > > What does the SIZE in "osd pool autoscale-status" mean, and where
> > > > > > > > does it come from?
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > > Lars
> > > > > > > >
> > > > > > > > Wed, 23 Oct 2019 14:28:10 +0200
> > > > > > > > Lars Täuber <taeuber@xxxxxxx> ==> ceph-users@xxxxxxx :
> > > > > > > > > Hello everybody!
> > > > > > > > >
> > > > > > > > > What does this mean?
> > > > > > > > >
> > > > > > > > >     health: HEALTH_WARN
> > > > > > > > >             1 subtrees have overcommitted pool target_size_bytes
> > > > > > > > >             1 subtrees have overcommitted pool target_size_ratio
> > > > > > > > >
> > > > > > > > > and what does it have to do with the autoscaler?
> > > > > > > > > When I deactivate the autoscaler the warning goes away.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > $ ceph osd pool autoscale-status
> > > > > > > > >  POOL               SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> > > > > > > > >  cephfs_metadata  15106M                3.0         2454G  0.0180  0.3000   4.0     256              on
> > > > > > > > >  cephfs_data      113.6T                1.5        165.4T  1.0306  0.9000   1.0     512              on
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > $ ceph health detail
> > > > > > > > > HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
> > > > > > > > > POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
> > > > > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_bytes    0  on pools []
> > > > > > > > > POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
> > > > > > > > >     Pools ['cephfs_data'] overcommit available storage by 1.031x due to target_size_ratio 0.900 on pools ['cephfs_data']
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > > Lars
> > > > > > > > > _______________________________________________
> > > > > > > > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > > > > > > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> > ------------------------------
> >
> > Date: Mon, 28 Oct 2019 14:18:01 +0100
> > From: "EDH - Manuel Rios Fernandez" <mriosfer@xxxxxxxxxxxxxxxx>
> > Subject:  After delete 8.5M Objects in a bucket still 500K
> >         left
> > To: <ceph-users@xxxxxxx>
> > Message-ID: <02a201d58d92$1fe85880$5fb90980$@easydatahost.com>
> >
> > Hi Ceph's!
> >
> >
> >
> > We started deleting a bucket several days ago. Total size 47 TB / 8.5M
> > objects.
> >
> >
> >
> > Now we see the CLI bucket rm stuck, and the console drops these messages.
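> >
> > (The delete was started with something along the lines of
> > "radosgw-admin bucket rm --bucket=<name> --purge-objects"; the bucket name
> > here is a placeholder.)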
> >
> >
> >
> > [root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 1000 incomplete multipart uploads
> > 2019-10-28 13:56:24.021 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 2000 incomplete multipart uploads
> > 2019-10-28 13:57:04.726 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 3000 incomplete multipart uploads
> > 2019-10-28 13:57:45.424 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 4000 incomplete multipart uploads
> > 2019-10-28 13:58:25.905 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 5000 incomplete multipart uploads
> > 2019-10-28 13:59:06.898 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 6000 incomplete multipart uploads
> > 2019-10-28 13:59:47.829 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 7000 incomplete multipart uploads
> > 2019-10-28 14:00:42.102 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 8000 incomplete multipart uploads
> > 2019-10-28 14:01:23.829 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 9000 incomplete multipart uploads
> > 2019-10-28 14:02:06.028 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 10000 incomplete multipart uploads
> > 2019-10-28 14:02:48.648 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 11000 incomplete multipart uploads
> > 2019-10-28 14:03:29.807 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 12000 incomplete multipart uploads
> > 2019-10-28 14:04:11.180 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 13000 incomplete multipart uploads
> > 2019-10-28 14:04:52.396 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 14000 incomplete multipart uploads
> > 2019-10-28 14:05:33.050 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 15000 incomplete multipart uploads
> > 2019-10-28 14:06:13.652 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 16000 incomplete multipart uploads
> > 2019-10-28 14:06:54.806 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 17000 incomplete multipart uploads
> > 2019-10-28 14:07:35.867 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 18000 incomplete multipart uploads
> > 2019-10-28 14:08:16.886 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 19000 incomplete multipart uploads
> > 2019-10-28 14:08:57.711 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 20000 incomplete multipart uploads
> > 2019-10-28 14:09:38.032 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 21000 incomplete multipart uploads
> > 2019-10-28 14:10:18.377 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 22000 incomplete multipart uploads
> > 2019-10-28 14:10:58.833 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 23000 incomplete multipart uploads
> > 2019-10-28 14:11:39.078 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 24000 incomplete multipart uploads
> > 2019-10-28 14:12:24.731 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 25000 incomplete multipart uploads
> > 2019-10-28 14:13:12.176 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 26000 incomplete multipart uploads
> >
> > Bucket stats show 500K objects left. It looks like bucket rm is trying to
> > abort all the incomplete multipart uploads, but this operation is not
> > reflected in the bucket stats as objects being removed.
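> >
> > To watch progress we can use something like (the bucket name is a
> > placeholder):
> >
> >     radosgw-admin bucket stats --bucket=<bucket-name>
> >     s3cmd multipart s3://<bucket-name>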
> >
> >
> >
> > Maybe we just need to wait for the remaining 500K, or is it a bug?
> >
> >
> >
> > Regards
> >
> > Manuel
> >
> >
> > ------------------------------
> >
> > Date: Mon, 28 Oct 2019 10:48:44 -0400
> > From: Casey Bodley <cbodley@xxxxxxxxxx>
> > Subject:  Re: Static website hosting with RGW
> > To: ceph-users@xxxxxxx
> > Message-ID: <20834361-445e-1ee5-433b-dd4792f90608@xxxxxxxxxx>
> >
> >
> > On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
> > > Dear Cephers,
> > >
> > > I have a question concerning static websites with RGW.
> > > To my understanding, it is best to run >=1 RGW client for "classic" S3
> > > and in addition operate >=1 RGW client for website serving
> > > (potentially with HAProxy or its friends in front) to prevent a mix-up of
> > > requests via the different protocols.
> > >
> > > I'd prefer to avoid "*.example.com" entries in DNS if possible.
> > > So my current setup has these settings for the "web" RGW client:
> > >   rgw_enable_static_website = true
> > >   rgw_enable_apis = s3website
> > >   rgw_dns_s3website_name = some_value_unused_when_A_records_are_used_pointing_to_the_IP_but_it_needs_to_be_set
> > > and I create simple A records for each website pointing to the IP of
> > this "web" RGW node.
> > >
> > > I can easily upload content for those websites to the other RGW
> > > instances which are serving S3,
> > > so S3 and s3website APIs are cleanly separated in separate instances.
> > >
> > > However, one issue remains: How do I run
> > >   s3cmd ws-create
> > > on each website-bucket once?
> > > I can't do that against the "classic" S3-serving RGW nodes. This will
> > > give me a 405 (not allowed),
> > > since they do not have rgw_enable_static_website enabled.
> > > I also cannot run it against the "web S3" nodes, since they do not have
> > > the S3 API enabled.
> > > Of course I could enable that, but then the RGW node can't cleanly
> > > disentangle S3 and website requests since I use A records.
> > >
> > > Does somebody have a good idea on how to solve this issue?
> > > Setting "rgw_enable_static_website =3D true" on the S3-serving RGW
> node=
> s
> > would solve it, but does that have any bad side-effects on their S3
> > operation?
> >
> > Enabling static website on the gateway serving the S3 api does look like
> > the right solution. As far as I can tell, it's only used to control
> > whether the S3 ops for PutBucketWebsite, GetBucketWebsite, and
> > DeleteBucketWebsite are exposed.
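> >
> > A minimal sketch of that change (the rgw section name is a placeholder;
> > only the last line is the setting in question):
> >
> >     [client.rgw.s3-node]
> >     rgw_enable_apis = s3
> >     rgw_enable_static_website = true
> >
> > After restarting that gateway, "s3cmd ws-create s3://<bucket>" against its
> > S3 endpoint should then succeed.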
> >
> > >
> > > Also, if there's an expert on this: Exposing a bucket under a tenant as a
> > > static website is not possible since the colon (:) can't be encoded in DNS,
> > > right?
> > >
> > >
> > > In case somebody also wants to set something like this up, here are the
> > > best docs I could find:
> > > https://gist.github.com/robbat2/ec0a66eed28e5f0e1ef7018e9c77910c
> > > and of course:
> > > https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index#configuring_gateways_for_static_web_hosting
> > >
> > >
> > > Cheers,
> > >       Oliver
> > >
> > >
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> > ------------------------------
> >
> > Subject: Digest Footer
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> >
> > ------------------------------
> >
> > End of ceph-users Digest, Vol 81, Issue 79
> > ******************************************
> >
>
> --0000000000000dfea80595fe0971
> Content-Type: text/html; charset="UTF-8"
> Content-Transfer-Encoding: quoted-printable
>
> <div><br></div><div><br><div class=3D"gmail_quote"><div dir=3D"ltr" class=
> =3D"gmail_attr">On Tue, 29 Oct 2019 at 1:50 am, &lt;<a href=3D"mailto:
> ceph-=
> users-request@xxxxxxx">ceph-users-request@xxxxxxx</a>&gt;
> wrote:<br></div><=
> blockquote class=3D"gmail_quote" style=3D"margin:0 0 0
> .8ex;border-left:1px=
>  #ccc solid;padding-left:1ex">Send ceph-users mailing list submissions
> to<b=
> r>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 <a href=3D"mailto:ceph-users@xxxxxxx";
> target=3D=
> "_blank">ceph-users@xxxxxxx</a><br>
> <br>
> To subscribe or unsubscribe via email, send a message with subject or<br>
> body &#39;help&#39; to<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 <a href=3D"mailto:ceph-users-request@xxxxxxx";
> t=
> arget=3D"_blank">ceph-users-request@xxxxxxx</a><br>
> <br>
> You can reach the person managing the list at<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 <a href=3D"mailto:ceph-users-owner@xxxxxxx";
> tar=
> get=3D"_blank">ceph-users-owner@xxxxxxx</a><br>
> <br>
> When replying, please edit your Subject line so it is more specific<br>
> than &quot;Re: Contents of ceph-users digest...&quot;<br>
> <br>
> Today&#39;s Topics:<br>
> <br>
> =C2=A0 =C2=A01. Re: subtrees have overcommitted (target_size_bytes /
> target=
> _size_ratio)<br>
> =C2=A0 =C2=A0 =C2=A0 (Lars T=C3=A4uber)<br>
> =C2=A0 =C2=A02. After delete 8.5M Objects in a bucket still 500K left<br>
> =C2=A0 =C2=A0 =C2=A0 (EDH - Manuel Rios Fernandez)<br>
> =C2=A0 =C2=A03. Re: Static website hosting with RGW (Casey Bodley)<br>
> <br>
> <br>
> ----------------------------------------------------------------------<br>
> <br>
> Date: Mon, 28 Oct 2019 11:24:54 +0100<br>
> From: Lars T=C3=A4uber &lt;<a href=3D"mailto:taeuber@xxxxxxx";
> target=3D"_bl=
> ank">taeuber@xxxxxxx</a>&gt;<br>
> Subject:  Re: subtrees have overcommitted<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 (target_size_bytes / target_size_ratio)<br>
> To: ceph-users &lt;<a href=3D"mailto:ceph-users@xxxxxxx";
> target=3D"_blank">=
> ceph-users@xxxxxxx</a>&gt;<br>
> Message-ID: &lt;<a href=3D"mailto:20191028112454.0362fe66@xxxxxxx"; target=
> =3D"_blank">20191028112454.0362fe66@xxxxxxx</a>&gt;<br>
> Content-Type: text/plain; charset=3DUTF-8<br>
> <br>
> Is there a way to get rid of this warnings with activated autoscaler
> beside=
> s adding new osds?<br>
> <br>
> Yet I couldn&#39;t get a satisfactory answer to the question why this all
> h=
> appens.<br>
> <br>
> ceph osd pool autoscale-status :<br>
> =C2=A0POOL=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0
> =C2=A0SIZE=C2=A0=
>  TARGET SIZE=C2=A0 RATE=C2=A0 RAW CAPACITY=C2=A0 =C2=A0RATIO=C2=A0 TARGET
> R=
> ATIO=C2=A0 BIAS=C2=A0 PG_NUM=C2=A0 NEW PG_NUM=C2=A0 AUTOSCALE <br>
> =C2=A0cephfs_data=C2=A0 =C2=A0 =C2=A0 122.2T=C2=A0 =C2=A0 =C2=A0 =C2=A0
> =C2=
> =A0 =C2=A0 =C2=A0 =C2=A0 1.5=C2=A0 =C2=A0 =C2=A0 =C2=A0 165.4T=C2=A0
> 1.1085=
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 0.8500=C2=A0 =C2=A01.0=C2=A0 =C2=A0 1024=C2=A0
> =
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 on <br>
> <br>
> versus<br>
> <br>
> =C2=A0ceph df=C2=A0 :<br>
> RAW STORAGE:<br>
> =C2=A0 =C2=A0 CLASS=C2=A0 =C2=A0 =C2=A0SIZE=C2=A0 =C2=A0 =C2=A0 =C2=A0
> AVAI=
> L=C2=A0 =C2=A0 =C2=A0 =C2=A0USED=C2=A0 =C2=A0 =C2=A0 =C2=A0 RAW USED=C2=A0
> =
> =C2=A0 =C2=A0%RAW USED <br>
> =C2=A0 =C2=A0 hdd=C2=A0 =C2=A0 =C2=A0 =C2=A0165 TiB=C2=A0 =C2=A0 =C2=A0 41
> =
> TiB=C2=A0 =C2=A0 =C2=A0124 TiB=C2=A0 =C2=A0 =C2=A0 124 TiB=C2=A0 =C2=A0
> =C2=
> =A0 =C2=A0 =C2=A074.95 <br>
> <br>
> POOLS:<br>
> =C2=A0 =C2=A0 POOL=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0
> I=
> D=C2=A0 =C2=A0 =C2=A0STORED=C2=A0 =C2=A0 =C2=A0OBJECTS=C2=A0 =C2=A0
> =C2=A0U=
> SED=C2=A0 =C2=A0 =C2=A0 =C2=A0 %USED=C2=A0 =C2=A0 =C2=A0MAX AVAIL <br>
> =C2=A0 =C2=A0 cephfs_data=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 1=C2=A0 =C2=A0
> =
> =C2=A075 TiB=C2=A0 =C2=A0 =C2=A0 49.31M=C2=A0 =C2=A0 =C2=A0122 TiB=C2=A0 =
> =C2=A0 =C2=A087.16=C2=A0 =C2=A0 =C2=A0 =C2=A0 12 TiB <br>
> <br>
> <br>
> It seems that the overcommitment is wrongly calculated. Isn&#39;t the RATE
> =
> already used to calculate the SIZE?<br>
> <br>
> It seems USED(df) =3D SIZE(autoscale-status)<br>
> Isn&#39;t the RATE already taken into account here?<br>
> <br>
> Could someone please explain the numbers to me?<br>
> <br>
> <br>
> Thanks!<br>
> Lars<br>
> <br>
> Fri, 25 Oct 2019 07:42:58 +0200<br>
> Lars T=C3=A4uber &lt;<a href=3D"mailto:taeuber@xxxxxxx";
> target=3D"_blank">t=
> aeuber@xxxxxxx</a>&gt; =3D=3D&gt; Nathan Fish &lt;<a href=3D"mailto:
> lordcir=
> th@xxxxxxxxx" target=3D"_blank">lordcirth@xxxxxxxxx</a>&gt; :<br>
> &gt; Hi Nathan,<br>
> &gt; <br>
> &gt; Thu, 24 Oct 2019 10:59:55 -0400<br>
> &gt; Nathan Fish &lt;<a href=3D"mailto:lordcirth@xxxxxxxxx";
> target=3D"_blan=
> k">lordcirth@xxxxxxxxx</a>&gt; =3D=3D&gt; Lars T=C3=A4uber &lt;<a
> href=3D"m=
> ailto:taeuber@xxxxxxx" target=3D"_blank">taeuber@xxxxxxx</a>&gt; :<br>
> &gt; &gt; Ah, I see! The BIAS reflects the number of placement groups it
> sh=
> ould<br>
> &gt; &gt; create. Since cephfs metadata pools are usually very small, but
> h=
> ave<br>
> &gt; &gt; many objects and high IO, the autoscaler gives them 4x the
> number=
>  of<br>
> &gt; &gt; placement groups that it would normally give for that amount of
> d=
> ata.<br>
> &gt; &gt;=C2=A0 =C2=A0<br>
> &gt; ah ok, I understand.<br>
> &gt; <br>
> &gt; &gt; So, your cephfs_data is set to a ratio of 0.9, and
> cephfs_metadat=
> a to<br>
> &gt; &gt; 0.3? Are the two pools using entirely different device classes,
> s=
> o<br>
> &gt; &gt; they are not sharing space?=C2=A0 <br>
> &gt; <br>
> &gt; Yes, the metadata is on SSDs and the data on HDDs.<br>
> &gt; <br>
> &gt; &gt; Anyway, I see that your overcommit is only &quot;1.031x&quot;.
> So=
>  if you set<br>
> &gt; &gt; cephfs_data to 0.85, it should go away.=C2=A0 <br>
> &gt; <br>
> &gt; This is not the case. I set the target_ratio to 0.7 and get this:<br>
> &gt; <br>
> &gt;=C2=A0 POOL=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0SIZE=
> =C2=A0 TARGET SIZE=C2=A0 RATE=C2=A0 RAW CAPACITY=C2=A0 =C2=A0RATIO=C2=A0
> TA=
> RGET RATIO=C2=A0 BIAS=C2=A0 PG_NUM=C2=A0 NEW PG_NUM=C2=A0 AUTOSCALE <br>
> &gt;=C2=A0 cephfs_metadata=C2=A0 15736M=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
> =C2=A0 =C2=A0 =C2=A0 3.0=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A02454G=C2=A0
> 0.018=
> 8=C2=A0 =C2=A0 =C2=A0 =C2=A0 0.3000=C2=A0 =C2=A04.0=C2=A0 =C2=A0 =C2=A0256=
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 on=C2=A0 =C2=A0 =C2=A0
> =C2=
> =A0 <br>
> &gt;=C2=A0 cephfs_data=C2=A0 =C2=A0 =C2=A0 122.2T=C2=A0 =C2=A0 =C2=A0 =C2=
> =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 1.5=C2=A0 =C2=A0 =C2=A0 =C2=A0
> 165.4T=C2=A0=
>  1.1085=C2=A0 =C2=A0 =C2=A0 =C2=A0 0.7000=C2=A0 =C2=A01.0=C2=A0 =C2=A0
> 1024=
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 on=C2=A0 =C2=A0 =C2=A0
> =C2=
> =A0 <br>
> &gt; <br>
> &gt; The ratio seems to have nothing to do with the target_ratio but the
> SI=
> ZE and the RAW_CAPACITY.<br>
> &gt; Because the pool is still getting more data the SIZE increases and
> the=
> refore the RATIO increases.<br>
> &gt; The RATIO seems to be calculated by this formula<br>
> &gt; RATIO =3D SIZE * RATE / RAW_CAPACITY.<br>
> &gt; <br>
> &gt; This is what I don&#39;t understand. The data in the cephfs_data pool
> =
> seems to need more space than the raw capacity of the cluster provides.
> Hen=
> ce the situation is called &quot;overcommitment&quot;.<br>
> &gt; <br>
> &gt; But why is this only the case when the autoscaler is active?<br>
> &gt; <br>
> &gt; Thanks<br>
> &gt; Lars<br>
> &gt; <br>
> &gt; &gt; <br>
> &gt; &gt; On Thu, Oct 24, 2019 at 10:09 AM Lars T=C3=A4uber &lt;<a
> href=3D"=
> mailto:taeuber@xxxxxxx"; target=3D"_blank">taeuber@xxxxxxx</a>&gt;
> wrote:=C2=
> =A0 <br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; Thanks Nathan for your answer,<br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; but I set the the Target Ratio to 0.9. It is the
> cephfs_data=
>  pool that makes the troubles.<br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; The 4.0 is the BIAS from the cephfs_metadata pool. This
> &quo=
> t;BIAS&quot; is not explained on the page linked below. So I don&#39;t
> know=
>  its meaning.<br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; How can be a pool overcommited when it is the only pool on
> a=
>  set of OSDs?<br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; Best regards,<br>
> &gt; &gt; &gt; Lars<br>
> &gt; &gt; &gt;<br>
> &gt; &gt; &gt; Thu, 24 Oct 2019 09:39:51 -0400<br>
> &gt; &gt; &gt; Nathan Fish &lt;<a href=3D"mailto:lordcirth@xxxxxxxxx";
> targe=
> t=3D"_blank">lordcirth@xxxxxxxxx</a>&gt; =3D=3D&gt; Lars T=C3=A4uber
> &lt;<a=
>  href=3D"mailto:taeuber@xxxxxxx"; target=3D"_blank">taeuber@xxxxxxx</a>&gt;
> =
> :=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; The formatting is mangled on my phone, but if I am
> read=
> ing it correctly,<br>
> &gt; &gt; &gt; &gt; you have set Target Ratio to 4.0. This means you have
> t=
> old the balancer<br>
> &gt; &gt; &gt; &gt; that this pool will occupy 4x the space of your whole
> c=
> luster, and to<br>
> &gt; &gt; &gt; &gt; optimize accordingly. This is naturally a problem.
> Sett=
> ing it to 0 will<br>
> &gt; &gt; &gt; &gt; clear the setting and allow the autobalancer to
> work.<b=
> r>
> &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; On Thu., Oct. 24, 2019, 5:18 a.m. Lars T=C3=A4uber,
> &lt=
> ;<a href=3D"mailto:taeuber@xxxxxxx"; target=3D"_blank">taeuber@xxxxxxx
> </a>&g=
> t; wrote:<br>
> &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; This question is answered here:<br>
> &gt; &gt; &gt; &gt; &gt; <a href=3D"
> https://ceph.io/rados/new-in-nautilus-p=
> g-merging-and-autotuning/
> <https://ceph.io/rados/new-in-nautilus-p=g-merging-and-autotuning/>"
> rel=3D"noreferrer" target=3D"_blank">https://cep=
> h.io/rados/new-in-nautilus-pg-merging-and-autotuning/</a><br>
> &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; But it tells me that there is more data stored in
> =
> the pool than the raw<br>
> &gt; &gt; &gt; &gt; &gt; capacity provides (taking the replication factor
> R=
> ATE into account) hence<br>
> &gt; &gt; &gt; &gt; &gt; the RATIO being above 1.0 .<br>
> &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; How comes this is the case? - Data is stored
> outsi=
> de of the pool?<br>
> &gt; &gt; &gt; &gt; &gt; How comes this is only the case when the
> autoscale=
> r is active?<br>
> &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; Thanks<br>
> &gt; &gt; &gt; &gt; &gt; Lars<br>
> &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; Thu, 24 Oct 2019 10:36:52 +0200<br>
> &gt; &gt; &gt; &gt; &gt; Lars T=C3=A4uber &lt;<a href=3D"mailto:
> taeuber@bba=
> w.de" target=3D"_blank">taeuber@xxxxxxx</a>&gt; =3D=3D&gt; <a
> href=3D"mailt=
> o:ceph-users@xxxxxxx" target=3D"_blank">ceph-users@xxxxxxx</a> :=C2=A0
> =C2=
> =A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; My question requires too complex an
> answer.<b=
> r>
> &gt; &gt; &gt; &gt; &gt; &gt; So let me ask a simple question:<br>
> &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; What does the SIZE of &quot;osd pool
> autoscal=
> e-status&quot; tell/mean/comes from?<br>
> &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; Thanks<br>
> &gt; &gt; &gt; &gt; &gt; &gt; Lars<br>
> &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; Wed, 23 Oct 2019 14:28:10 +0200<br>
> &gt; &gt; &gt; &gt; &gt; &gt; Lars T=C3=A4uber &lt;<a href=3D"mailto:
> taeube=
> r@xxxxxxx" target=3D"_blank">taeuber@xxxxxxx</a>&gt; =3D=3D&gt; <a
> href=3D"=
> mailto:ceph-users@xxxxxxx"; target=3D"_blank">ceph-users@xxxxxxx</a>
> :=C2=A0=
>  =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; Hello everybody!<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; What does this mean?<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 =C2=A0health:
> HEALTH_WARN<b=
> r>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0
> =C2=A0=
>  =C2=A01 subtrees have overcommitted pool target_size_bytes<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0
> =C2=A0=
>  =C2=A01 subtrees have overcommitted pool target_size_ratio<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; and what does it have to do with the
> aut=
> oscaler?<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; When I deactivate the autoscaler the
> war=
> ning goes away.<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; $ ceph osd pool autoscale-status<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 POOL=C2=A0 =C2=A0 =C2=A0 =C2=A0 =
> =C2=A0 =C2=A0 =C2=A0 =C2=A0SIZE=C2=A0 TARGET SIZE=C2=A0 RATE=C2=A0 RAW
> CAPA=
> CITY=C2=A0 =C2=A0RATIO=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; TARGET RATIO=C2=A0 BIAS=C2=A0 PG_NUM=C2=A0 NEW
> PG_=
> NUM=C2=A0 AUTOSCALE=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 cephfs_metadata=C2=A0
> 15106M=C2=A0=
>  =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 3.0=C2=A0 =C2=A0 =C2=A0 =
> =C2=A0 =C2=A02454G=C2=A0 0.0180=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A00.3000=C2=A0 =C2=A04.0=C2=A0 =C2=A0 =
> =C2=A0256=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 on=C2=A0 =C2=A0
> <=
> br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 cephfs_data=C2=A0 =C2=A0 =C2=A0
> 11=
> 3.6T=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 1.5=C2=A0 =C2=
> =A0 =C2=A0 =C2=A0 165.4T=C2=A0 1.0306=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A00.9000=C2=A0 =C2=A01.0=C2=A0 =C2=A0 =
> =C2=A0512=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 on=C2=A0 =C2=A0
> <=
> br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; $ ceph health detail<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; HEALTH_WARN 1 subtrees have
> overcommitte=
> d pool target_size_bytes; 1=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; subtrees have overcommitted pool
> target_size_ratio=
> =C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1
> s=
> ubtrees have overcommitted=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; pool target_size_bytes=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 =C2=A0Pools
> [&#39;cephfs_da=
> ta&#39;] overcommit available storage by 1.031x due=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; to target_size_bytes=C2=A0 =C2=A0 0=C2=A0 on
> pools=
>  []=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1
> s=
> ubtrees have overcommitted=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; pool target_size_ratio=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;=C2=A0 =C2=A0 =C2=A0Pools
> [&#39;cephfs_da=
> ta&#39;] overcommit available storage by 1.031x due=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; to target_size_ratio 0.900 on pools
> [&#39;cephfs_d=
> ata&#39;]=C2=A0 =C2=A0 <br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; Thanks<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; Lars<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt;
> ________________________________________=
> _______<br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; ceph-users mailing list -- <a
> href=3D"ma=
> ilto:ceph-users@xxxxxxx" target=3D"_blank">ceph-users@xxxxxxx</a><br>
> &gt; &gt; &gt; &gt; &gt; &gt; &gt; To unsubscribe send an email to <a href=
> =3D"mailto:ceph-users-leave@xxxxxxx";
> target=3D"_blank">ceph-users-leave@cep=
> h.io</a>=C2=A0 =C2=A0 <br>
> <br>
> ------------------------------<br>
> <br>
> Date: Mon, 28 Oct 2019 14:18:01 +0100<br>
> From: &quot;EDH - Manuel Rios Fernandez&quot; &lt;<a href=3D"mailto:
> mriosfe=
> r@xxxxxxxxxxxxxxxx" target=3D"_blank">mriosfer@xxxxxxxxxxxxxxxx
> </a>&gt;<br>
> Subject:  After delete 8.5M Objects in a bucket still 500K<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 left<br>
> To: &lt;<a href=3D"mailto:ceph-users@xxxxxxx";
> target=3D"_blank">ceph-users@=
> ceph.io</a>&gt;<br>
> Message-ID: &lt;02a201d58d92$1fe85880$5fb90980$@<a href=3D"
> http://easydatah=
> ost.com" rel=3D"noreferrer" target=3D"_blank">easydatahost.com</a>&gt;<br>
> Content-Type: multipart/alternative;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0
> boundary=3D&quot;----=3D_NextPart_000_02A3_01D5=
> 8D9A.81B17B70&quot;<br>
> <br>
> This is a multipart message in MIME format.<br>
> <br>
> ------=3D_NextPart_000_02A3_01D58D9A.81B17B70<br>
> Content-Type: text/plain;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 charset=3D&quot;us-ascii&quot;<br>
> Content-Transfer-Encoding: 7bit<br>
> <br>
> Hi Ceph&#39;s!<br>
> <br>
> <br>
> <br>
> We started deteling a bucket several days ago. Total size 47TB / 8.5M<br>
> objects.<br>
> <br>
> <br>
> <br>
> Now we see the cli bucket rm stucked and by console drop this messages.<br>
> <br>
> <br>
> <br>
> [root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700=C2=A0 0<br>
> abort_bucket_multiparts WARNING : aborted 1000 incomplete multipart
> uploads=
> <br>
> <br>
> 2019-10-28 13:56:24.021 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 2000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 13:57:04.726 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 3000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 13:57:45.424 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 4000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 13:58:25.905 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 5000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 13:59:06.898 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 6000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 13:59:47.829 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 7000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:00:42.102 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 8000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:01:23.829 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 9000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:02:06.028 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 10000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:02:48.648 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 11000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:03:29.807 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 12000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:04:11.180 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 13000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:04:52.396 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 14000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:05:33.050 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 15000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:06:13.652 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 16000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:06:54.806 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 17000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:07:35.867 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 18000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:08:16.886 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 19000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:08:57.711 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 20000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:09:38.032 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 21000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:10:18.377 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 22000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:10:58.833 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 23000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:11:39.078 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 24000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:12:24.731 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 25000 incomplete multipart uploads<br>
> <br>
> 2019-10-28 14:13:12.176 7f0dd92c9700=C2=A0 0 abort_bucket_multiparts
> WARNIN=
> G :<br>
> aborted 26000 incomplete multipart uploads<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Bucket stats show 500K objects left. Looks like bucket rm is trying to
> abor=
> t<br>
> all incompleted mutipart. But in bucket stats this operation is not<br>
> reflected removing objects from stats.<br>
> <br>
> <br>
> <br>
> May be wait to get up 500K or it&#39;s a bug?<br>
> <br>
> <br>
> <br>
> Regards<br>
> <br>
> Manuel<br>
> <br>
> <br>
> <br>
> <br>
> ------=3D_NextPart_000_02A3_01D58D9A.81B17B70<br>
> Content-Type: text/html;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 charset=3D&quot;us-ascii&quot;<br>
> Content-Transfer-Encoding: quoted-printable<br>
> <br>
> &lt;html xmlns:v=3D3D&quot;urn:schemas-microsoft-com:vml&quot; =3D<br>
> xmlns:o=3D3D&quot;urn:schemas-microsoft-com:office:office&quot; =3D<br>
> xmlns:w=3D3D&quot;urn:schemas-microsoft-com:office:word&quot; =3D<br>
> xmlns:m=3D3D&quot;<a href=3D"
> http://schemas.microsoft.com/office/2004/12/om=
> ml" rel=3D"noreferrer" target=3D"_blank">
> http://schemas.microsoft.com/offic=
> e/2004/12/omml <http://schemas.microsoft.com/offic=e/2004/12/omml></a>&quot;
> =3D<br>
> xmlns=3D3D&quot;<a href=3D"http://www.w3.org/TR/REC-html40";
> rel=3D"noreferr=
> er" target=3D"_blank">http://www.w3.org/TR/REC-html40
> </a>&quot;&gt;&lt;head=
> &gt;&lt;meta =3D<br>
> http-equiv=3D3DContent-Type content=3D3D&quot;text/html; =3D<br>
> charset=3D3Dus-ascii&quot;&gt;&lt;meta name=3D3DGenerator
> content=3D3D&quot=
> ;Microsoft Word 15 =3D<br>
> (filtered medium)&quot;&gt;&lt;style&gt;&lt;!--<br>
> /* Font Definitions */<br>
> @font-face<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {font-family:&quot;Cambria Math&quot;;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 panose-1:2 4 5 3 5 4 6 3 2 4;}<br>
> @font-face<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {font-family:Calibri;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 panose-1:2 15 5 2 2 2 4 3 2 4;}<br>
> /* Style Definitions */<br>
> p.MsoNormal, li.MsoNormal, div.MsoNormal<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {margin:0cm;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 margin-bottom:.0001pt;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 font-size:11.0pt;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 font-family:&quot;Calibri&quot;,sans-serif;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 mso-fareast-language:EN-US;}<br>
> a:link, span.MsoHyperlink<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {mso-style-priority:99;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 color:#0563C1;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 text-decoration:underline;}<br>
> a:visited, span.MsoHyperlinkFollowed<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {mso-style-priority:99;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 color:#954F72;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 text-decoration:underline;}<br>
> span.EstiloCorreo17<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {mso-style-type:personal-compose;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 font-family:&quot;Calibri&quot;,sans-serif;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 color:windowtext;}<br>
> .MsoChpDefault<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {mso-style-type:export-only;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 font-family:&quot;Calibri&quot;,sans-serif;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 mso-fareast-language:EN-US;}<br>
> @page WordSection1<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {size:612.0pt 792.0pt;<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 margin:70.85pt 3.0cm 70.85pt 3.0cm;}<br>
> div.WordSection1<br>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 {page:WordSection1;}<br>
> --&gt;&lt;/style&gt;&lt;!--[if gte mso 9]&gt;&lt;xml&gt;<br>
> &lt;o:shapedefaults v:ext=3D3D&quot;edit&quot;
> spidmax=3D3D&quot;1026&quot;=
>  /&gt;<br>
> &lt;/xml&gt;&lt;![endif]--&gt;&lt;!--[if gte mso 9]&gt;&lt;xml&gt;<br>
> &lt;o:shapelayout v:ext=3D3D&quot;edit&quot;&gt;<br>
> &lt;o:idmap v:ext=3D3D&quot;edit&quot; data=3D3D&quot;1&quot; /&gt;<br>
> &lt;/o:shapelayout&gt;&lt;/xml&gt;&lt;![endif]--&gt;&lt;/head&gt;&lt;body
> l=
> ang=3D3DES =3D<br>
> link=3D3D&quot;#0563C1&quot; vlink=3D3D&quot;#954F72&quot;&gt;&lt;div
> class=
> =3D3DWordSection1&gt;&lt;p =3D<br>
> class=3D3DMsoNormal&gt;&lt;span =3D<br>
> Hi Cephers!
>
> We started deleting a bucket several days ago. Total size: 47 TB / 8.5M
> objects.
>
> Now the CLI "bucket rm" looks stuck, and the console keeps printing
> these messages:
>
> [root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 1000 incomplete multipart uploads
> 2019-10-28 13:56:24.021 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 2000 incomplete multipart uploads
> 2019-10-28 13:57:04.726 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 3000 incomplete multipart uploads
> 2019-10-28 13:57:45.424 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 4000 incomplete multipart uploads
> 2019-10-28 13:58:25.905 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 5000 incomplete multipart uploads
> 2019-10-28 13:59:06.898 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 6000 incomplete multipart uploads
> 2019-10-28 13:59:47.829 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 7000 incomplete multipart uploads
> 2019-10-28 14:00:42.102 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 8000 incomplete multipart uploads
> 2019-10-28 14:01:23.829 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 9000 incomplete multipart uploads
> 2019-10-28 14:02:06.028 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 10000 incomplete multipart uploads
> 2019-10-28 14:02:48.648 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 11000 incomplete multipart uploads
> 2019-10-28 14:03:29.807 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 12000 incomplete multipart uploads
> 2019-10-28 14:04:11.180 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 13000 incomplete multipart uploads
> 2019-10-28 14:04:52.396 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 14000 incomplete multipart uploads
> 2019-10-28 14:05:33.050 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 15000 incomplete multipart uploads
> 2019-10-28 14:06:13.652 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 16000 incomplete multipart uploads
> 2019-10-28 14:06:54.806 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 17000 incomplete multipart uploads
> 2019-10-28 14:07:35.867 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 18000 incomplete multipart uploads
> 2019-10-28 14:08:16.886 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 19000 incomplete multipart uploads
> 2019-10-28 14:08:57.711 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 20000 incomplete multipart uploads
> 2019-10-28 14:09:38.032 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 21000 incomplete multipart uploads
> 2019-10-28 14:10:18.377 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 22000 incomplete multipart uploads
> 2019-10-28 14:10:58.833 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 23000 incomplete multipart uploads
> 2019-10-28 14:11:39.078 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 24000 incomplete multipart uploads
> 2019-10-28 14:12:24.731 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 25000 incomplete multipart uploads
> 2019-10-28 14:13:12.176 7f0dd92c9700  0 abort_bucket_multiparts WARNING : aborted 26000 incomplete multipart uploads
>
> Bucket stats still show 500K objects left. It looks like "bucket rm" is
> aborting all the incomplete multipart uploads first, but that work is
> not reflected in the bucket stats: the object count does not go down.
>
> Should we just wait for the remaining 500K objects, or is this a bug?
>
> Regards,
> Manuel
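>
> For anyone who hits the same state, here is a minimal client-side
> sketch of cleaning up the incomplete multipart uploads before retrying
> the bucket removal. This is an illustration, not tooling from the
> thread: the endpoint, credentials and bucket name are placeholders, and
> it assumes plain boto3 talking to the RGW S3 API.
>
>     import boto3
>
>     # Placeholders -- substitute a real RGW endpoint and credentials.
>     s3 = boto3.client(
>         "s3",
>         endpoint_url="http://ceph-rgw03:8080",
>         aws_access_key_id="ACCESS_KEY",
>         aws_secret_access_key="SECRET_KEY",
>     )
>     bucket = "my-bucket"  # placeholder bucket name
>
>     # Page through every in-progress multipart upload and abort it.
>     paginator = s3.get_paginator("list_multipart_uploads")
>     for page in paginator.paginate(Bucket=bucket):
>         for upload in page.get("Uploads", []):
>             s3.abort_multipart_upload(
>                 Bucket=bucket,
>                 Key=upload["Key"],
>                 UploadId=upload["UploadId"],
>             )
>             print("aborted", upload["Key"], upload["UploadId"])
>
> Progress on the remaining objects can then be watched from the RGW side
> with "radosgw-admin bucket stats --bucket=<name>".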
>
> ------------------------------
>
> Date: Mon, 28 Oct 2019 10:48:44 -0400
> From: Casey Bodley <cbodley@xxxxxxxxxx>
> Subject:  Re: Static website hosting with RGW
> To: ceph-users@xxxxxxx
> Message-ID: <20834361-445e-1ee5-433b-dd4792f90608@xxxxxxxxxx>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
>
> On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
> > Dear Cephers,
> >
> > I have a question concerning static websites with RGW.
> > To my understanding, it is best to run >=1 RGW client for "classic" S3
> > and, in addition, >=1 RGW client for website serving (potentially with
> > HAProxy or its friends in front), so that requests for the two
> > protocols cannot get mixed up.
> >
> > I'd prefer to avoid "*.example.com" entries in DNS if possible.
> > So my current setup has these settings for the "web" RGW client:
> >   rgw_enable_static_website = true
> >   rgw_enable_apis = s3website
> >   rgw_dns_s3website_name = some_value_unused_when_A_records_are_used_pointing_to_the_IP_but_it_needs_to_be_set
> > and I create a simple A record for each website pointing to the IP of
> > this "web" RGW node.
> >
> > I can easily upload content for those websites to the other RGW
> > instances, which serve S3, so the S3 and s3website APIs are cleanly
> > separated in separate instances.
> >
> > However, one issue remains: how do I run
> >   s3cmd ws-create
> > on each website
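>
> The archived message is truncated at this point. As a hedged
> illustration only (not Casey's reply): what "s3cmd ws-create" does for
> a bucket can also be expressed through the generic S3 API, pointed at
> the S3-serving RGW instance rather than the s3website one. The
> endpoint, credentials and bucket name below are placeholders.
>
>     import boto3
>
>     # Placeholders -- substitute the S3-serving RGW endpoint and
>     # credentials (not the s3website-only instance).
>     s3 = boto3.client(
>         "s3",
>         endpoint_url="http://rgw-s3.example.com:8080",
>         aws_access_key_id="ACCESS_KEY",
>         aws_secret_access_key="SECRET_KEY",
>     )
>
>     # Enable static-website serving for one website bucket.
>     s3.put_bucket_website(
>         Bucket="www-example-site",  # placeholder website bucket
>         WebsiteConfiguration={
>             "IndexDocument": {"Suffix": "index.html"},
>             "ErrorDocument": {"Key": "error.html"},
>         },
>     )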
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



