I'm quite sure that this could result in the impact you're seeing. To
confirm that suspicion you could stop deleting and wait a couple of
days to see whether the usage stabilizes. If it does, try deleting fewer
files at once and see how far you can tweak it. That would be my
approach.
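If you don't want to pause the deletions completely, you could also just
throttle them. A rough sketch (the path and the batch size are only
placeholders, of course):

find /mnt/cephfs/old-data -type f -print0 | xargs -0 -n 100 sh -c 'rm -f -- "$@"; sleep 5' _

That removes 100 files per batch and pauses in between, which should give
the MDS and the metadata pool some room to trim between batches.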
Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
> We have been using snapshots for a long time.
> The only change in usage is that we are currently deleting many small files
> from the system. Because this is slow (~150 requests/s), it has been running
> for the last few weeks. Could such a load result in a problem with the MDS?
>
> I have to ask for permission to order more drives. This could take some
> time.
>
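> In the meantime I can log the metadata pool usage once an hour to see whether
> it tracks the deletions, something like:
>
> while true; do date; ceph df | grep -w metadata; sleep 3600; done
>
> (Just an ad-hoc loop, nothing fancy.)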
> [image: ariadne.ai Logo] Lars Köppel
> Developer
> Email: lars.koeppel@xxxxxxxxxx
> Phone: +49 6221 5993580 <+4962215993580>
> ariadne.ai (Germany) GmbH
> Häusserstraße 3, 69115 Heidelberg
> Amtsgericht Mannheim, HRB 744040
> Geschäftsführer: Dr. Fabian Svara
> https://ariadne.ai
>
>
> On Thu, Jun 13, 2024 at 12:55 PM Eugen Block <eblock@xxxxxx> wrote:
>
>> Downgrading isn't supported, I don't think that would be a good idea.
>> I also don't see anything obvious standing out in the pg output. Any
>> chance you can add more OSDs to the metadata pool to see if it stops
>> at some point? Did the cluster usage change in any way? For example
>> cephfs snapshots which haven't been used before or something like that?
>>
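>> If you go that route: the metadata pool appears to live on the ssd device
>> class, so a new OSD just has to end up in that class and in the pool's
>> CRUSH rule. Roughly (cephadm assumed, rule name and device path are
>> placeholders):
>>
>> ceph osd pool get metadata crush_rule
>> ceph osd crush rule dump <rule-name>
>> ceph orch daemon add osd <host>:/dev/nvmeXn1
>>
>> The new OSD should then start taking PGs from the metadata pool on its own.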
>> Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
>>
>> > I updated from 17.2.6 to 17.2.7 and a few hours later to 18.2.2.
>> > Would it be an option to go back to 17.2.6?
>> >
>> >
>> > [image: ariadne.ai Logo] Lars Köppel
>> > Developer
>> > Email: lars.koeppel@xxxxxxxxxx
>> > Phone: +49 6221 5993580 <+4962215993580>
>> > ariadne.ai (Germany) GmbH
>> > Häusserstraße 3, 69115 Heidelberg
>> > Amtsgericht Mannheim, HRB 744040
>> > Geschäftsführer: Dr. Fabian Svara
>> > https://ariadne.ai
>> >
>> >
>> > On Wed, Jun 12, 2024 at 5:30 PM Eugen Block <eblock@xxxxxx> wrote:
>> >
>> >> Which version did you upgrade from to 18.2.2?
>> >> I can’t pin it down to a specific issue, but somewhere in the back of
>> >> my mind is something related to a new omap format. But I’m really not
>> >> sure at all.
>> >>
>> >> Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
>> >>
>> >> > I am happy to help you with as much information as possible; I probably
>> >> > just don't know where to look for it.
>> >> > Below is the requested information. The cluster is rebuilding the
>> >> > zapped OSD at the moment. This will probably take the next few days.
>> >> >
>> >> >
>> >> > sudo ceph pg ls-by-pool metadata
>> >> > PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES
OMAP_BYTES*
>> >> > OMAP_KEYS* LOG LOG_DUPS STATE
>> >> > SINCE VERSION REPORTED UP ACTING
>> >> > SCRUB_STAMP DEEP_SCRUB_STAMP
>> >> > LAST_SCRUB_DURATION SCRUB_SCHEDULING
>> >> > 10.0 5217325 4994695 0 0 4194304
5880891340
>> >> > 9393865 1885 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'180849582 79875:391519635 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T09:08:09.829362+0000 2024-05-28T05:52:59.321589+0000
>> >> > 627 periodic scrub scheduled @
2024-06-17T08:21:31.808348+0000
>> >> > 10.1 5214785 5193424 0 0 0
5843682713
>> >> > 9410150 1912 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180914288 79875:342746928 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-01T15:56:28.927288+0000 2024-05-27T03:31:37.682966+0000
>> >> > 966 queued for scrub
>> >> > 10.2 5218432 5187168 0 0 0
6402011266
>> >> > 9812513 1874 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180970531 79875:341340204 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T13:40:58.994256+0000 2024-06-11T13:40:58.994256+0000
>> >> > 1942 periodic scrub scheduled @
2024-06-17T06:07:15.329675+0000
>> >> > 10.3 5217413 5217413 0 0 8388788
5766005023
>> >> > 9271787 1923 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181012233 79875:388295881 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T00:35:56.965547+0000 2024-05-23T19:54:56.121729+0000
>> >> > 492 periodic scrub scheduled @
2024-06-18T06:39:31.103864+0000
>> >> > 10.4 5220069 5220069 0 0 12583466
6027548724
>> >> > 9537290 1959 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181576075 79875:405295868 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T17:47:22.923514+0000 2024-05-31T02:06:55.339574+0000
>> >> > 581 periodic scrub scheduled @
2024-06-17T00:59:37.214420+0000
>> >> > 10.5 5216162 5211999 0 0 4194304
5941347251
>> >> > 9542764 1930 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180455793 79875:338418517 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T22:50:16.170708+0000 2024-05-30T23:49:54.316379+0000
>> >> > 528 periodic scrub scheduled @
2024-06-17T04:39:25.905185+0000
>> >> > 10.6 5216100 4980459 0 0 4521984
6428088514
>> >> > 9850762 1911 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'184045876 79875:396809795 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T22:24:05.102716+0000 2024-06-11T22:24:05.102716+0000
>> >> > 1082 periodic scrub scheduled @
2024-06-17T07:58:44.289885+0000
>> >> > 10.7 5218232 5218232 0 0 4194304
6377065363
>> >> > 9849360 1919 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182672562 79875:342449062 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-11T06:22:15.689422+0000 2024-06-11T06:22:15.689422+0000
>> >> > 8225 periodic scrub scheduled @
2024-06-17T13:05:59.225052+0000
>> >> > 10.8 5219620 5182816 0 0 0
6167304290
>> >> > 9691796 1896 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179628377 79875:378022884 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T22:06:01.386763+0000 2024-06-11T22:06:01.386763+0000
>> >> > 1286 periodic scrub scheduled @
2024-06-17T07:54:54.133093+0000
>> >> > 10.9 5219448 5164591 0 0 8388698
5796048346
>> >> > 9338312 1868 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181739392 79875:387412389 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T05:21:00.586747+0000 2024-05-26T11:10:59.780673+0000
>> >> > 539 periodic scrub scheduled @
2024-06-18T15:32:59.155092+0000
>> >> > 10.a 5219861 5163635 0 0 12582912
5841839055
>> >> > 9387200 1916 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180205688 79875:379381294 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T12:35:05.571200+0000 2024-05-22T11:07:16.041773+0000
>> >> > 1093 periodic deep scrub scheduled @
>> >> 2024-06-17T05:21:40.136463+0000
>> >> > 10.b 5217949 5217949 0 0 16777216
5935863260
>> >> > 9462127 1881 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181655745 79875:343806807 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T22:41:28.976920+0000 2024-05-26T08:43:29.217457+0000
>> >> > 520 periodic scrub scheduled @
2024-06-17T17:44:32.764093+0000
>> >> > 10.c 5221697 5217118 0 0 4194304
6015217841
>> >> > 9574445 1928 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180892826 79875:341490398 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T09:20:58.443473+0000 2024-05-30T00:13:50.306507+0000
>> >> > 768 periodic scrub scheduled @
2024-06-16T19:41:21.977436+0000
>> >> > 10.d 5217727 4908764 0 0 0
5825598519
>> >> > 9349877 1930 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'179050347 79875:387455993 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-02T07:27:27.873631+0000 2024-05-20T03:58:30.225170+0000
>> >> > 952 queued for deep scrub
>> >> > 10.e 5215040 5210572 0 0 4194304
5790634469
>> >> > 9327651 1905 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180049044 79875:377196119 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T05:03:08.521423+0000 2024-05-22T13:47:46.753131+0000
>> >> > 527 periodic scrub scheduled @
2024-06-17T17:30:36.079456+0000
>> >> > 10.f 5217202 4885952 0 0 8388630
6005274626
>> >> > 9589178 1917 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'181953069 79875:396402453 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T12:20:49.661741+0000 2024-06-11T12:20:49.661741+0000
>> >> > 3949 periodic scrub scheduled @
2024-06-17T18:37:10.974112+0000
>> >> > 10.10 5214985 5157391 0 0 4194394
6275024632
>> >> > 9721505 1922 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181801708 79875:395055895 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T01:49:47.831191+0000 2024-06-11T01:49:47.831191+0000
>> >> > 9579 periodic scrub scheduled @
2024-06-16T11:40:28.305905+0000
>> >> > 10.11 5219236 4983134 0 0 4194304
5787580073
>> >> > 9316893 1901 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'181011816 79875:343488767 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-03T00:06:14.862209+0000 2024-05-27T00:05:56.468361+0000
>> >> > 960 queued for scrub
>> >> > 10.12 5214103 5182770 0 0 90
5970750188
>> >> > 9505613 1891 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180210613 79875:341042616 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-12T06:39:48.252419+0000 2024-05-28T01:32:03.381577+0000
>> >> > 527 periodic scrub scheduled @
2024-06-18T11:23:33.994438+0000
>> >> > 10.13 5214580 5214580 0 0 8388642
5793432968
>> >> > 9335940 1957 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182394614 79875:393423214 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-10T17:52:04.690203+0000 2024-05-22T03:39:02.453336+0000
>> >> > 1151 periodic deep scrub scheduled @
>> >> 2024-06-16T00:28:37.682765+0000
>> >> > 10.14 5218591 5218591 0 0 5053325
6046268958
>> >> > 9582481 1915 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'184576050 79875:425213800 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-03T05:36:07.656965+0000 2024-05-28T22:46:16.814318+0000
>> >> > 985 queued for scrub
>> >> > 10.15 5215919 4907184 0 0 11939335
5752284246
>> >> > 9285870 1889 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'179713036 79875:337938973 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T14:06:50.138992+0000 2024-05-24T07:52:22.522455+0000
>> >> > 786 periodic scrub scheduled @
2024-06-17T13:00:48.869790+0000
>> >> > 10.16 5221091 5221091 0 0 8388608
5966308565
>> >> > 9487816 1916 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'183445984 79875:399413009 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-03T02:01:53.649417+0000 2024-06-03T02:01:53.649417+0000
>> >> > 6935 queued for scrub
>> >> > 10.17 5218069 5194732 0 0 4194304
6231638001
>> >> > 9735915 1890 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182390128 79875:340719915 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T06:31:01.523916+0000 2024-06-12T06:31:01.523916+0000
>> >> > 1272 periodic scrub scheduled @
2024-06-17T16:33:45.723673+0000
>> >> > 10.18 5214253 5159897 0 0 0
6128506469
>> >> > 9673261 1886 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179754208 79875:389150273 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T02:53:31.027154+0000 2024-06-12T02:53:31.027154+0000
>> >> > 1308 periodic scrub scheduled @
2024-06-17T22:19:41.359070+0000
>> >> > 10.19 5220131 5183342 0 0 0
5869775360
>> >> > 9460583 1953 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181750923 79875:393162989 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T19:21:24.106504+0000 2024-05-22T15:42:01.721091+0000
>> >> > 522 periodic scrub scheduled @
2024-06-17T20:09:55.173174+0000
>> >> > 10.1a 5217968 5186700 0 0 4194304
6183652625
>> >> > 9733847 1896 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181056240 79875:394241128 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-02T15:37:49.730965+0000 2024-06-02T15:37:49.730965+0000
>> >> > 7347 queued for scrub
>> >> > 10.1b 5221259 5161879 0 0 0
5989030303
>> >> > 9561035 1875 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181052305 79875:343144823 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-12T06:01:07.213894+0000 2024-06-02T00:23:51.063901+0000
>> >> > 543 periodic scrub scheduled @
2024-06-17T15:51:21.700275+0000
>> >> > 10.1c 5217034 5217034 0 0 554
5979861913
>> >> > 9504442 1875 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180913313 79875:341328275 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-01T20:02:47.025129+0000 2024-06-01T20:02:47.025129+0000
>> >> > 6568 queued for scrub
>> >> > 10.1d 5219691 4997027 0 0 12582912
6081321658
>> >> > 9599164 1918 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'179638837 79875:391808225 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T20:27:02.506715+0000 2024-05-31T22:58:06.578167+0000
>> >> > 517 periodic scrub scheduled @
2024-06-16T21:05:59.442200+0000
>> >> > 10.1e 5213369 5208776 0 0 4194304
6295629499
>> >> > 9765317 1941 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181145210 79875:391326487 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T07:49:28.432084+0000 2024-06-12T07:49:28.432084+0000
>> >> > 1990 periodic scrub scheduled @
2024-06-18T01:04:07.677468+0000
>> >> > 10.1f 5218399 5000094 0 0 8388608
5821063844
>> >> > 9421222 1896 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'180960296 79875:400635456 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T13:53:40.278050+0000 2024-05-22T19:39:02.345663+0000
>> >> > 761 periodic scrub scheduled @
2024-06-16T22:24:39.094932+0000
>> >> > 10.20 5216437 5185379 0 0 0
6280994122
>> >> > 9719574 1945 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181610117 79875:391874431 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T12:58:57.754648+0000 2024-06-11T12:58:57.754648+0000
>> >> > 2288 periodic scrub scheduled @
2024-06-17T23:33:35.937406+0000
>> >> > 10.21 5217349 5217349 0 0 8388698
6348931429
>> >> > 9822197 1867 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181724357 79875:343883762 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T18:28:34.044546+0000 2024-06-11T18:28:34.044546+0000
>> >> > 1817 periodic scrub scheduled @
2024-06-17T12:18:47.312213+0000
>> >> > 10.22 5215544 4943475 0 0 4194304
5982815232
>> >> > 9482159 1899 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'180230331 79875:340839853 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T15:21:21.994674+0000 2024-05-30T08:01:03.301650+0000
>> >> > 649 periodic scrub scheduled @
2024-06-17T01:33:20.228420+0000
>> >> > 10.23 5217758 5217758 0 0 146
5804086623
>> >> > 9336298 1920 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180948432 79875:388158699 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-03T05:19:32.311846+0000 2024-05-27T08:13:15.250058+0000
>> >> > 971 queued for scrub
>> >> > 10.24 5221505 5221505 0 0 0
6360995644
>> >> > 9816333 1925 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182198758 79875:406847098 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-12T04:00:44.315682+0000 2024-06-12T04:00:44.315682+0000
>> >> > 2046 periodic scrub scheduled @
2024-06-18T01:00:22.708069+0000
>> >> > 10.25 5214911 5214911 0 0 12583002
5726320657
>> >> > 9264352 1910 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180132024 79875:338167995 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T14:43:47.076906+0000 2024-05-23T12:16:34.271856+0000
>> >> > 715 periodic scrub scheduled @
2024-06-17T14:42:04.951046+0000
>> >> > 10.26 5217588 4973667 0 0 0
6040775867
>> >> > 9598406 1927 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'188144536 79875:400277134 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-02T06:55:09.016528+0000 2024-06-02T06:55:09.016528+0000
>> >> > 7047 queued for scrub
>> >> > 10.27 5218490 5218490 0 0 8388630
6265942186
>> >> > 9818746 1941 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182400328 79875:342052414 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T01:01:06.674323+0000 2024-06-12T01:01:06.674323+0000
>> >> > 1500 periodic scrub scheduled @
2024-06-18T09:04:18.289464+0000
>> >> > 10.28 5221153 5221153 0 0 0
6049095371
>> >> > 9597182 1913 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179438777 79875:378265375 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T04:09:58.896192+0000 2024-06-01T04:32:46.011974+0000
>> >> > 553 periodic scrub scheduled @
2024-06-18T00:53:21.846375+0000
>> >> > 10.29 5214066 5214066 0 0 4194304
5767555623
>> >> > 9316899 1937 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181568997 79875:387683159 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-10T22:48:57.757493+0000 2024-05-21T07:14:53.348341+0000
>> >> > 938 periodic deep scrub scheduled @
>> >> 2024-06-16T06:24:55.108695+0000
>> >> > 10.2a 5216314 4977811 0 0 4194304
6012567810
>> >> > 9563804 1951 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'181140734 79875:381017643 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T09:57:29.097708+0000 2024-05-30T03:47:22.798044+0000
>> >> > 1169 periodic scrub scheduled @
2024-06-16T10:08:48.053531+0000
>> >> > 10.2b 5219620 5219620 0 0 8388608
5829917912
>> >> > 9412426 1889 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180465782 79875:342263856 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T08:30:08.941782+0000 2024-05-24T22:00:29.535205+0000
>> >> > 636 periodic scrub scheduled @
2024-06-17T09:00:09.155675+0000
>> >> > 10.2c 5214087 5214087 0 0 0
5874182922
>> >> > 9375708 1871 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180616041 79875:341244736 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-10T23:10:08.184410+0000 2024-05-27T17:41:07.583255+0000
>> >> > 1268 periodic scrub scheduled @
2024-06-16T00:28:52.710385+0000
>> >> > 10.2d 5213968 4984822 0 0 4337824
5835754026
>> >> > 9417033 1947 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'178824607 79875:386427308 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T09:27:43.774331+0000 2024-05-27T05:33:45.293266+0000
>> >> > 494 periodic scrub scheduled @
2024-06-18T10:19:18.219366+0000
>> >> > 10.2e 5216263 4974456 0 0 12582912
6025714323
>> >> > 9553880 1940 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'180639631 79875:378284872 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T01:19:42.477063+0000 2024-05-31T08:42:41.293335+0000
>> >> > 526 periodic scrub scheduled @
2024-06-17T09:53:24.981953+0000
>> >> > 10.2f 5220057 5220057 0 0 13839831
5804027574
>> >> > 9370862 1882 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180961915 79875:394513378 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-11T19:58:05.145147+0000 2024-05-24T20:18:32.301831+0000
>> >> > 535 periodic scrub scheduled @
2024-06-18T03:16:19.773648+0000
>> >> > 10.30 5217090 4955469 0 0 12582934
6438521431
>> >> > 9924825 1869 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 2h 79875'181488488 79875:404940886 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T07:16:18.095439+0000 2024-06-12T07:16:18.095439+0000
>> >> > 2190 periodic scrub scheduled @
2024-06-18T10:42:55.574570+0000
>> >> > 10.31 5218176 5218176 0 0 4194304
6215016361
>> >> > 9752698 1897 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181622879 79875:343554151 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T09:00:15.941065+0000 2024-06-12T09:00:15.941065+0000
>> >> > 1806 periodic scrub scheduled @
2024-06-18T02:46:00.756777+0000
>> >> > 10.32 5223171 4704732 0 0 8388608
6205669256
>> >> > 9800320 1920 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180911263 79875:342754128 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T19:49:09.010583+0000 2024-06-11T19:49:09.010583+0000
>> >> > 1664 periodic scrub scheduled @
2024-06-17T07:07:21.062789+0000
>> >> > 10.33 5215341 4525338 0 0 4194304
6066965588
>> >> > 9635403 1888 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181467497 79875:402958597 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T01:59:23.933458+0000 2024-05-31T17:01:38.620377+0000
>> >> > 512 periodic scrub scheduled @
2024-06-17T22:07:45.026120+0000
>> >> > 10.34 5217342 5217342 0 0 8388608
5999920961
>> >> > 9570527 1956 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182013407 79875:419219483 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T09:10:17.390128+0000 2024-06-01T15:40:23.241769+0000
>> >> > 601 periodic scrub scheduled @
2024-06-18T09:56:03.061859+0000
>> >> > 10.35 5222114 5222114 0 0 8388608
6021420075
>> >> > 9571300 1952 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180948342 79875:338598678 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T22:59:17.017317+0000 2024-05-30T15:59:15.553266+0000
>> >> > 539 periodic scrub scheduled @
2024-06-17T02:41:31.718321+0000
>> >> > 10.36 5223657 5223657 0 0 4331271
5844578100
>> >> > 9347405 1903 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'183363403 79875:408484683 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T18:37:46.653097+0000 2024-05-25T14:13:51.648712+0000
>> >> > 553 periodic scrub scheduled @
2024-06-17T12:58:00.134657+0000
>> >> > 10.37 5219248 5219248 0 0 4194304
6072826085
>> >> > 9579049 1936 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182091890 79875:341545087 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-11T14:19:40.744743+0000 2024-05-29T18:40:12.602405+0000
>> >> > 771 periodic scrub scheduled @
2024-06-16T18:24:26.104867+0000
>> >> > 10.38 5214022 5214022 0 0 4194858
5780805541
>> >> > 9299038 1951 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'178809936 79875:390113817 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-02T00:55:22.046433+0000 2024-05-25T16:49:10.279003+0000
>> >> > 931 queued for scrub
>> >> > 10.39 5220692 4542094 0 0 0
6401799077
>> >> > 9864164 1872 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'182881827 79875:402437400 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T01:42:35.239155+0000 2024-06-12T01:42:35.239155+0000
>> >> > 1370 periodic scrub scheduled @
2024-06-17T10:51:19.580502+0000
>> >> > 10.3a 5215648 4552651 0 0 0
6194465524
>> >> > 9713558 1898 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181458295 79875:394916652 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T21:19:29.020889+0000 2024-06-11T21:19:29.020889+0000
>> >> > 1227 periodic scrub scheduled @
2024-06-17T07:19:46.503819+0000
>> >> > 10.3b 5218220 5218220 0 0 8388608
5951035110
>> >> > 9490698 1878 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180815984 79875:342438023 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-03T05:03:19.483049+0000 2024-05-28T18:50:55.369009+0000
>> >> > 1004 queued for scrub
>> >> > 10.3c 5221615 5221615 0 0 12582912
6260894058
>> >> > 9727358 1938 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181561627 79875:342153262 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T19:12:06.206008+0000 2024-06-11T19:12:06.206008+0000
>> >> > 1471 periodic scrub scheduled @
2024-06-17T02:09:19.786893+0000
>> >> > 10.3d 5213771 5213771 0 0 0
6215975311
>> >> > 9699615 1946 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179354493 79875:399858588 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T04:05:10.877703+0000 2024-06-11T04:05:10.877703+0000
>> >> > 8123 periodic scrub scheduled @
2024-06-17T08:03:22.770035+0000
>> >> > 10.3e 5219446 4659577 0 0 4194304
5852588566
>> >> > 9370152 1933 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180035882 79875:391595513 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T20:35:43.624903+0000 2024-05-22T21:40:55.606650+0000
>> >> > 520 periodic scrub scheduled @
2024-06-17T00:54:06.347642+0000
>> >> > 10.3f 5219729 4889443 0 0 16777216
5925457842
>> >> > 9484458 1948 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'181330738 79875:409113809 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-12T05:12:01.852109+0000 2024-05-27T20:03:22.704122+0000
>> >> > 533 periodic scrub scheduled @
2024-06-17T13:07:19.370647+0000
>> >> > 10.40 5222241 4445164 0 0 0
6312545191
>> >> > 9773697 1873 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181341418 79875:393186178 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-10T20:00:37.561045+0000 2024-06-10T20:00:37.561045+0000
>> >> > 7712 periodic scrub scheduled @
2024-06-16T05:59:45.317316+0000
>> >> > 10.41 5215195 5183508 0 0 4194326
6000770355
>> >> > 9529537 1938 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181494334 79875:343712275 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-12T04:18:58.693597+0000 2024-06-01T11:05:59.370690+0000
>> >> > 539 periodic scrub scheduled @
2024-06-17T15:34:34.997281+0000
>> >> > 10.42 5219564 4681323 0 0 12582912
5822745109
>> >> > 9331625 1916 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179811520 79875:340958258 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-02T19:58:06.439770+0000 2024-05-21T18:28:28.794738+0000
>> >> > 11044 queued for deep scrub
>> >> > 10.43 5219412 4599458 0 0 594
5737440613
>> >> > 9239869 1950 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180634536 79875:388879991 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-01T18:13:17.068416+0000 2024-05-20T16:32:20.302230+0000
>> >> > 1002 queued for deep scrub
>> >> > 10.44 5216008 5216008 0 0 4194304
6270195298
>> >> > 9786806 1944 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182088681 79875:406175480 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T00:19:01.240405+0000 2024-06-12T00:19:01.240405+0000
>> >> > 1593 periodic scrub scheduled @
2024-06-17T12:19:53.888923+0000
>> >> > 10.45 5216238 4694313 0 0 0
5714807184
>> >> > 9223666 1944 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179709878 79875:338307727 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-02T00:39:50.657116+0000 2024-05-21T02:52:14.376959+0000
>> >> > 958 queued for deep scrub
>> >> > 10.46 5217950 5217950 0 0 9729074
5945415128
>> >> > 9499020 1915 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'183726716 79875:395024265 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T14:56:28.875949+0000 2024-05-29T04:19:53.409918+0000
>> >> > 724 periodic scrub scheduled @
2024-06-16T17:42:09.428674+0000
>> >> > 10.47 5219125 5219125 0 0 4194304
5775128028
>> >> > 9299099 1864 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181328922 79875:340510799 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-11T17:58:16.753327+0000 2024-05-23T14:24:25.876796+0000
>> >> > 652 periodic scrub scheduled @
2024-06-17T20:23:31.713805+0000
>> >> > 10.48 5216471 5185269 0 0 4194326
6352053972
>> >> > 9843580 1929 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179547710 79875:378125137 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T23:22:17.435614+0000 2024-06-11T23:22:17.435614+0000
>> >> > 1379 periodic scrub scheduled @
2024-06-17T19:10:56.456654+0000
>> >> > 10.49 5217119 4580870 0 0 4194304
5971558944
>> >> > 9526047 1879 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'182489297 79875:389298429 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T08:57:42.791265+0000 2024-05-29T07:13:57.867295+0000
>> >> > 959 periodic scrub scheduled @
2024-06-17T06:44:34.852894+0000
>> >> > 10.4a 5217577 4909174 0 0 4194304
5960086599
>> >> > 9511657 1910 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'180982438 79875:381432369 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-12T02:31:42.576663+0000 2024-06-01T09:08:07.860775+0000
>> >> > 480 periodic scrub scheduled @
2024-06-18T01:57:37.976185+0000
>> >> > 10.4b 5220270 5220270 0 0 159808
6313986186
>> >> > 9763672 1934 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181279214 79875:342509944 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-11T11:03:38.997373+0000 2024-06-11T11:03:38.997373+0000
>> >> > 3970 periodic scrub scheduled @
2024-06-17T17:04:16.317699+0000
>> >> > 10.4c 5217864 5217864 0 0 4194304
6104537460
>> >> > 9679170 1948 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181081348 79875:341937760 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T06:09:49.667680+0000 2024-06-01T13:06:12.938513+0000
>> >> > 520 periodic scrub scheduled @
2024-06-18T11:39:03.277402+0000
>> >> > 10.4d 5217604 4905278 0 0 8388608
6250536027
>> >> > 9765571 1917 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'179562016 79875:387768172 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T13:38:28.479053+0000 2024-06-11T13:38:28.479053+0000
>> >> > 2004 periodic scrub scheduled @
2024-06-17T15:12:47.615309+0000
>> >> > 10.4e 5220202 5220202 0 0 12215022
5781813383
>> >> > 9291138 1940 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180360470 79875:377712114 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-01T22:06:12.347760+0000 2024-05-25T19:34:49.805104+0000
>> >> > 982 queued for scrub
>> >> > 10.4f 5215243 4561466 0 0 0
6099447266
>> >> > 9605810 1930 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181794065 79875:396660094 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-02T13:35:21.451128+0000 2024-06-02T13:35:21.451128+0000
>> >> > 7316 queued for scrub
>> >> > 10.50 5215219 5184003 0 0 0
6151789627
>> >> > 9674779 1871 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181695151 79875:405060298 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T16:42:00.221013+0000 2024-06-11T16:42:00.221013+0000
>> >> > 2385 periodic scrub scheduled @
2024-06-17T10:54:13.580253+0000
>> >> > 10.51 5218666 5218666 0 0 8388608
5774655330
>> >> > 9331373 1920 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180711664 79875:342978462 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T14:31:51.402844+0000 2024-05-24T11:26:15.539677+0000
>> >> > 730 periodic scrub scheduled @
2024-06-17T20:48:06.862863+0000
>> >> > 10.52 5218435 5187050 0 0 4194304
6375954134
>> >> > 9843553 1935 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180834874 79875:341502004 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T08:41:43.586833+0000 2024-06-11T08:41:43.586833+0000
>> >> > 8368 periodic scrub scheduled @
2024-06-16T08:59:08.248156+0000
>> >> > 10.53 5216822 5216822 0 0 10001995
5786896551
>> >> > 9368323 1928 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181337168 79875:402047940 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-12T04:54:21.147742+0000 2024-05-23T23:02:36.154347+0000
>> >> > 557 periodic scrub scheduled @
2024-06-17T11:39:03.144049+0000
>> >> > 10.54 5213949 4567626 0 0 8388608
6095478377
>> >> > 9664814 1944 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'182179986 79875:421118572 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T09:19:30.470319+0000 2024-06-01T17:56:32.089348+0000
>> >> > 551 periodic scrub scheduled @
2024-06-18T09:51:50.591138+0000
>> >> > 10.55 5217389 5217389 0 0 4194948
5905845521
>> >> > 9403344 1950 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'183481821 79875:341223864 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T15:31:05.524698+0000 2024-05-29T11:50:10.322128+0000
>> >> > 584 periodic scrub scheduled @
2024-06-17T04:28:55.809357+0000
>> >> > 10.56 5219142 5219142 0 0 0
6311549101
>> >> > 9773102 1897 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'184062267 79875:408792597 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-10T22:17:28.523159+0000 2024-06-10T22:17:28.523159+0000
>> >> > 8211 periodic scrub scheduled @
2024-06-17T03:16:39.913869+0000
>> >> > 10.57 5219035 4894148 0 0 180
5862209193
>> >> > 9396761 1895 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'181886155 79875:340148291 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-10T22:33:19.364284+0000 2024-05-28T12:05:26.762682+0000
>> >> > 951 periodic scrub scheduled @
2024-06-16T00:55:55.641233+0000
>> >> > 10.58 5218318 5185905 0 0 9878587
5911568798
>> >> > 9459707 1875 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179109213 79875:390200207 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T13:08:37.380203+0000 2024-05-30T10:19:35.091370+0000
>> >> > 578 periodic scrub scheduled @
2024-06-17T15:53:56.734391+0000
>> >> > 10.59 5218315 5185838 0 0 4194304
5727052522
>> >> > 9214424 1875 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181700141 79875:400297310 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T11:15:00.221765+0000 2024-05-21T11:28:39.056382+0000
>> >> > 682 periodic deep scrub scheduled @
>> >> 2024-06-16T13:00:43.750185+0000
>> >> > 10.5a 5218301 5218301 0 0 4194304
5833097049
>> >> > 9365354 1942 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180699909 79875:393344045 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T01:50:51.954645+0000 2024-05-25T00:11:49.917556+0000
>> >> > 497 periodic scrub scheduled @
2024-06-17T12:05:07.871109+0000
>> >> > 10.5b 5215134 5215134 0 0 4194802
6070043467
>> >> > 9585134 1892 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181149500 79875:342539489 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T08:19:32.265112+0000 2024-06-02T11:33:23.094569+0000
>> >> > 714 periodic scrub scheduled @
2024-06-17T23:10:14.905316+0000
>> >> > 10.5c 5221503 5221503 0 0 4194304
5989312430
>> >> > 9524556 1939 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180748682 79875:340750417 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T09:37:59.483197+0000 2024-05-28T03:41:58.475567+0000
>> >> > 1021 periodic scrub scheduled @
2024-06-17T16:20:33.624394+0000
>> >> > 10.5d 5222202 5222202 0 0 8388608
6031557641
>> >> > 9520790 1928 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'179508607 79875:399919359 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-11T17:10:33.165278+0000 2024-05-30T05:53:10.772736+0000
>> >> > 575 periodic scrub scheduled @
2024-06-17T11:34:40.016966+0000
>> >> > 10.5e 5212770 4506643 0 0 0
5830165136
>> >> > 9388228 1938 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180002484 79875:391844220 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T22:32:49.026280+0000 2024-05-24T15:31:50.362016+0000
>> >> > 523 periodic scrub scheduled @
2024-06-18T06:19:36.998416+0000
>> >> > 10.5f 5221504 5221504 0 0 4194394
5807975316
>> >> > 9387383 1879 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181161493 79875:408530090 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T15:10:33.261745+0000 2024-05-25T06:37:01.629711+0000
>> >> > 845 periodic scrub scheduled @
2024-06-18T01:25:00.005508+0000
>> >> > 10.60 5217784 5186615 0 0 4194304
6208553838
>> >> > 9749044 1962 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181569072 79875:392211608 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T21:44:35.372861+0000 2024-06-11T21:44:35.372861+0000
>> >> > 1505 periodic scrub scheduled @
2024-06-17T19:37:54.401372+0000
>> >> > 10.61 5221292 4586636 0 0 8388608
5766320671
>> >> > 9280184 1910 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180945714 79875:344272511 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T16:51:25.519891+0000 2024-05-24T17:37:21.814566+0000
>> >> > 566 periodic scrub scheduled @
2024-06-18T02:56:43.473028+0000
>> >> > 10.62 5218498 4699002 0 0 4194466
6027128219
>> >> > 9555111 1884 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180387699 79875:341929273 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T03:10:08.547987+0000 2024-06-01T02:47:43.109035+0000
>> >> > 509 periodic scrub scheduled @
2024-06-17T23:58:36.771468+0000
>> >> > 10.63 5219101 5219101 0 0 0
6040382110
>> >> > 9500040 1885 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181623402 79875:389756489 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-02T23:50:14.947543+0000 2024-06-02T23:50:14.947543+0000
>> >> > 13925 queued for scrub
>> >> > 10.64 5217025 5217025 0 0 2277792
6222337345
>> >> > 9701950 1866 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181853168 79875:406174284 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-02T04:57:41.872723+0000 2024-06-02T04:57:41.872723+0000
>> >> > 6394 queued for scrub
>> >> > 10.65 5217908 4590550 0 0 4194326
6223151288
>> >> > 9709345 1916 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181196382 79875:340107600 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T15:53:28.725001+0000 2024-06-11T15:53:28.725001+0000
>> >> > 1343 periodic scrub scheduled @
2024-06-17T11:59:39.917825+0000
>> >> > 10.66 5218911 4893654 0 0 4194304
5815703055
>> >> > 9338987 1939 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'183062531 79875:395193073 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-12T01:10:56.919233+0000 2024-05-25T04:50:04.300293+0000
>> >> > 590 periodic scrub scheduled @
2024-06-18T11:39:55.201903+0000
>> >> > 10.67 5217539 4587208 0 0 4194304
6095526883
>> >> > 9640323 1955 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181875315 79875:342878178 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T07:58:41.522730+0000 2024-06-01T07:20:59.951966+0000
>> >> > 553 periodic scrub scheduled @
2024-06-18T05:40:51.996147+0000
>> >> > 10.68 5215852 4505902 0 0 8388608
6093858586
>> >> > 9596421 1926 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179490643 79875:379632203 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T16:02:14.995195+0000 2024-05-30T22:02:42.335910+0000
>> >> > 526 periodic scrub scheduled @
2024-06-17T02:53:33.980471+0000
>> >> > 10.69 5216596 5216596 0 0 4194304
5833268044
>> >> > 9400414 1899 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181876551 79875:387059449 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T17:00:57.111851+0000 2024-05-23T07:05:05.989427+0000
>> >> > 572 periodic scrub scheduled @
2024-06-17T18:52:42.850336+0000
>> >> > 10.6a 5214106 4497370 0 0 0
5745504618
>> >> > 9296938 1869 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180109669 79875:381871149 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T00:27:45.155477+0000 2024-05-25T09:22:19.570011+0000
>> >> > 494 periodic scrub scheduled @
2024-06-18T07:00:30.017009+0000
>> >> > 10.6b 5219339 5219339 0 0 0
5801390847
>> >> > 9370084 1957 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180041524 79875:342270608 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-12T03:01:39.358464+0000 2024-05-23T01:04:22.522035+0000
>> >> > 488 periodic scrub scheduled @
2024-06-17T19:03:41.161095+0000
>> >> > 10.6c 5217917 5217917 0 0 554
6100979305
>> >> > 9612250 1955 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180968918 79875:341488318 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T18:47:35.969345+0000 2024-05-31T11:54:59.559316+0000
>> >> > 588 periodic scrub scheduled @
2024-06-18T06:02:38.982765+0000
>> >> > 10.6d 5221048 4550388 0 0 4194304
6024067862
>> >> > 9621947 1927 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179373269 79875:388095690 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T03:26:37.931310+0000 2024-05-31T04:10:41.991457+0000
>> >> > 502 periodic scrub scheduled @
2024-06-18T08:52:34.478975+0000
>> >> > 10.6e 5215550 4888179 0 0 8388608
6110450001
>> >> > 9666892 1906 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'180885974 79875:380260263 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-11T20:18:23.657558+0000 2024-06-11T20:18:23.657558+0000
>> >> > 1218 periodic scrub scheduled @
2024-06-18T07:12:57.858451+0000
>> >> > 10.6f 5215850 4566820 0 0 0
6107146788
>> >> > 9634911 1923 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181752773 79875:396708021 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-10T17:32:51.735004+0000 2024-06-10T17:32:51.735004+0000
>> >> > 10119 periodic scrub scheduled @
2024-06-17T02:47:23.675814+0000
>> >> > 10.70 5218211 4672319 0 0 4194304
5793441438
>> >> > 9371563 1881 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180405012 79875:404578822 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-03T04:46:35.130293+0000 2024-05-22T07:50:20.898441+0000
>> >> > 969 queued for deep scrub
>> >> > 10.71 5218299 4599312 0 0 4194304
5807950891
>> >> > 9349720 1906 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180690682 79875:344108361 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-12T03:18:16.645927+0000 2024-05-23T09:32:56.055703+0000
>> >> > 487 periodic scrub scheduled @
2024-06-17T13:47:03.236787+0000
>> >> > 10.72 5223470 4922347 0 0 8388608
6416468325
>> >> > 9922777 1958 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'180797342 79875:342356916 [0,1,2]p0 [1,2]p1
>> >> > 2024-06-12T05:42:47.545762+0000 2024-06-12T05:42:47.545762+0000
>> >> > 1307 periodic scrub scheduled @
2024-06-18T13:20:41.826465+0000
>> >> > 10.73 5218301 4520651 0 0 0
6293832301
>> >> > 9738943 1953 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'182246986 79875:403792984 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-11T20:59:02.079157+0000 2024-06-11T20:59:02.079157+0000
>> >> > 1399 periodic scrub scheduled @
2024-06-17T01:00:32.819334+0000
>> >> > 10.74 5219061 4595112 0 0 0
5974775781
>> >> > 9469653 1943 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181947609 79875:420571221 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-03T02:17:19.687482+0000 2024-05-28T08:43:38.825954+0000
>> >> > 927 queued for scrub
>> >> > 10.75 5220994 4596634 0 0 25165824
5822315216
>> >> > 9362548 1877 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'180376305 79875:338847231 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-03T05:52:28.743255+0000 2024-05-24T05:11:00.211501+0000
>> >> > 981 queued for scrub
>> >> > 10.76 5222111 4892295 0 0 0
6067998823
>> >> > 9574254 1935 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 2h 79875'183871180 79875:409218100 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-03T04:14:04.513004+0000 2024-06-03T04:14:04.513004+0000
>> >> > 6987 queued for scrub
>> >> > 10.77 5219020 4574683 0 0 0
5889074277
>> >> > 9460824 1896 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181614825 79875:342558162 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T08:07:38.169523+0000 2024-05-27T15:15:35.587972+0000
>> >> > 536 periodic scrub scheduled @
2024-06-17T18:29:44.366900+0000
>> >> > 10.78 5220442 4693235 0 0 8388608
5879566566
>> >> > 9395501 1923 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179031456 79875:391462415 [1,0,2]p1 [1,2]p1
>> >> > 2024-06-02T09:32:42.834344+0000 2024-05-27T10:41:23.822124+0000
>> >> > 994 queued for scrub
>> >> > 10.79 5219734 4702117 0 0 4194304
6129933545
>> >> > 9646669 1926 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'182574416 79875:402249812 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-12T05:52:02.959814+0000 2024-06-01T00:53:49.548055+0000
>> >> > 555 periodic scrub scheduled @
2024-06-17T17:48:00.189475+0000
>> >> > 10.7a 5215303 4526846 0 0 0
6316171107
>> >> > 9817029 1932 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181765231 79875:395877057 [2,0,1]p2 [2,1]p2
>> >> > 2024-06-12T02:23:42.692096+0000 2024-06-12T02:23:42.692096+0000
>> >> > 1457 periodic scrub scheduled @
2024-06-17T22:13:46.037000+0000
>> >> > 10.7b 5217213 4606816 0 0 4194304
6226859283
>> >> > 9718621 1952 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'181413541 79875:343968658 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T23:52:28.414752+0000 2024-06-11T23:52:28.414752+0000
>> >> > 1308 periodic scrub scheduled @
2024-06-17T18:52:39.473049+0000
>> >> > 10.7c 5215075 5215075 0 0 4194304
6013877208
>> >> > 9602345 1867 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'180991860 79875:341775615 [0,2,1]p0 [1,2]p1
>> >> > 2024-06-11T23:30:40.614412+0000 2024-05-31T14:10:30.542603+0000
>> >> > 503 periodic scrub scheduled @
2024-06-17T02:30:15.591036+0000
>> >> > 10.7d 5220700 4569720 0 0 5837609
6258736025
>> >> > 9764836 1897 3000
>> >> active+undersized+degraded+remapped+backfilling
>> >> > 3h 79875'179741885 79875:401448508 [2,1,0]p2 [2,1]p2
>> >> > 2024-06-12T04:45:05.030905+0000 2024-06-12T04:45:05.030905+0000
>> >> > 1565 periodic scrub scheduled @
2024-06-17T04:54:42.039122+0000
>> >> > 10.7e 5217113 5217113 0 0 16777216
6072081676
>> >> > 9598390 1902 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'181040361 79875:391032348 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T17:37:26.589968+0000 2024-06-11T17:37:26.589968+0000
>> >> > 1613 periodic scrub scheduled @
2024-06-17T05:06:05.391340+0000
>> >> > 10.7f 5218719 5218719 0 0 9909061
6130469539
>> >> > 9657824 1943 3000
>> >> active+undersized+degraded+remapped+backfill_wait
>> >> > 3h 79875'182283514 79875:408550888 [1,2,0]p1 [1,2]p1
>> >> > 2024-06-11T13:05:03.552526+0000 2024-06-11T13:05:03.552526+0000
>> >> > 1796 periodic scrub scheduled @
2024-06-17T09:57:04.545523+0000
>> >> >
>> >> > * NOTE: Omap statistics are gathered during deep scrub and may be
>> >> > inaccurate soon afterwards depending on utilization. See
>> >> > http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics
>> >> > for further details.
>> >> >
>> >> >
>> >> >
>> >> > sudo ceph osd df
>> >> > ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA    OMAP      META     AVAIL    %USE   VAR   PGS  STATUS
>> >> >  0    ssd  3.63869   1.00000  3.6 TiB   82 GiB  50 MiB    40 GiB   42 GiB  3.6 TiB   2.21  0.03    0  up
>> >> >  1    ssd  3.63869   1.00000  3.6 TiB  2.0 TiB  16 GiB  1021 GiB  1.0 TiB  1.6 TiB  55.41  0.75  161  up
>> >> >  2    ssd  3.63869   1.00000  3.6 TiB  3.0 TiB  16 GiB   1.1 TiB  1.9 TiB  676 GiB  81.86  1.11  161  up
>> >> >
>> >> >
>> >> > sudo ceph df detail
>> >> > --- RAW STORAGE ---
>> >> > CLASS SIZE AVAIL USED RAW USED %RAW USED
>> >> > hdd 1.2 PiB 308 TiB 871 TiB 871 TiB 73.86
>> >> > ssd 11 TiB 5.8 TiB 5.1 TiB 5.1 TiB 46.51
>> >> > TOTAL 1.2 PiB 314 TiB 876 TiB 876 TiB 73.61
>> >> >
>> >> > --- POOLS ---
>> >> > POOL      ID  PGS   STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
>> >> > .mgr       1     1  827 MiB  827 MiB      0 B      208  1.6 GiB  1.6 GiB      0 B   0.11    734 GiB  N/A            50 GiB       N/A    0 B         0 B
>> >> > metadata  10   128  1.0 TiB  647 MiB  1.0 TiB  667.90M  2.1 TiB  1.3 GiB  2.1 TiB  59.45    719 GiB  N/A            3.5 TiB      N/A    0 B         0 B
>> >> > data      11  2048  288 TiB  288 TiB      0 B  566.97M  863 TiB  863 TiB      0 B  81.10     67 TiB  N/A            N/A          N/A    3.4 TiB     6.8 TiB
>> >> > ssd-data  18    32   14 GiB   14 GiB      0 B  265.17k   29 GiB   29 GiB      0 B   1.90    734 GiB  N/A            N/A          N/A    0 B         0 B
>> >> >
>> >> >
>> >> > sudo ceph balancer status
>> >> > {
>> >> > "active": true,
>> >> > "last_optimize_duration": "0:00:00.000170",
>> >> > "last_optimize_started": "Wed Jun 12 13:14:49 2024",
>> >> > "mode": "upmap",
>> >> > "no_optimization_needed": true,
>> >> > "optimize_result": "Some objects (0.172816) are degraded; try
>> again
>> >> > later",
>> >> > "plans": []
>> >> > }
>> >> >
>> >> >
>> >> >
>> >> > [image: ariadne.ai Logo] Lars Köppel
>> >> > Developer
>> >> > Email: lars.koeppel@xxxxxxxxxx
>> >> > Phone: +49 6221 5993580 <+4962215993580>
>> >> > ariadne.ai (Germany) GmbH
>> >> > Häusserstraße 3, 69115 Heidelberg
>> >> > Amtsgericht Mannheim, HRB 744040
>> >> > Geschäftsführer: Dr. Fabian Svara
>> >> > https://ariadne.ai
>> >> >
>> >> >
>> >> > On Wed, Jun 12, 2024 at 2:53 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>> >> >
>> >> >> If you have:
>> >> >>
>> >> >> * pg_num too low (defaults are too low)
>> >> >> * pg_num not a power of 2
>> >> >> * pg_num != number of OSDs in the pool
>> >> >> * balancer not enabled
>> >> >>
>> >> >> any of those might result in imbalance.
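>> >> >> Quick ways to check each of those, if you haven't already (pool name
>> >> >> taken from your ceph df output):
>> >> >>
>> >> >> ceph osd pool get metadata pg_num
>> >> >> ceph osd pool autoscale-status
>> >> >> ceph balancer status
>> >> >>
>> >> >> Though if the pool really is 3-replica across just those three SSD
>> >> >> OSDs, the balancer can't move much anyway, since every PG has to land
>> >> >> on all three of them.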
>> >> >>
>> >> >> > On Jun 12, 2024, at 07:33, Eugen Block <eblock@xxxxxx> wrote:
>> >> >> >
>> >> >> > I don't have any good explanation at this point. Can you share some
>> >> >> > more information like:
>> >> >> >
>> >> >> > ceph pg ls-by-pool <cephfs_metadata>
>> >> >> > ceph osd df (for the relevant OSDs)
>> >> >> > ceph df
>> >> >> >
>> >> >> > Thanks,
>> >> >> > Eugen
>> >> >> >
>> >> >> > Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
>> >> >> >
>> >> >> >> Since my last update the size of the largest OSD increased by 0.4 TiB
>> >> >> >> while the smallest one only increased by 0.1 TiB. How is this possible?
>> >> >> >>
>> >> >> >> Because the metadata pool reported only 900 MB of space left, I stopped
>> >> >> >> the hot-standby MDS. This gave me 8 GB back, but that filled up again
>> >> >> >> within the last 2 hours.
>> >> >> >> I think I have to zap the next OSD because the filesystem is becoming
>> >> >> >> read-only...
>> >> >> >>
>> >> >> >> How is it possible that an OSD has over 1 TiB less data on it after a
>> >> >> >> rebuild? And how is it possible that the OSDs differ so much in size?
>> >> >> >>
>> >> >> >>
>> >> >> >> [image: ariadne.ai Logo] Lars Köppel
>> >> >> >> Developer
>> >> >> >> Email: lars.koeppel@xxxxxxxxxx
>> >> >> >> Phone: +49 6221 5993580 <+4962215993580>
>> >> >> >> ariadne.ai (Germany) GmbH
>> >> >> >> Häusserstraße 3, 69115 Heidelberg
>> >> >> >> Amtsgericht Mannheim, HRB 744040
>> >> >> >> Geschäftsführer: Dr. Fabian Svara
>> >> >> >> https://ariadne.ai
>> >> >> >>
>> >> >> >>
>> >> >> >> On Tue, Jun 11, 2024 at 3:47 PM Lars Köppel <lars.koeppel@xxxxxxxxxx> wrote:
>> >> >> >>
>> >> >> >>> Only in warning mode. And there were no PG splits or merges in the
>> >> >> >>> last two months.
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> [image: ariadne.ai Logo] Lars Köppel
>> >> >> >>> Developer
>> >> >> >>> Email: lars.koeppel@xxxxxxxxxx
>> >> >> >>> Phone: +49 6221 5993580 <+4962215993580>
>> >> >> >>> ariadne.ai (Germany) GmbH
>> >> >> >>> Häusserstraße 3, 69115 Heidelberg
>> >> >> >>> Amtsgericht Mannheim, HRB 744040
>> >> >> >>> Geschäftsführer: Dr. Fabian Svara
>> >> >> >>> https://ariadne.ai
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On Tue, Jun 11, 2024 at 3:32 PM Eugen Block <eblock@xxxxxx> wrote:
>> >> >> >>>
>> >> >> >>>> I don't think scrubs can cause this. Do you have the autoscaler
>> >> >> >>>> enabled?
>> >> >> >>>>
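>> >> >> >>>> You can check that with
>> >> >> >>>>
>> >> >> >>>> ceph osd pool autoscale-status
>> >> >> >>>>
>> >> >> >>>> the AUTOSCALE column shows on/off/warn per pool.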
>> >> >> >>>> Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
>> >> >> >>>>
>> >> >> >>>> > Hi,
>> >> >> >>>> >
>> >> >> >>>> > thank you for your response.
>> >> >> >>>> >
>> >> >> >>>> > I don't think this thread covers my problem, because the OSDs for
>> >> >> >>>> > the metadata pool fill up at different rates. So I would think this
>> >> >> >>>> > is not a direct problem with the journal.
>> >> >> >>>> > Because we had earlier problems with the journal, I changed some
>> >> >> >>>> > settings (see below). I have already restarted all MDS daemons
>> >> >> >>>> > multiple times, but with no change.
>> >> >> >>>> >
>> >> >> >>>> > The health warnings regarding cache pressure normally resolve after
>> >> >> >>>> > a short period of time, when the heavy load on the client ends.
>> >> >> >>>> > Sometimes they stay a bit longer because an rsync is running and
>> >> >> >>>> > copying data onto the cluster (rsync is not good at releasing the
>> >> >> >>>> > caps).
>> >> >> >>>> >
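>> >> >> >>>> > (If it helps, I can also dump the client sessions to see how many
>> >> >> >>>> > caps each client is holding, e.g. with
>> >> >> >>>> > ceph tell mds.<active-mds> session ls
>> >> >> >>>> > with the active MDS name filled in, and look at the num_caps field.)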
>> >> >> >>>> > Could it be a problem if scrubs run most of the time in the
>> >> >> >>>> > background? Can this block any other tasks or generate new data
>> >> >> >>>> > itself?
>> >> >> >>>> >
>> >> >> >>>> > Best regards,
>> >> >> >>>> > Lars
>> >> >> >>>> >
>> >> >> >>>> >
>> >> >> >>>> > global  basic     mds_cache_memory_limit                 17179869184
>> >> >> >>>> > global  advanced  mds_max_caps_per_client                16384
>> >> >> >>>> > global  advanced  mds_recall_global_max_decay_threshold  262144
>> >> >> >>>> > global  advanced  mds_recall_max_decay_rate              1.000000
>> >> >> >>>> > global  advanced  mds_recall_max_decay_threshold         262144
>> >> >> >>>> > mds     advanced  mds_cache_trim_threshold               131072
>> >> >> >>>> > mds     advanced  mds_heartbeat_grace                    120.000000
>> >> >> >>>> > mds     advanced  mds_heartbeat_reset_grace              7400
>> >> >> >>>> > mds     advanced  mds_tick_interval                      3.000000
>> >> >> >>>> >
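>> >> >> >>>> > (For reference, these are the values from the central config, i.e.
>> >> >> >>>> > the equivalent of, for example,
>> >> >> >>>> > ceph config set global mds_recall_max_decay_threshold 262144
>> >> >> >>>> > ceph config set mds mds_cache_trim_threshold 131072
>> >> >> >>>> > for each entry above.)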
>> >> >> >>>> >
>> >> >> >>>> > [image: ariadne.ai Logo] Lars Köppel
>> >> >> >>>> > Developer
>> >> >> >>>> > Email: lars.koeppel@xxxxxxxxxx
>> >> >> >>>> > Phone: +49 6221 5993580 <+4962215993580>
>> >> >> >>>> > ariadne.ai (Germany) GmbH
>> >> >> >>>> > Häusserstraße 3, 69115 Heidelberg
>> >> >> >>>> > Amtsgericht Mannheim, HRB 744040
>> >> >> >>>> > Geschäftsführer: Dr. Fabian Svara
>> >> >> >>>> > https://ariadne.ai
>> >> >> >>>> >
>> >> >> >>>> >
>> >> >> >>>> > On Tue, Jun 11, 2024 at 2:05 PM Eugen Block <eblock@xxxxxx> wrote:
>> >> >> >>>> >
>> >> >> >>>> >> Hi,
>> >> >> >>>> >>
>> >> >> >>>> >> can you check if this thread [1] applies to your situation? You
>> >> >> >>>> >> don't have multi-active MDS enabled, but maybe it's still some
>> >> >> >>>> >> journal trimming, or maybe misbehaving clients? In your first post
>> >> >> >>>> >> there were health warnings regarding cache pressure and cache size.
>> >> >> >>>> >> Are those resolved?
>> >> >> >>>> >>
>> >> >> >>>> >> [1]
>> >> >> >>>> >> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/7U27L27FHHPDYGA6VNNVWGLTXCGP7X23/#VOOV235D4TP5TEOJUWHF4AVXIOTHYQQE
>> >> >> >>>> >>
>> >> >> >>>> >> Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:
>> >> >> >>>> >>
>> >> >> >>>> >> > Hello everyone,
>> >> >> >>>> >> >
>> >> >> >>>> >> > short update to this problem.
>> >> >> >>>> >> > The zapped OSD has been rebuilt and it now holds 1.9 TiB (the
>> >> >> >>>> >> > expected size, ~50%).
>> >> >> >>>> >> > The other 2 OSDs are now at 2.8 and 3.2 TiB respectively. They
>> >> >> >>>> >> > jumped up and down a lot, but the higher one has now also reached
>> >> >> >>>> >> > 'nearfull' status. How is this possible? What is going on?
>> >> >> >>>> >> >
>> >> >> >>>> >> > Does anyone have a solution for how to fix this without zapping
>> >> >> >>>> >> > the OSD?
>> >> >> >>>> >> >
>> >> >> >>>> >> > Best regards,
>> >> >> >>>> >> > Lars
>> >> >> >>>> >> >
>> >> >> >>>> >> >
>> >> >> >>>> >> > [image: ariadne.ai Logo] Lars Köppel
>> >> >> >>>> >> > Developer
>> >> >> >>>> >> > Email: lars.koeppel@xxxxxxxxxx
>> >> >> >>>> >> > Phone: +49 6221 5993580 <+4962215993580>
>> >> >> >>>> >> > ariadne.ai (Germany) GmbH
>> >> >> >>>> >> > Häusserstraße 3, 69115 Heidelberg
>> >> >> >>>> >> > Amtsgericht Mannheim, HRB 744040
>> >> >> >>>> >> > Geschäftsführer: Dr. Fabian Svara
>> >> >> >>>> >> > https://ariadne.ai