Re: Help needed to configure erasure coding LRC plugin

Hi Eugen,

Yes, sure, no problem to share it. I attach it to this email (it might clutter the discussion if included inline).

If somebody on the list has a clue about the LRC plugin, I'm still interested in understanding what I'm doing wrong!

Cheers,

Michel

On 04/05/2023 at 15:07, Eugen Block wrote:
Hi,

I don't think you've shared your osd tree yet; could you do that? Apparently nobody else but us reads this thread, or nobody reading this uses the LRC plugin. ;-)

Thanks,
Eugen

Quoting Michel Jouvin <michel.jouvin@xxxxxxxxxxxxxxx>:

Hi,

I had to restart one of my OSD servers today and the problem showed up again. This time I managed to capture the "ceph health detail" output showing the problem with the 2 PGs:

[WRN] PG_AVAILABILITY: Reduced data availability: 2 pgs inactive, 2 pgs down
    pg 56.1 is down, acting [208,65,73,206,197,193,144,155,178,182,183,133,17,NONE,36,NONE,230,NONE]
    pg 56.12 is down, acting [NONE,236,28,228,218,NONE,215,117,203,213,204,115,136,181,171,162,137,128]

I still don't understand why, if I am supposed to survive a datacenter failure, I cannot survive 3 OSDs down on the same host hosting shards for the PG. In the second case only 2 OSDs are down, but I'm surprised they don't seem to be in the same "group" of OSDs (I'd have expected all the OSDs of one datacenter to be in the same group of 5, if the order given really reflects the allocation done...).
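To double-check which host/datacenter each OSD of an acting set belongs to, something like `ceph osd find` (it reports the OSD's CRUSH location) can be used, e.g. for the first shards of pg 56.1:

for o in 208 65 73 206 197 193; do ceph osd find $o; done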

Still interested in an explanation of what I'm doing wrong!

Best regards,

Michel

On 03/05/2023 at 10:21, Eugen Block wrote:
I think I got it wrong with the locality setting. I'm still limited by the number of hosts available in my test cluster, but as far as I got with failure-domain=osd, I believe k=6, m=3, l=3 with locality=datacenter could fit your requirement, at least with regard to the recovery bandwidth usage between DCs; the resiliency, however, would not match your requirement (one DC failure). That profile creates 3 groups of 4 chunks (3 data/coding chunks plus one parity chunk) across three DCs, 12 chunks in total. The min_size=7 would not allow an entire DC to go down, I'm afraid; you'd have to reduce it to 6 to allow reads/writes in a disaster scenario. I'm still not sure if I got it right this time, but maybe you're better off without the LRC plugin given the limited number of hosts. Instead you could use the jerasure plugin with a profile like k=4 m=5, allowing an entire DC to fail without losing data access (we have one customer using that).
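A rough sketch of the two profiles mentioned above (untested here; the profile names are placeholders, and the jerasure variant would still need a CRUSH rule that spreads 3 chunks per DC):

ceph:~ # ceph osd erasure-code-profile set lrc633 plugin=lrc k=6 m=3 l=3 \
             crush-locality=datacenter crush-failure-domain=osd
ceph:~ # ceph osd erasure-code-profile set jec45 plugin=jerasure k=4 m=5 \
             crush-failure-domain=osd
ceph:~ # ceph osd pool set <poolname> min_size 6    # only if needed during a DC outage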

Quoting Eugen Block <eblock@xxxxxx>:

Hi,

disclaimer: I haven't used LRC in a real setup yet, so there might be some misunderstandings on my side. But I tried to play around with one of my test clusters (Nautilus). Because I'm limited in the number of hosts (6 across 3 virtual DCs) I tried two different profiles with lower numbers to get a feeling for how that works.

# first attempt
ceph:~ # ceph osd erasure-code-profile set LRCprofile plugin=lrc k=4 m=2 l=3 crush-failure-domain=host

For every group of l=3 chunks one additional local parity chunk is added, so 2 more chunks to store ==> 8 chunks in total. Since my failure domain is host and I only have 6 hosts, I get incomplete PGs.

# second attempt
ceph:~ # ceph osd erasure-code-profile set LRCprofile plugin=lrc k=2 m=2 l=2 crush-failure-domain=host

This gives me 6 chunks in total to store across 6 hosts, which works:

ceph:~ # ceph pg ls-by-pool lrcpool
PG   OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE        SINCE VERSION REPORTED UP                    ACTING                SCRUB_STAMP                DEEP_SCRUB_STAMP
50.0       1        0         0       0   619           0          0   1 active+clean   72s 18410'1 18415:54 [27,13,0,2,25,7]p27   [27,13,0,2,25,7]p27   2023-05-02 14:53:54.322135 2023-05-02 14:53:54.322135
50.1       0        0         0       0     0           0          0   0 active+clean    6m     0'0 18414:26 [27,33,22,6,13,34]p27 [27,33,22,6,13,34]p27 2023-05-02 14:53:54.322135 2023-05-02 14:53:54.322135
50.2       0        0         0       0     0           0          0   0 active+clean    6m     0'0 18413:25 [1,28,14,4,31,21]p1   [1,28,14,4,31,21]p1   2023-05-02 14:53:54.322135 2023-05-02 14:53:54.322135
50.3       0        0         0       0     0           0          0   0 active+clean    6m     0'0 18413:24 [8,16,26,33,7,25]p8   [8,16,26,33,7,25]p8   2023-05-02 14:53:54.322135 2023-05-02 14:53:54.322135
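(For reference, a test pool like the one above is created directly from the profile, roughly as in the LRC documentation; the pg count is arbitrary:)

ceph:~ # ceph osd pool create lrcpool 12 12 erasure LRCprofile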

After stopping all OSDs on one host I was still able to read and write into the pool, but after stopping a second host one PG from that pool went "down". I don't fully understand that yet, but I've just started to look into it. With your setup (12 hosts) I would recommend not utilizing all of them so you have capacity to recover, let's say one "spare" host per DC, leaving 9 hosts in total. A profile with k=3 m=3 l=2 could make sense here, resulting in 9 chunks in total (one additional local parity chunk for every group of two chunks), min_size 4. But as I wrote, it probably doesn't have the resiliency for a DC failure, so that needs some further investigation.
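Such a profile would look roughly like this (untested sketch; the profile name is a placeholder, and crush-locality=datacenter is assumed to keep each group of chunks within one DC):

ceph:~ # ceph osd erasure-code-profile set lrc332 plugin=lrc k=3 m=3 l=2 \
             crush-locality=datacenter crush-failure-domain=host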

Regards,
Eugen

Quoting Michel Jouvin <michel.jouvin@xxxxxxxxxxxxxxx>:

Hi,

No... our current setup is 3 datacenters with the same configuration, i.e. 1 mon/mgr + 4 OSD servers with 16 OSDs each, thus a total of 12 OSD servers. As with the LRC plugin k+m must be a multiple of l, I found that k=9/m=6/l=5 with crush-locality=datacenter was achieving my goal of being resilient to a datacenter failure. Because of this, I considered that lowering the crush failure domain to osd was not a major issue in my case (as it would not be worse than a datacenter failure if all the shards are on the same server in a datacenter) and it was working around the lack of hosts for k=9/m=6 (15 OSDs).

Maybe it helps if I give the erasure code profile used:

crush-device-class=hdd
crush-failure-domain=osd
crush-locality=datacenter
crush-root=default
k=9
l=5
m=6
plugin=lrc
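(Equivalently, this profile corresponds to a single command along these lines; the profile name here is just a placeholder:)

ceph osd erasure-code-profile set <profile-name> plugin=lrc k=9 m=6 l=5 \
    crush-locality=datacenter crush-failure-domain=osd \
    crush-device-class=hdd crush-root=default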

The previously mentioned strange number for min_size for the pool created with this profile has vanished after the Quincy upgrade, as this parameter is no longer in the CRUSH map rule, and the `ceph osd pool get` command reports the expected number (10):

---------

ceph osd pool get fink-z1.rgw.buckets.data min_size
min_size: 10
--------
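(Two related checks that may be useful here, to confirm which EC profile and CRUSH rule the pool actually uses:)

ceph osd pool get fink-z1.rgw.buckets.data erasure_code_profile
ceph osd pool get fink-z1.rgw.buckets.data crush_rule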

Cheers,

Michel

On 29/04/2023 at 20:36, Curt wrote:
Hello,

What is your current setup, 1 server per data center with 12 OSDs each? What is your current crush rule and LRC crush rule?


On Fri, Apr 28, 2023, 12:29 Michel Jouvin <michel.jouvin@xxxxxxxxxxxxxxx> wrote:

  Hi,

  I think I found a possible cause of my PG down but still don't
  understand why. As explained in a previous mail, I set up a
  15-chunk/OSD EC pool (k=9, m=6) but I have only 12 OSD servers in the
  cluster. To work around the problem I defined the failure domain as
  'osd', with the reasoning that, as I was using the LRC plugin, I had
  the guarantee that I could lose a site without impact, and thus the
  possibility to lose 1 OSD server. Am I wrong?

  Best regards,

  Michel

  On 24/04/2023 at 13:24, Michel Jouvin wrote:
  > Hi,
  >
  > I'm still interested in getting feedback from those using the LRC
  > plugin about the right way to configure it... Last week I upgraded
  > from Pacific to Quincy (17.2.6) with cephadm, which is doing the
  > upgrade host by host, checking if an OSD is ok to stop before
  > actually upgrading it. I was surprised to see 1 or 2 PGs down at
  > some points in the upgrade (it happened not for all OSDs but for
  > every site/datacenter). Looking at the details with "ceph health
  > detail", I saw that for these PGs there were 3 OSDs down, but I was
  > expecting the pool to be resilient to 6 OSDs down (5 for R/W
  > access), so I'm wondering if there is something wrong in our pool
  > configuration (k=9, m=6, l=5).
  >
  > Cheers,
  >
  > Michel
  >
  On 06/04/2023 at 08:51, Michel Jouvin wrote:
  >> Hi,
  >>
  >> Is somebody using the LRC plugin?
  >>
  >> I came to the conclusion that LRC k=9, m=3, l=4 is not the same as
  >> jerasure k=9, m=6 in terms of protection against failures, and that
  >> I should use k=9, m=6, l=5 to get a level of resilience >= jerasure
  >> k=9, m=6. The example in the documentation (k=4, m=2, l=3) suggests
  >> that this LRC configuration gives something better than jerasure
  >> k=4, m=2, as it is resilient to 3 drive failures (but not 4 if I
  >> understood properly). So how many drives can fail in the k=9, m=6,
  >> l=5 configuration, first without losing RW access and second
  >> without losing data?
  >>
  >> Another thing that I don't quite understand is that a pool created
  >> with this configuration (and failure domain=osd,
  >> locality=datacenter) has a min_size=3 (max_size=18 as expected). It
  >> seems wrong to me, I'd have expected something ~10 (depending on
  >> the answer to the previous question)...
  >>
  >> Thanks in advance if somebody could provide some sort of
  >> authoritative answer to these 2 questions.
  >>
  >> Best regards,
  >>
  >> Michel
  >>
  >> On 04/04/2023 at 15:53, Michel Jouvin wrote:
  >>> Answering myself, I found the reason for 2147483647: it's
  >>> documented as a failure to find enough OSDs (missing OSDs). And it
  >>> is normal, as I selected different hosts for the 15 OSDs but I
  >>> have only 12 hosts!
  >>>
  >>> I'm still interested in an "expert" confirming that the LRC k=9,
  >>> m=3, l=4 configuration is equivalent, in terms of redundancy, to a
  >>> jerasure configuration with k=9, m=6.
  >>>
  >>> Michel
  >>>
  >>> On 04/04/2023 at 15:26, Michel Jouvin wrote:
  >>>> Hi,
  >>>>
  >>>> As discussed in another thread (Crushmap rule for
  >>>> multi-datacenter erasure coding), I'm trying to create an EC pool
  >>>> spanning 3 datacenters (datacenters are present in the crushmap),
  >>>> with the objective to be resilient to 1 DC down, at least keeping
  >>>> read-only access to the pool and if possible read-write access,
  >>>> and to have a storage efficiency better than 3-replica (let's say
  >>>> a storage overhead <= 2).
  >>>>
  >>>> In the discussion, somebody mentioned the LRC plugin as a
  >>>> possible jerasure alternative to implement this without tweaking
  >>>> the crushmap rule to implement the 2-step OSD allocation. I looked
  >>>> at the documentation
  >>>> (https://docs.ceph.com/en/latest/rados/operations/erasure-code-lrc/)
  >>>> but I have some questions, if someone has experience/expertise
  >>>> with this LRC plugin.
  >>>>
  >>>> I tried to create a rule for using 5 OSDs per datacenter (15 in
  >>>> total), with 3 per datacenter (9 in total) being data chunks and
  >>>> the others being coding chunks. For this, based on my
  >>>> understanding of the examples, I used k=9, m=3, l=4. Is that
  >>>> right? Is this configuration equivalent, in terms of redundancy,
  >>>> to a jerasure configuration with k=9, m=6?
  >>>>
  >>>> The resulting rule, which looks correct to me, is:
  >>>>
  >>>> --------
  >>>>
  >>>> {
  >>>>     "rule_id": 6,
  >>>>     "rule_name": "test_lrc_2",
  >>>>     "ruleset": 6,
  >>>>     "type": 3,
  >>>>     "min_size": 3,
  >>>>     "max_size": 15,
  >>>>     "steps": [
  >>>>         {
  >>>>             "op": "set_chooseleaf_tries",
  >>>>             "num": 5
  >>>>         },
  >>>>         {
  >>>>             "op": "set_choose_tries",
  >>>>             "num": 100
  >>>>         },
  >>>>         {
  >>>>             "op": "take",
  >>>>             "item": -4,
  >>>>             "item_name": "default~hdd"
  >>>>         },
  >>>>         {
  >>>>             "op": "choose_indep",
  >>>>             "num": 3,
  >>>>             "type": "datacenter"
  >>>>         },
  >>>>         {
  >>>>             "op": "chooseleaf_indep",
  >>>>             "num": 5,
  >>>>             "type": "host"
  >>>>         },
  >>>>         {
  >>>>             "op": "emit"
  >>>>         }
  >>>>     ]
  >>>> }
  >>>>
  >>>> ------------
  >>>>
  >>>> Unfortunately, it doesn't work as expected: a pool created with
  >>>> this rule ends up with its PGs active+undersized, which is
  >>>> unexpected to me. Looking at the `ceph health detail` output, I
  >>>> see for each PG something like:
  >>>>
  >>>> pg 52.14 is stuck undersized for 27m, current state
  >>>> active+undersized, last acting
  >>>> [90,113,2147483647,103,64,147,164,177,2147483647,133,58,28,8,32,2147483647]
  >>>>
  >>>> For each PG, there are 3 '2147483647' entries and I guess that is
  >>>> the reason for the problem. What are these entries about? Clearly
  >>>> they are not OSD entries... It looks like a negative number, -1,
  >>>> which in terms of crushmap IDs is the crushmap root (named
  >>>> "default" in our configuration). Is there any trivial mistake I
  >>>> would have made?
  >>>>
  >>>> Thanks in advance for any help or for sharing any successful
  >>>> configuration!
  >>>>
  >>>> Best regards,
  >>>>
  >>>> Michel





ID   CLASS  WEIGHT      TYPE NAME                 STATUS     REWEIGHT  PRI-AFF
 -1         1906.55798  root default                                          
-40          439.28101      datacenter irfu                                   
-28          109.81999          host irfuvd-osd1                              
128    hdd     8.90900              osd.128              up   1.00000  1.00000
129    hdd     8.90900              osd.129              up   1.00000  1.00000
130    hdd     8.90900              osd.130              up   1.00000  1.00000
131    hdd     8.90900              osd.131              up   1.00000  1.00000
132    hdd     8.90900              osd.132              up   1.00000  1.00000
133    hdd     8.90900              osd.133              up   1.00000  1.00000
134    hdd     8.90900              osd.134              up   1.00000  1.00000
135    hdd     8.90900              osd.135              up   1.00000  1.00000
136    hdd     8.90900              osd.136              up   1.00000  1.00000
137    hdd     8.90900              osd.137              up   1.00000  1.00000
138    hdd     8.90900              osd.138              up   1.00000  1.00000
139    hdd     8.90900              osd.139              up   1.00000  1.00000
140    ssd     0.72699              osd.140              up   1.00000  1.00000
141    ssd     0.72699              osd.141              up   1.00000  1.00000
142    ssd     0.72699              osd.142              up   1.00000  1.00000
143    ssd     0.72699              osd.143              up   1.00000  1.00000
-31          109.81999          host irfuvd-osd2                              
144    hdd     8.90900              osd.144              up   1.00000  1.00000
145    hdd     8.90900              osd.145              up   1.00000  1.00000
146    hdd     8.90900              osd.146              up   1.00000  1.00000
147    hdd     8.90900              osd.147              up   1.00000  1.00000
148    hdd     8.90900              osd.148              up   1.00000  1.00000
149    hdd     8.90900              osd.149              up   1.00000  1.00000
150    hdd     8.90900              osd.150              up   1.00000  1.00000
151    hdd     8.90900              osd.151              up   1.00000  1.00000
152    hdd     8.90900              osd.152              up   1.00000  1.00000
153    hdd     8.90900              osd.153              up   1.00000  1.00000
154    hdd     8.90900              osd.154              up   1.00000  1.00000
155    hdd     8.90900              osd.155              up   1.00000  1.00000
156    ssd     0.72699              osd.156              up   1.00000  1.00000
157    ssd     0.72699              osd.157              up   1.00000  1.00000
158    ssd     0.72699              osd.158              up   1.00000  1.00000
159    ssd     0.72699              osd.159              up   1.00000  1.00000
-34          109.81999          host irfuvd-osd3                              
160    hdd     8.90900              osd.160              up   1.00000  1.00000
161    hdd     8.90900              osd.161       destroyed         0        0
162    hdd     8.90900              osd.162              up   1.00000  1.00000
163    hdd     8.90900              osd.163              up   1.00000        0
164    hdd     8.90900              osd.164              up   1.00000  1.00000
165    hdd     8.90900              osd.165              up   1.00000  1.00000
166    hdd     8.90900              osd.166              up   1.00000  1.00000
167    hdd     8.90900              osd.167              up   1.00000  1.00000
168    hdd     8.90900              osd.168              up   1.00000  1.00000
169    hdd     8.90900              osd.169              up   1.00000  1.00000
170    hdd     8.90900              osd.170              up   1.00000  1.00000
171    hdd     8.90900              osd.171              up   1.00000  1.00000
172    ssd     0.72699              osd.172              up   1.00000  1.00000
173    ssd     0.72699              osd.173              up   1.00000  1.00000
174    ssd     0.72699              osd.174              up   1.00000  1.00000
175    ssd     0.72699              osd.175              up   1.00000  1.00000
-37          109.81999          host irfuvd-osd4                              
176    hdd     8.90900              osd.176              up   1.00000  1.00000
177    hdd     8.90900              osd.177              up   1.00000  1.00000
178    hdd     8.90900              osd.178              up   1.00000  1.00000
179    hdd     8.90900              osd.179              up   1.00000  1.00000
180    hdd     8.90900              osd.180              up   1.00000  1.00000
181    hdd     8.90900              osd.181              up   1.00000  1.00000
182    hdd     8.90900              osd.182              up   1.00000  1.00000
183    hdd     8.90900              osd.183              up   1.00000  1.00000
184    hdd     8.90900              osd.184              up   1.00000  1.00000
185    hdd     8.90900              osd.185              up   1.00000  1.00000
186    hdd     8.90900              osd.186              up   1.00000  1.00000
187    hdd     8.90900              osd.187              up   1.00000  1.00000
188    ssd     0.72699              osd.188              up   1.00000  1.00000
189    ssd     0.72699              osd.189              up   1.00000  1.00000
190    ssd     0.72699              osd.190              up   1.00000  1.00000
191    ssd     0.72699              osd.191              up   1.00000  1.00000
-41          788.55652      datacenter llr                                    
-16          109.83200          host llrvd-osd1                               
 64    hdd     8.90900              osd.64               up   1.00000  1.00000
 65    hdd     8.90999              osd.65               up   1.00000  1.00000
 66    hdd     8.90999              osd.66               up   1.00000  1.00000
 67    hdd     8.90999              osd.67               up   1.00000  1.00000
 68    hdd     8.90999              osd.68               up   1.00000  1.00000
 69    hdd     8.90999              osd.69               up   1.00000  1.00000
 70    hdd     8.90999              osd.70               up   1.00000  1.00000
 71    hdd     8.90999              osd.71               up   1.00000  1.00000
 72    hdd     8.90999              osd.72               up   1.00000  1.00000
 73    hdd     8.90999              osd.73               up   1.00000  1.00000
 74    hdd     8.90999              osd.74               up   1.00000  1.00000
 75    hdd     8.90999              osd.75               up   1.00000  1.00000
 76    ssd     0.72800              osd.76               up   1.00000  1.00000
 77    ssd     0.72800              osd.77               up   1.00000  1.00000
 78    ssd     0.72800              osd.78               up   1.00000  1.00000
 79    ssd     0.72800              osd.79               up   1.00000  1.00000
-19          109.81999          host llrvd-osd2                               
 80    hdd     8.90900              osd.80               up   1.00000  1.00000
 81    hdd     8.90900              osd.81               up   1.00000  1.00000
 82    hdd     8.90900              osd.82               up   1.00000  1.00000
 83    hdd     8.90900              osd.83               up   1.00000  1.00000
 84    hdd     8.90900              osd.84               up   1.00000  1.00000
 85    hdd     8.90900              osd.85               up   1.00000  1.00000
 86    hdd     8.90900              osd.86               up   1.00000  1.00000
 87    hdd     8.90900              osd.87               up   1.00000  1.00000
 88    hdd     8.90900              osd.88               up   1.00000  1.00000
 89    hdd     8.90900              osd.89               up   1.00000  1.00000
 90    hdd     8.90900              osd.90               up   1.00000  1.00000
 91    hdd     8.90900              osd.91               up   1.00000  1.00000
 92    ssd     0.72699              osd.92               up   1.00000  1.00000
 93    ssd     0.72699              osd.93               up   1.00000  1.00000
 94    ssd     0.72699              osd.94               up   1.00000  1.00000
 95    ssd     0.72699              osd.95               up   1.00000  1.00000
-22          109.81999          host llrvd-osd3                               
 96    hdd     8.90900              osd.96               up   1.00000  1.00000
 97    hdd     8.90900              osd.97               up   1.00000  1.00000
 98    hdd     8.90900              osd.98               up   1.00000  1.00000
 99    hdd     8.90900              osd.99               up   1.00000  1.00000
100    hdd     8.90900              osd.100              up   1.00000  1.00000
101    hdd     8.90900              osd.101              up   1.00000  1.00000
102    hdd     8.90900              osd.102              up   1.00000  1.00000
103    hdd     8.90900              osd.103              up   1.00000  1.00000
104    hdd     8.90900              osd.104              up   1.00000  1.00000
105    hdd     8.90900              osd.105              up   1.00000  1.00000
106    hdd     8.90900              osd.106              up   1.00000  1.00000
107    hdd     8.90900              osd.107              up   1.00000  1.00000
108    ssd     0.72699              osd.108              up   1.00000  1.00000
109    ssd     0.72699              osd.109              up   1.00000  1.00000
110    ssd     0.72699              osd.110              up   1.00000  1.00000
111    ssd     0.72699              osd.111              up   1.00000  1.00000
-25          109.81999          host llrvd-osd4                               
112    hdd     8.90900              osd.112              up   1.00000  1.00000
113    hdd     8.90900              osd.113              up   1.00000  1.00000
114    hdd     8.90900              osd.114              up   1.00000  1.00000
115    hdd     8.90900              osd.115              up   1.00000  1.00000
116    hdd     8.90900              osd.116              up   1.00000  1.00000
117    hdd     8.90900              osd.117              up   1.00000  1.00000
118    hdd     8.90900              osd.118              up   1.00000  1.00000
119    hdd     8.90900              osd.119              up   1.00000  1.00000
120    hdd     8.90900              osd.120              up   1.00000  1.00000
121    hdd     8.90900              osd.121              up   1.00000  1.00000
122    hdd     8.90900              osd.122              up   1.00000  1.00000
123    hdd     8.90900              osd.123              up   1.00000  1.00000
124    ssd     0.72699              osd.124              up   1.00000  1.00000
125    ssd     0.72699              osd.125              up   1.00000  1.00000
126    ssd     0.72699              osd.126              up   1.00000  1.00000
127    ssd     0.72699              osd.127              up   1.00000  1.00000
-49          349.26453          host llrvd-osd5                               
192    hdd    14.55269              osd.192              up   1.00000  1.00000
193    hdd    14.55269              osd.193              up   1.00000  1.00000
194    hdd    14.55269              osd.194              up   1.00000  1.00000
195    hdd    14.55269              osd.195              up   1.00000  1.00000
196    hdd    14.55269              osd.196              up   1.00000  1.00000
197    hdd    14.55269              osd.197              up   1.00000  1.00000
198    hdd    14.55269              osd.198              up   1.00000  1.00000
199    hdd    14.55269              osd.199              up   1.00000  1.00000
200    hdd    14.55269              osd.200              up   1.00000  1.00000
201    hdd    14.55269              osd.201              up   1.00000  1.00000
202    hdd    14.55269              osd.202              up   1.00000  1.00000
203    hdd    14.55269              osd.203              up   1.00000  1.00000
204    hdd    14.55269              osd.204              up   1.00000  1.00000
205    hdd    14.55269              osd.205              up   1.00000  1.00000
206    hdd    14.55269              osd.206              up   1.00000  1.00000
207    hdd    14.55269              osd.207              up   1.00000  1.00000
208    hdd    14.55269              osd.208              up   1.00000  1.00000
209    hdd    14.55269              osd.209              up   1.00000  1.00000
210    hdd    14.55269              osd.210              up   1.00000  1.00000
211    hdd    14.55269              osd.211              up   1.00000  1.00000
212    hdd    14.55269              osd.212              up   1.00000  1.00000
213    hdd    14.55269              osd.213              up   1.00000  1.00000
214    hdd    14.55269              osd.214              up   1.00000  1.00000
215    hdd    14.55269              osd.215              up   1.00000  1.00000
-42          678.72046      datacenter vallee                                 
 -3                  0          host valvd-osd1                               
 -7          109.81999          host valvd-osd2                               
 16    hdd     8.90900              osd.16               up   1.00000  1.00000
 17    hdd     8.90900              osd.17               up   1.00000  1.00000
 18    hdd     8.90900              osd.18               up   1.00000  1.00000
 19    hdd     8.90900              osd.19               up   1.00000  1.00000
 20    hdd     8.90900              osd.20               up   1.00000  1.00000
 21    hdd     8.90900              osd.21               up   1.00000  1.00000
 24    hdd     8.90900              osd.24               up   1.00000  1.00000
 25    hdd     8.90900              osd.25               up   1.00000  1.00000
 26    hdd     8.90900              osd.26               up   1.00000  1.00000
 27    hdd     8.90900              osd.27               up   1.00000  1.00000
 28    hdd     8.90900              osd.28               up   1.00000  1.00000
 63    hdd     8.90900              osd.63               up   1.00000  1.00000
 22    ssd     0.72699              osd.22               up   1.00000  1.00000
 23    ssd     0.72699              osd.23               up   1.00000  1.00000
 29    ssd     0.72699              osd.29               up   1.00000  1.00000
 30    ssd     0.72699              osd.30               up   1.00000  1.00000
-10          109.81592          host valvd-osd3                               
 31    hdd     8.90900              osd.31               up   1.00000  1.00000
 32    hdd     8.90900              osd.32               up   1.00000  1.00000
 33    hdd     8.90900              osd.33               up   1.00000  1.00000
 34    hdd     8.90900              osd.34               up   1.00000  1.00000
 35    hdd     8.90900              osd.35               up   1.00000  1.00000
 36    hdd     8.90900              osd.36               up   1.00000  1.00000
 37    hdd     8.90900              osd.37               up   1.00000  1.00000
 38    hdd     8.90900              osd.38               up   1.00000  1.00000
 39    hdd     8.90900              osd.39               up   1.00000  1.00000
 40    hdd     8.90900              osd.40               up   1.00000  1.00000
 41    hdd     8.90900              osd.41               up   1.00000  1.00000
 42    hdd     8.90900              osd.42               up   1.00000  1.00000
 43    ssd     0.72699              osd.43               up   1.00000  1.00000
 44    ssd     0.72699              osd.44               up   1.00000  1.00000
 45    ssd     0.72699              osd.45               up   1.00000  1.00000
 46    ssd     0.72699              osd.46               up   1.00000  1.00000
-13          109.81999          host valvd-osd4                               
 47    hdd     8.90900              osd.47               up   1.00000  1.00000
 48    hdd     8.90900              osd.48               up   1.00000  1.00000
 49    hdd     8.90900              osd.49               up   1.00000  1.00000
 50    hdd     8.90900              osd.50               up   1.00000        0
 51    hdd     8.90900              osd.51               up   1.00000  1.00000
 52    hdd     8.90900              osd.52               up   1.00000  1.00000
 53    hdd     8.90900              osd.53               up   1.00000  1.00000
 54    hdd     8.90900              osd.54               up   1.00000  1.00000
 55    hdd     8.90900              osd.55               up   1.00000        0
 56    hdd     8.90900              osd.56               up   1.00000  1.00000
 57    hdd     8.90900              osd.57               up   1.00000  1.00000
 58    hdd     8.90900              osd.58               up   1.00000  1.00000
 59    ssd     0.72699              osd.59               up   1.00000  1.00000
 60    ssd     0.72699              osd.60               up   1.00000  1.00000
 61    ssd     0.72699              osd.61               up   1.00000  1.00000
 62    ssd     0.72699              osd.62               up   1.00000  1.00000
-52          349.26453          host valvd-osd5                               
216    hdd    14.55269              osd.216              up   1.00000  1.00000
217    hdd    14.55269              osd.217              up   1.00000  1.00000
218    hdd    14.55269              osd.218              up   1.00000  1.00000
219    hdd    14.55269              osd.219              up   1.00000  1.00000
220    hdd    14.55269              osd.220              up   1.00000  1.00000
221    hdd    14.55269              osd.221              up   1.00000  1.00000
222    hdd    14.55269              osd.222              up   1.00000  1.00000
223    hdd    14.55269              osd.223              up   1.00000  1.00000
224    hdd    14.55269              osd.224              up   1.00000  1.00000
225    hdd    14.55269              osd.225              up   1.00000  1.00000
226    hdd    14.55269              osd.226              up   1.00000  1.00000
227    hdd    14.55269              osd.227              up   1.00000  1.00000
228    hdd    14.55269              osd.228              up   1.00000  1.00000
229    hdd    14.55269              osd.229              up   1.00000  1.00000
230    hdd    14.55269              osd.230              up   1.00000  1.00000
231    hdd    14.55269              osd.231              up   1.00000  1.00000
232    hdd    14.55269              osd.232              up   1.00000  1.00000
233    hdd    14.55269              osd.233              up   1.00000  1.00000
234    hdd    14.55269              osd.234              up   1.00000  1.00000
235    hdd    14.55269              osd.235              up   1.00000  1.00000
236    hdd    14.55269              osd.236              up   1.00000  1.00000
237    hdd    14.55269              osd.237              up   1.00000  1.00000
238    hdd    14.55269              osd.238              up   1.00000  1.00000
239    hdd    14.55269              osd.239              up   1.00000  1.00000
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
