Re: ceph-users Digest, Vol 81, Issue 28

Hi,
    if I need to get the sizes of all the top-level directories in a CephFS file system, is there any simple way to do that with the Ceph system tools?
Thanks
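
(One approach, sketched on the assumption that the filesystem is mounted at /mnt/cephfs: CephFS keeps recursive statistics as virtual extended attributes on every directory, so the per-directory totals can be read without walking the tree:

for d in /mnt/cephfs/*/ ; do
    printf '%s: ' "$d"
    getfattr --only-values -n ceph.dir.rbytes "$d"
    echo
done

ceph.dir.rbytes is the recursive byte count; ceph.dir.rentries, ceph.dir.rfiles, and ceph.dir.rsubdirs give the corresponding object counts.)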

On 10/15/2019 04:57, <ceph-users-request@xxxxxxx> wrote:

Today's Topics:

1. Past_interval start interval mismatch (last_clean_epoch reported)
(Huseyin Cotuk)
2. Re: Constant write load on 4 node ceph cluster (Ingo Schmidt)
3. Re: Constant write load on 4 node ceph cluster (Paul Emmerich)
4. RGW blocking on large objects (Robert LeBlanc)
5. Re: Recurring issue: PG is inconsistent, but lists no inconsistent objects
(Florian Haas)
6. Re: Recurring issue: PG is inconsistent, but lists no inconsistent objects
(Reed Dier)


----------------------------------------------------------------------

Date: Mon, 14 Oct 2019 18:41:42 +0300
From: Huseyin Cotuk <hcotuk@xxxxxxxxx>
Subject: [ceph-users] Past_interval start interval mismatch
(last_clean_epoch reported)
To: ceph-users@xxxxxxx
Message-ID: <0DB35170-05C5-4290-B4E2-9CB2C2BB3DA4@xxxxxxxxx>

Hi all,

I also hit bug #24866 in my test environment. According to the logs, the
last_clean_epoch for the specified OSD/PG is 17703, but the interval
starts with 17895, so the OSD fails to start. There are some other OSDs
in the same state.

2019-10-14 18:22:51.908 7f0a275f1700 -1 osd.21 pg_epoch: 18432 pg[18.51( v 18388'4 lc 18386'3 (0'0,18388'4] local-lis/les=18430/18431 n=1 ec=295/295 lis/c 18430/17702 les/c/f 18431/17703/0 18428/18430/18421) [11,21]/[11,21,20] r=1 lpr=18431 pi=[17895,18430)/3 crt=18388'4 lcod 0'0 unknown m=1 mbc={}] 18.51 past_intervals [17895,18430) start interval does not contain the required bound [17703,18430) start

The cause is that pg 18.51 went clean in 17703, but 17895 was reported to the monitor.

I am running the latest stable version of Mimic (13.2.6).

Any idea how to fix it? Is there any way to bypass this check or fix the reported epoch number?

Thanks in advance.

Best regards,
Huseyin Cotuk
hcotuk@xxxxxxxxx
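
For reference, a minimal sketch of inspecting and backing up the affected PG while the OSD is down, using the OSD and PG IDs from the log above and the default data path (this is only preparation for any further intervention, not a fix for the epoch mismatch itself):

systemctl stop ceph-osd@21
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --pgid 18.51 --op info
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --pgid 18.51 --op export --file /root/pg-18.51.export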



------------------------------

Date: Mon, 14 Oct 2019 18:34:17 +0200 (CEST)
From: Ingo Schmidt <i.schmidt@xxxxxxxxxxx>
Subject: [ceph-users] Re: Constant write load on 4 node ceph cluster
To: Ashley Merrick <singapore@xxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Message-ID:
<1013769429.74266.1571070857785.JavaMail.zimbra@xxxxxxxxxxx>

Great, this helped a lot. Although "ceph iostat" didn't give iostats for single images, just a general overview of IO, I remembered the new Nautilus RBD performance monitoring.

https://ceph.com/rbd/new-in-nautilus-rbd-performance-monitoring/

With a "simple"
rbd perf image iotop
I was able to see that the writes indeed come from the Log Server and the Zabbix Monitoring Server. I didn't expect that it would cause that much I/O... unbelievable...
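
For anyone who wants to reproduce this, a minimal sketch of the commands involved, with a placeholder pool name:

rbd perf image iotop --pool rbd
rbd perf image iostat --pool rbd

Both are provided by the rbd_support mgr module in Nautilus.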

----- Original Message -----
From: "Ashley Merrick" <singapore@xxxxxxxxxxxxxx>
To: "i schmidt" <i.schmidt@xxxxxxxxxxx>
CC: "ceph-users" <ceph-users@xxxxxxx>
Sent: Monday, 14 October 2019 15:20:46
Subject: Re: [ceph-users] Constant write load on 4 node ceph cluster

Is the storage being used for the whole VM disk?

If so, have you checked that none of your software is writing constant logs, or something else that could continuously write to disk?

If you're running a new version you can use https://docs.ceph.com/docs/mimic/mgr/iostat/ to locate the exact RBD image.
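
The linked iostat mgr module can be enabled and queried with:

ceph mgr module enable iostat
ceph iostat

though, as noted above, it reports cluster-wide rates rather than per-image statistics.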





------------------------------

Date: Mon, 14 Oct 2019 21:14:29 +0200
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Subject: [ceph-users] Re: Constant write load on 4 node ceph cluster
To: Ingo Schmidt <i.schmidt@xxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Message-ID:
<CAD9yTbE5UapmWXpj56yckWW2X7S6wyHz4PQSXLup6H_pzFm1vA@xxxxxxxxxxxxxx>

It's pretty common to see way more writes than reads if you have lots of idle VMs.


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Oct 14, 2019 at 6:34 PM Ingo Schmidt <i.schmidt@xxxxxxxxxxx> wrote:

Great, this helped a lot. Although "ceph iostat" didn't give iostats for single images, just a general overview of IO, I remembered the new Nautilus RBD performance monitoring.

https://ceph.com/rbd/new-in-nautilus-rbd-performance-monitoring/

With a "simple"
rbd perf image iotop
I was able to see that the writes indeed come from the Log Server and the Zabbix Monitoring Server. I didn't expect that it would cause that much I/O... unbelievable...

----- Original Message -----
From: "Ashley Merrick" <singapore@xxxxxxxxxxxxxx>
To: "i schmidt" <i.schmidt@xxxxxxxxxxx>
CC: "ceph-users" <ceph-users@xxxxxxx>
Sent: Monday, 14 October 2019 15:20:46
Subject: Re: [ceph-users] Constant write load on 4 node ceph cluster

Is the storage being used for the whole VM disk?

If so, have you checked that none of your software is writing constant logs, or something else that could continuously write to disk?

If you're running a new version you can use https://docs.ceph.com/docs/mimic/mgr/iostat/ to locate the exact RBD image.





------------------------------

Date: Mon, 14 Oct 2019 12:54:05 -0700
From: Robert LeBlanc <robert@xxxxxxxxxxxxx>
Subject: [ceph-users] RGW blocking on large objects
To: ceph-users <ceph-users@xxxxxxx>
Message-ID:
<CAANLjFpWch8_mnHDeph4-=cdY1N1CXA2UJYTg7uE2w718HJ42w@xxxxxxxxxxxxxx>

We set up a new Nautilus cluster and only have RGW on it. While we had
a job doing 200k IOPS of really small objects, I noticed that HAProxy
was kicking out RGW backends because they were taking more than 2
seconds to respond. We GET a large ~4GB file each minute and use that
as a health check to determine if the system is taking too long to
service requests. It seems that other IO is being blocked by this
large transfer. This seems to be the case with both civetweb and
beast, but I'm double-checking beast at the moment because I'm not
100% sure we were using it at the start.

Any ideas how to mitigate this? It seems that IOs are scheduled on a
thread, and if they are unlucky enough to be scheduled behind a big IO
they are simply stuck; in that case HAProxy can kick out the backend
before the IO is returned, and the client has to re-request it.
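
For illustration, a minimal sketch of the HAProxy backend settings that govern when a slow backend gets kicked out; the names, address, and timeout values here are placeholders, not our actual configuration:

backend rgw
    option httpchk GET /
    timeout check 10s
    timeout server 300s
    server rgw1 192.0.2.10:7480 check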

Thank you,
Robert LeBlanc


----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1

------------------------------

Date: Mon, 14 Oct 2019 21:55:53 +0200
From: Florian Haas <florian@xxxxxxxxxxxxxx>
Subject: [ceph-users] Re: Recurring issue: PG is inconsistent, but
lists no inconsistent objects
To: Dan van der Ster <dan@xxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Message-ID: <7b49695d-be1c-b36b-2ced-1b9cb0212530@xxxxxxxxxxxxxx>

On 14/10/2019 17:21, Dan van der Ster wrote:
>> I'd appreciate a link to more information if you have one, but a PG
>> autoscaling problem wouldn't really match with the issue already
>> appearing in pre-Nautilus releases. :)
>
> https://github.com/ceph/ceph/pull/30479

Thanks! But no, this doesn't look like a likely culprit, for the reason
that we also saw this in Luminous and hence, *definitely* without splits
or merges in play.

Has anyone else seen these scrub false positives — if that's what they are?

Cheers,
Florian
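
For context, a minimal sketch of the commands this thread refers to, with a placeholder PG ID:

rados list-inconsistent-obj 1.0 --format=json-pretty
ceph pg 1.0 query

The symptom discussed here is the first command returning an empty object list even though the PG is flagged inconsistent.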

------------------------------

Date: Mon, 14 Oct 2019 15:57:05 -0500
From: Reed Dier <reed.dier@xxxxxxxxxxx>
Subject: [ceph-users] Re: Recurring issue: PG is inconsistent, but
lists no inconsistent objects
To: Florian Haas <florian@xxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Message-ID: <24D2F909-3015-4256-BAA3-0E56966BF778@xxxxxxxxxxx>

I had something slightly similar to you.

However, my issue was specific/limited to the device_health_metrics pool that is auto-created with 1 PG when you turn that mgr feature on.

https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg56315.html

I was never able to get a good resolution, other than finally running pg repair on it, and it resolved itself.
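
For reference, a minimal sketch of that repair sequence, with a placeholder PG ID:

ceph pg deep-scrub 1.0
ceph pg repair 1.0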

Reed

On Oct 14, 2019, at 2:55 PM, Florian Haas <florian@xxxxxxxxxxxxxx> wrote:

> On 14/10/2019 17:21, Dan van der Ster wrote:
>>> I'd appreciate a link to more information if you have one, but a PG
>>> autoscaling problem wouldn't really match with the issue already
>>> appearing in pre-Nautilus releases. :)
>>
>> https://github.com/ceph/ceph/pull/30479
>
> Thanks! But no, this doesn't look like a likely culprit, for the reason
> that we also saw this in Luminous and hence, *definitely* without splits
> or merges in play.
>
> Has anyone else seen these scrub false positives — if that's what they are?
>
> Cheers,
> Florian





------------------------------

End of ceph-users Digest, Vol 81, Issue 28
******************************************
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
