Re: ceph-users Digest, Vol 107, Issue 20


 



help




------------------ Original Message ------------------
From: "ceph-users" <ceph-users-request@xxxxxxx>
Sent: Thursday, 4 May 2023, 16:40
To: "ceph-users" <ceph-users@xxxxxxx>

Subject: ceph-users Digest, Vol 107, Issue 20



Send ceph-users mailing list submissions to
	ceph-users@xxxxxxx

To subscribe or unsubscribe via email, send a message with subject or
body 'help' to
	ceph-users-request@xxxxxxx

You can reach the person managing the list at
	ceph-users-owner@xxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of ceph-users digest..."

Today's Topics:

   1. Initialization timeout, failed to initialize (Vitaly Goot)
   2. Re: MDS crash on FAILED ceph_assert(cur->is_auth())
      (Peter van Heusden)
   3. Re: MDS "newly corrupt dentry" after patch version upgrade
      (Janek Bevendorff)
   4. Best practice for expanding Ceph cluster (huxiaoyu@xxxxxxxxxxxx)
   5. Re: 16.2.13 pacific QE validation status (Guillaume Abrioux)


----------------------------------------------------------------------

Date: Thu, 04 May 2023 01:50:12 -0000
From: "Vitaly Goot" <vitaly.goot@xxxxxxxxx&gt;
Subject:  Initialization timeout, failed to initialize
To: ceph-users@xxxxxxx
Message-ID: <168316501216.1713.9594013921879975501@mailman-web&gt;
Content-Type: text/plain; charset="utf-8"

I am playing with multi-site zones for the Ceph Object Gateway.

Ceph version: 17.2.5
My setup: 3-zone multi-site; 3-way full sync mode;
each zone has 3 machines -> RGW+MON+OSD
Running a load test: 3000 concurrent uploads of 1M objects

After about 3-4 minutes of load, the RGW machines get stuck: in 2 zones out of 3, RGW is not responding (e.g. to curl $RGW:80).
An attempt to restart RGW ends up with `Initialization timeout, failed to initialize`.
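
For reference, a minimal sketch of the load described above (purely illustrative; the endpoint, credentials, bucket name and the reading of "1M" as a 1 MiB object size are assumptions, and boto3 is used for the S3 API):

# Sketch only: 3000 concurrent uploads of 1 MiB objects against one RGW
# endpoint. Endpoint, bucket and credentials below are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:80",       # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",                 # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    config=Config(max_pool_connections=3000),       # allow 3000 parallel connections
)

BUCKET = "loadtest"                 # assumed pre-created bucket
PAYLOAD = os.urandom(1024 * 1024)   # 1 MiB payload (assumption)

def upload(i: int) -> None:
    s3.put_object(Bucket=BUCKET, Key=f"obj-{i:07d}", Body=PAYLOAD)

# 3000 concurrent uploads, matching the load test described above
with ThreadPoolExecutor(max_workers=3000) as pool:
    list(pool.map(upload, range(3000)))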

Here is a gdb backtrace showing where it hangs after the restart:

(gdb) inf thr
  Id   Target Id                                          Frame
* 1    Thread 0x7fa7d3abbcc0 (LWP 30791) "radosgw"        futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7ffc7f7a2438) at ../sysdeps/nptl/futex-internal.h:183
...

(gdb) bt
#0  futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7ffc7f7a2438) at ../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7ffc7f7a2488, cond=0x7ffc7f7a2410) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=cond@entry=0x7ffc7f7a2410, mutex=0x7ffc7f7a2488) at pthread_cond_wait.c:647
#3  0x00007fa7d7097e42 in ceph::condition_variable_debug::wait (this=this@entry=0x7ffc7f7a2410, lock=...) at ../src/common/mutex_debug.h:148
#4  0x00007fa7d7953cba in ceph::condition_variable_debug::wait<librados::IoCtxImpl::operate(const object_t&, ObjectOperation*, ceph::real_time*, int)::<lambda()> > (pred=..., lock=..., this=0x7ffc7f7a2410) at ../src/librados/IoCtxImpl.cc:672
#5  librados::IoCtxImpl::operate (this=this@entry=0x558347c21010, oid=..., o=0x558347e12310, pmtime=<optimized out>, flags=<optimized out>) at ../src/librados/IoCtxImpl.cc:672
#6  0x00007fa7d792bd55 in librados::v14_2_0::IoCtx::operate (this=this@entry=0x558347e44760, oid="notify.0", o=o@entry=0x7ffc7f7a2690, flags=flags@entry=0) at ../src/librados/librados_cxx.cc:1536
#7  0x00007fa7d9490ad1 in rgw_rados_operate (dpp=<optimized out>, ioctx=..., oid="notify.0", op=op@entry=0x7ffc7f7a2690, y=..., flags=0) at ../src/rgw/rgw_tools.cc:277
#8  0x00007fa7d9627e0f in RGWSI_RADOS::Obj::operate (this=this@entry=0x558347e44710, dpp=<optimized out>, op=op@entry=0x7ffc7f7a2690, y=..., flags=flags@entry=0) at ../src/rgw/services/svc_rados.h:112
#9  0x00007fa7d96209a5 in RGWSI_Notify::init_watch (this=this@entry=0x558347c49530, dpp=<optimized out>, y=...) at ../src/rgw/services/svc_notify.cc:214
#10 0x00007fa7d962161b in RGWSI_Notify::do_start (this=0x558347c49530, y=..., dpp=<optimized out>) at ../src/rgw/services/svc_notify.cc:277
#11 0x00007fa7d8f17bcf in RGWServiceInstance::start (this=0x558347c49530, y=..., dpp=<optimized out>) at ../src/rgw/rgw_service.cc:331
#12 0x00007fa7d8f1a260 in RGWServices_Def::init (this=this@entry=0x558347de90a0, cct=<optimized out>, have_cache=<optimized out>, raw=raw@entry=false, run_sync=<optimized out>, y=..., dpp=<optimized out>) at /usr/include/c++/9/bits/unique_ptr.h:360
#13 0x00007fa7d8f1cc40 in RGWServices::do_init (this=this@entry=0x558347de90a0, _cct=<optimized out>, have_cache=<optimized out>, raw=raw@entry=false, run_sync=<optimized out>, y=..., dpp=<optimized out>) at ../src/rgw/rgw_service.cc:284
#14 0x00007fa7d92a7b1f in RGWServices::init (dpp=<optimized out>, y=..., run_sync=<optimized out>, have_cache=<optimized out>, cct=<optimized out>, this=0x558347de90a0) at ../src/rgw/rgw_service.h:153
#15 RGWRados::init_svc (this=this@entry=0x558347de8dc0, raw=raw@entry=false, dpp=<optimized out>) at ../src/rgw/rgw_rados.cc:1380
#16 0x00007fa7d930f241 in RGWRados::initialize (this=0x558347de8dc0, dpp=<optimized out>) at ../src/rgw/rgw_rados.cc:1400
#17 0x00007fa7d944f85f in RGWRados::initialize (dpp=<optimized out>, _cct=0x558347c6a320, this=<optimized out>) at ../src/rgw/rgw_rados.h:586
#18 StoreManager::init_storage_provider (dpp=<optimized out>, dpp@entry=0x7ffc7f7a2e90, cct=cct@entry=0x558347c6a320, svc="rados", use_gc_thread=use_gc_thread@entry=true, use_lc_thread=use_lc_thread@entry=true, quota_threads=quota_threads@entry=true, run_sync_thread=true, run_reshard_thread=true, use_cache=true,
    use_gc=true) at ../src/rgw/rgw_sal.cc:55
#19 0x00007fa7d8e7367a in StoreManager::get_storage (use_gc=true, use_cache=true, run_reshard_thread=true, run_sync_thread=true, quota_threads=true, use_lc_thread=true, use_gc_thread=true, svc="rados", cct=0x558347c6a320, dpp=0x7ffc7f7a2e90) at /usr/include/c++/9/bits/basic_string.h:267
#20 radosgw_Main (argc=<optimized out>, argv=<optimized out>) at ../src/rgw/rgw_main.cc:372
#21 0x0000558347883f56 in main (argc=<optimized out>, argv=<optimized out>) at ../src/rgw/radosgw.cc:12

Any suggestions on what the problem might be, and how to reset RGW so that it can start normally?

------------------------------

Date: Thu, 4 May 2023 09:13:56 +0200
From: Peter van Heusden <pvh@xxxxxxxxxxx>
Subject:  Re: MDS crash on FAILED
	ceph_assert(cur->is_auth())
Cc: ceph-users@xxxxxxx
Message-ID:
	<CAK1reXhEDjfKuLmuyus0RT09mwecRmP=LGLcoSKWeZ+pu+YXJQ@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="UTF-8"

Hi Emmanuel

It was a while ago, but as I recall I evicted all clients and that allowed
me to restart the MDS servers. There was something clearly "broken" in how
at least one of the clients was interacting with the system.

Peter

On Thu, 4 May 2023 at 07:18, Emmanuel Jaep <emmanuel.jaep@xxxxxxxxx> wrote:

> Hi,
>
> did you finally figure out what happened?
> I do have the same behavior and we can't get the mds to start again...
>
> Thanks,
>
> Emmanuel
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

------------------------------

Date: Thu, 4 May 2023 09:15:38 +0200
From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
Subject:  Re: MDS "newly corrupt dentry" after patch
	version upgrade
To: Patrick Donnelly <pdonnell@xxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Message-ID: <591f410c-aabc-72af-36d0-478ce8d09028@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8; format=flowed

After running the tool for 11 hours straight, it exited with the 
following exception:

Traceback (most recent call last):
  File "/home/webis/first-damage.py", line 156, in <module>
    traverse(f, ioctx)
  File "/home/webis/first-damage.py", line 84, in traverse
    for (dnk, val) in it:
  File "rados.pyx", line 1389, in rados.OmapIterator.__next__
  File "rados.pyx", line 318, in rados.decode_cstr
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 8: invalid start byte

Does that mean that the last inode listed in the output file is corrupt? 
Any way I can fix it?

The output file has 14 million lines. We have about 24.5 million objects 
in the metadata pool.
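
(Purely illustrative, not part of first-damage.py: the sketch below assumes a metadata pool name and the default ceph.conf path, and simply reports which objects have omap keys that the Python rados binding cannot decode as UTF-8, as in the traceback above. It skips the rest of the failing object's keys rather than aborting the whole traversal.)

# Sketch only: find objects in the metadata pool whose omap keys trigger
# the UnicodeDecodeError seen above. Pool name and conffile are placeholders.
import rados

POOL = "cephfs_metadata"   # placeholder metadata pool name

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx(POOL)

for obj in ioctx.list_objects():
    try:
        with rados.ReadOpCtx() as rctx:
            # read up to the first 100000 omap keys of this object (no pagination in this sketch)
            it, _ = ioctx.get_omap_vals(rctx, "", "", 100000)
            ioctx.operate_read_op(rctx, obj.key)
            for dnk, val in it:
                pass   # key decoded fine; nothing to do
    except UnicodeDecodeError as e:
        # Remaining omap keys of this object are skipped; only the hit is reported.
        print(f"undecodable omap key in object {obj.key}: {e}")

ioctx.close()
cluster.shutdown()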

Janek


On 03/05/2023 14:20, Patrick Donnelly wrote:
> On Wed, May 3, 2023 at 4:33 AM Janek Bevendorff
> <janek.bevendorff@xxxxxxxxxxxxx> wrote:
>> Hi Patrick,
>>
>>> I'll try that tomorrow and let you know, thanks!
>> I was unable to reproduce the crash today. Even with
>> mds_abort_on_newly_corrupt_dentry set to true, all MDS booted up
>> correctly (though they took forever to rejoin with logs set to 20).
>>
>> To me it looks like the issue has resolved itself overnight. I had run a
>> recursive scrub on the file system and another snapshot was taken, in
>> case any of those might have had an effect on this. It could also be the
>> case that the (supposedly) corrupt journal entry has simply been
>> committed now and hence doesn't trigger the assertion any more. Is there
>> any way I can verify this?
> You can run:
>
> https://github.com/ceph/ceph/blob/main/src/tools/cephfs/first-damage.py
>
> Just do:
>
> python3 first-damage.py --memo run.1 <meta pool>
>
> No need to do any of the other steps if you just want a read-only check.
>
-- 

Bauhaus-Universität Weimar
Bauhausstr. 9a, R308
99423 Weimar, Germany

Phone: +49 3643 58 3577
www.webis.de

------------------------------

Date: Thu, 4 May 2023 10:38:30 +0200
From: "huxiaoyu@xxxxxxxxxxxx" <huxiaoyu@xxxxxxxxxxxx&gt;
Subject:  Best practice for expanding Ceph cluster
To: ceph-users <ceph-users@xxxxxxx&gt;
Message-ID: <985B22F000D9DE88+2023050410382589829410@xxxxxxxxxxxx&gt;
Content-Type: text/plain;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; charset="us-ascii"

Dear Ceph folks,

I am writing to ask for advice on best practice for expanding a Ceph cluster. We are running an 8-node Ceph cluster with RGW and would like to add another 10 nodes, each of which has 10x 12TB HDDs. The current 8 nodes hold ca. 400TB of user data.

I am wondering whether to add all 10 nodes in one shot and let the cluster rebalance, or to divide the expansion into 5 steps, each adding 2 nodes and rebalancing before the next. I do not know what the advantages or disadvantages of the one-shot scheme would be compared with 5 batches of 2 nodes added step by step.
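
(Purely as an illustration of the step-by-step option described above, not advice from this thread: each batch would typically be followed by waiting for the cluster to rebalance back to HEALTH_OK before the next batch is added. A minimal sketch of such a wait loop, assuming the ceph CLI is available on the admin host:)

# Sketch only: poll cluster health between expansion batches and proceed
# with the next batch only once rebalancing has finished.
import subprocess
import time

def wait_for_health_ok(poll_seconds: int = 60) -> None:
    while True:
        health = subprocess.run(
            ["ceph", "health"], capture_output=True, text=True, check=True
        ).stdout.strip()
        print(health)
        if health.startswith("HEALTH_OK"):
            return
        time.sleep(poll_seconds)

# After deploying each batch of 2 nodes (by whatever means you use,
# e.g. cephadm/orchestrator), wait before starting the next batch:
wait_for_health_ok()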

Any suggestions, shared experiences, or advice are highly appreciated.

thanks a lot in advance,

Samuel



huxiaoyu@xxxxxxxxxxxx

------------------------------

Date: Thu, 4 May 2023 10:40:02 +0200
From: Guillaume Abrioux <gabrioux@xxxxxxxxxx>
Subject:  Re: 16.2.13 pacific QE validation status
To: Laura Flores <lflores@xxxxxxxxxx>
Cc: Yuri Weinstein <yweinste@xxxxxxxxxx>, Radoslaw Zarzynski
	<rzarzyns@xxxxxxxxxx>, dev <dev@xxxxxxx>, ceph-users
	<ceph-users@xxxxxxx>
Message-ID:
	<CANqTTH5ba9qf3xStCcCZr24n5GPyq0Eeimw3Seha1MZ6wna5nA@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="UTF-8"

ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/553/

On Wed, 3 May 2023 at 22:43, Guillaume Abrioux <gabrioux@xxxxxxxxxx> wrote:

> The failure seen in ceph-volume tests isn't related.
> That being said, it needs to be fixed to have a better view of the current
> status.
>
> On Wed, 3 May 2023 at 21:00, Laura Flores <lflores@xxxxxxxxxx> wrote:
>
>> upgrade/octopus-x (pacific) is approved. Went over failures with Adam
>> King and it was decided they are not release blockers.
>>
>> On Wed, May 3, 2023 at 1:53 PM Yuri Weinstein <yweinste@xxxxxxxxxx>
>> wrote:
>>
>>> upgrade/octopus-x (pacific) - Laura
>>> ceph-volume - Guillaume
>>>
>>> + 2 PRs are the remaining issues
>>>
>>> Josh FYI
>>>
>>> On Wed, May 3, 2023 at 11:50 AM Radoslaw Zarzynski <rzarzyns@xxxxxxxxxx>
>>> wrote:
>>> >
>>> > rados approved.
>>> >
>>> > Big thanks to Laura for helping with this!
>>> >
>>> > On Thu, Apr 27, 2023 at 11:21 PM Yuri Weinstein <yweinste@xxxxxxxxxx>
>>> wrote:
>>> > >
>>> > > Details of this release are summarized here:
>>> > >
>>> > > https://tracker.ceph.com/issues/59542#note-1
>>> > > Release Notes - TBD
>>> > >
>>> > > Seeking approvals for:
>>> > >
>>> > > smoke - Radek, Laura
>>> > > rados - Radek, Laura
>>> > >   rook - Sébastien Han
>>> > >   cephadm - Adam K
>>> > >   dashboard - Ernesto
>>> > >
>>> > > rgw - Casey
>>> > > rbd - Ilya
>>> > > krbd - Ilya
>>> > > fs - Venky, Patrick
>>> > > upgrade/octopus-x (pacific) - Laura (look the same as in 16.2.8)
>>> > > upgrade/pacific-p2p - Laura
>>> > > powercycle - Brad (SELinux denials)
>>> > > ceph-volume - Guillaume, Adam K
>>> > >
>>> > > Thx
>>> > > YuriW
>>> > > _______________________________________________
>>> > > Dev mailing list -- dev@xxxxxxx
>>> > > To unsubscribe send an email to dev-leave@xxxxxxx
>>> >
>>> _______________________________________________
>>> Dev mailing list -- dev@xxxxxxx
>>> To unsubscribe send an email to dev-leave@xxxxxxx
>>>
>>
>>
>> --
>>
>> Laura Flores
>>
>> She/Her/Hers
>>
>> Software Engineer, Ceph Storage <https://ceph.io>
>>
>> Chicago, IL
>>
>> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
>> M: +17087388804
>>
>>
>> _______________________________________________
>> Dev mailing list -- dev@xxxxxxx
>> To unsubscribe send an email to dev-leave@xxxxxxx
>>
>
>
> --
>
> Guillaume Abrioux, Senior Software Engineer
>


-- 

Guillaume Abrioux, Senior Software Engineer

------------------------------

Subject: Digest Footer

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


------------------------------

End of ceph-users Digest, Vol 107, Issue 20
*******************************************
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



