CEPH Filesystem Development
- Re: C++11, std::list::size(), and trusty
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: better doc (and build) validation
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: Could you resend the link to the pull request?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH 2/2] ceph: avoid dereferencing invalid pointer during cached readdir
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH 1/2] ceph: use atomic64_t for ceph_inode_info::i_shared_gen
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- better doc (and build) validation
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [PATCH 2/2] ceph: avoid dereferencing invalid pointer during cached readdir
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/2] ceph: use atomic64_t for ceph_inode_info::i_shared_gen
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-dencoder decode PGMap failed
- From: Sage Weil <sage@xxxxxxxxxxxx>
- ceph-dencoder decode PGMap failed
- From: <simplemaomao@xxxxxxx>
- [PATCH 2/2] ceph: avoid dereferencing invalid pointer during cached readdir
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/2] ceph: use atomic64_t for ceph_inode_info::i_shared_gen
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Reply: osd: fine-grained statistics for object space usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Assertion failed while building manpages
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Assertion failed while building manpages
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: iSCSI gateway
- From: Lars Seipel <lars.seipel@xxxxxxx>
- mimic-dev1 branch for 13.0.1
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] dropping trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] dropping trusty
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH] ceph: drop negative child dentries before trying to prune inode's alias
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Missing Debian packages (was: Luminous v12.2.2 released)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] [Docs] s/ceph-disk/ceph-volume/g ?
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Reply: osd: fine-grained statistics for object space usage
- From: Igor Fedotov <ifedotov@xxxxxxx>
- [Docs] s/ceph-disk/ceph-volume/g ?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap [how to avoid in Gentoo in future]
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: [ceph-users] dropping trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: Missing Debian packages (was: Luminous v12.2.2 released)
- From: Lars Seipel <lars.seipel@xxxxxxxxx>
- Re: Missing Debian packages (was: Luminous v12.2.2 released)
- From: Lars Seipel <lars.seipel@xxxxxxxxx>
- Re: Missing Debian packages (was: Luminous v12.2.2 released)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Missing Debian packages (was: Luminous v12.2.2 released)
- From: Lars Seipel <lars.seipel@xxxxxxxxx>
- Re: config on mons
- From: Sage Weil <sweil@xxxxxxxxxx>
- Luminous v12.2.2 released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: About the "temperature histogram" of tier-agent
- From: YuFan Chen <wiz.chen@xxxxxxxxx>
- Re: Reply: osd: fine-grained statistics for object space usage
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: About the "temperature histogram" of tier-agent
- From: Li Wang <laurence.liwang@xxxxxxxxx>
- Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Integers as String in osd_metadata (memory, rotational)
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Stefan Kooman <stefan@xxxxxx>
- Re: ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [PATCH] ceph: drop negative child dentries before trying to prune inode's alias
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: config on mons
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Reply: osd: fine-grained statistics for object space usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Reply: osd: fine-grained statistics for object space usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: dropping trusty
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Ceph Developers Monthly - December
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- dropping trusty
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Peter Woodman <peter@xxxxxxxxxxxx>
- ceph-disk removal roadmap (was ceph-disk is now deprecated)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: drop negative child dentries before trying to prune inode's alias
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: drop negative child dentries before trying to prune inode's alias
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [PATCH] ceph: decrease session->s_trim_caps only after caps get trimmed
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: shasha lu <lushasha08@xxxxxxxxx>
- Re: About the "temperature histogram" of tier-agent
- From: Li Wang <laurence.liwang@xxxxxxxxx>
- [PATCH] ceph: decrease session->s_trim_caps only after caps get trimmed
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Questions regarding mds early reply
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: DBObjectMap::header_lock forcing filestore threads to be sequential
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: Questions regarding mds early reply
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Question about ceph paxos implementation
- From: Kang Wang <beijingwangkang@xxxxxxxxx>
- Re: CDM for: pg log, pg info, and dup ops data storage
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: DBObjectMap::header_lock forcing filestore threads to be sequential
- From: yuxiang fang <abcdeffyx@xxxxxxxxx>
- Fwd: Questions regarding mds early reply
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: some issue about peering progress
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: DBObjectMap::header_lock forcing filestore threads to be sequential
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: DBObjectMap::header_lock forcing filestore threads to be sequential
- From: yuxiang fang <abcdeffyx@xxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: About the "temperature histogram" of tier-agent
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: metadata spill back onto block.slow before block.db filled up
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Question about ceph paxos implementation
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Integers as String in osd_metadata (memory, rotational)
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Integers as String in osd_metadata (memory, rotational)
- From: Wido den Hollander <wido@xxxxxxxx>
- DBObjectMap::header_lock forcing filestore threads to be sequential
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: [ceph-users] ceph-disk is now deprecated
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: ceph-disk is now deprecated
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- About the "temperature histogram" of tier-agent
- From: YuFan Chen <wiz.chen@xxxxxxxxx>
- metadata spill back onto block.slow before block.db filled up
- From: shasha lu <lushasha08@xxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: Question about ceph paxos implementation
- From: Kang Wang <beijingwangkang@xxxxxxxxx>
- Re: Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap
- From: Cary <dynamic.cary@xxxxxxxxx>
- Re: osd: fine-grained statistics for object space usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- osd: fine-grained statistics for object space usage
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: ceph-disk is now deprecated
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: coming in boost 1.66
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- coming in boost 1.66
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- rgw: who will write a key starting with a character greater than BI_PREFIX_CHAR
- From: 宋新颖 <songxinying.ftd@xxxxxxxxx>
- Re: Question about ceph paxos implementation
- From: Sage Weil <sage@xxxxxxxxxxxx>
- ceph-disk is now deprecated
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Question about ceph paxos implementation
- From: Kang Wang <beijingwangkang@xxxxxxxxx>
- Re: Blocked / Slow requests in health JSON from Mon/Mgr
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Blocked / Slow requests in health JSON from Mon/Mgr
- From: John Spray <jspray@xxxxxxxxxx>
- Blocked / Slow requests in health JSON from Mon/Mgr
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Journaler::_flush
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Journaler::_flush
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- iSCSI gateway
- From: Marc Cote <marc.cote.mathieu@xxxxxxxxx>
- Re: boost: to download, or not to download?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: boost: to download, or not to download?
- From: kefu chai <tchaikov@xxxxxxxxx>
- boost: to download, or not to download?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- 11/23/2017 Perf meeting canceled
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Ceph Tech Talk Cancelled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Small objects in erasure-coded pool
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: presentation: making rgw's process_request() asynchronous
- From: liuchang0812 <liuchang0812@xxxxxxxxx>
- Re: presentation: making rgw's process_request() asynchronous
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: CDM for: pg log, pg info, and dup ops data storage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: Small objects in erasure-coded pool
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: CDM for: pg log, pg info, and dup ops data storage
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Small objects in erasure-coded pool
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- [GIT PULL] Ceph updates for 4.15-rc1
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Small objects in erasure-coded pool
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: "Inherit members" on redmine subprojects
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: "Inherit members" on redmine subprojects
- From: John Spray <jspray@xxxxxxxxxx>
- Re: "Inherit members" on redmine subprojects
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- "Inherit members" on redmine subprojects
- From: John Spray <jspray@xxxxxxxxxx>
- reply // about memory alignment
- From: Liuhao <liu.haoA@xxxxxxx>
- Re: Build failed in Jenkins: ceph-master #1458
- From: kefu chai <tchaikov@xxxxxxxxx>
- radosgw-admin improvements
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Build failed in Jenkins: ceph-master #1458
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: about memory alignment
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Build failed in Jenkins: ceph-master #1458
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fwd: Build failed in Jenkins: ceph-master #1458
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: MDS connection problem
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: MDS connection problem
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: MDS connection problem
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Unittest_rbd_mirror: Build failed in Jenkins: ceph-master #1448
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Errors in Ceph
- From: "Cosmin V. Miron (Cosmic Sound)" <siravecavec@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- presentation: making rgw's process_request() asynchronous
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- [PATCH v2] ceph: snapshot nfs re-export
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [ceph-users] ceph-deploy failed to deploy osd randomly
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Unittest_rbd_mirror: Build failed in Jenkins: ceph-master #1448
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- Re: MDS connection problem
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Unittest_rbd_mirror: Build failed in Jenkins: ceph-master #1448
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Unittest_rbd_mirror: Build failed in Jenkins: ceph-master #1448
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- Unittest_rbd_mirror: Build failed in Jenkins: ceph-master #1448
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Ceph Benchmark Visualization Project
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: 12.2.2 Luminous validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1445
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Ceph Benchmark Visualization Project
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1445
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- 12.2.2 Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1445
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: config on mons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Fwd: Build failed in Jenkins: ceph-master #1445
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- MDS connection problem
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Running a FreeBSD bhyve instance on a Ceph cluster
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: config on mons
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Dashboard enhancements
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph-deploy failed to deploy osd randomly
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: SMART disk monitoring
- From: Huang Zhiteng <winston.d@xxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: config on mons
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: config on mons
- From: Kyle Bader <kyle.bader@xxxxxxxxx>
- Re: config on mons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: config on mons
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: SMART disk monitoring
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: config on mons
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: config on mons
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- new wallclock profiler github repo
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: Kyle Bader <kyle.bader@xxxxxxxxx>
- Re: config on mons
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: config on mons
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- "Hybrid SMR" drive spec proposal for OCP
- From: Simon Leinen <simon.leinen@xxxxxxxxx>
- Re: decompiled crushmap device list after removing osd
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: There is no rbd_aio_write_traced function in librbd
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: SMART disk monitoring
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] rbd: default to single-major device number scheme
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- about memory alignment
- From: Liuhao <liu.haoA@xxxxxxx>
- Re: SMART disk monitoring
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- [PATCH] rbd: default to single-major device number scheme
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: SMART disk monitoring
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: config on mons
- From: John Spray <jspray@xxxxxxxxxx>
- Re: C++11, std::list::size(), and trusty
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: rgw: storing (and securing?) totp seed information
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: config on mons
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: rgw: storing (and securing?) totp seed information
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: rgw: storing (and securing?) totp seed information
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: config on mons
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: config on mons
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: SMART disk monitoring
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: SMART disk monitoring
- From: Yaarit Hatuka <yaarit@xxxxxxxxx>
- rgw: storing (and securing?) totp seed information
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: SMART disk monitoring
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- SMART disk monitoring
- From: Sage Weil <sweil@xxxxxxxxxx>
- C++11, std::list::size(), and trusty
- From: Sage Weil <sweil@xxxxxxxxxx>
- config on mons
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: Sage Weil <sage@xxxxxxxxxxxx>
- [GIT PULL] Ceph fix for 4.14-rc9
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- jenkins failing smoke.sh
- From: Amit <amitkuma@xxxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- [nfs-ganesha][PATCH] FSAL_CEPH: do an inode lookup vs. MDS when the Inode is not in cache
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- 12.2.2 Luminous ready for QE
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: question about ec partial read in a single stripe
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Osd failure detection
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: who is using nfs-ganesha and cephfs?
- From: "Supriti Singh" <Supriti.Singh@xxxxxxxx>
- Re: question about ec partial read in a single stripe
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: Osd failure detection
- From: Piotr Dałek <branch@xxxxxxxxxxxxxxxx>
- Re: Osd failure detection
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- question about ec partial read in a single stripe
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: Osd failure detection
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: who is using nfs-ganesha and cephfs?
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Osd failure detection
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- About a quantitative analysis to data reliability of Ceph
- From: Li Wang <laurence.liwang@xxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Wido den Hollander <wido@xxxxxxxx>
- 11/09/2017 perf meeting is canceled
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: [ceph-users] who is using nfs-ganesha and cephfs?
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: Redmine problem
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Redmine problem
- From: David Galloway <dgallowa@xxxxxxxxxx>
- RE: [ceph-users] who is using nfs-ganesha and cephfs?
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Re: Redmine problem
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Redmine problem
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Redmine problem
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: who is using nfs-ganesha and cephfs?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- who is using nfs-ganesha and cephfs?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: 12.2.2 status
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek <abhishek@xxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: decompiled crushmap device list after removing osd
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] rbd: use GFP_NOIO for parent stat and data requests
- From: David Disseldorp <ddiss@xxxxxxx>
- decompiled crushmap device list after removing osd
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: Fw: XFS on RBD deadlock
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fw: XFS on RBD deadlock
- From: "Brennecke, Simon" <simon.brennecke@xxxxxxx>
- Re: [PATCH] libceph: don't WARN() if user tries to add invalid key
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Erik McCormick <emccormick@xxxxxxxxxxxxxxx>
- [PATCH] libceph: don't WARN() if user tries to add invalid key
- From: Eric Biggers <ebiggers3@xxxxxxxxx>
- Re: 12.2.2 status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Does anything still use the separate ceph-qa-suite repo?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [PATCH] rbd: use GFP_NOIO for parent stat and data requests
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/2] rbd: fix and simplify rbd_ioctl_set_ro()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: kefu chai <tchaikov@xxxxxxxxx>
- why not share last_complete in record_write_error
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- python crush tools uses pre luminous health status
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: {pg_num} auto-tuning project
- From: Sage Weil <sage@xxxxxxxxxxxx>
- reply: reply: reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- {pg_num} auto-tuning project
- From: bhavishya <bhavishya@xxxxxxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Ricardo Dias <rdias@xxxxxxxx>
- Ceph Community at the OpenStack Summit Sydney 2017
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Performance questions.
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestore due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: 12.2.2 status
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Bassam Tabbara <bassam@xxxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Bassam Tabbara <bassam@xxxxxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: 12.2.2 status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: reply: reply: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- reply: reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- Re: distributed point-in-time consistency report
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Additional backport labels and process improvements
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Additional backport labels and process improvements
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: cephfs performance
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: cephfs performance
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: [PATCH v2] rbd: set discard alignment to zero
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Should set bluestore_shard_finishers as true?
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: success-comment of github pr trigger
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: some issue about peering progress
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Does anything still use the separate ceph-qa-suite repo?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- [PATCH v2] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: [PATCH] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- Does anything still use the separate ceph-qa-suite repo?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: omap and xattrs clarifications
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: some issue about peering progress
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Developers Monthly - November
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: success-comment of github pr trigger
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH] rbd: set discard alignment to zero
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: some issue about peering progress
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: success-comment of github pr trigger
- From: David Galloway <dgallowa@xxxxxxxxxx>
- [PATCH] ceph: invalidate pages that beyond EOF in ceph_writepages_start()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- success-comment of github pr trigger
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [PATCH] ceph: silence sparse endianness warning in encode_caps_cb
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: silence sparse endianness warning in encode_caps_cb
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: remove the bump of i_version
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: mds: failed to decode msg EXPORT_DIR
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: omap and xattrs clarifications
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: mds: failed to decode msg EXPORT_DIR
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: [ceph-users] Ceph @ OpenStack Sydney Summit
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: CDM for: pg log, pg info, and dup ops data storage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- [PATCH] ceph: remove the bump of i_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- CDM for: pg log, pg info, and dup ops data storage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Unable to edit CDM page
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Unable to edit CDM page
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Ceph @ OpenStack Sydney Summit
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- removed_snaps update
- From: Sage Weil <sage@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [Ceph-qa] Failed to schedule teuthology-2017-10-27_01:15:05-upgrade:hammer-x-jewel-distro-basic-vps
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- some issue about peering progress
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Fwd: [Ceph-qa] Failed to schedule teuthology-2017-10-27_01:15:05-upgrade:hammer-x-jewel-distro-basic-vps
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: 12.2.2 status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Ceph Developers Monthly - November
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Ceph Tech Talk Cancelled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: recovery scheduling
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- [GIT PULL] Ceph fix for 4.14-rc7
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: librados3
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: recovery scheduling
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: librados3
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- recovery scheduling
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Sage Weil <sweil@xxxxxxxxxx>
- announcing ceph-helm (ceph on kubernetes orchestration)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: reply: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: pg inconsistent and repair doesn't work
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- about "osd: stateful health warnings: mgr->mon"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- pg inconsistent and repair doesn't work
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- Re: fun with seastar
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Messenger V2: multiple bind support
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: fun with seastar
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: fun with seastar
- From: Haomai Wang <haomai@xxxxxxxx>
- about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- unclean pgs health warning
- From: Sage Weil <sage@xxxxxxxxxx>
- fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: multiple client read/write in cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: multiple client read/write in cephfs
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: multiple client read/write in cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- multiple client read/write in cephfs
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- mds: failed to decode msg EXPORT_DIR
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Why does messenger sends the address of himself and of the connecting peer
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Radoslaw Zarzynski <rzarzyns@xxxxxxxxxx>
- Re: luminous OSD memory usage
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: luminous OSD memory usage
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Ceph Upstream @The Pub in Prague
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: librados3
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] [filestore][journal][prepare_entry] rebuild data_align is 4086, maybe a bug
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: cephfs quotas
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: librados3
- From: Alan Somers <asomers@xxxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Alan Somers <asomers@xxxxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jan-fajerski@xxxxxxx>
- 1.chacra.ceph.com outage
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: librados3
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: librados3
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: librados3
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- "debug ms = 0/5" logging ...
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: cephfs quotas
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: cephfs quotas
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: librados3
- From: John Spray <jspray@xxxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: cephfs quotas
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Messenger V2
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Messenger V2
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Messenger V2
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: alan somers <asomers@xxxxxxxxx>
- Messenger V2
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: [PATCH] ceph: remove unused and redundant variable dropping
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Answer01
- From: 55574742@xxxxxxxxxxxxxxxxxx
- [PATCH] ceph: remove unused and redundant variable dropping
- From: Colin King <colin.king@xxxxxxxxxxxxx>
- Re: cephfs quotas
- From: John Spray <jspray@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Radoslaw Zarzynski <rzarzyns@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Wish list : automatic rebuild with hot swap osd ?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Work update related to rocksdb
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Unstable clock
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Anthony Verevkin <anthony@xxxxxxxxxxx>
- Re: Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: osd assertion failure during scrub
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Sage Weil <sage@xxxxxxxxxxxx>
- preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variable in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- osd assertion failure during scrub
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: [PATCH] net: ceph: mark expected switch fall-throughs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variable in mds_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] net: ceph: mark expected switch fall-throughs
- From: "Gustavo A. R. Silva" <garsilva@xxxxxxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: zhaomingyue <zhao.mingyue@xxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: Delete unused variable in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variables in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: mds client reconnect
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: mds client reconnect
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: some questions about ceph issues#15034 & 17379
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: major infrastructure outage
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variables in mds_client
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- [PATCH] ceph: Delete unused variables in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Re : [ceph-users] general protection fault: 0000 [#1] SMP
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: major infrastructure outage
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: call cls::journal tag_list and take osd loop infinite
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- major infrastructure outage
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- FOSDEM Call for Participation: Software Defined Storage devroom
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re : [ceph-users] general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- do we support building on rhel/centos 7.{0,1,2} ?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Messenger V2 status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Messenger V2 status
- From: Ricardo Dias <rdias@xxxxxxxx>
- general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: librados on OSX
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: librados on OSX
- From: Chris Blum <chris.blu@xxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fwd: Jenkins build is back to normal : ceph-master #1305
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>