CEPH Filesystem Development
- decompiled crushmap device list after removing osd
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: Fw: XFS on RBD deadlock
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fw: XFS on RBD deadlock
- From: "Brennecke, Simon" <simon.brennecke@xxxxxxx>
- Re: [PATCH] libceph: don't WARN() if user tries to add invalid key
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] removing cluster name support
- From: Erik McCormick <emccormick@xxxxxxxxxxxxxxx>
- [PATCH] libceph: don't WARN() if user tries to add invalid key
- From: Eric Biggers <ebiggers3@xxxxxxxxx>
- Re: 12.2.2 status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Does anything still use the separate ceph-qa-suite repo?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [PATCH] rbd: use GFP_NOIO for parent stat and data requests
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 2/2] rbd: get rid of rbd_mapping::read_only
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/2] rbd: fix and simplify rbd_ioctl_set_ro()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: kefu chai <tchaikov@xxxxxxxxx>
- why not share last_complete in record_write_error
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- python crush tools uses pre luminous health status
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: {pg_num} auto-tuning project
- From: Sage Weil <sage@xxxxxxxxxxxx>
- reply: reply: reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- {pg_num} auto-tuning project
- From: bhavishya <bhavishya@xxxxxxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Ricardo Dias <rdias@xxxxxxxx>
- Ceph Community at the OpenStack Summit Sydney 2017
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Performance questions.
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: 12.2.2 status
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #1408
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Bassam Tabbara <bassam@xxxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Bassam Tabbara <bassam@xxxxxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: 12.2.2 status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: reply: reply: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- reply: reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- Re: distributed point-in-time consistency report
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Additional backport labels and process improvements
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Additional backport labels and process improvements
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Additional backport labels and process improvements
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: cephfs performance
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: cephfs performance
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: [PATCH v2] rbd: set discard alignment to zero
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Should set bluestore_shard_finishers as true?
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: success-comment of github pr trigger
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: some issue about peering progress
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Does anything still use the separate ceph-qa-suite repo?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- [PATCH v2] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: [PATCH] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- Does anything still use the separate ceph-qa-suite repo?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: omap and xattrs clarifications
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: some issue about peering progress
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Developers Monthly - November
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: success-comment of github pr trigger
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH] rbd: set discard alignment to zero
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: some issue about peering progress
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: success-comment of github pr trigger
- From: David Galloway <dgallowa@xxxxxxxxxx>
- [PATCH] ceph: invalidate pages that beyond EOF in ceph_writepages_start()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] rbd: set discard alignment to zero
- From: David Disseldorp <ddiss@xxxxxxx>
- success-comment of github pr trigger
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [PATCH] ceph: silence sparse endianness warning in encode_caps_cb
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: silence sparse endianness warning in encode_caps_cb
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: remove the bump of i_version
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: mds: failed to decode msg EXPORT_DIR
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: omap and xattrs clarifications
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: mds: failed to decode msg EXPORT_DIR
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: [ceph-users] Ceph @ OpenStack Sydney Summit
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: CDM for: pg log, pg info, and dup ops data storage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- [PATCH] ceph: remove the bump of i_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- CDM for: pg log, pg info, and dup ops data storage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Unable to edit CDM page
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: 12.2.2 status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Unable to edit CDM page
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Ceph @ OpenStack Sydney Summit
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- removed_snaps update
- From: Sage Weil <sage@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [Ceph-qa] Failed to schedule teuthology-2017-10-27_01:15:05-upgrade:hammer-x-jewel-distro-basic-vps
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- some issue about peering progress
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Fwd: [Ceph-qa] Failed to schedule teuthology-2017-10-27_01:15:05-upgrade:hammer-x-jewel-distro-basic-vps
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Karol Mroz <kmroz@xxxxxxx>
- Re: 12.2.2 status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: 12.2.2 status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- 12.2.2 status
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Ceph Developers Monthly - November
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Ceph Tech Talk Cancelled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: recovery scheduling
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- [GIT PULL] Ceph fix for 4.14-rc7
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: librados3
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: recovery scheduling
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: librados3
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- recovery scheduling
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
- From: Sage Weil <sweil@xxxxxxxxxx>
- announcing ceph-helm (ceph on kubernetes orchestration)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Messenger V2: multiple bind support
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: about "osd: stateful health warnings: mgr->mon"
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm snap on image with exclusive lock
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Why does Erasure-pool not support omap?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: reply: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: pg inconsistent and repair doesn't work
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- about "osd: stateful health warnings: mgr->mon"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- rbd rm snap on image with exclusive lock
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- reply: about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- pg inconsistent and repair doesn't work
- From: Wei Jin <wjin.cn@xxxxxxxxx>
- Re: fun with seastar
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Messenger V2: multiple bind support
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: fun with seastar
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: about filestore->journal->rebuild_align
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: fun with seastar
- From: Haomai Wang <haomai@xxxxxxxx>
- about filestore->journal->rebuild_align
- From: Liuhao <liu.haoA@xxxxxxx>
- unclean pgs health warning
- From: Sage Weil <sage@xxxxxxxxxx>
- fun with seastar
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: present consistent fsid, regardless of arch endianness
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: multiple client read/write in cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: multiple client read/write in cephfs
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: multiple client read/write in cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- multiple client read/write in cephfs
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- mds: failed to decode msg EXPORT_DIR
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Why does messenger sends the address of himself and of the connecting peer
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Why does messenger sends the address of himself and of the connecting peer
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Radoslaw Zarzynski <rzarzyns@xxxxxxxxxx>
- Re: luminous OSD memory usage
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: luminous OSD memory usage
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Huge lookup when recursively mkdir
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Ceph Upstream @The Pub in Prague
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: librados3
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] [filestore][journal][prepare_entry] rebuild data_align is 4086, maybe a bug
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: cephfs quotas
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Huge lookup when recursively mkdir
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: librados3
- From: Alan Somers <asomers@xxxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Alan Somers <asomers@xxxxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jan-fajerski@xxxxxxx>
- 1.chacra.ceph.com outage
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: clean up spinlocking and list handling around cleanup_cap_releases
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: unlock dangling spinlock in try_flush_caps
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: librados3
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados3
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: "debug ms = 0/5" logging ...
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: librados3
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: librados3
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- "debug ms = 0/5" logging ...
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: cephfs quotas
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: cephfs quotas
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: librados3
- From: John Spray <jspray@xxxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- librados3
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: cephfs quotas
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Messenger V2
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Messenger V2
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Messenger V2
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: alan somers <asomers@xxxxxxxxx>
- Messenger V2
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: [PATCH] ceph: remove unused and redundant variable dropping
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Answer01
- From: 55574742@xxxxxxxxxxxxxxxxxx
- [PATCH] ceph: remove unused and redundant variable dropping
- From: Colin King <colin.king@xxxxxxxxxxxxx>
- Re: cephfs quotas
- From: John Spray <jspray@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- cephfs quotas
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Radoslaw Zarzynski <rzarzyns@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Wish list : automatic rebuild with hot swap osd ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Wish list : automatic rebuild with hot swap osd ?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Work update related to rocksdb
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Unstable clock
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Work update related to rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Anthony Verevkin <anthony@xxxxxxxxxxx>
- Re: Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: osd assertion failure during scrub
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: preparing a bluestore OSD fails with no (useful) output
- From: Sage Weil <sage@xxxxxxxxxxxx>
- preparing a bluestore OSD fails with no (useful) output
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variable in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Luminous: osd_crush_location_hook renamed to crush_location_hook
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Work update related to rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Work update related to rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- osd assertion failure during scrub
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: [PATCH] net: ceph: mark expected switch fall-throughs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variable in mds_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] net: ceph: mark expected switch fall-throughs
- From: "Gustavo A. R. Silva" <garsilva@xxxxxxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: zhaomingyue <zhao.mingyue@xxxxxxx>
- Re: CephFS: Jewel release: kernel panic seen while unmounting. Known Issue?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: Delete unused variable in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variables in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: mds client reconnect
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: mds client reconnect
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: some questions about ceph issues#15034 & 17379
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: mds client reconnect
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: major infrastructure outage
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [PATCH] ceph: Delete unused variables in mds_client
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- [PATCH] ceph: Delete unused variables in mds_client
- From: Christos Gkekas <chris.gekas@xxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: do we support building on rhel/centos 7.{0,1,2} ?
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Re : [ceph-users] general protection fault: 0000 [#1] SMP
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: major infrastructure outage
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: call cls::journal tag_list and take osd loop infinite
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- major infrastructure outage
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- FOSDEM Call for Participation: Software Defined Storage devroom
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re : [ceph-users] general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- do we support building on rhel/centos 7.{0,1,2} ?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: removed_snaps
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- removed_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Messenger V2 status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Messenger V2 status
- From: Ricardo Dias <rdias@xxxxxxxx>
- general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: librados on OSX
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: librados on OSX
- From: Chris Blum <chris.blu@xxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fwd: Jenkins build is back to normal : ceph-master #1305
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: OSD crashes
- From: kefu chai <tchaikov@xxxxxxxxx>
- OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- rgw_dynamic_resharding default enabled?
- From: Andy Yao <andyzzyao@xxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: librados on OSX
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Stefan Kooman <stefan@xxxxxx>
- Re: librados on OSX
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Multisite 3+ zones
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: kefu chai <tchaikov@xxxxxxxxx>
- deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Multisite 3+ zones
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- [PATCH 03/16] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: Ugis <ugis22@xxxxxxxxx>
- Re: [PATCH] ceph: Fix bool initialization/comparison
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- bug in luminous bluestore?
- From: Ugis <ugis22@xxxxxxxxx>
- [PATCH] ceph: Fix bool initialization/comparison
- From: Thomas Meyer <thomas@xxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- v10.2.10 Jewel released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.14-rc4
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: [Ceph-maintainers] Mimic timeline
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Fwd: Build failed in Jenkins: ceph-master #1284
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Mimic timeline
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Mimic timeline
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- What is the progress of RDMA READ/WRITE?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: ceph-iscsi packages
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- ceph-iscsi packages
- From: Barclay Jameson <almightybeeij@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Multisite 3+ zones
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Ceph on ARM meeting canceled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Docs: build check failed
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: break_lock in librbd API without blacklisting client
- From: Mauricio Garavaglia <mauricio@xxxxxxxxxxxx>
- single realm with multiple zonegroups
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Multisite 3+ zones
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: break_lock in librbd API without blacklisting client
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- break_lock in librbd API without blacklisting client
- From: Mauricio Garavaglia <mauricio@xxxxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Single MDS cephx key
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Encrypted over WAN?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Docs: build check failed
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: "tobe" and "ready" in ceph-disk source
- From: Loic Dachary <ldachary@xxxxxxxxxx>
- Re: [Ceph-announce] Luminous v12.2.1 released
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Why AsyncMessenger/AsyncConnection doesn't use support_zero_copy_read/zero_copy_read?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Why AsyncMessenger/AsyncConnection doesn't use support_zero_copy_read/zero_copy_read?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: CEPH/BSD status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: luminous/dmcrypt/bluestore
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- luminous/dmcrypt/bluestore
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Write to Secondary Zone?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: CEPH/BSD status
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- OpenStack Sydney Forum - Ceph BoF proposal
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: CephFS HA support network appliance
- From: Sage Weil <sweil@xxxxxxxxxx>
- Luminous v12.2.1 released
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Joao Eduardo Luis <joao@xxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [ceph-users] Ceph Developers Monthly - October
- From: Shinobu Kinjo <skinjo@xxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Ceph Tech Talk - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Strange behavior of OSD after an IO error
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: Single MDS cephx key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Strange behavior of OSD after an IO error
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: another inconsistent pg issue
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: another inconsistent pg issue
- From: David Zafman <dzafman@xxxxxxxxxx>
- another inconsistent pg issue
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent pg will not repair
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent pg will not repair
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- inconsistent pg will not repair
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Docs: build check failed
- From: John Spray <jspray@xxxxxxxxxx>
- Docs: build check failed
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Osds shift within Placement group
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Sage Weil <sweil@xxxxxxxxxx>
- help fixing inconsistent pg
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Is anyone aware of bluestor exploding all the time ?
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Is anyone aware of bluestor exploding all the time ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Is anyone aware of bluestor exploding all the time ?
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Osds shift within Placement group
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- migrated ceph disk wont start
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] OSD memory usage
- From: Sage Weil <sweil@xxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.14-rc2
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- "tobe" and "ready" in ceph-disk source
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Mustafa Muhammad <mustafa1024m@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: inconsistent file issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- stuck recovery for many days, help needed
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Luminous OSD high mem usage cause OS die
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
- inconsistent file issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Jewel v10.2.10 ready for QE
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: [PATCH] libceph: don't allow bidirectional swap of pg-upmap-items
- From: Sage Weil <sage@xxxxxxxxxxxx>
- [PATCH] libceph: don't allow bidirectional swap of pg-upmap-items
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Nathan Cutler <ncutler@xxxxxxx>
- OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Bluestore aio_nr?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- CephFS Segfault 12.2.0
- From: Derek Yarnell <derek@xxxxxxxxxxxxxx>
- Re: [ceph-users] CephFS Segfault 12.2.0
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Bluestore aio_nr?
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Time to drop 11429.yaml from jewel?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Bluestore aio_nr?
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Sepia CentOS test nodes now on 7.4
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Time to drop 11429.yaml from jewel?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [ceph-users] RBD: How many snapshots is too many?
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: There is a big risk in function bufferlist::claim_prepend()
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: There is a big risk in function bufferlist::claim_prepend()
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: Jan Kara <jack@xxxxxxx>
- Re: [f2fs-dev] [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Jan Kara <jack@xxxxxxx>
- [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Ceph RDMA Memory Leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Ceph RDMA module memory leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Re: [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: REST APIs
- From: John Spray <jspray@xxxxxxxxxx>
- Re: REST APIs
- From: Lenz Grimmer <lenz@xxxxxxxxxxx>
- Re: REST APIs
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Henrique Fingler <hfingler@xxxxxxxxxxxxx>
- Re: How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Sage Weil <sage@xxxxxxxxxxxx>
- How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Henrique Fingler <hfingler@xxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: snapshots
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: ceph-osd crash
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: snapshots
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: snapshots
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: CephFS HA support network appliance
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: John Spray <jspray@xxxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Geographic disperse Ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: snapshots
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Geographic disperse Ceph
- From: Sage Weil <sweil@xxxxxxxxxx>
- snapshots
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Geographic disperse Ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Mixed versions of cluster and clients
- From: Mike A <mike.almateia@xxxxxxxxx>
- Re: Mixed versions of cluster and clients
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Mixed versions of cluster and clients
- From: Mike A <mike.almateia@xxxxxxxxx>
- Re: [f2fs-dev] [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: [f2fs-dev] [PATCH 06/15] f2fs: Simplify page iteration loops
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: [f2fs-dev] [PATCH 05/15] f2fs: Use pagevec_lookup_range_tag()
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Ugis <ugis22@xxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: rocksdb fails to build with gcc 7.1.1
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- rocksdb fails to build with gcc 7.1.1
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: what I did to fix the damaged
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: John Spray <jspray@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: request improve online mds help
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- file in one file system is a directory in ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- request improve online mds help
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: what I did to fix the damaged
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: [PATCH 02/15] btrfs: Use pagevec_lookup_range_tag()
- From: David Sterba <dsterba@xxxxxxx>
- Re: REST APIs
- From: Boris Ranto <branto@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH 04/15] ext4: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 02/15] btrfs: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 05/15] f2fs: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 0/15 v1] Ranged pagevec tagged lookup
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 11/15] mm: Use pagevec_lookup_range_tag() in write_cache_pages()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 06/15] f2fs: Simplify page iteration loops
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 12/15] mm: Add variant of pagevec_lookup_range_tag() taking number of pages
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 14/15] mm: Remove nr_pages argument from pagevec_lookup_{,range}_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 08/15] gfs2: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 01/15] mm: Implement find_get_pages_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 15/15] afs: Use find_get_pages_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 09/15] nilfs2: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 10/15] mm: Use pagevec_lookup_range_tag() in __filemap_fdatawait_range()
- From: Jan Kara <jack@xxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Ceph Mentors for next Outreachy Round
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: size of testing lab
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: recovery priority preemption
- From: Piotr Dałek <branch@xxxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: dmcrypt?
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Sage Weil <sweil@xxxxxxxxxx>
- Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Sage Weil <sage@xxxxxxxxxxxx>