CEPH Filesystem Development
- Re: librados on OSX
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: librados on OSX
- From: Chris Blum <chris.blu@xxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Fwd: Jenkins build is back to normal : ceph-master #1305
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: OSD crashes
- From: kefu chai <tchaikov@xxxxxxxxx>
- OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- rgw_dynamic_resharding default enabled?
- From: Andy Yao <andyzzyao@xxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Understanding some of the Cmake logics
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: librados on OSX
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] ceph-volume: migration and disk partition support
- From: Stefan Kooman <stefan@xxxxxx>
- Re: librados on OSX
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: deleting snapshots in batches?
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Multisite 3+ zones
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: kefu chai <tchaikov@xxxxxxxxx>
- deleting snapshots in batches?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Multisite 3+ zones
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- [PATCH 03/16] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: Ugis <ugis22@xxxxxxxxx>
- Re: [PATCH] ceph: Fix bool initialization/comparison
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: bug in luminous bluestore?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- bug in luminous bluestore?
- From: Ugis <ugis22@xxxxxxxxx>
- [PATCH] ceph: Fix bool initialization/comparison
- From: Thomas Meyer <thomas@xxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- v10.2.10 Jewel released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.14-rc4
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: [Ceph-maintainers] Mimic timeline
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Fwd: Build failed in Jenkins: ceph-master #1284
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Mimic timeline
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Mimic timeline
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- What is the progress of RDMA READ/WRITE?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: ceph-iscsi packages
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- ceph-iscsi packages
- From: Barclay Jameson <almightybeeij@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Multisite 3+ zones
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Ceph on ARM meeting canceled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Docs: build check failed
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: break_lock in librbd API without blacklisting client
- From: Mauricio Garavaglia <mauricio@xxxxxxxxxxxx>
- single realm with multiple zonegroups
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Multisite 3+ zones
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: break_lock in librbd API without blacklisting client
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- break_lock in librbd API without blacklisting client
- From: Mauricio Garavaglia <mauricio@xxxxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Single MDS cephx key
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Encrypted over WAN?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Docs: build check failed
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: "tobe" and "ready" in ceph-disk source
- From: Loic Dachary <ldachary@xxxxxxxxxx>
- Re: [Ceph-announce] Luminous v12.2.1 released
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Write to Secondary Zone?
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Why AsyncMessenger/AsyncConnection doesn't use support_zero_copy_read/zero_copy_read?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Why AsyncMessenger/AsyncConnection doesn't use support_zero_copy_read/zero_copy_read?
- From: Justinas LINGYS <jlingys@xxxxxxxxxxxxxx>
- Re: Encrypted over WAN?
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Encrypted over WAN?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: CEPH/BSD status
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: ec overwrite issue
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: luminous/dmcrypt/bluestore
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- luminous/dmcrypt/bluestore
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Write to Secondary Zone?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- ec overwrite issue
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: CEPH/BSD status
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- OpenStack Sydney Forum - Ceph BoF proposal
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: CephFS HA support network appliance
- From: Sage Weil <sweil@xxxxxxxxxx>
- Luminous v12.2.1 released
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Joao Eduardo Luis <joao@xxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [ceph-users] Ceph Developers Monthly - October
- From: Shinobu Kinjo <skinjo@xxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Ceph Tech Talk - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Strange behavior of OSD after an IO error
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: Single MDS cephx key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- Re: Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Strange behavior of OSD after an IO error
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Single MDS cephx key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: another inconsistent pg issue
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Single MDS cephx key
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: another inconsistent pg issue
- From: David Zafman <dzafman@xxxxxxxxxx>
- another inconsistent pg issue
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent pg will not repair
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent pg will not repair
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- inconsistent pg will not repair
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: lingering caps outstanding after client shutdown?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- lingering caps outstanding after client shutdown?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Docs: build check failed
- From: John Spray <jspray@xxxxxxxxxx>
- Docs: build check failed
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Osds shift within Placement group
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Sage Weil <sweil@xxxxxxxxxx>
- help fixing inconsistent pg
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Is anyone aware of bluestor exploding all the time ?
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Is anyone aware of bluestor exploding all the time ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Is anyone aware of bluestor exploding all the time ?
- From: Tomasz Kusmierz <tom.kusmierz@xxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Osds shift within Placement group
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- migrated ceph disk wont start
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] OSD memory usage
- From: Sage Weil <sweil@xxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.14-rc2
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- "tobe" and "ready" in ceph-disk source
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Mustafa Muhammad <mustafa1024m@xxxxxxxxx>
- Re: ceph v10.2.10 QE validation status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- ceph v10.2.10 QE validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: stuck recovery for many days, help needed
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: inconsistent file issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- stuck recovery for many days, help needed
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: inconsistent file issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Luminous OSD high mem usage cause OS die
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
- inconsistent file issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- fix for "crash in rocksdb LRUCache destructor with tcmalloc"
- From: kefu chai <tchaikov@xxxxxxxxx>
- Jewel v10.2.10 ready for QE
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 QE validation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Status of luminous v12.2.1 QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: [PATCH] libceph: don't allow bidirectional swap of pg-upmap-items
- From: Sage Weil <sage@xxxxxxxxxxxx>
- [PATCH] libceph: don't allow bidirectional swap of pg-upmap-items
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OSD crashes (10.2.9)
- From: Nathan Cutler <ncutler@xxxxxxx>
- OSD crashes (10.2.9)
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Bluestore aio_nr?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- CephFS Segfault 12.2.0
- From: Derek Yarnell <derek@xxxxxxxxxxxxxx>
- Re: [ceph-users] CephFS Segfault 12.2.0
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- OSD crashes
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Bluestore aio_nr?
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Time to drop 11429.yaml from jewel?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Bluestore aio_nr?
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Sepia CentOS test nodes now on 7.4
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Time to drop 11429.yaml from jewel?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [ceph-users] RBD: How many snapshots is too many?
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: There is a big risk in function bufferlist::claim_prepend()
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: There is a big risk in function bufferlist::claim_prepend()
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: Jan Kara <jack@xxxxxxx>
- Re: [f2fs-dev] [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Jan Kara <jack@xxxxxxx>
- [ceph/ceph] librados: Fix a potential risk of buffer::list::claim_prepend(list& b… (#17661)
- From: 关云飞 <gyfelectric@xxxxxxxxx>
- Re: Ceph RDMA Memory Leakage
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Ceph RDMA Memory Leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Ceph RDMA module memory leakage
- From: Jin Cai <caijin.laurence@xxxxxxxxx>
- Re: [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: REST APIs
- From: John Spray <jspray@xxxxxxxxxx>
- Re: REST APIs
- From: Lenz Grimmer <lenz@xxxxxxxxxxx>
- Re: REST APIs
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Henrique Fingler <hfingler@xxxxxxxxxxxxx>
- Re: How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Sage Weil <sage@xxxxxxxxxxxx>
- How do I "install" from source? Service binaries (and /etc/ceph) are missing after make install
- From: Henrique Fingler <hfingler@xxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: snapshots
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: ceph-osd crash
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: snapshots
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: snapshots
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: CephFS HA support network appliance
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] mon health status gone from display
- From: John Spray <jspray@xxxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- mon health status gone from display
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Geographic disperse Ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: snapshots
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: snapshots
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Geographic disperse Ceph
- From: Sage Weil <sweil@xxxxxxxxxx>
- snapshots
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Geographic disperse Ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Mixed versions of cluster and clients
- From: Mike A <mike.almateia@xxxxxxxxx>
- Re: Mixed versions of cluster and clients
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Mixed versions of cluster and clients
- From: Mike A <mike.almateia@xxxxxxxxx>
- Re: [f2fs-dev] [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: [f2fs-dev] [PATCH 06/15] f2fs: Simplify page iteration loops
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: [f2fs-dev] [PATCH 05/15] f2fs: Use pagevec_lookup_range_tag()
- From: Chao Yu <chao@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Ugis <ugis22@xxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: rocksdb fails to build with gcc 7.1.1
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- rocksdb fails to build with gcc 7.1.1
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: what I did to fix the damaged
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: John Spray <jspray@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: undefined references in luminous for librados-devel
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: request improve online mds help
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: file in one file system is a directory in ceph
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- file in one file system is a directory in ceph
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- request improve online mds help
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- undefined references in luminous for librados-devel
- From: Noah Watkins <noahwatkins@xxxxxxxxx>
- Re: what I did to fix the damaged
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: [PATCH 02/15] btrfs: Use pagevec_lookup_range_tag()
- From: David Sterba <dsterba@xxxxxxx>
- Re: REST APIs
- From: Boris Ranto <branto@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH 04/15] ext4: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 02/15] btrfs: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 03/15] ceph: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 05/15] f2fs: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 07/15] f2fs: Use find_get_pages_tag() for looking up single page
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 0/15 v1] Ranged pagevec tagged lookup
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 11/15] mm: Use pagevec_lookup_range_tag() in write_cache_pages()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 06/15] f2fs: Simplify page iteration loops
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 13/15] ceph: Use pagevec_lookup_range_nr_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 12/15] mm: Add variant of pagevec_lookup_range_tag() taking number of pages
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 14/15] mm: Remove nr_pages argument from pagevec_lookup_{,range}_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 08/15] gfs2: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 01/15] mm: Implement find_get_pages_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 15/15] afs: Use find_get_pages_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 09/15] nilfs2: Use pagevec_lookup_range_tag()
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 10/15] mm: Use pagevec_lookup_range_tag() in __filemap_fdatawait_range()
- From: Jan Kara <jack@xxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: which mds server is damaged?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- which mds server is damaged?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Ceph Mentors for next Outreachy Round
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: size of testing lab
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: recovery priority preemption
- From: Piotr Dałek <branch@xxxxxxxxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: dmcrypt?
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Status of luminous v12.2.1 integration branch
- From: Sage Weil <sweil@xxxxxxxxxx>
- Status of luminous v12.2.1 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: should CephContext be a singleton?
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- recovery priority preemption
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: dmcrypt?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- should CephContext be a singleton?
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- REST APIs
- From: John Spray <jspray@xxxxxxxxxx>
- Re: dmcrypt?
- From: Sage Weil <sweil@xxxxxxxxxx>
- [PATCH RESEND] devices: recognise rbd devices
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: clearing unfound objects
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- dmcrypt?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: XFS kernel errors bringing up OSD
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: clearing unfound objects
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: ceph-osd crash
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: On monitor development, bugs, and reviews
- From: John Spray <jspray@xxxxxxxxxx>
- clearing unfound objects
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: On monitor development, bugs, and reviews
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: On monitor development, bugs, and reviews
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Memory. 100TB OSD?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Ceph Developers Monthly - September
- From: Joao Eduardo Luis <joao@xxxxxxx>
- On monitor development, bugs, and reviews
- From: Joao Eduardo Luis <joao@xxxxxxx>
- [GIT PULL] Ceph updates for 4.14-rc1
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph Developers Monthly - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- XFS kernel errors bringing up OSD
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 5/5] ceph: avoid null pointer derefernece in case of utsname() return NULL
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 4/5] ceph: handle 'session get evicted while there are file locks'
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 3/5] ceph: optimize flock encoding during reconnect
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH 2/5] ceph: make lock_to_ceph_filelock() 'static'
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- [PATCH 5/5] ceph: avoid null pointer derefernece in case of utsname() return NULL
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 4/5] ceph: handle 'session get evicted while there are file locks'
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 3/5] ceph: optimize flock encoding during reconnect
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 2/5] ceph: make lock_to_ceph_filelock() 'static'
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/5] ceph: keep auth cap when inode has flocks or posix locks
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 0/5] ceph: file lock related fixes
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: LRC has slower recovery than Jerasure
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph release cadence
- From: John Spray <jspray@xxxxxxxxxx>
- Re: LRC has slower recovery than Jerasure
- From: Oleg Kolosov <olekol@xxxxxxxxx>
- Re: LRC has slower recovery than Jerasure
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph release cadence
- From: Nathan Cutler <ncutler@xxxxxxx>
- LRC has slower recovery than Jerasure
- From: Oleg Kolosov <olekol@xxxxxxxxx>
- Re: [ceph-client:testing 3/5] fs/ceph/mds_client.c:2921:9-15: ERROR: reference preceded by free on line 2915 (fwd)
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [ceph-client:testing 3/5] fs/ceph/mds_client.c:2921:9-15: ERROR: reference preceded by free on line 2915 (fwd)
- From: Julia Lawall <julia.lawall@xxxxxxx>
- Re: [ceph-users] [Ceph-maintainers] Ceph release cadence
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Alexander Litvak <alexander.v.litvak@xxxxxxxxx>
- Re: Ceph release cadence
- From: Christian Theune <ct@xxxxxxxxxxxxxxx>
- Re: [ceph-users] librados for MacOS
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Re: [Ceph-maintainers] Ceph release cadence
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [Ceph-maintainers] Ceph release cadence
- From: Bassam Tabbara <bassam@xxxxxxxxxxx>
- Re: [Ceph-maintainers] Ceph release cadence
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: very slow backfill on Luminous + Bluestore
- From: Sage Weil <sage@xxxxxxxxxxxx>
- very slow backfill on Luminous + Bluestore
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: [RFC PATCH 2/3] ceph: quotas: support for ceph.quota.max_files
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: [RFC PATCH 2/3] ceph: quotas: support for ceph.quota.max_files
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- RE: [ceph-users] Ceph release cadence
- From: Deepak Naidu <dnaidu@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Are we ready for Jewel v10.2.10?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Are we ready for Jewel v10.2.10?
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: payload of MPing
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [RFC PATCH 2/3] ceph: quotas: support for ceph.quota.max_files
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: OSD behaviour when an i/o error occurs
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: [Ceph-maintainers] Ceph release cadence
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: Are we ready for Jewel v10.2.10?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Are we ready for Jewel v10.2.10?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: payload of MPing
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Henrik Korkuc <lists@xxxxxxxxx>
- RE: Ceph release cadence
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Henrik Korkuc <lists@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- payload of MPing
- From: kefu chai <tchaikov@xxxxxxxxx>
- ceph-osd crash
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- distributed point-in-time consistency report
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: [ceph-users] Ceph release cadence
- From: Deepak Naidu <dnaidu@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Ceph release cadence
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: build-integration-branch
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: OSD behaviour when an i/o error occurs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Ceph release cadence
- From: Kingsley Tart <ceph@xxxxxxxxxxx>
- Re: Mimic planning: Wed
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- OSD behaviour when an i/o error occurs
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- RE: [ceph-users] Ceph release cadence
- From: Bryan Banister <bbanister@xxxxxxxxxxxxxxx>
- Ceph release cadence
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] Ceph Developers Monthly - September
- From: Haomai Wang <haomai@xxxxxxxx>
- [RFC PATCH 1/3] ceph: quota: add initial infrastructure to support cephfs quotas
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [RFC PATCH 3/3] ceph: quota: don't allow cross-quota renames
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [RFC PATCH 2/3] ceph: quotas: support for ceph.quota.max_files
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [RFC PATCH 0/3] ceph: kernel client cephfs quota support
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: send more reads on recovery
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Mimic planning: Wed
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: admin_socket question
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph Developers Monthly - September
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: send more reads on recovery
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: Mimic planning: Wed
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Mimic planning: Wed
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Mimic planning: Wed
- From: Joao Eduardo Luis <joao@xxxxxxx>
- admin_socket question
- From: Paul Cuzner <pcuzner@xxxxxxxxxx>
- Re: Ceph Developers Monthly - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: auth: assert(ckh)
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: auth: assert(ckh)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: https://github.com/ceph/rocksdb
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: auth: assert(ckh)
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: [ceph-users] a question about use of CEPH_IOC_SYNCIO in write
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Ceph Mentors for next Outreachy Round
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: auth: assert(ckh)
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: build-integration-branch
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: https://github.com/ceph/rocksdb
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- auth: assert(ckh)
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: build-integration-branch
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Sage Weil <sweil@xxxxxxxxxx>
- https://github.com/ceph/rocksdb
- From: Amit <amitkuma@xxxxxxxxxx>
- Ceph on ARM meeting cancelled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: build-integration-branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Paweł Sadowski <pawel@xxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Paweł Sadowski <ceph@xxxxxxxxx>
- Re: build-integration-branch
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: luminous filesystem is degraded
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: hammer PRs
- From: kefu chai <tchaikov@xxxxxxxxx>
- send more reads on recovery
- From: Linux Chips <linux.chips@xxxxxxxxx>
- [PATCH 13/13] ceph: wait on writeback after writing snapshot data
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 12/13] ceph: fix capsnap dirty pages accounting
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 11/13] ceph: ignore wbc->range_{start,end} when write back snapshot data
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 10/13] ceph: fix "range cyclic" mode writepages
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 07/13] ceph: make writepage_nounlock() invalidate page that beyonds EOF
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 08/13] ceph: optimize pagevec iterating in ceph_writepages_start()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 09/13] ceph: cleanup local varibles in ceph_writepages_start()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 06/13] ceph: properly get capsnap's size in get_oldest_context()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 05/13] ceph: remove stale check in ceph_invalidatepage()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 04/13] ceph: queue cap snap only when snap realm's context changes
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 03/13] ceph: handle race between vmtruncate and queuing cap snap
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 02/13] ceph: fix message order check in handle_cap_export()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 01/13] ceph: fix null pointer dereference in ceph_flush_snaps()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 00/13] ceph: snapshot and multimds fixes
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: hammer PRs
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: luminous filesystem is degraded
- From: John Spray <jspray@xxxxxxxxxx>
- Re: hammer PRs
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Memory. 100TB OSD?
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Re: Memory. 100TB OSD?
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: ceph-osd fails to start - crash log
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: hammer PRs
- From: Nathan Cutler <ncutler@xxxxxxx>
- luminous filesystem is degraded
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- FreeBSSD: [Bug 221997] net/ceph: Luminous (12.2.0) release for Ceph
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- hammer PRs
- From: kefu chai <tchaikov@xxxxxxxxx>
- Feature Request ceph -s recovery and resync estimated completion times
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- "Unhandled exception in thread started by" ceph-deploy admin
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: [ceph-users] use and benifits of CEPH_IOC_SYNCIO flag
- From: Mark Meyers <MarkMeyers.MMY@xxxxxxxxx>
- use and benifits of CEPH_IOC_SYNCIO flag
- From: Mark Meyers <MarkMeyers.MMY@xxxxxxxxx>
- Re: Rados bench with a failed node
- From: Oleg Kolosov <olekol@xxxxxxxxx>
- Re: Rados bench with a failed node
- From: Oleg Kolosov <olekol@xxxxxxxxx>
- Re: ceph-disk triggers XFS kernel bug?
- From: Christian Theune <ct@xxxxxxxxxxxxxxx>
- RE: v12.2.0 Luminous released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- RE: v12.2.0 Luminous released
- From: "Felix, Evan J" <Evan.Felix@xxxxxxxx>
- ceph-osd fails to start - crash log
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: ceph-mon leader election problem, should it be improved ?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- [GIT PULL] Ceph fix for 4.13-rc8
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: ceph-disk triggers XFS kernel bug?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- ceph-disk triggers XFS kernel bug?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- a question about use of CEPH_IOC_SYNCIO in write
- From: sa514164@xxxxxxxxxxxxxxxx
- why mds sends a caps message of "zero" inode max size to client when finishing "open a new created file" ?
- From: Meyers Mark <markmeyers.mmy@xxxxxxxxx>
- Re: luminous OSD memory usage
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: luminous OSD memory usage
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: Contributor credits for v12.2.0 Luminous
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Contributor credits for v12.2.0 Luminous
- From: Abhishek <abhishek@xxxxxxxx>
- Re: Contributor credits for v12.2.0 Luminous
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- RE: [ceph-users] v12.2.0 Luminous released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: luminous OSD memory usage
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Bluestore memory usage on our test cluster
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: a metadata lost problem when mds breaks down
- From: Sage Weil <sage@xxxxxxxxxxxx>
- a metadata lost problem when mds breaks down
- From: Mark Meyers <MarkMeyers.MMY@xxxxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [PATCH] devices: recognise rbd devices
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: cephfs kernel bug (4.9.44)?
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- cephfs kernel bug (4.9.44)?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: YuShengzuo <yu.shengzuo@xxxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: shasha lu <lushasha08@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Bluestore memory usage on our test cluster
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Bluestore memory usage on our test cluster
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: John Spray <jspray@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- build-integration-branch
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Jewel v10.2.10, anyone?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Jewel v10.2.10, anyone?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Independent instances of rgw
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Independent instances of rgw
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Bluestore memory usage on our test cluster
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- RE: [ceph-users] v12.2.0 Luminous released
- From: Deepak Naidu <dnaidu@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: John Spray <jspray@xxxxxxxxxx>
- Memory. 100TB OSD?
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: luminous OSD memory usage
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: [ceph-users] v12.2.0 Luminous released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: a question about “lease issued to client” in ceph mds
- From: Sage Weil <sage@xxxxxxxxxxxx>
- A question about "lease issued to client" in ceph mds
- From: sa514164@xxxxxxxxxxxxxxxx
- a question about “lease issued to client” in ceph mds
- From: Meyers Mark <markmeyers.mmy@xxxxxxxxx>
- luminous OSD memory usage
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Contributor credits for v12.2.0 Luminous
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: [ceph-users] v12.2.0 Luminous released
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: [ceph-users] v12.2.0 Luminous released
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Commit messages + labels for mgr modules
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: v12.2.0 Luminous released
- From: kefu chai <tchaikov@xxxxxxxxx>
- Ceph Developers Monthly - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: v12.2.0 Luminous released
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: question about bluefs log sync
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Mustafa Muhammad <mustafa1024m@xxxxxxxxx>
- v12.2.0 Luminous released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: New ceph/ceph-helm repo?
- From: John Spray <jspray@xxxxxxxxxx>
- compiling with Clang 5.0
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: New ceph/ceph-helm repo?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- New ceph/ceph-helm repo?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Building Ceph in Docker
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: question about bluefs log sync
- From: Sage Weil <sweil@xxxxxxxxxx>
- Commit messages + labels for mgr modules
- From: John Spray <jspray@xxxxxxxxxx>
- Re: question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: question about bluefs log sync
- From: Sage Weil <sweil@xxxxxxxxxx>
- question about bluefs log sync
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: question on bluestore wal io behavior
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: Building Ceph in Docker
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Mustafa Muhammad <mustafa1024m@xxxxxxxxx>
- Re: Where to find the CDM recording?
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Where to find the CDM recording?
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Building Ceph in Docker
- From: Mingliang LIU <mingliang.liu@xxxxxxxxxxxxxx>
- Re: osd fails to start, cannot mount the journal
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Mimic planning: Wed
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: question on bluestore wal io behavior
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: feature request to "ceph osd status"
- From: John Spray <jspray@xxxxxxxxxx>
- question on bluestore wal io behavior
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Fwd: State of play for RDMA on Luminous
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: osd pg logs
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- feature request to "ceph osd status"
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: Matching shard to crush bucket in erasure coding
- From: Oleg Kolosov <olekol@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Linux Chips <linux.chips@xxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- osd pg logs
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Bug? luminous pkg missing ceph-osd ceph-mon 32bit
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: docs build check - necessary for Jewel?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: docs build check - necessary for Jewel?
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- osd fails to start, cannot mount the journal
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Re: Bug#19994
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Bug#19994
- From: John Spray <jspray@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: increasingly large packages and longer build times
- From: kefu chai <tchaikov@xxxxxxxxx>
- docs build check - necessary for Jewel?
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Bug#19994
- From: Two Spirit <twospirit6905@xxxxxxxxx>
- Re: High memory usage kills OSD while peering
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Bluestore IO latency is little in OSD latency
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Backport
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Backport
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Backport
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Backport
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Backport
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
- Backport
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Ceph Tech Talk Cancelled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: [Ceph-maintainers] [ceph-users] v12.1.4 Luminous (RC) released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Object Size distributions
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Extensive attributes not getting copied when flushing HEAD objects from cache pool to base pool.
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: needs-backport label on github/ceph/ceph
- From: Abhishek Lekshmanan <alekshmanan@xxxxxxx>