CEPH Filesystem Development
- Re: Heads-up: possible Jewel/Kraken RBD compatibility issue that might impact users doing rolling upgrades
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Fwd: GSOC on ceph-mgr:Cluster Status Dashboard
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- Question regarding struct ceph_timestamp
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Heads-up: possible Jewel/Kraken RBD compatibility issue that might impact users doing rolling upgrades
- From: Florian Haas <florian@xxxxxxxxxxx>
- boost::future and continuations
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- quick testing/development with ceph-ansible
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Request for subscribing to ceph-devel list
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Request for subscribing to ceph-devel list
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Request for subscribing to ceph-devel list
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: GSoC participant Pranjal Agrawal
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: GSoC participant Pranjal Agrawal
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Marcus Watts <mwatts@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Ceph's Outreachy Participant Joannah Nanjekye!
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- [PATCH v2] src/seek_sanity_test: ensure file size is big enough
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: kefu chai <tchaikov@xxxxxxxxx>
- GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] EXT: Re: osd-directory scenario is used by us
- From: Warren Wang - ISD <Warren.Wang@xxxxxxxxxxx>
- GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: Check that the new inode size is within limits in ceph_fallocate()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: SMARTER REWEIGHT-BY-UTILIZATION
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] RGW: removal of support for fastcgi
- From: Wido den Hollander <wido@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- RGW: removal of support for fastcgi
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: Huang Zhiteng <winston.d@xxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- [PATCH] ceph: Check that the new inode size is within limits in ceph_fallocate()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: fs: mandatory client quota
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: fs: mandatory client quota
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- fs: mandatory client quota
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Alibaba's work on recovery process
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- [PATCH] libceph: cleanup old messages according to reconnect seq
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: liuchang0812 <liuchang0812@xxxxxxxxx>
- Re: Blustore data consistency question when big write.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] osd and/or filestore tuning for ssds?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Eryu Guan <eguan@xxxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Blustore data consistency question when big write.
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Eryu Guan <eguan@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [ceph-users] kernel BUG at fs/ceph/inode.c:1197
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] kernel BUG at fs/ceph/inode.c:1197
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] osd-directory scenario is used by us
- From: Gregory Meno <gmeno@xxxxxxxxxx>
- Re: [PATCH 0/9] rbd: support for rbd map --exclusive
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: Radoslaw Zarzynski <rzarzynski@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: crush luminous endgame
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
- crush luminous endgame
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: doc: dead link in doc template
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- CDM tonight @ 9p EDT
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Tracing Ceph results
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Increase PG or reweight OSDs?
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- [PATCH] fstests: attr: add support for cephfs
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- doc: dead link in doc template
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Introduction to community
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Introduction to community
- From: Vaibhav Singhal <singhalvaibhav28@xxxxxxxxx>
- Re: Tracing Ceph results
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Tracing Ceph results
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: man pages no longer compressing during install?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: man pages no longer compressing during install?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] LRC low level plugin configuration can't express maximal erasure resilience
- From: Loic Dachary <loic@xxxxxxxxxxx>
- man pages no longer compressing during install?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- arm build server erroneously tagged, caused a number of build failures
- From: Dan Mick <dmick@xxxxxxxxxx>
- osd and/or filestore tuning for ssds?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: kraken 11.2.1 last call
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: [PATCH] ceph: fix memory leak in __ceph_setxattr()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: fix memory leak in __ceph_setxattr()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- [PATCH] ceph: choose readdir frag based on previous readdir reply
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Usermode iSCSI-SCST updated to use Ceph RBD as backing storage
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yann Dupont <yd@xxxxxxxxx>
- Usermode iSCSI-SCST updated to use Ceph RBD as backing storage
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: [PATCH] osd: Do not subtract object overlaps from cache usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH v2] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- [GIT PULL] Ceph fix for 4.11-rc9
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Performance Measurement
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Performance Measurement
- From: David Byte <dbyte@xxxxxxxx>
- Re: Performance Measurement
- From: David Byte <dbyte@xxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Performance Measurement
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Performance Measurement
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Help! how to set iscsi.conf of SPDK iscsi target when using ceph rbd
- From: yiming xie <platoxym@xxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Ceph Tech Talk Cancelled
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] v12.0.2 Luminous (dev) released
- From: kefu chai <tchaikov@xxxxxxxxx>
- repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yann Dupont <yd@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Alexey Sheplyakov <asheplyakov@xxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] v12.0.2 Luminous (dev) released
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- [PATCH 6/9] rbd: support updating the lock cookie without releasing the lock
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 7/9] rbd: kill rbd_is_lock_supported()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 8/9] rbd: return ResponseMessage result from rbd_handle_request_lock()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 9/9] rbd: exclusive map option
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 5/9] rbd: store lock cookie
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 4/9] rbd: ignore unlock errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 3/9] rbd: fix error handling around rbd_init_disk()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/9] rbd: move rbd_dev_destroy() call out of rbd_dev_image_release()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 2/9] rbd: move rbd_unregister_watch() call into rbd_dev_image_release()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 0/9] rbd: support for rbd map --exclusive
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Nathan Cutler <ncutler@xxxxxxx>
- kraken 11.2.1 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [sepia] Test queue paused?
- From: Gregory Meno <gmeno@xxxxxxxxxx>
- v12.0.2 Luminous Contributor credits
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- v12.0.2 Luminous (dev) released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [PATCH] osd: Do not subtract object overlaps from cache usage
- From: Michal Koutný <mkoutny@xxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Storing NFS (ganesha) HA state in Ceph
- From: Brett Niver <bniver@xxxxxxxxxx>
- Storing NFS (ganesha) HA state in Ceph
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Test queue paused?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: Jens Axboe <axboe@xxxxxx>
- Adaptation of iSCSI-SCST to run entirely in usermode on unmodified kernel
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 05/11] rbd: use bio_clone_fast() instead of bio_clone()
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- When is do_redundant_reads flag set?
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH 0/25 v3] fs: Convert all embedded bdis into separate ones
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Smarter blacklisting?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- RE: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: "Reshetova, Elena" <elena.reshetova@xxxxxxxxx>
- Help maintain the CephFS Samba and/or Hadoop bindings
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- RE: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: "Reshetova, Elena" <elena.reshetova@xxxxxxxxx>
- [PATCH 00/11] block: assorted cleanup for bio splitting and cloning.
- From: NeilBrown <neilb@xxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- [PATCH 05/11] rbd: use bio_clone_fast() instead of bio_clone()
- From: NeilBrown <neilb@xxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: "Martin K. Petersen" <martin.petersen@xxxxxxxxxx>
- Re: Smarter blacklisting?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Ceph, RDMA and 40gbit Cisco CNA
- From: Greg Procunier <greg.procunier@xxxxxxxxx>
- Re: [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Luis Henriques <lhenriques@xxxxxxxx>
- reminder: perf meeting moved to thursdays at 8AM PST
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- fun with ccache
- From: Tim Serong <tserong@xxxxxxxx>
- Re: Smarter blacklisting?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Smarter blacklisting?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Comparing straw2 and CARP
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: Kernel panic on CephFS kernel client when setting file ACL
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Kernel panic on CephFS kernel client when setting file ACL
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Jewel regression (not released, but still serious)
- From: Nathan Cutler <ncutler@xxxxxxx>
- Jewel regression (not released, but still serious)
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Smarter blacklisting?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- [PATCH] block: get rid of blk_integrity_revalidate()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Smarter blacklisting?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Smarter blacklisting?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Kernel panic on CephFS kernel client when setting file ACL
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Filestore directory splitting (ZFS/FreeBSD)
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 6/7] Revert "ceph: SetPageError() for writeback pages if writepages fails"
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 2/7] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 0/7] ceph: implement -ENOSPC handling in cephfs
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 7/7] ceph: when seeing write errors on an inode, switch to sync writes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 5/7] ceph: handle epoch barriers in cap messages
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 1/7] libceph: remove req->r_replay_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 3/7] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] fsping, why you no work no mo?
- From: John Spray <jspray@xxxxxxxxxx>
- another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Arnd Bergmann <arnd@xxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] PG calculator improvement
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: Static Analysis
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Measuring lock conention
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Measuring lock conention
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Measuring lock conention
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Minimal crush weight_set integration
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Minimal crush weight_set integration
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Weekly perf meeting changing from Wednesday to Thursday
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Minimal crush weight_set integration
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- PG calculator improvement
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: [PATCH 07/12] fs: btrfs: Use ktime_get_real_ts for root ctime
- From: David Sterba <dsterba@xxxxxxx>
- Re: Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs: Normal user of our fs can damage the whole system by writing huge xattr kv pairs
- From: John Spray <jspray@xxxxxxxxxx>
- [PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 0/25 v3] fs: Convert all embedded bdis into separate ones
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 09/25] ceph: Convert to separately allocated bdi
- From: Jan Kara <jack@xxxxxxx>
- Re: [PATCH 09/25] ceph: Convert to separately allocated bdi
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: [PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Ceph EC code implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- cephfs: Normal user of our fs can damage the whole system by writing huge xattr kv pairs
- From: Yang Joseph <joseph.yang@xxxxxxxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Paul Moore <paul@xxxxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: How to understand Collection in Bluestore, is it a folder?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: John Spray <jspray@xxxxxxxxxx>
- v10.2.7 Jewel released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: John Spray <jspray@xxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Ricardo Dias <rdias@xxxxxxxx>
- multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- How to understand Collection in Bluestore, is it a folder?
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: rgw: refactoring test_multi.py for teuthology
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD creation and device class
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: OSD creation and device class
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD creation and device class
- From: Sage Weil <sweil@xxxxxxxxxx>
- Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Toying with a FreeBSD cluster results in a crash
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- OSD creation and device class
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v7 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Toying with a FreeBSD cluster results in a crash
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Nathan Cutler <ncutler@xxxxxxx>
- Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- Re: Toying with a FreeBSD cluster results in a crash
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Paul Moore <paul@xxxxxxxxxxxxxx>
- Re: Toying with a FreeBSD cluster results in a crash
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [PATCH 02/12] trace: Make trace_hwlat timestamp y2038 safe
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- Re: [PATCH 02/12] trace: Make trace_hwlat timestamp y2038 safe
- From: Steven Rostedt <rostedt@xxxxxxxxxxx>
- [PATCH 02/12] trace: Make trace_hwlat timestamp y2038 safe
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 05/12] fs: ufs: Use ktime_get_real_ts64() for birthtime
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 10/12] apparmorfs: Replace CURRENT_TIME with current_time()
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 09/12] lustre: Replace CURRENT_TIME macro
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 11/12] time: Delete CURRENT_TIME_SEC and CURRENT_TIME
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 07/12] fs: btrfs: Use ktime_get_real_ts for root ctime
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 08/12] fs: ubifs: Replace CURRENT_TIME_SEC with current_time
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 12/12] time: Delete current_fs_time() function
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 04/12] fs: ceph: CURRENT_TIME with ktime_get_real_ts()
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 03/12] fs: cifs: Replace CURRENT_TIME by other appropriate apis
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 01/12] fs: f2fs: Use ktime_get_real_seconds for sit_info times
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 00/12] Delete CURRENT_TIME, CURRENT_TIME_SEC and current_fs_time
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- Re: [PATCH 1/1] EC backfill retries
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: [PATCH 1/1] EC backfill retries
- From: Alexandre Oliva <oliva@xxxxxxx>
- Toying with a FreeBSD cluster results in a crash
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: rescheduling the performance call
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: rescheduling the performance call
- From: Bartłomiej Święcki <bartlomiej.swiecki@xxxxxxxxxxxx>
- Re: rescheduling the performance call
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: rescheduling the performance call
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: rbdmap.service - which package?
- From: Nathan Cutler <ncutler@xxxxxxx>
- RGW metadata search update
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- rbdmap.service - which package?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Jewel 10.2.7 QE status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: rescheduling the performance call
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Jewel 10.2.7 QE status
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- rescheduling the performance call
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: storing multiple weights in crush
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Why modify.organizationmap failed
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: Why modify.organizationmap failed
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Why modify.organizationmap failed
- From: qi Shi <m13913886148@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Jewel 10.2.7 QE status
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: storing multiple weights in crush
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH 1/5] ceph: fix wrong check in ceph_renew_caps()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH 3/5] ceph: fix potential use-after-free
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH 2/5] ceph: allow connecting to mds whose rank >= mdsmap::m_max_mds
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: Jewel 10.2.7 QE status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Jewel 10.2.7 QE status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: [PATCH 3/5] ceph: fix potential use-after-free
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 2/7] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 5/7] ceph: handle epoch barriers in cap messages
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 0/7] ceph: implement -ENOSPC handling in cephfs
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 7/7] ceph: when seeing write errors on an inode, switch to sync writes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 3/7] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 6/7] Revert "ceph: SetPageError() for writeback pages if writepages fails"
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v7 1/7] libceph: remove req->r_replay_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/5] ceph: fix wrong check in ceph_renew_caps()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- CDM Today @ 12:30p EDT
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Fwd: rgw keystone revocation thread
- From: Matt Benjamin <mbenjamin@xxxxxxxxxx>
- Re: rgw keystone revocation thread
- From: Pavan Rallabhandi <PRallabhandi@xxxxxxxxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 2/5] ceph: allow connecting to mds whose rank >= mdsmap::m_max_mds
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- [PATCH 5/5] ceph: make seeky readdir more efficiency
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 4/5] ceph: close stopped mds' session
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 3/5] ceph: fix potential use-after-free
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 2/5] ceph: allow connecting to mds whose rank >= mdsmap::m_max_mds
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/5] ceph: fix wrong check in ceph_renew_caps()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rgw keystone revocation thread
- From: Pavan Rallabhandi <PRallabhandi@xxxxxxxxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: How best to integrate dmClock QoS library into ceph codebase
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- CDM: Discussion on Coupled Layer Code.
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v6 3/7] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: OSDMap / osd_state questions
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: [PATCH v6 2/7] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v6 1/7] libceph: remove req->r_replay_version
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: OSDMap / osd_state questions
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSDMap / osd_state questions
- From: Sage Weil <sage@xxxxxxxxxxxx>
- How best to integrate dmClock QoS library into ceph codebase
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Re: OSDMap / osd_state questions
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- OSDMap / osd_state questions
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: storing multiple weights in crush
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: FreeBSD Fwd: Build failed in Jenkins: ceph-master #491
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Coupled Layer MSR: PR 14300, ECSubRead decode fail.
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: Coupled Layer MSR: PR 14300, ECSubRead decode fail.
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Coupled Layer MSR: PR 14300, ECSubRead decode fail.
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: deep-scrubbing
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: How to update ceph/jerasure.git
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: How to update ceph/jerasure.git
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: How to update ceph/jerasure.git
- From: Myna V <mynaramana@xxxxxxxxx>
- rgw keystone revocation thread
- From: Pavan Rallabhandi <PRallabhandi@xxxxxxxxxxxxxxx>
- Re: deep-scrubbing
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Troubleshooting incomplete PG's
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: deep-scrubbing
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- v12.0.1 Contributor credits
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: Ceph 12.0.0/master + DPDK = compilation failed
- From: Haomai Wang <haomai@xxxxxxxx>
- Ceph 12.0.0/master + DPDK = compilation failed
- From: Aynur Shakirov <ajnur.shakirov@xxxxxxxxx>
- Re: Regarding GSoC 2017 Project 'Ceph-mgr: Commands for CephFS Auth Caps Creation'
- From: vivek kukreja <vivekkukreja5@xxxxxxxxx>
- Regarding GSoC 2017 Project 'Ceph-mgr: Commands for CephFS Auth Caps Creation'
- From: vivek kukreja <vivekkukreja5@xxxxxxxxx>
- Google Summer of Code 2017 project proposal: implementation of RBD diff checksums using a rolling checksum algorithm
- From: Radoslav Georgiev <rgeorgiev583@xxxxxxxxx>
- Fwd: Participation in the "Local File Backend for RGW" Ceph project from the Google Summer of Code 2017
- From: Radoslav Georgiev <rgeorgiev583@xxxxxxxxx>
- Re: gsoc proposal review
- From: Vedant Nanda <vedant15114@xxxxxxxxxxx>
- Re: [ceph-users] Troubleshooting incomplete PG's
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Bluestore :Which side is wal submit code
- From: Haodong Tang <tanghaodong25@xxxxxxxxx>
- Bluestore :Which side is wal submit code
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: segmentation fault while using fio_ceph_objectstore
- From: sheng qiu <herbert1984106@xxxxxxxxx>
- Re: segmentation fault while using fio_ceph_objectstore
- From: Igor Fedotov <ifedotov@xxxxxxxxxxxx>
- writing release notes for v10.2.7
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Jewel 10.2.7 ready for QE
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: segmentation fault while using fio_ceph_objectstore
- From: sheng qiu <herbert1984106@xxxxxxxxx>
- Re: segmentation fault while using fio_ceph_objectstore
- From: Igor Fedotov <ifedotov@xxxxxxxxxxxx>
- Re: segmentation fault while using fio_ceph_objectstore
- From: Sage Weil <sage@xxxxxxxxxxxx>
- segmentation fault while using fio_ceph_objectstore
- From: sheng qiu <herbert1984106@xxxxxxxxx>
- GET bucket policy response format
- From: Artur Molchanov <artur.molchanov@xxxxxxxxxx>
- Teuthology users: you must now include at least one mgr daemon in your roles
- From: Dan Mick <dmick@xxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: RGW zones and admin ops
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- [PATCH v6 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 7/7] ceph: when seeing write errors on an inode, switch to sync writes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 6/7] Revert "ceph: SetPageError() for writeback pages if writepages fails"
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 5/7] ceph: handle epoch barriers in cap messages
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 2/7] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 3/7] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 1/7] libceph: remove req->r_replay_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v6 0/7] implement -ENOSPC handling in cephfs
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Regarding GSoC.
- From: Sagar Sarange <sagar.sarange9@xxxxxxxxx>
- Re: Why can't I delete a pool
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Why can't I delete a pool
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Idea for optimize an OSD rebuild
- From: Sage Weil <sweil@xxxxxxxxxx>
- Why can't I delete a pool
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Idea for optimize an OSD rebuild
- From: huang jun <hjwsm1989@xxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Idea for optimize an OSD rebuild
- From: David Casier <david.casier@xxxxxxxx>
- Idea for optimize an OSD rebuild
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Plan of cache tiering?
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: [PATCH v5 3/6] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v5 3/6] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v5 3/6] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #476
- From: Igor Fedotov <ifedotov@xxxxxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Alan Somers <asomers@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #476
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #476
- From: Igor Fedotov <ifedotov@xxxxxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #476
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: John Spray <jspray@xxxxxxxxxx>
- ceph-fuse is working on FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH v5 1/6] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Fwd: Build failed in Jenkins: ceph-master #476
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Fwd: Build failed in Jenkins: ceph-master #476
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: OpenStack Swift functional tests on RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: RGW zones and admin ops
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- [GSOC] Proposal Review : Smarter Reweight-By-Utilisation
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: [PATCH v5 3/6] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 09/25] ceph: Convert to separately allocated bdi
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 0/25 v2] fs: Convert all embedded bdis into separate ones
- From: Jan Kara <jack@xxxxxxx>
- v12.0.1 Luminous (dev) released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- RGW zones and admin ops
- From: Forumulator V <forumulator@xxxxxxxxx>
- Re: OpenStack Swift functional tests on RGW
- From: Tone Zhang <tone.zhang@xxxxxxxxxx>
- Re: [GSOC] Introduction Mail : ceph-mgr: Smarter Reweight-by-Utilization
- From: Methuku Karthik <kmeth@xxxxxxxxxxxxxx>
- Re: [GSoc] : ceph-mgr: Smarter Reweight-by-Utilization
- From: Methuku Karthik <kmeth@xxxxxxxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v5 1/6] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: OpenStack Swift functional tests on RGW
- From: Sage Weil <sage@xxxxxxxxxxxx>
- GSoC: Queries regarding ceph-mgr: Slow OSD identification, Automated cluster response
- From: Vedant Nanda <vedant15114@xxxxxxxxxxx>
- Kraken branch reset
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- Re: [PATCH v5 3/6] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: fio multithreads meets Segmentation fault
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Ceph Contribution
- From: Sagar Sarange <sagar.sarange9@xxxxxxxxx>
- Re: OpenStack Swift functional tests on RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: [PATCH v5 2/6] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v5 1/6] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- OpenStack Swift functional tests on RGW
- From: Rajath Shashidhara <rajath.shashidhara@xxxxxxxxx>
- Re: some question about ceph qa case
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: GSOC 17 questions: ceph-mgr: Slow OSD Identification, Automated Cluster Response
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [GSoc] : ceph-mgr: Smarter Reweight-by-Utilization
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: Regarding projects ideas in Ceph
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: Queries regarding projects in Gsoc 2017.
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: fio multithreads meets Segmentation fault
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: storing multiple weights in crush
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- fio multithreads meets Segmentation fault
- From: 闫创 <yanchuang1994@xxxxxxxxx>
- some question about ceph qa case
- From: Xinze Chi (信泽) <xmdxcxz@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Adam Kupczyk <akupczyk@xxxxxxxxxxxx>
- Re: Ceph Contribution
- From: Sagar Sarange <sagar.sarange9@xxxxxxxxx>
- Re: [GSOC] Introduction Mail : ceph-mgr: Smarter Reweight-by-Utilization
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- RGW usage from public and private network
- From: sw zhang <zhangsw1001@xxxxxxxxx>
- Re: storing multiple weights in crush
- From: Sage Weil <sweil@xxxxxxxxxx>
- [GSOC] Introduction Mail : ceph-mgr: Smarter Reweight-by-Utilization
- From: Methuku Karthik <kmeth@xxxxxxxxxxxxxx>
- storing multiple weights in crush
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: How to update ceph/jerasure.git
- From: Loic Dachary <loic@xxxxxxxxxxx>
- How to update ceph/jerasure.git
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Adam Kupczyk <akupczyk@xxxxxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Apply for GSOC
- From: Kefu Chai <kchai@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: gsoc proposal review
- From: Vedant Nanda <vedant15114@xxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- intent to tag v10.2.7 mid-April
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- [GIT PULL] Ceph fix for 4.11-rc4
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 06/23] net, ceph: convert ceph_pagelist.refcnt from atomic_t to refcount_t
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 05/23] net, ceph: convert ceph_osd.o_ref from atomic_t to refcount_t
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 04/23] net, ceph: convert ceph_snap_context.nref from atomic_t to refcount_t
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- RE: Print error into debug log by default
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: CephFS: How to figure out which files are affected after a disaster
- From: Mao Cheng <chengmao2010@xxxxxxxxx>
- Re: [GSoC] Queries regarding the Project
- From: kefu chai <tchaikov@xxxxxxxxx>
- gsoc proposal review
- From: kefu chai <tchaikov@xxxxxxxxx>
- CephFS: How to figure out which files are affected after a disaster
- From: "Wang, Zhiye" <Zhiye.Wang@xxxxxxxxxxxx>
- Re: code review: crc32c for ppc64le architecture
- From: kefu chai <tchaikov@xxxxxxxxx>
- RE: Print error into debug log by default
- From: "Wang, Zhiye" <Zhiye.Wang@xxxxxxxxxxxx>
- Re: run-rbd-tests not in 'make check'?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Ceph Developer Monthly - APR
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- run-rbd-tests not in 'make check'?
- From: Ning Yao <zay11022@xxxxxxxxx>
- Ceph Tech Talk in 20 mins
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Broken links in Ceph documentation
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Still denc warnings in Clang
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Broken links in Ceph documentation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Still denc warnings in Clang
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Still denc warnings in Clang
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Print error into debug log by default
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Re: Broken links in Ceph documentation
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Re: [PATCH] libceph: force GFP_NOIO for socket allocations
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Print error into debug log by default
- From: "Wang, Zhiye" <Zhiye.Wang@xxxxxxxxxxxx>
- kernel client get stuck when the cephfs is unreachable
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Re: Sudden Bluestore includes
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Sudden Bluestore includes
- From: kefu chai <tchaikov@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [PATCH] libceph: force GFP_NOIO for socket allocations
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Re: [PATCH] libceph: force GFP_NOIO for socket allocations
- From: Jeff Layton <jlayton@xxxxxxxxxx>