CEPH Filesystem Development
- Re: Missing -lpython2.7 when linking ceph-dencoder
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.12-rc3
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Adding / removing OSDs with weight set
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Missing -lpython2.7 when linking ceph-dencoder
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Missing -lpython2.7 when linking ceph-dencoder
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Ceph on ARM Recap
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Adding / removing OSDs with weight set
- From: Sage Weil <sweil@xxxxxxxxxx>
- Tuning radosgw for constant uniform high load.
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Adding / removing OSDs with weight set
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RBD client wallclock profile during 4k random writes
- From: jacky ding <jackyding2679@xxxxxxxxx>
- Re: slowness in builds this week
- From: Sage Weil <sage@xxxxxxxxxxxx>
- slowness in builds this week
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Why not to change primary read to random read in any replication
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Why not to change primary read to random read in any replication
- From: qi Shi <m13913886148@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: John Spray <jspray@xxxxxxxxxx>
- Re: subpackages for mgr modules
- From: John Spray <jspray@xxxxxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: subpackages for mgr modules
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Help build a drive reliability service!
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Help build a drive reliability service!
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Help build a drive reliability service!
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: [PATCH] libceph: cleanup old messages according to reconnect seq
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Beta testing crush optimization
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Beta testing crush optimization
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Beta testing crush optimization
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: John Spray <jspray@xxxxxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Handling is_readable=0 periods in mon
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 4/5] libceph: validate blob_struct_v in process_one_ticket()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH 5/5] libceph: fix error handling in process_one_ticket()
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH 4/5] libceph: validate blob_struct_v in process_one_ticket()
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH 3/5] libceph: drop version variable from ceph_monmap_decode()
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH 2/5] libceph: make ceph_msg_data_advance() return void
- From: Alex Elder <elder@xxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 1/5] libceph: use kbasename() and kill ceph_file_part()
- From: Alex Elder <elder@xxxxxxxx>
- Handling is_readable=0 periods in mon
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [ceph-users] Scuttlemonkey signing off...
- From: kefu chai <tchaikov@xxxxxxxxx>
- [PATCH 4/5] libceph: validate blob_struct_v in process_one_ticket()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 5/5] libceph: fix error handling in process_one_ticket()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 3/5] libceph: drop version variable from ceph_monmap_decode()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 2/5] libceph: make ceph_msg_data_advance() return void
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/5] libceph: use kbasename() and kill ceph_file_part()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 0/5] libceph: trivial warning fixes
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: RADOS: Deleting all objects in a namespace
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: [PATCH] libceph: NULL deref on crush_decode() error path
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] rbd: implement REQ_OP_WRITE_ZEROES
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- [PATCH] libceph: NULL deref on crush_decode() error path
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Re: Problem with query and any operation on PGs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Scuttlemonkey signing off...
- From: Wido den Hollander <wido@xxxxxxxx>
- RADOS: Deleting all objects in a namespace
- From: John Spray <jspray@xxxxxxxxxx>
- Problem with query and any operation on PGs
- From: Łukasz Chrustek <skidoo@xxxxxxx>
- Re: [PATCH 3/3] ceph: cleanup writepage_nounlock()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 2/3] ceph: redirty page when writepage_nounlock() skips unwritable page
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/3] ceph: remove useless page->mapping check in writepage_nounlock()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH 3/3] ceph: cleanup writepage_nounlock()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 2/3] ceph: redirty page when writepage_nounlock() skips unwritable page
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/3] ceph: remove useless page->mapping check in writepage_nounlock()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: [bug report] ceph: fix race between page writeback and truncate
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Haven't been paying attention: Gperf missing
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: How to initiate tests in jenkins.
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: Haven't been paying attention: Gperf missing
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: live migration
- From: 攀刘 <liupan1111@xxxxxxxxx>
- How to initiate tests in jenkins.
- From: Myna V <mynaramana@xxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Haven't been paying attention: Gperf missing
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- understanding Ubuntu's package version numbers for Ceph
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- [bug report] ceph: fix race between page writeback and truncate
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Ceph Tech Talk This Thurs!
- From: Patrick McGarry <pmcgarry@xxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Victor Denisov <denisovenator@xxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Federico Lucifredi <flucifredi@xxxxxxx>
- Re: [ceph-users] ceph-mds crash - jewel 10.2.3
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Scuttlemonkey signing off...
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: subpackages for mgr modules
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Scuttlemonkey signing off...
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: live migration
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: live migration
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- live migration
- From: 攀刘 <liupan1111@xxxxxxxxx>
- No builds/repos for early Monday (UTC)
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Haven't been paying attention: Gperf missing
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Haven't been paying attention: Gperf missing
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: subpackages for mgr modules
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: subpackages for mgr modules
- From: Tim Serong <tserong@xxxxxxxx>
- Haven't been paying attention: Gperf missing
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: John Spray <jspray@xxxxxxxxxx>
- Re: subpackages for mgr modules
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Which function is mon ping osd?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- [PATCH 03/11 linux-next] ceph: use magic.h
- From: Fabian Frederick <fabf@xxxxxxxxx>
- [PATCH 00/11 linux-next] super magic values consolidation
- From: Fabian Frederick <fabf@xxxxxxxxx>
- Which function is mon ping osd?
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: global backfill reservation?
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: global backfill reservation?
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: subpackages for mgr modules
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: A question about Ceph's paxos implication
- From: fisherman <fisherman.dong@xxxxxxxxx>
- Re: A question about Ceph's paxos implication
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- rbd-nbd performance
- From: sheng qiu <herbert1984106@xxxxxxxxx>
- Re: subpackages for mgr modules
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: A question about Ceph's paxos implication
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: A question about Ceph's paxos implication
- From: fisherman <fisherman.dong@xxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: subpackages for mgr modules
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: Adaptation of iSCSI-SCST to run entirely in usermode on unmodified kernel
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- [Rgw][Swift API] Swift Midlleware compability
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: Kernel warnings in CEPH
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- subpackages for mgr modules
- From: Tim Serong <tserong@xxxxxxxx>
- Re: Kernel warnings in CEPH
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: A question about Ceph's paxos implication
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Alibaba's work on majority commit
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- Re: [PATCH] ceph: check i_nlink while converting a file handle to dentry
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: standby_count_wanted
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- standby_count_wanted
- From: David Zafman <dzafman@xxxxxxxxxx>
- Kernel warnings in CEPH
- From: Stephen Hemminger <stephen@xxxxxxxxxxxxxxxxxx>
- A question about Ceph's paxos implication
- From: fisherman <fisherman.dong@xxxxxxxxx>
- Re: Is jenkins still building docs for branches in ceph.git?
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: [PATCH] ceph: check i_nlink while converting a file handle to dentry
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: RBD client wallclock profile during 4k random writes
- From: Ning Yao <zay11022@xxxxxxxxx>
- Is jenkins still building docs for branches in ceph.git?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Alibaba's work on majority commit
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: What should we do if ceph feature bit is all used?
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: What should we do if ceph feature bit is all used?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: check i_nlink while converting a file handle to dentry
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- What should we do if ceph feature bit is all used?
- From: Ning Yao <zay11022@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- v12.0.3 Luminous (dev) released
- From: Abhishek L <abhishek@xxxxxxxx>
- [PATCH] ceph: check i_nlink while converting a file handle to dentry
- From: Luis Henriques <lhenriques@xxxxxxxx>
- ceph-mgr REST API
- From: Tim Serong <tserong@xxxxxxxx>
- Re: How best to integrate dmClock QoS library into ceph codebase
- From: Ming Lin <minggr@xxxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: How best to integrate dmClock QoS library into ceph codebase
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Why does the heartbeat packet have 122 bytes of messages?
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: global backfill reservation?
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Why does the heartbeat packet have 122 bytes of messages?
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: How best to integrate dmClock QoS library into ceph codebase
- From: Ming Lin <minggr@xxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Re: global backfill reservation?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: RGW: removal of support for fastcgi
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: RGW: removal of support for fastcgi
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: John Spray <jspray@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: global backfill reservation?
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [GSOC] ceph-mgr: Cluster Status Dashboard
- From: John Spray <jspray@xxxxxxxxxx>
- [GSOC] ceph-mgr: Cluster Status Dashboard
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- [Ceph-ansible] EXT: Re: EXT: Re: osd-directory scenario is used by us
- From: Anton Thaker <Anton.Thaker@xxxxxxxxxxx>
- Re: An algorithm to fix uneven CRUSH distributions in Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: global backfill reservation?
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] Cephalocon Cancelled
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: An algorithm to fix uneven CRUSH distributions in Ceph
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Cephalocon Cancelled
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- global backfill reservation?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Mon identity in a dynamic environment
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] EXT: Re: osd-directory scenario is used by us
- From: Sebastien Han <shan@xxxxxxxxxx>
- An algorithm to fix uneven CRUSH distributions in Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- How to analyze the performance bottleneck ceph
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Mon identity in a dynamic environment
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Alibaba's work on recovery process
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Alibaba's work on majority commit
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- Re: Mon identity in a dynamic environment
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: Mon identity in a dynamic environment
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] EXT: Re: EXT: Re: osd-directory scenario is used by us
- From: Warren Wang - ISD <Warren.Wang@xxxxxxxxxxx>
- Re: Monitoring ceph and prometheus
- From: John Spray <jspray@xxxxxxxxxx>
- Monitoring ceph and prometheus
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: GSoC participant Pranjal Agrawal
- From: Forumulator V <forumulator@xxxxxxxxx>
- Re: [Ceph-ansible] EXT: Re: EXT: Re: osd-directory scenario is used by us
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: fs: mandatory client quota
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: [GSoC] Spandan : Smarter Reweight by Utilisation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: crush optimization targets
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush optimization targets
- From: Xavier Villaneau <xavier.ceph@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Calculating the expected PGs distribution
- From: Xavier Villaneau <xavier.ceph@xxxxxxxxxxxx>
- Re: RBD client wallclock profile during 4k random writes
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: RBD client wallclock profile during 4k random writes
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: RBD client wallclock profile during 4k random writes
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Mon identity in a dynamic environment
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- RBD client wallclock profile during 4k random writes
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: fs: mandatory client quota
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: crush optimization targets
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush optimization targets
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- crush optimization targets
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: crush problem in EC environment
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: announcing bc-ceph-reweight-by-utilization.py and Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: GSOC on ceph-mgr:Cluster Status Dashboard
- From: kefu chai <tchaikov@xxxxxxxxx>
- Coupled Layer MSR (Array Codes) in Ceph
- From: Myna V <mynaramana@xxxxxxxxx>
- announcing bc-ceph-reweight-by-utilization.py and Re: Calculating the expected PGs distribution
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- crush problem in EC environment
- From: zengran zhang <z13121369189@xxxxxxxxx>
- [GIT PULL] Ceph updates for 4.12-rc1
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: boost::future and continuations
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: fs: mandatory client quota
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Xavier Villaneau <xavier.ceph@xxxxxxxxxxxx>
- Re: quick testing/development with ceph-ansible
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [Ceph-ansible] EXT: Re: osd-directory scenario is used by us
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: [Ceph-ansible] EXT: Re: osd-directory scenario is used by us
- From: Sebastien Han <shan@xxxxxxxxxx>
- Re: Question regarding struct ceph_timestamp
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: boost::future and continuations
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: fs: mandatory client quota
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: boost::future and continuations
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Question regarding struct ceph_timestamp
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: boost::future and continuations
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: boost::future and continuations
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Heads-up: possible Jewel/Kraken RBD compatibility issue that might impact users doing rolling upgrades
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Fwd: GSOC on ceph-mgr:Cluster Status Dashboard
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- Question regarding struct ceph_timestamp
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Heads-up: possible Jewel/Kraken RBD compatibility issue that might impact users doing rolling upgrades
- From: Florian Haas <florian@xxxxxxxxxxx>
- boost::future and continuations
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- quick testing/development with ceph-ansible
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Request for subscribing to ceph-devel list
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Request for subscribing to ceph-devel list
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Request for subscribing to ceph-devel list
- From: saumay agrawal <saumay.agrawal@xxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: GSoC participant Pranjal Agrawal
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: GSoC participant Pranjal Agrawal
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Marcus Watts <mwatts@xxxxxxxxxx>
- Re: Ceph's Outreachy Participant Joannah Nanjekye!
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Ceph's Outreachy Participant Joannah Nanjekye!
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- [PATCH v2] src/seek_sanity_test: ensure file size is big enough
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Calculating the expected PGs distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: kefu chai <tchaikov@xxxxxxxxx>
- GSOC on ceph-mgr : SMARTER SMARTER REWEIGHT-BY-UTILIZATION
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] EXT: Re: osd-directory scenario is used by us
- From: Warren Wang - ISD <Warren.Wang@xxxxxxxxxxx>
- GSOC about ceph-mgr:POOL PG_NUM AUTO-TUNING
- From: Hequan <hequanzh@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Spandan Kumar Sahu <spandankumarsahu@xxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: Check that the new inode size is within limits in ceph_fallocate()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: SMARTER REWEIGHT-BY-UTILIZATION
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Proposal for a CRUSH collision fallback
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] RGW: removal of support for fastcgi
- From: Wido den Hollander <wido@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- RGW: removal of support for fastcgi
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: Huang Zhiteng <winston.d@xxxxxxxxx>
- Re: Alibaba's work on recovery process
- From: "LIU, Fei" <james.liu@xxxxxxxxxxxxxxx>
- [PATCH] ceph: Check that the new inode size is within limits in ceph_fallocate()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: fs: mandatory client quota
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: fs: mandatory client quota
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- fs: mandatory client quota
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Alibaba's work on recovery process
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- [PATCH] libceph: cleanup old messages according to reconnect seq
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Babeltrace error in FreeBSD, Build failed in Jenkins: ceph-master #634
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: liuchang0812 <liuchang0812@xxxxxxxxx>
- Re: Blustore data consistency question when big write.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] osd and/or filestore tuning for ssds?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Eryu Guan <eguan@xxxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Blustore data consistency question when big write.
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: [PATCH] fstests: attr: add support for cephfs
- From: Eryu Guan <eguan@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [ceph-users] kernel BUG at fs/ceph/inode.c:1197
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [ceph-users] Intel power tuning - 30% throughput performance increase
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] kernel BUG at fs/ceph/inode.c:1197
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: EXT: Re: [Ceph-ansible] osd-directory scenario is used by us
- From: Gregory Meno <gmeno@xxxxxxxxxx>
- Re: [PATCH 0/9] rbd: support for rbd map --exclusive
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: Radoslaw Zarzynski <rzarzynski@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: crush luminous endgame
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
- crush luminous endgame
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: doc: dead link in doc template
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: kraken 11.2.1 last call
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- CDM tonight @ 9p EDT
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Tracing Ceph results
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Increase PG or reweight OSDs?
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- [PATCH] fstests: attr: add support for cephfs
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- doc: dead link in doc template
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Introduction to community
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Introduction to community
- From: Vaibhav Singhal <singhalvaibhav28@xxxxxxxxx>
- Re: Tracing Ceph results
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Tracing Ceph results
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: man pages no longer compressing during install?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: man pages no longer compressing during install?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: revisiting uneven CRUSH distributions
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Brian Foster <bfoster@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- revisiting uneven CRUSH distributions
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] LRC low level plugin configuration can't express maximal erasure resilience
- From: Loic Dachary <loic@xxxxxxxxxxx>
- man pages no longer compressing during install?
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-fuse is working on FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- Re: xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- xlog_write: reservation ran out
- From: Ming Lin <mlin@xxxxxxxxxx>
- arm build server erroneously tagged, caused a number of build failures
- From: Dan Mick <dmick@xxxxxxxxxx>
- osd and/or filestore tuning for ssds?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: kraken 11.2.1 last call
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: [PATCH] ceph: fix memory leak in __ceph_setxattr()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH] ceph: fix memory leak in __ceph_setxattr()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- [PATCH] ceph: choose readdir frag based on previous readdir reply
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH v2] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Usermode iSCSI-SCST updated to use Ceph RBD as backing storage
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yann Dupont <yd@xxxxxxxxx>
- Usermode iSCSI-SCST updated to use Ceph RBD as backing storage
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: [PATCH] osd: Do not subtract object overlaps from cache usage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH v2] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- [GIT PULL] Ceph fix for 4.11-rc9
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- PRs not being tested by jenkins??
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Performance Measurement
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Performance Measurement
- From: David Byte <dbyte@xxxxxxxx>
- Re: Performance Measurement
- From: David Byte <dbyte@xxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Performance Measurement
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Performance Measurement
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Help! how to set iscsi.conf of SPDK iscsi target when using ceph rbd
- From: yiming xie <platoxym@xxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Ceph Tech Talk Cancelled
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] v12.0.2 Luminous (dev) released
- From: kefu chai <tchaikov@xxxxxxxxx>
- repeating "wrong node" and ceph-mgr CPU usage get higher and higher without any I/O
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: jewel 10.2.8 last call
- From: Yann Dupont <yd@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: jewel 10.2.8 last call
- From: Alexey Sheplyakov <asheplyakov@xxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Pedro López-Adeva <plopezadeva@xxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] v12.0.2 Luminous (dev) released
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- [PATCH 6/9] rbd: support updating the lock cookie without releasing the lock
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 7/9] rbd: kill rbd_is_lock_supported()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 8/9] rbd: return ResponseMessage result from rbd_handle_request_lock()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 9/9] rbd: exclusive map option
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 5/9] rbd: store lock cookie
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 4/9] rbd: ignore unlock errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 3/9] rbd: fix error handling around rbd_init_disk()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/9] rbd: move rbd_dev_destroy() call out of rbd_dev_image_release()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 2/9] rbd: move rbd_unregister_watch() call into rbd_dev_image_release()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 0/9] rbd: support for rbd map --exclusive
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: v12.0.2 Luminous (dev) released
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Nathan Cutler <ncutler@xxxxxxx>
- kraken 11.2.1 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- jewel 10.2.8 last call
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [sepia] Test queue paused?
- From: Gregory Meno <gmeno@xxxxxxxxxx>
- v12.0.2 Luminous Contributor credits
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: cephfs issue, get_reply data > preallocated
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- cephfs issue, get_reply data > preallocated
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- v12.0.2 Luminous (dev) released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [PATCH] osd: Do not subtract object overlaps from cache usage
- From: Michal Koutný <mkoutny@xxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Storing NFS (ganesha) HA state in Ceph
- From: Brett Niver <bniver@xxxxxxxxxx>
- Storing NFS (ganesha) HA state in Ceph
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Test queue paused?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Ceph Package Repo on Ubuntu Precise(12.04) is broken
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: Jens Axboe <axboe@xxxxxx>
- Adaptation of iSCSI-SCST to run entirely in usermode on unmodified kernel
- From: David Butterfield <dab21774@xxxxxxxxx>
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: When is do_redundant_reads flag set?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 05/11] rbd: use bio_clone_fast() instead of bio_clone()
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- When is do_redundant_reads flag set?
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [PATCH] ceph: Fix file open flags on ppc64
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH 0/25 v3] fs: Convert all embedded bdis into separate ones
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Smarter blacklisting?
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- [PATCH] ceph: Fix file open flags on ppc64
- From: Alexander Graf <agraf@xxxxxxx>
- RE: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: "Reshetova, Elena" <elena.reshetova@xxxxxxxxx>
- Help maintain the CephFS Samba and/or Hadoop bindings
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- RE: [PATCH 0/2] fs, ceph filesystem refcount conversions
- From: "Reshetova, Elena" <elena.reshetova@xxxxxxxxx>
- [PATCH 00/11] block: assorted cleanup for bio splitting and cloning.
- From: NeilBrown <neilb@xxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- [PATCH 05/11] rbd: use bio_clone_fast() instead of bio_clone()
- From: NeilBrown <neilb@xxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [PATCH] block: get rid of blk_integrity_revalidate()
- From: "Martin K. Petersen" <martin.petersen@xxxxxxxxxx>
- Re: Smarter blacklisting?
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Ceph, RDMA and 40gbit Cisco CNA
- From: Greg Procunier <greg.procunier@xxxxxxxxx>
- Re: [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Luis Henriques <lhenriques@xxxxxxxx>
- reminder: perf meeting moved to thursdays at 8AM PST
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- fun with ccache
- From: Tim Serong <tserong@xxxxxxxx>
- Re: Smarter blacklisting?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Smarter blacklisting?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Comparing straw2 and CARP
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: Kernel panic on CephFS kernel client when setting file ACL
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- [PATCH] ceph: fix recursively call between ceph_set_acl and __ceph_setattr
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Kernel panic on CephFS kernel client when setting file ACL
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Jewel regression (not released, but still serious)
- From: Nathan Cutler <ncutler@xxxxxxx>
- Jewel regression (not released, but still serious)
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Smarter blacklisting?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- [PATCH] block: get rid of blk_integrity_revalidate()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Smarter blacklisting?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Smarter blacklisting?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Filestore directory splitting (ZFS/FreeBSD)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Kernel panic on CephFS kernel client when setting file ACL
- From: Jerry Lee <leisurelysw24@xxxxxxxxx>
- Filestore directory splitting (ZFS/FreeBSD)
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: myoungwon oh <ohmyoungwon@xxxxxxxxx>
- [PATCH v8 4/7] libceph: add an epoch_barrier field to struct ceph_osd_client
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 6/7] Revert "ceph: SetPageError() for writeback pages if writepages fails"
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 2/7] libceph: allow requests to return immediately on full conditions if caller wishes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 0/7] ceph: implement -ENOSPC handling in cephfs
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 7/7] ceph: when seeing write errors on an inode, switch to sync writes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 5/7] ceph: handle epoch barriers in cap messages
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 1/7] libceph: remove req->r_replay_version
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v8 3/7] libceph: abort already submitted but abortable requests when map or pool goes full
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: another scrub bug? blocked for > 10240.948831 secs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] fsping, why you no work no mo?
- From: John Spray <jspray@xxxxxxxxxx>
- another scrub bug? blocked for > 10240.948831 secs
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Arnd Bergmann <arnd@xxxxxxxx>
- Re: Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [ceph-users] PG calculator improvement
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: Static Analysis
- From: kefu chai <tchaikov@xxxxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Measuring lock conention
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Measuring lock conention
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Measuring lock conention
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Measuring lock conention
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Measuring lock conention
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Ceph-deploy for FreeBSD
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Sparse file info in filestore not propagated to other OSDs
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Minimal crush weight_set integration
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Minimal crush weight_set integration
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Mohamad Gebai <mgebai@xxxxxxxx>
- Re: Weekly perf meeting changing from Wednesday to Thursday
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Weekly perf meeting changing from Wednesday to Thursday
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Minimal crush weight_set integration
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Question about writeback performance and content address obejct for deduplication
- From: Sage Weil <sweil@xxxxxxxxxx>
- PG calculator improvement
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: [PATCH 07/12] fs: btrfs: Use ktime_get_real_ts for root ctime
- From: David Sterba <dsterba@xxxxxxx>
- Re: Help debugging RGW bug in jewel 10.2.8 integration branch
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: cephfs: Normal user of our fs can damage the whole system by writing huge xattr kv pairs
- From: John Spray <jspray@xxxxxxxxxx>
- [PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 0/25 v3] fs: Convert all embedded bdis into separate ones
- From: Jan Kara <jack@xxxxxxx>
- [PATCH 09/25] ceph: Convert to separately allocated bdi
- From: Jan Kara <jack@xxxxxxx>
- Re: [PATCH 09/25] ceph: Convert to separately allocated bdi
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: [PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Ceph EC code implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- cephfs: Normal user of our fs can damage the whole system by writing huge xattr kv pairs
- From: Yang Joseph <joseph.yang@xxxxxxxxxxxx>
- Re: [PATCH 06/12] audit: Use timespec64 to represent audit timestamps
- From: Paul Moore <paul@xxxxxxxxxxxxxx>
- Re: crush multipick anomaly
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: How to understand Collection in Bluestore, is it a folder?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: John Spray <jspray@xxxxxxxxxx>
- v10.2.7 Jewel released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: John Spray <jspray@xxxxxxxxxx>
- Re: multiple cherrypys in ceph-mgr modules stomp on each other
- From: Ricardo Dias <rdias@xxxxxxxx>
- multiple cherrypys in ceph-mgr modules stomp on each other
- From: Tim Serong <tserong@xxxxxxxx>
- How to understand Collection in Bluestore, is it a folder?
- From: qi Shi <m13913886148@xxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: rgw: refactoring test_multi.py for teuthology
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: crush multiweight implementation details
- From: Sage Weil <sweil@xxxxxxxxxx>
- crush multiweight implementation details
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD creation and device class
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Forwarding Rocksdb to a new sha
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>