CEPH Filesystem Development
- Re: [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: assign watch request more directly
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/6] rbd: consolidate osd request setup
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: combine rbd sync watch/unwatch functions
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: use a common layout for each device
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: kill ceph_osd_req_op->flags
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- tgt backend driver for Ceph block devices (rbd)
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/3] rbd: no need for file mapping calculation
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/4] rbd: explicitly support only one osd op
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH 00/29] Various fixes for MDS
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: "Yan, Zheng" <yanzheng@xxxxxxxx>
- Re: [PATCH REPOST 0/6] libceph: parameter cleanup
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: separate layout init
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/2] libceph: embed r_trail struct in ceph_osd_request()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: flashcache
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Sage Weil <sage@xxxxxxxxxxx>
- flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Tomasz Paszkowski <ss7pro@xxxxxxxxx>
- Re: [PATCH] configure.ac: fix problem with --enable-cephfs-java
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: ceph from poc to production
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: 8 out of 12 OSDs died after expansion on 0.56.1 (void OSD::do_waiters())
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sage Weil <sage@xxxxxxxxxxx>
- 8 out of 12 OSDs died after expansion on 0.56.1 (void OSD::do_waiters())
- From: Wido den Hollander <wido@xxxxxxxxx>
- OSD don't start after upgrade form 0.47.2 to 0.56.1
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH] libceph: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Yann Dupont <Yann.Dupont@xxxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Dino Yancey <dino2gnt@xxxxxxxxx>
- HOWTO: teuthology and code coverage
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH] configure.ac: fix problem with --enable-cephfs-java
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: CephFS issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: understanding cephx
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- Re: CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- ceph-osd can`t start after crush server
- From: "aleonov@xxxxxxxxxxxxxx" <aleonov@xxxxxxxxxxxxxx>
- Re: Adding flashcache for data disk to cache Ceph metadata writes
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Adding flashcache for data disk to cache Ceph metadata writes
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH] libceph: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/5] rbd: drop some unneeded parameters
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/2] rbd: standardize some variable names
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/4] rbd: four minor patches
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/2] rbd: only get snap context for write requests
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Ceph slow request & unstable issue
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: make exists flag atomic
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: end request on error in rbd_do_request() caller
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/2] rbd: a little more cleanup of rbd_rq_fn()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: encapsulate handling for a single request
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- mds: first stab at lookup-by-ino problem/soln description
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST] libceph: reformat __reset_osd()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Test infrastructure: 2 or more servers?
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Concepts of whole cluster snapshots/backups and backups in general.
- From: Michael Grosser <mail@xxxxxxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Another rbd compatibility issue between 0.48.2argonaut-2 and 0.56.1 ?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Patch "rbd: kill create_snap sysfs entry" has been added to the 3.4-stable tree
- From: <gregkh@xxxxxxxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- Re: Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Rack Awareness
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Rack Awareness
- From: Wido den Hollander <wido@xxxxxxxxx>
- Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: mon down
- From: jie sun <0maidou0@xxxxxxxxx>
- Re: CephFS issue
- From: "Joshua J. Kugler" <joshua@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: James Page <james.page@xxxxxxxxxxxxx>
- mon down
- From: jie sun <0maidou0@xxxxxxxxx>
- RE: Slow requests
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- [PATCH 2/2] rbd: prevent open for image being removed
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 0/2] rbd: prevent open of image being unmapped
- From: Alex Elder <elder@xxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- understanding cephx
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Tom Lanyon <tom@xxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Stratos Psomadakis <psomas@xxxxxxxx>
- [PATCH 1/2] mds: fix end check in Server::handle_client_readdir()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Re: CephFS issue
- From: Wido den Hollander <wido@xxxxxxxxx>
- CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Another rbd compatibility issue between 0.48.2argonaut-2 and 0.56.1 ?
- From: Simon Frerichs | Fremaks GmbH <frerichs@xxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Sage Weil <sage@xxxxxxxxxxx>
- [debug help]: get dprintk() outputs in src/crush/mapper.c or net/crush/mapper.c
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: rbd kernel driver on the osd server
- From: Wido den Hollander <wido@xxxxxxxxx>
- rbd kernel driver on the osd server
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Seperate metadata disk for OSD
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Seperate metadata disk for OSD
- From: "Yan, Zheng" <yanzheng@xxxxxxxx>
- Re: Striped images and cluster misbehavior
- From: Andrey Korolyov <andrey@xxxxxxx>
- Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [PATCH] rbd: fix type of snap_id in rbd_dev_v2_snap_info()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: What is the acceptable attachment file size on the mail server?
- From: "Yan, Zheng" <yanzheng@xxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: radosgw / fail to authorized request
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- radosgw / fail to authorized request
- From: Shailesh Tyagi <shailesh@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: v0.48.3 argonaut update released
- From: James Page <james.page@xxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: REVIEW REQUEST: wip-rbd-helpercmds
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- REVIEW REQUEST: wip-rbd-helpercmds
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- Re: Question about configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- Re: Question about configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: geo replication
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Crushmap Design Question
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: What is the acceptable attachment file size on the mail server?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- What is the acceptable attachment file size on the mail server?
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: OSD crash, ceph version 0.56.1
- From: Ian Pye <ianpye@xxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD crash, ceph version 0.56.1
- From: Sage Weil <sage@xxxxxxxxxxx>
- OSD crash, ceph version 0.56.1
- From: Ian Pye <ianpye@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: geo replication
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: geo replication
- From: Mark Kampe <mark.kampe@xxxxxxxxxxx>
- [PATCH] osd/ReplicatedPG.cc: fix errors in _scrub()
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- geo replication
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Windows port
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Windows port
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- RE: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: "Lachfeld, Jutta" <jutta.lachfeld@xxxxxxxxxxxxxx>
- Re: Crushmap Design Question
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- RE: Crushmap Design Question
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- Re: Windows port
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Mark Kampe <mark.kampe@xxxxxxxxxxx>
- Re: OSD's slow down to a crawl
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Christopher Kunz <chrislist@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Wido den Hollander <wido@xxxxxxxxx>
- Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: "Lachfeld, Jutta" <jutta.lachfeld@xxxxxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Crushmap Design Question
- From: Wido den Hollander <wido@xxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.48.3 argonaut update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Crushmap Design Question
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Rados gateway init timeout with cache
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Crushmap Design Question
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: branches
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- branches
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Windows port
- From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: what could go wrong with two clusters on the same network?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD Crashed when runing "rbd list"
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Windows port
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Rados gateway init timeout with cache
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: OSD Crashed when runing "rbd list"
- From: James Page <james.page@xxxxxxxxxx>
- OSD Crashed when runing "rbd list"
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: Windows port
- From: Dino Yancey <dino2gnt@xxxxxxxxx>
- RE: Is Ceph recovery able to handle massive crash
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- RE: Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: v0.56.1 released
- From: Amon Ott <ao@xxxxxxxxxxxx>
- Re: v0.56.1 released
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: v0.56.1 released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: v0.56.1 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Windows port
- From: Cesar Mello <cmello@xxxxxxxxx>
- Re: librados/librbd compatibility issue with v0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: v0.56.1 released
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- v0.56.1 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Fwd: Interfaces proposed changes
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Caleb Miles <caleb.miles@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: argonaut stable update coming shortly
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 3/6] ceph: allow revoking duplicated caps issued by non-auth MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 4/6] ceph: allocate cap_release message when receiving cap import
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 3/6] ceph: allow revoking duplicated caps issued by non-auth MDS
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/6] ceph: move dirty inode to migrating list when clearing auth caps
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/6] ceph: re-calculate truncate_size for strip object
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: Sage Weil <sage@xxxxxxxxxxx>
- Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: be picky about osd request status type
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: be picky about osd request status type
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST] ceph: define ceph_encode_8_safe()
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- librados/librbd compatibility issue with v0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- argonaut stable update coming shortly
- From: Sage Weil <sage@xxxxxxxxxxx>
- Windows port
- From: Cesar Mello <cmello@xxxxxxxxx>
- ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Any idea about doing deduplication in ceph?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph stability
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 0/6] fix build and packaging issues
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: [PATCH 0/6] fix build and packaging issues
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 0/6] fix build and packaging issues
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 1/6] src/java/Makefile.am: fix default java dir
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 5/6] configure.ac: remove AC_PROG_RANLIB
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 2/6] ceph.spec.in: fix handling of java files
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 6/6] configure.ac: change junit4 handling
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 3/6] ceph.spec.in: rename libcephfs-java package to cephfs-java
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 4/6] ceph.spec.in: fix libcephfs-jni package name
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH, v2] rbd: define and use rbd_warn()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/4] rbd: add warning messages for missing arguments
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- THE END, for now
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: assign watch request more directly
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 5/6] rbd: move call osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/6] rbd: define generalized osd request op routines
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/6] rbd: initialize off and len in rbd_create_rw_op()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/6] rbd: don't assign extent info in rbd_req_sync_op()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/6] rbd: don't assign extent info in rbd_do_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/6] rbd: consolidate osd request setup
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/2] rbd: don't leak rbd_req for rbd_req_sync_notify_ack()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] rbd: don't leak rbd_req on synchronous requests
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] rbd: fix two leaks
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: combine rbd sync watch/unwatch functions
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: use a common layout for each device
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/3] rbd: don't bother calculating file mapping
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/3] rbd: open code rbd_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/3] rbd: pull in ceph_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/3] rbd: no need for file mapping calculation
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: kill ceph_osd_req_op->flags
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/4] rbd: assume single op in a request
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/4] rbd: there is really only one op
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/4] libceph: pass num_op with ops
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/4] rbd: pass num_op with ops array
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/4] rbd: explicitly support only one osd op
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 6/6] libceph: don't set pages or bio in ceph_osdc_alloc_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 5/6] libceph: don't set flags in ceph_osdc_alloc_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/6] libceph: drop osdc from ceph_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/6] libceph: drop snapid in ceph_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/6] libceph: pass length to ceph_calc_file_object_mapping()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/6] libceph: pass length to ceph_osdc_build_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/6] libceph: parameter cleanup
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 4/6] ceph: allocate cap_release message when receiving cap import
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 3/6] ceph: allow revoking duplicated caps issued by non-auth MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/6] ceph: move dirty inode to migrating list when clearing auth caps
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/6] ceph: re-calculate truncate_size for strip object
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/6] fixes for cephfs kernel client
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 27/29] mds: check if stray dentry is needed
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 29/29] mds: optimize C_MDC_RetryOpenRemoteIno
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 28/29] mds: don't issue caps while inode is exporting caps
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 26/29] mds: drop locks when opening remote dentry
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 25/29] mds: check null context in CDir::fetch()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 24/29] mds: rdlock prepended dest trace when handling rename
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 23/29] mds: fix cap mask for ifile lock
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 22/29] mds: fix replica state for LOCK_MIX_LOCK
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 21/29] mds: keep dentry lock in sync state as much as possible
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 18/29] mds: fix rename inode exportor check
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 19/29] mds: disable concurrent remote locking
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 20/29] mds: forbid creating file in deleted directory
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 15/29] mds: remove unnecessary is_xlocked check
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 16/29] mds: don't defer processing caps if inode is auth pinned
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 17/29] mds: call maybe_eval_stray after removing a replica dentry
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 14/29] mds: fix lock state transition check
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 13/29] mds: indroduce DROPLOCKS slave request
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 11/29] mds: fix anchor table commit race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 12/29] mds: fix on-going two phrase commits tracking
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 09/29] mds: mark rename inode as ambiguous auth on all involved MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 10/29] mds: skip frozen inode when assimilating dirty inodes' rstat
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 08/29] mds: only export directory fragments in stray to their auth MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 07/29] mds: don't trim ambiguous imports in MDCache::trim_non_auth_subtree
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 06/29] mds: use null dentry to find old parent of renamed directory
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 05/29] mds: don't journal null dentry for overwrited remote linkage
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 04/29] mds: xlock stray dentry when handling rename or unlink
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 03/29] mds: don't trigger assertion when discover races with rename
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 02/29] mds: fix Locker::simple_eval()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 01/29] mds: don't renew revoking lease
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 00/29] Various fixes for MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH REPOST 1/4] rbd: define and use rbd_warn()
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/4] rbd: add warning messages for missing arguments
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: separate layout init
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [PATCH REPOST 2/2] libceph: kill op_needs_trail()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] libceph: always allow trail in osd request
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] libceph: embed r_trail struct in ceph_osd_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 5/5] rbd: don't bother setting snapid in rbd_do_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/5] rbd: kill rbd_req_sync_op() snapc and snapid parameters
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/5] rbd: drop flags parameter from rbd_req_sync_exec()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/5] rbd: drop snapid parameter from rbd_req_sync_read()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/5] rbd: drop oid parameters from ceph_osdc_build_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/5] rbd: drop some unneeded parameters
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/2] rbd: standardize ceph_osd_request variable names
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: standardize rbd_request variable names
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/4] rbd: four minor patches
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- [PATCH REPOST] rbd: separate layout init
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/2] rbd: only get snap context for write requests
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] rbd: make exists flag atomic
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] rbd: only get snap context for write requests
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: end request on error in rbd_do_request() caller
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: When to use "filestore xattr use omap = true"
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [PATCH REPOST 2/2] rbd: a little more cleanup of rbd_rq_fn()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] rbd: encapsulate handling for a single request
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] rbd: simplify rbd_rq_fn()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: be picky about osd request status type
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/2] rbd: standardize ceph_osd_request variable names
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] rbd: standardize rbd_request variable names
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] rbd: standardize some variable names
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: When to use "filestore xattr use omap = true"
- From: Roberto Aguilar <roberto.c.aguilar@xxxxxxxxx>
- Re: When to use "filestore xattr use omap = true"
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Wido den Hollander <wido@xxxxxxxxx>
- When to use "filestore xattr use omap = true"
- From: Roberto Aguilar <roberto.c.aguilar@xxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Lorieri <lorieri@xxxxxxxxx>
- Meetup at Opencompute Summit 2013
- From: Stefan Majer <stefan.majer@xxxxxxxxx>
- [PATCH REPOST 4/4] rbd: add warnings to rbd_dev_probe_update_spec()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/4] rbd: add a warning in bio_chain_clone_range()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/4] rbd: add warning messages for missing arguments
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/4] rbd: define and use rbd_warn()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/4] rbd: add warnings
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] ceph: define ceph_encode_8_safe()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/4] rbd: use kmemdup()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/4] rbd: kill rbd_spec->image_id_len
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/4] rbd: kill rbd_spec->image_name_len
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/4] rbd: document rbd_spec structure
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/4] rbd: four minor patches
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] libceph: reformat __reset_osd()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: kernel rbd format=2
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: kernel rbd format=2
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Sage Weil <sage@xxxxxxxxxxx>
- radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Looking to Use Ceph
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Looking to Use Ceph
- From: Wido den Hollander <wido@xxxxxxxxx>
- Looking to Use Ceph
- From: "emyr.james" <emyr.james@xxxxxxxxxxxx>
- "hit suicide timeout" message after upgrade to 0.56
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: v0.56 released
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Ugis <ugis22@xxxxxxxxx>
- Re: v0.56 released
- From: norbi <norbi@xxxxxxxxxx>
- Re: v0.56 released
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: v0.56 released
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- RE: status monitoring options for ceph cluster
- From: Paul Pettigrew <Paul.Pettigrew@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: v0.56 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: v0.56 released
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- [GIT PULL] Ceph fixes for 3.8-rc2
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: v0.56 released
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: v0.56 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: v0.56 released
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: [GIT PULL] Ceph updates for 3.8
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: v0.56 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph logging level
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Very intensive I/O under mon process
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- Re: Very intensive I/O under mon process
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Ceph logging level
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Very intensive I/O under mon process
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Very intensive I/O under mon process
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: status monitoring options for ceph cluster
- From: Wido den Hollander <wido@xxxxxxxxx>
- status monitoring options for ceph cluster
- From: Ugis <ugis22@xxxxxxxxx>
- EU mirror issues
- From: Wido den Hollander <wido@xxxxxxxxx>
- Ceph logging level
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: ceph for small cluster?
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: v0.56 released
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: v0.56 released
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: v0.56 released
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- Re: 0.55 crashed during upgrade to bobtail
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: 0.55 crashed during upgrade to bobtail
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- 0.55 crashed during upgrade to bobtail
- From: Andrey Korolyov <andrey@xxxxxxx>
- v0.56 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph for small cluster?
- From: Matthew Roy <imjustmatthew@xxxxxxxxx>
- Patch Backlog
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: what could go wrong with two clusters on the same network?
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: ceph for small cluster?
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- Re: ceph for small cluster?
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- Re: Improving responsiveness of KVM guests on Ceph storage
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Improving responsiveness of KVM guests on Ceph storage
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- what could go wrong with two clusters on the same network?
- From: Xiaopong Tran <xiaopong.tran@xxxxxxxxx>
- Re: ceph for small cluster?
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Improving responsiveness of KVM guests on Ceph storage
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Improving responsiveness of KVM guests on Ceph storage
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Improving responsiveness of KVM guests on Ceph storage
- From: Andrey Korolyov <andrey@xxxxxxx>
- ceph for small cluster?
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- Re: Striped images and cluster misbehavior
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Striped images and cluster misbehavior
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Improving responsiveness of KVM guests on Ceph storage
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 3/3] libceph: WARN, don't BUG on unexpected connection states
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH 3/3] libceph: WARN, don't BUG on unexpected connection states
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/3] libceph: always reset osds when kicking
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/3] libceph: move linger requests sooner in kick_requests()
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH V3 3/8] use vfs __set_page_dirty interface instead of doing it inside filesystem
- From: Kamezawa Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
- Re: [PATCH] libceph: fix protocol feature mismatch failure path
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] libceph: fix protocol feature mismatch failure path
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] libceph: fix protocol feature mismatch failure path
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 3/3] libceph: WARN, don't BUG on unexpected connection states
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 2/3] libceph: always reset osds when kicking
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 1/3] libceph: move linger requests sooner in kick_requests()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 0/3] libceph: three bug fixes
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Any idea about doing deduplication in ceph?
- From: "lollipop" <lollipop_jin@xxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Any API to get metadata?
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] libceph: WARN, don't BUG on unexpected connection states
- From: Alex Elder <elder@xxxxxxxxxxx>
- Any API to get metadata?
- From: "lollipop" <lollipop_jin@xxxxxxx>
- [PATCH V3 3/8] use vfs __set_page_dirty interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- Re: automatic repair of inconsistent pg?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Problem with cephx in 0.55
- From: Maciej Gałkiewicz <maciejgalkiewicz@xxxxxxxxxxxxx>
- automatic repair of inconsistent pg?
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: drivers/block/rbd.c:2170:19: sparse: symbol 'rbd_dev_create' was not declared. Should it be static?
- From: Alex Elder <elder@xxxxxxxxxxx>
- drivers/block/rbd.c:2170:19: sparse: symbol 'rbd_dev_create' was not declared. Should it be static?
- From: Fengguang Wu <fengguang.wu@xxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: mon not marking dead osds down and slow streaming write performance
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Bobtail vs Argonaut Performance Preview
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: mon not marking dead osds down and slow streaming write performance
- From: Michael Chapman <michael.chapman@xxxxxxxxxx>
- Re: [GIT PULL] Ceph updates for 3.8
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [GIT PULL] Ceph updates for 3.8
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: Monitor crash
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: rbd kernel module crashes with different kernels
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: reference to global_context.h in dout.h
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: reference to global_context.h in dout.h
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: ceph stability
- From: Amon Ott <ao@xxxxxxxxxxxx>
- Monitor crash
- From: <Eric_YH_Chen@xxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: ceph stability
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: mon not marking dead osds down and slow streaming write performance
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mon not marking dead osds down and slow streaming write performance
- From: Michael Chapman <michael.chapman@xxxxxxxxxx>
- Re: rbd caching issue
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: Segfault running libcephfs tests
- From: Sage Weil <sage@xxxxxxxxxxx>
- Segfault running libcephfs tests
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- [GIT PULL] Ceph updates for 3.8
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 2/2] Add librados aio stat tests
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 1/2] Implement librados aio_stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 0/2] Librados aio stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: mon not marking dead osds down and slow streaming write performance
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: reference to global_context.h in dout.h
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: reference to global_context.h in dout.h
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD's slow down to a crawl
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Bobtail vs Argonaut Performance Preview
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: OSD's slow down to a crawl
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: OSD's slow down to a crawl
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: ceph stability
- From: Sam Lang <sam.lang@xxxxxxxxxxx>
- OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- reference to global_context.h in dout.h
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: ceph stability
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: rbd kernel module crashes with different kernels
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: rbd kernel module crashes with different kernels
- From: Ugis <ugis22@xxxxxxxxx>
- Re: ceph stability
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: ceph stability
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: ceph -s loops/hangs! ubuntu raring ceph bug?
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: rbd kernel module crashes with different kernels
- From: Alex Elder <elder@xxxxxxxxxxx>
- rbd kernel module crashes with different kernels
- From: Ugis <ugis22@xxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: deleting non existing pgs ?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: deleting non existing pgs ?
- From: norbi <norbi@xxxxxxxxxx>
- deleting non existing pgs ?
- From: norbi <norbi@xxxxxxxxxx>
- Re: ceph -s loops/hangs! ubuntu raring ceph bug?
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- ceph -s loops/hangs! ubuntu raring ceph bug?
- From: Tibet Himalkaya <himalkaya@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: [PATCH] implement librados aio_stat
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] implement librados aio_stat
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Crash whilst detecting mime-type in radosgw/0.55.1
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: ceph stability
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Recovery stuck and radosgateway not initializing
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Crash whilst detecting mime-type in radosgw/0.55.1
- From: James Page <james.page@xxxxxxxxxx>
- Re: ceph stability
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Stratos Psomadakis <psomas@xxxxxxxx>
- Re: ceph stability
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: Striped images and cluster misbehavior
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: ceph stability
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: ceph stability
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: [PATCH] implement librados aio_stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- ceph stability
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- mon not marking dead osds down and slow streaming write performance
- From: Michael Chapman <michael.chapman@xxxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Empty directory size greater than zero and can't remove
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Rados consistency model
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: rbd map command hangs for 15 minutes during system start up
- From: Alex Elder <elder@xxxxxxxxxxx>
- RE: Recovery stuck and radosgateway not initializing
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>