CEPH Filesystem Development
- Re: [PATCH 01/25] mds: fix end check in Server::handle_client_readdir()
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] net/ceph/osdmap.c: fix undefined behavior when using snprintf()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Sam Lang <sam.lang@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Using a Data Pool
- From: Paul Sherriffs <PSherriffs@xxxxxxxxxxxxxxxx>
- Re: Will multi-monitor speed up pg initializing?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: MDS placement
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: MDS placement
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: /etc/init.d/ceph bug for multi-host when using -a option
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: MDS placement
- From: Wido den Hollander <wido@xxxxxxxxx>
- /etc/init.d/ceph bug for multi-host when using -a option
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Will multi-monitor speed up pg initializing?
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: ssh passwords
- From: Neil Levine <neil.levine@xxxxxxxxxxx>
- Re: ssh passwords
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- [PATCH 05/25] mds: introduce XSYN to SYNC lock state transition
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 16/25] mds: Always use {push,pop}_projected_linkage to change linkage
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 17/25] mds: don't replace existing slave request
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 18/25] mds: fix for MDCache::adjust_bounded_subtree_auth
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 19/25] mds: fix for MDCache::disambiguate_imports
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 13/25] mds: fix slave rename rollback
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 23/25] mds: move variables special to rename into MDRequest::more
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 25/25] mds: fetch missing inodes from disk
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 21/25] mds: don't journal opened non-auth inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 24/25] mds: rejoin remote wrlocks and frozen auth pin
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 22/25] mds: properly clear CDir::STATE_COMPLETE when replaying EImportStart
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 07/25] mds: don't early reply rename
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 20/25] mds: journal inode's projected parent when doing link rollback
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 15/25] mds: send resolve messages after all MDS reach resolve stage
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 10/25] mds: force journal straydn for rename if necessary
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 14/25] mds: split reslove into two sub-stages
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 03/25] mds: lock remote inode's primary dentry during rename
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 12/25] mds: preserve non-auth/unlinked objects until slave commit
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 11/25] mds: don't journal non-auth rename source directory
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 08/25] mds: fix "had dentry linked to wrong inode" warning
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 09/25] mds: splits rename force journal check into separate function
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 06/25] mds: properly set error_dentry for discover reply
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 04/25] mds: allow journaling multiple root inodes in EMetaBlob
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 02/25] mds: check deleted directory in Server::rdlock_path_xlock_dentry
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 01/25] mds: fix end check in Server::handle_client_readdir()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 00/24] fixes for MDS cluster recovery
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: ssh passwords
- From: Neil Levine <neil.levine@xxxxxxxxxxx>
- Re: on disk encryption
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: on disk encryption
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ssh passwords
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: ssh passwords
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: handling fs errors
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: handling fs errors
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/3] rbd: check for overflow in rbd_get_num_segments()
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH 1/3] rbd: small changes
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [PATCH 12/12] rbd: get rid of rbd_req_sync_exec()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 11/12] rbd: implement sync method with new code
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 10/12] rbd: send notify ack asynchronously
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 09/12] rbd: get rid of rbd_req_sync_notify_ack()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 08/12] rbd: use new code for notify ack
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 07/12] rbd: get rid of rbd_req_sync_watch()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 06/12] rbd: implement watch/unwatch with new code
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 05/12] rbd: get rid of rbd_req_sync_read()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 04/12] rbd: implement sync object read with new code
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 03/12] rbd: kill rbd_req_coll and rbd_request
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 02/12] rbd: kill rbd_rq_fn() and all other related code
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 01/12] rbd: new request tracking code
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 00/12] rbd: new request tracking code
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: handling fs errors
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- [PATCH 3/3] rbd: don't retry setting up header watch
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 2/3] rbd: check for overflow in rbd_get_num_segments()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- [PATCH 1/3] rbd: small changes
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 0/3] rbd: a few simple changes
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [0.48.3] OSD memory leak when scrubbing
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: [0.48.3] OSD memory leak when scrubbing
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: on disk encryption
- From: James Page <james.page@xxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- MDS placement
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: [0.48.3] OSD memory leak when scrubbing
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: ssh passwords
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: Questions about journals, performance and disk utilization.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: ssh passwords
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: questions on networks and hardware
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: ssh passwords
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: ssh passwords
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Sage Weil <sage@xxxxxxxxxxx>
- Questions about journals, performance and disk utilization.
- From: martin <martin@xxxxxxxxxx>
- Re: what is the syntax to use to specify xfs in ceph.conf
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- [0.48.3] OSD memory leak when scrubbing
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: ssh passwords
- From: Neil Levine <neil.levine@xxxxxxxxxxx>
- [PATCH] net/ceph/osdmap.c: fix undefined behavior when using snprintf()
- From: Cong Ding <dinggnu@xxxxxxxxx>
- what is the syntax to use to specify xfs in ceph.conf
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Re: ssh passwords
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: ssh passwords
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: James Page <james.page@xxxxxxxxxx>
- Re: handling fs errors
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- ssh passwords
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Throttle::wait use case clarification
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: handling fs errors
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: RGW object purging in upstream caches
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- RGW object purging in upstream caches
- From: Wido den Hollander <wido@xxxxxxxxx>
- Ceph Bobtail Performance: IO Scheduler Comparison Article
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: questions on networks and hardware
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: handling fs errors
- From: Wido den Hollander <wido@xxxxxxxxx>
- RE: handling fs errors
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: handling fs errors
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: handling fs errors
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- handling fs errors
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Consistently reading/writing rados objects via command line
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Consistently reading/writing rados objects via command line
- From: Nick Bartos <nick@xxxxxxxxxxxxxxx>
- Re: flashcache
- From: John Nielsen <lists@xxxxxxxxxxxx>
- Re: questions on networks and hardware
- From: John Nielsen <lists@xxxxxxxxxxxx>
- Re: Concepts of whole cluster snapshots/backups and backups in general.
- From: Michael Grosser <mail@xxxxxxxxxxxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Ceph docs page down
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph docs page down
- From: Wido den Hollander <wido@xxxxxxxxx>
- Ceph docs page down
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: Throttle::wait use case clarification
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Wido den Hollander <wido@xxxxxxxxx>
- [PATCH] Always Signal() the first Cond when changing the maximum
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Ulysse 31 <ulysse31@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Throttle::wait use case clarification
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: osd max write size
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- osd max write size
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: questions on networks and hardware
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: questions on networks and hardware
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph replication and data redundancy
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: questions on networks and hardware
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: Constantinos Venetsanopoulos <cven@xxxxxxxx>
- Re: Understanding Ceph
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Understanding Ceph
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Throttle::wait use case clarification
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Inktank team @ FOSDEM 2013 ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Inktank team @ FOSDEM 2013 ?
- From: Constantinos Venetsanopoulos <cven@xxxxxxxx>
- Re: Understanding Ceph
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Peter Smith <peterfruits@xxxxxxxxx>
- Re: flashcache
- From: Joseph Glanville <joseph.glanville@xxxxxxxxxxxxxx>
- Re: Understanding Ceph
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Understanding Ceph
- From: Peter Smith <peterfruits@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Peter Smith <peterfruits@xxxxxxxxx>
- Re: Understanding Ceph
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- RE: max useful journal size
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Peter Smith <peterfruits@xxxxxxxxx>
- Re: Understanding Ceph
- From: Wenhao Xu <xuwenhao2008@xxxxxxxxx>
- Re: Understanding Ceph
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Understanding Ceph
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- RE: questions on networks and hardware
- From: "Holcombe, Christopher" <cholcomb@xxxxxxxxxxx>
- Understanding Ceph
- From: Peter Smith <peterfruits@xxxxxxxxx>
- RE: max useful journal size
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Simon Leinen <simon.leinen@xxxxxxxxx>
- Re: max useful journal size
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: max useful journal size
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- rgw geo-replication and disaster recovery
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- questions on networks and hardware
- From: John Nielsen <lists@xxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: max useful journal size
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: max useful journal size
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: max useful journal size
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- max useful journal size
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: branches
- From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
- ceph.com/debian alias changed argonaut -> bobtail
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Simon Leinen <simon.leinen@xxxxxxxxx>
- Re: branches
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: branches
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: branches
- From: Sage Weil <sage@xxxxxxxxxxx>
- error for Translation-en .deb package
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: tgt backend driver for Ceph block devices (rbd)
- From: FUJITA Tomonori <fujita.tomonori@xxxxxxxxxxxxx>
- Re: ListBuckets works but PutObject does not
- From: Cesar Mello <cmello@xxxxxxxxx>
- Re: ListBuckets works but PutObject does not
- From: Christophe Le Guern <c35sys@xxxxxxxxx>
- Re: ListBuckets works but PutObject does not
- From: Cesar Mello <cmello@xxxxxxxxx>
- ListBuckets works but PutObject not
- From: Cesar Mello <cmello@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: master branch issue in ceph.git
- From: Sage Weil <sage@xxxxxxxxxxx>
- master branch issue in ceph.git
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- ceph-client/testing branch updated
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: radosgw boto issue
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: HOWTO: teuthology and code coverage
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: radosgw boto issue
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: radosgw boto issue
- From: Derek Yarnell <derek@xxxxxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/6] rbd: consolidate osd request setup
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: radosgw boto issue
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- radosgw boto issue
- From: Derek Yarnell <derek@xxxxxxxxxxxxxx>
- Re: [PATCH, v2] rbd: encapsulate handling for a single request
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: understanding cephx
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: ceph-osd can`t start after crush server
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [PATCH, v2] rbd: encapsulate handling for a single request
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: encapsulate handling for a single request
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Offsite backup solutions used by ceph providers?
- From: Michael Grosser <mail@xxxxxxxxxxxxxxxxxx>
- Re: ceph from poc to production
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: ceph from poc to production
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: Single host VM limit when using RBD
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: Single host VM limit when using RBD
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH] rbd: fix type of snap_id in rbd_dev_v2_snap_info()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 4/4] rbd: add warnings to rbd_dev_probe_update_spec()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 3/4] rbd: add a warning in bio_chain_clone_range()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH, v2] rbd: define and use rbd_warn()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Current OSD weight vs. target weight
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Sam Lang <sam.lang@xxxxxxxxxxx>
- Re: flashcache
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: ceph from poc to production
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: ceph from poc to production
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: ceph from poc to production
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: flashcache
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Current OSD weight vs. target weight
- From: Christopher Kunz <chrislist@xxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Hit suicide timeout after adding new osd
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: flashcache
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Hit suicide timeout after adding new osd
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: flashcache
- From: Joseph Glanville <joseph.glanville@xxxxxxxxxxxxxx>
- ceph replication and data redundancy
- From: Ulysse 31 <ulysse31@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- RE: Single host VM limit when using RBD
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: Single host VM limit when using RBD
- From: Andrey Korolyov <andrey@xxxxxxx>
- Single host VM limit when using RBD
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Yann Dupont <Yann.Dupont@xxxxxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: flashcache
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: assign watch request more directly
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/6] rbd: consolidate osd request setup
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: combine rbd sync watch/unwatch functions
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: use a common layout for each device
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: kill ceph_osd_req_op->flags
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- tgt backend driver for Ceph block devices (rbd)
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/3] rbd: no need for file mapping calculation
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/4] rbd: explicitly support only one osd op
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Ceph slow request & unstable issue
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH 00/29] Various fixes for MDS
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: "Yan, Zheng " <yanzheng@xxxxxxxx>
- Re: [PATCH REPOST 0/6] libceph: parameter cleanup
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: separate layout init
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/2] libceph: embed r_trail struct in ceph_osd_request()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: flashcache
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: mds: first stab at lookup-by-ino problem/soln description
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: flashcache
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: flashcache
- From: Sage Weil <sage@xxxxxxxxxxx>
- flashcache
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Tomasz Paszkowski <ss7pro@xxxxxxxxx>
- Re: [PATCH] configure.ac: fix problem with --enable-cephfs-java
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: ceph from poc to production
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: 8 out of 12 OSDs died after expansion on 0.56.1 (void OSD::do_waiters())
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Jeff Mitchell <jeffrey.mitchell@xxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sage Weil <sage@xxxxxxxxxxx>
- 8 out of 12 OSDs died after expansion on 0.56.1 (void OSD::do_waiters())
- From: Wido den Hollander <wido@xxxxxxxxx>
- OSD don't start after upgrade form 0.47.2 to 0.56.1
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH] libceph: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Yann Dupont <Yann.Dupont@xxxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Dino Yancey <dino2gnt@xxxxxxxxx>
- HOWTO: teuthology and code coverage
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH] configure.ac: fix problem with --enable-cephfs-java
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: CephFS issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: understanding cephx
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- Re: CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Re: Ceph slow request & unstable issue
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- ceph-osd can`t start after crush server
- From: "aleonov@xxxxxxxxxxxxxx" <aleonov@xxxxxxxxxxxxxx>
- Re: Adding flashcache for data disk to cache Ceph metadata writes
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Adding flashcache for data disk to cache Ceph metadata writes
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH] libceph: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/5] rbd: drop some unneeded parameters
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/2] rbd: standardize some variable names
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 0/4] rbd: four minor patches
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/2] rbd: only get snap context for write requests
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Ceph slow request & unstable issue
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: make exists flag atomic
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: end request on error in rbd_do_request() caller
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/2] rbd: a little more cleanup of rbd_rq_fn()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH REPOST 1/2] rbd: encapsulate handling for a single request
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- mds: first stab at lookup-by-ino problem/soln description
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST] libceph: reformat __reset_osd()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Test infrastructure: 2 or more servers?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Test infrastructure: 2 or more servers?
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- REMINDER: all argonaut users should upgrade to v0.48.3argonaut
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Concepts of whole cluster snapshots/backups and backups in general.
- From: Michael Grosser <mail@xxxxxxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Another rbd compatibility issue between 0.48.2argonaut-2 and 0.56.1 ?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Grid data placement
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Patch "rbd: kill create_snap sysfs entry" has been added to the 3.4-stable tree
- From: <gregkh@xxxxxxxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Grid data placement
- From: Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- Re: Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Rack Awareness
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Rack Awareness
- From: Wido den Hollander <wido@xxxxxxxxx>
- Rack Awareness
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: mon down
- From: jie sun <0maidou0@xxxxxxxxx>
- Re: CephFS issue
- From: "Joshua J. Kugler" <joshua@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: James Page <james.page@xxxxxxxxxxxxx>
- mon down
- From: jie sun <0maidou0@xxxxxxxxx>
- RE: Slow requests
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- [PATCH 2/2] rbd: prevent open for image being removed
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 1/2] rbd: define flags field, use it for exists flag
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 0/2] rbd: prevent open of image being unmapped
- From: Alex Elder <elder@xxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: code coverage and teuthology
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- understanding cephx
- From: Michael Menge <michael.menge@xxxxxxxxxxxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Tom Lanyon <tom@xxxxxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Stratos Psomadakis <psomas@xxxxxxxx>
- [PATCH 1/2] mds: fix end check in Server::handle_client_readdir()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- OSD nodes with >=8 spinners, SSD-backed journals, and their performance impact
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Re: CephFS issue
- From: Wido den Hollander <wido@xxxxxxxxx>
- CephFS issue
- From: Alexis GÜNST HORN <alexis.gunsthorn@xxxxxxxxxxxx>
- Another rbd compatibility issue between 0.48.2argonaut-2 and 0.56.1 ?
- From: Simon Frerichs | Fremaks GmbH <frerichs@xxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Sage Weil <sage@xxxxxxxxxxx>
- [debug help]: get dprintk() outputs in src/crush/mapper.c or net/crush/mapper.c
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: rbd kernel driver on the osd server
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: rbd kernel driver on the osd server
- From: Wido den Hollander <wido@xxxxxxxxx>
- rbd kernel driver on the osd server
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- RE: Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Seperate metadata disk for OSD
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Seperate metadata disk for OSD
- From: "Yan, Zheng " <yanzheng@xxxxxxxx>
- Re: Striped images and cluster misbehavior
- From: Andrey Korolyov <andrey@xxxxxxx>
- Seperate metadata disk for OSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [PATCH] rbd: fix type of snap_id in rbd_dev_v2_snap_info()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: What is the acceptable attachment file size on the mail server?
- From: "Yan, Zheng " <yanzheng@xxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: radosgw / fail to authorized request
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- radosgw / fail to authorized request
- From: Shailesh Tyagi <shailesh@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: v0.48.3 argonaut update released
- From: James Page <james.page@xxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: REVIEW REQUEST: wip-rbd-helpercmds
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- REVIEW REQUEST: wip-rbd-helpercmds
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- Re: Question about configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- Re: Question about configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Question about configuration
- From: Yasuhiro Ohara <yasu@xxxxxxxxxxxx>
- code coverage and teuthology
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: geo replication
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Crushmap Design Question
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: What is the acceptable attachment file size on the mail server?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- What is the acceptable attachment file size on the mail server?
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Ceph version 0.56.1, data loss on power failure
- From: Wido den Hollander <wido@xxxxxxxxx>
- Ceph version 0.56.1, data loss on power failure
- From: Marcin Szukala <szukala.marcin@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: OSD crash, ceph version 0.56.1
- From: Ian Pye <ianpye@xxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD crash, ceph version 0.56.1
- From: Sage Weil <sage@xxxxxxxxxxx>
- OSD crash, ceph version 0.56.1
- From: Ian Pye <ianpye@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Noah Watkins <jayhawk@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH] configure.ac: check for org.junit.rules.ExternalResource
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: geo replication
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: geo replication
- From: Mark Kampe <mark.kampe@xxxxxxxxxxx>
- [PATCH] osd/ReplicatedPG.cc: fix errors in _scrub()
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- geo replication
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- Re: Windows port
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Windows port
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Dave Spano <dspano@xxxxxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- RE: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: "Lachfeld, Jutta" <jutta.lachfeld@xxxxxxxxxxxxxx>
- Re: Crushmap Design Question
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- RE: Crushmap Design Question
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- Re: Windows port
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Mark Kampe <mark.kampe@xxxxxxxxxxx>
- Re: OSD's slow down to a crawl
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Christopher Kunz <chrislist@xxxxxxxxxxx>
- Re: Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: Wido den Hollander <wido@xxxxxxxxx>
- Are there significant performance enhancements in 0.56.x to be expected soon or planned in the near future?
- From: "Lachfeld, Jutta" <jutta.lachfeld@xxxxxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Crushmap Design Question
- From: Wido den Hollander <wido@xxxxxxxxx>
- RE: OSD's slow down to a crawl
- From: Matthew Anderson <matthewa@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.48.3 argonaut update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: Crushmap Design Question
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: Adjusting replicas on argonaut
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Adjusting replicas on argonaut
- From: Bryan Stillwell <bstillwell@xxxxxxxxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Rados gateway init timeout with cache
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Crushmap Design Question
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: branches
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- branches
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Windows port
- From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
- Re: "hit suicide timeout" message after upgrade to 0.56
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: what could go wrong with two clusters on the same network?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD Crashed when runing "rbd list"
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Windows port
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Rados gateway init timeout with cache
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: recoverying from 95% full osd
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: OSD Crashed when runing "rbd list"
- From: James Page <james.page@xxxxxxxxxx>
- OSD Crashed when runing "rbd list"
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: Windows port
- From: Dino Yancey <dino2gnt@xxxxxxxxx>
- RE: Is Ceph recovery able to handle massive crash
- From: "Moore, Shawn M" <smmoore@xxxxxxxxxxx>
- RE: Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Rados gateway init timeout with cache
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- recoverying from 95% full osd
- From: Roman Hlynovskiy <roman.hlynovskiy@xxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: v0.56.1 released
- From: Amon Ott <ao@xxxxxxxxxxxx>
- Re: v0.56.1 released
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: v0.56.1 released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: v0.56.1 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Windows port
- From: Cesar Mello <cmello@xxxxxxxxx>
- Re: librados/librbd compatibility issue with v0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: v0.56.1 released
- From: Dennis Jacobfeuerborn <dennisml@xxxxxxxxxxxx>
- v0.56.1 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- osd down (for 2 about 2 minutes) error after adding a new host to my cluster
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: OSD memory leaks?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Fwd: Interfaces proposed changes
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Caleb Miles <caleb.miles@xxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Wido den Hollander <wido@xxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: radosgw segfault in 0.56
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: argonaut stable update coming shortly
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 3/6] ceph: allow revoking duplicated caps issued by non-auth MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: Filesystem size inconsistance problem
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 5/6] ceph: check mds_wanted for imported cap
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 4/6] ceph: allocate cap_release message when receiving cap import
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 3/6] ceph: allow revoking duplicated caps issued by non-auth MDS
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/6] ceph: move dirty inode to migrating list when clearing auth caps
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/6] ceph: re-calculate truncate_size for strip object
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 6/6] ceph: don't acquire i_mutex ceph_vmtruncate_work
- From: Sage Weil <sage@xxxxxxxxxxx>
- Filesystem size inconsistance problem
- From: Drunkard Zhang <gongfan193@xxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Xing Lin <xinglin@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: be picky about osd request status type
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- ceph stays degraded after crushmap rearrangement
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Is Ceph recovery able to handle massive crash
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Is Ceph recovery able to handle massive crash
- From: Denis Fondras <ceph@xxxxxxxxxxx>
- Re: [PATCH REPOST] rbd: be picky about osd request status type
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH REPOST] ceph: define ceph_encode_8_safe()
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- librados/librbd compatibility issue with v0.56
- From: Sage Weil <sage@xxxxxxxxxxx>
- argonaut stable update coming shortly
- From: Sage Weil <sage@xxxxxxxxxxx>
- Windows port
- From: Cesar Mello <cmello@xxxxxxxxx>
- ceph caps (Ganesha + Ceph pnfs)
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Any idea about doing deduplication in ceph?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph stability
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: which Linux kernel version corresponds to 0.48argonaut?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Usage of CEPH FS versa HDFS for Hadoop: TeraSort benchmark performance comparison issue
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH v2] rbd: Support plain/json/xml output formatting
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 0/6] fix build and packaging issues
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: [PATCH 0/6] fix build and packaging issues
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 0/2] Librados aio stat
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 0/6] fix build and packaging issues
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 1/6] src/java/Makefile.am: fix default java dir
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 5/6] configure.ac: remove AC_PROG_RANLIB
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 2/6] ceph.spec.in: fix handling of java files
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 6/6] configure.ac: change junit4 handling
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 3/6] ceph.spec.in: rename libcephfs-java package to cephfs-java
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH 4/6] ceph.spec.in: fix libcephfs-jni package name
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH, v2] rbd: define and use rbd_warn()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH REPOST 2/4] rbd: add warning messages for missing arguments
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: OSD memory leaks?
- From: Sébastien Han <han.sebastien@xxxxxxxxx>
- THE END, for now
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: assign watch request more directly
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 6/6] rbd: move remaining osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 5/6] rbd: move call osd op setup into rbd_osd_req_op_create()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 4/6] rbd: define generalized osd request op routines
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/6] rbd: initialize off and len in rbd_create_rw_op()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/6] rbd: don't assign extent info in rbd_req_sync_op()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/6] rbd: don't assign extent info in rbd_do_request()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/6] rbd: consolidate osd request setup
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/2] rbd: don't leak rbd_req for rbd_req_sync_notify_ack()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/2] rbd: don't leak rbd_req on synchronous requests
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/2] rbd: fix two leaks
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: combine rbd sync watch/unwatch functions
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: use a common layout for each device
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 3/3] rbd: don't bother calculating file mapping
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 2/3] rbd: open code rbd_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 1/3] rbd: pull in ceph_calc_raw_layout()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST 0/3] rbd: no need for file mapping calculation
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH REPOST] rbd: kill ceph_osd_req_op->flags
- From: Alex Elder <elder@xxxxxxxxxxx>