CEPH Filesystem Development
- Re: [PATCH 7/8] mds: fix race between scatter gather and dirfrag export
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH v2] Ceph: Punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- tcmalloc memory leak on squeeze
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- [PATCH v2] Ceph: Punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: mon crash
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH V1] ceph: fix sleeping function called from invalid context.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: avoid meaningless calling ceph_caps_revoking if sync_mode == WB_SYNC_ALL.
- From: Sage Weil <sage@xxxxxxxxxxx>
- [GIT PULL] Ceph fix for -rc7
- From: Sage Weil <sage@xxxxxxxxxxx>
- squeeze tcmalloc memory leak
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Issue with RGW API
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: [PATCH 0/8] misc fixes for mds
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mon crash
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: [PATCH 7/8] mds: fix race between scatter gather and dirfrag export
- From: Sage Weil <sage@xxxxxxxxxxx>
- Issue with RGW API
- From: Edward Hope-Morley <edward.hope-morley@xxxxxxxxxxxxx>
- Re: [PATCH 6/8] mds: don't journal bare dirfrag
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: RGW and Keystone
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: Erasure code library summary
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: dynamically move busy pg's to fast storage
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Erasure code library summary
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- dynamically move busy pg's to fast storage
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RGW and Keystone
- From: Edward Hope-Morley <opentastic@xxxxxxxxx>
- Re: RGW and Keystone
- From: Edward Hope-Morley <edward.hope-morley@xxxxxxxxxxxxx>
- mon crash
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Erasure code library summary
- From: Alex Elsayed <eternaleye@xxxxxxxxx>
- Re: Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Erasure code library summary
- From: Alex Elsayed <eternaleye@xxxxxxxxx>
- [PATCH] ceph: avoid meaningless calling ceph_caps_revoking if sync_mode == WB_SYNC_ALL.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH V1] ceph: fix sleeping function called from invalid context.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Comments on Ceph distributed parity implementation
- From: Paul Von-Stamwitz <PVonStamwitz@xxxxxxxxxxxxxx>
- Re: Erasure code library summary
- From: Alex Elsayed <eternaleye@xxxxxxxxx>
- Re: Erasure code library summary
- From: Alex Elsayed <eternaleye@xxxxxxxxx>
- Re: Re: [PATCH] ceph: fix sleeping function called from invalid context.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Re: [PATCH] ceph: fix sleeping function called from invalid context.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: ceph in linux-next
- From: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
- Fwd: [ceph-users] Ceph cluster and rbd (communication trouble?)
- From: Matthijs Möhlmann <matthijs@xxxxxxxxxxxx>
- Re: OSD throttles documentation
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- OSD throttles documentation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph-deploy problems on weird /dev device names?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v3 06/13] locks: protect most of the file_lock handling with i_lock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- ceph in linux-next
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] rbd: remove RBD_DEBUG
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] rbd: silence GCC warnings
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix sleeping function called from invalid context.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Harvey Skinner <hpmpec2a@xxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: James Plank <plank@xxxxxxxxxx>
- Erasure code library summary
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Writing to RBD image while it's snapshot is being created causes I/O errors
- From: Karol Jurak <karol.jurak@xxxxxxxxx>
- [PATCH] ceph: fix sleeping function called from invalid context.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Benoît Parrein <benoit.parrein@xxxxxxxxxxxxxxxxxxxxxxx>
- Re: Writing to RBD image while it's snapshot is being created causes I/O errors
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] rbd rm <image> results in osd marked down wrongly with 0.61.3
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Elso Andras <elso.andras@xxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Benoît Parrein <benoit.parrein@xxxxxxxxxxxxxxxxxxxxxxx>
- RE: Comments on Ceph distributed parity implementation
- From: Paul Von-Stamwitz <PVonStamwitz@xxxxxxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH v3 07/13] locks: avoid taking global lock if possible when waking up blocked waiters
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v3 06/13] locks: protect most of the file_lock handling with i_lock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Elso Andras <elso.andras@xxxxxxxxx>
- Re: [PATCH v3 06/13] locks: protect most of the file_lock handling with i_lock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 04/13] locks: make "added" in __posix_lock_file a bool
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 03/13] locks: comment cleanups and clarifications
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 05/13] locks: encapsulate the fl_link list handling
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 02/13] locks: make generic_add_lease and generic_delete_lease static
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 00/13] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 08/13] locks: convert fl_link to a hlist_node
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 09/13] locks: turn the blocked_list into a hashtable
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 01/13] cifs: use posix_unblock_lock instead of locks_delete_block
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 06/13] locks: protect most of the file_lock handling with i_lock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 12/13] seq_file: add seq_list_*_percpu helpers
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 07/13] locks: avoid taking global lock if possible when waking up blocked waiters
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 10/13] locks: add a new "lm_owner_key" lock operation
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 11/13] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v3 13/13] locks: move file_lock_list to a set of percpu hlist_heads and convert file_lock_lock to an lglock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Fwd: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH] ceph: remove sb_start/end_write in ceph_aio_write.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Elso Andras <elso.andras@xxxxxxxxx>
- [PATCH 8/8] mds: fix remote wrlock rejoin
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 5/8] mds: fix cross-authorty rename race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 7/8] mds: fix race between scatter gather and dirfrag export
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 4/8] mds: try purging stray inode after storing backtrace
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 3/8] mds: handle undefined dirfrags when opening inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 6/8] mds: don't journal bare dirfrag
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/8] mds: fix frozen check in Server::try_open_auth_dirfrag()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/8] mds: don't update migrate_seq when importing non-auth cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/8] misc fixes for mds
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH] rbd: remove RBD_DEBUG
- From: Paul Bolle <pebolle@xxxxxxxxxx>
- [PATCH] rbd: silence GCC warnings
- From: Paul Bolle <pebolle@xxxxxxxxxx>
- Re: Writing to RBD image while it's snapshot is being created causes I/O errors
- From: Karol Jurak <karol.jurak@xxxxxxxxx>
- rbdwrapper: userland library for transparent access to rbd images
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 3/3] mds: fix race between cap issue and revoke
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/3] mds: fix cap revoke race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Benoît Parrein <benoit.parrein@xxxxxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 2/2] rbd: use the correct length for format 2 object names
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: [PATCH 1/2] rbd: fetch object order before using it
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: krbd + format=2 ?
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: [PATCH v2 06/14] locks: don't walk inode->i_flock list in locks_show
- From: Simo <idra@xxxxxxxxx>
- Re: [PATCH v2 06/14] locks: don't walk inode->i_flock list in locks_show
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- Using GF-complete in Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Comments on Ceph distributed parity implementation
- From: Paul Von-Stamwitz <PVonStamwitz@xxxxxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Joe Buck <jbbuck@xxxxxxxxx>
- Re: Comments on Ceph distributed parity implementation
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Comments on Ceph distributed parity implementation
- From: "Martin Flyvbjerg" <martinflyvbjerg@xxxxxxx>
- AW: radosgw- bind user to pool
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: [PATCH 2/2] Punch hole support against 3.10-rc5
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/2] Kernel file system client support for punch hole
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Writing to RBD image while it's snapshot is being created causes I/O errors
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 1/2] Punch hole support against 3.8-rc3
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH 2/2] Punch hole support against 3.10-rc5
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH 0/2] Kernel file system client support for punch hole
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: Issues with ceph-deploy/deph-disk
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: radosgw- bind user to pool
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Writing to RBD image while it's snapshot is being created causes I/O errors
- From: Karol Jurak <karol.jurak@xxxxxxxxx>
- PG recovery throttling and queue processing optimizations
- From: Sergey Fionov <fionov@xxxxxxxxx>
- radosgw- bind user to pool
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Issues with ceph-deploy/deph-disk
- From: Luke Jing Yuan <jyluke@xxxxxxxx>
- Re: [PATCH v2 06/14] locks: don't walk inode->i_flock list in locks_show
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v2 06/14] locks: don't walk inode->i_flock list in locks_show
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH 1/2] rbd: fetch object order before using it
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v2 14/14] locks: move file_lock_list to a set of percpu hlist_heads and convert file_lock_lock to an lglock
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v2 13/14] seq_file: add seq_list_*_percpu helpers
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v2 12/14] locks: give the blocked_hash its own spinlock
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v2 12/14] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v2 07/14] locks: convert to i_lock to protect i_flock list
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v2 12/14] locks: give the blocked_hash its own spinlock
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v2 11/14] locks: add a new "lm_owner_key" lock operation
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm <image> results in osd marked down wrongly with 0.61.3
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v2 10/14] locks: turn the blocked_list into a hashtable
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v2 07/14] locks: convert to i_lock to protect i_flock list
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [ceph-users] rbd rm <image> results in osd marked down wrongly with 0.61.3
- From: Smart Weblications GmbH - Florian Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx>
- AW: AW: AW: radosrgw performance problems
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: krbd + format=2 ?
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: krbd + format=2 ?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- [PATCH 1/2] rbd: fetch object order before using it
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- [PATCH 2/2] rbd: use the correct length for format 2 object names
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- v0.64 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Ceph developers, please note: changes to 'ceph' CLI tool in master branch
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: AW: AW: radosrgw performance problems
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- AW: AW: radosrgw performance problems
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: AW: radosrgw performance problems
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- AW: radosrgw performance problems
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- AW: radosrgw performance problems
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: krbd + format=2 ?
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- [GIT PULL] Ceph fixes for -rc6
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: PGLog::rewind_divergent_log use case
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v2 00/14] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v2 00/14] locks: scalability improvements for file locking
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: radosrgw performance problems
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- radosrgw performance problems
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- [PATCH v2 03/14] locks: comment cleanups and clarifications
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 04/14] locks: make "added" in __posix_lock_file a bool
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 06/14] locks: don't walk inode->i_flock list in locks_show
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 02/14] locks: make generic_add_lease and generic_delete_lease static
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 01/14] cifs: use posix_unblock_lock instead of locks_delete_block
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 10/14] locks: turn the blocked_list into a hashtable
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 05/14] locks: encapsulate the fl_link list handling
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 00/14] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 09/14] locks: convert fl_link to a hlist_node
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 11/14] locks: add a new "lm_owner_key" lock operation
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 14/14] locks: move file_lock_list to a set of percpu hlist_heads and convert file_lock_lock to an lglock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 07/14] locks: convert to i_lock to protect i_flock list
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 08/14] locks: ensure that deadlock detection is atomic with respect to blocked_list modification
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 12/14] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v2 13/14] seq_file: add seq_list_*_percpu helpers
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/9] libceph: fix safe completion
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: PGLog::rewind_divergent_log use case
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- [no subject]
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: [PATCH 0/9] fixes for kclient
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/9] libceph: call r_unsafe_callback when unsafe reply is received
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 12/26] rbd: Refactor bio cloning, don't clone biovecs
- From: Kent Overstreet <koverstreet@xxxxxxxxxx>
- PGLog::rewind_divergent_log use case
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: krbd + format=2 ?
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: [PATCH 4/5] rbd: use rwsem to protect header updates
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 1/5] rbd: set removing flag while holding list lock
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: [PATCH 2/5] rbd: protect against concurrent unmaps
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: [PATCH 3/5] rbd: don't hold ctl_mutex to get/put device
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: [PATCH 5/5] rbd: take a little credit
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: krbd + format=2 ?
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: How many Pipe per Ceph OSD daemon will keep?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: How many Pipe per Ceph OSD daemon will keep?
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- v0.61.3 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: two osd stack on peereng after start osd to recovery
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: How many Pipe per Ceph OSD daemon will keep?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- How many Pipe per Ceph OSD daemon will keep?
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: two osd stack on peereng after start osd to recovery
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- Re: [PATCH 0/2 v2] librados: Add RADOS locks to the C/C++ API
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: pg balancing
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: leveldb compaction overhead
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: leveldb compaction overhead
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: leveldb compaction overhead
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 08/11] locks: convert fl_link to a hlist_node
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 08/11] locks: convert fl_link to a hlist_node
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 06/11] locks: convert to i_lock to protect i_flock list
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 08/11] locks: convert fl_link to a hlist_node
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 06/11] locks: convert to i_lock to protect i_flock list
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: PGLog::merge_log clarification
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- PGLog::merge_log clarification
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v1 04/11] locks: make "added" in __posix_lock_file a bool
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 05/11] locks: encapsulate the fl_link list handling
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: flatten rbd export / export-diff ?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: flatten rbd export / export-diff ?
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: RGW and Keystone
- From: Chmouel Boudjnah <chmouel@xxxxxxxxxxxx>
- Re: Operation per second meanining
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: flatten rbd export / export-diff ?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: RGW and Keystone
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: "Stefan (metze) Metzmacher" <metze@xxxxxxxxx>
- flatten rbd export / export-diff ?
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: libcephfs: Open-By-Handle API question
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: [PATCH v1 00/11] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: rationale for a PGLog::merge_old_entry case
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v1 00/11] locks: scalability improvements for file locking
- From: Jim Rees <rees@xxxxxxxxx>
- Re: [PATCH v1 03/11] locks: comment cleanups and clarifications
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH v1 00/11] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Operation per second meanining
- From: Roman Alekseev <rs.alekseev@xxxxxxxxx>
- libcephfs: Open-By-Handle API question
- From: Ilya Storozhilov <Ilya_Storozhilov@xxxxxxxx>
- RGW and Keystone
- From: Chmouel Boudjnah <chmouel@xxxxxxxxxxxx>
- [PATCH 2/2 v2] Add RADOS API lock tests
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 1/2 v2] Add RADOS lock mechanism to the librados C/C++ API.
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 0/2 v2] librados: Add RADOS locks to the C/C++ API
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- Re: [ceph-users] Ceph killed by OS because of OOM under high load
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- RE: [ceph-users] Ceph killed by OS because of OOM under high load
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [PATCH 9/9] ceph: move inode to proper flushing list when auth MDS changes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 6/9] ceph: fix race between page writeback and truncate
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 5/9] ceph: reset iov_len when discarding cap release messages
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 8/9] ceph: clear migrate seq when MDS restarts
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 7/9] ceph: check migrate seq before changing auth cap
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 4/9] ceph: fix cap release race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 3/9] libceph: fix truncate size calculation
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/9] libceph: call r_unsafe_callback when unsafe reply is received
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/9] libceph: fix safe completion
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/9] fixes for kclient
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/2] mds: allow purging "dirty parent" stray inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/2] mds: initialize some member variables of MDCache
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH v1 03/11] locks: comment cleanups and clarifications
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 02/11] locks: make generic_add_lease and generic_delete_lease static
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 01/11] cifs: use posix_unblock_lock instead of locks_delete_block
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH v1 00/11] locks: scalability improvements for file locking
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: rationale for a PGLog::merge_old_entry case
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH v1 00/11] locks: scalability improvements for file locking
- From: Davidlohr Bueso <davidlohr.bueso@xxxxxx>
- Re: [ceph-users] Ceph killed by OS because of OOM under high load
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Ceph killed by OS because of OOM under high load
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: rbd image association
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: rbd image association
- From: Roman Alekseev <rs.alekseev@xxxxxxxxx>
- Re: rbd image association
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/2] librados: Add RADOS locks to the C/C++ API
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- krbd + format=2 ?
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: rbd image association
- From: Roman Alekseev <rs.alekseev@xxxxxxxxx>
- Re: rbd image association
- From: Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx>
- Re: rbd image association
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: rbd image association
- From: Roman Alekseev <rs.alekseev@xxxxxxxxx>
- rbd image association
- From: Roman Alekseev <rs.alekseev@xxxxxxxxx>
- rationale for a PGLog::merge_old_entry case
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH v1 11/11] locks: give the blocked_hash its own spinlock
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 06/11] locks: convert to i_lock to protect i_flock list
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 07/11] locks: only pull entries off of blocked_list when they are really unblocked
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 00/11] locks: scalability improvements for file locking
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 09/11] locks: turn the blocked_list into a hashtable
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 08/11] locks: convert fl_link to a hlist_node
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 10/11] locks: add a new "lm_owner_key" lock operation
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 03/11] locks: comment cleanups and clarifications
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 04/11] locks: make "added" in __posix_lock_file a bool
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 05/11] locks: encapsulate the fl_link list handling
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 01/11] cifs: use posix_unblock_lock instead of locks_delete_block
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH v1 02/11] locks: make generic_add_lease and generic_delete_lease static
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 0/5] rbd: clean up use of ctl_mutex
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 5/5] rbd: take a little credit
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 4/5] rbd: use rwsem to protect header updates
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 3/5] rbd: don't hold ctl_mutex to get/put device
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 2/5] rbd: protect against concurrent unmaps
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 1/5] rbd: set removing flag while holding list lock
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH 0/5] rbd: clean up use of ctl_mutex
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: leveldb compaction overhead
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Finding out via librados if a cluster is near full
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: leveldb compaction overhead
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [PATCH 2/2] Add RADOS API lock tests
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 1/2] Add RADOS lock mechanism to the librados C/C++ API.
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 0/2] librados: Add RADOS locks to the C/C++ API
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] rgw: Do not assume rest connection to be established
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: [ceph-users][solved] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <mailinglist@xxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: caller_ops.size error messages upstream/cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: caller_ops.size error messages upstream/cuttlefish
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: caller_ops.size error messages upstream/cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- caller_ops.size error messages upstream/cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- [PATCH] rgw: Do not assume rest connection to be established
- From: christophe courtaut <christophe.courtaut@xxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- mon load..
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: Java bindings for RADOS and RBD
- From: edison su <sudison@xxxxxxxxx>
- Re: radosgw: Files left over after deletion (even after the gc period/process)
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Smart Weblications GmbH - Florian Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Fwd: Java bindings for RADOS and RBD
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: [PATCH] rbd: clean up a few things in the refresh path
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: [PATCH] rbd: protect against duplicate client creation
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: [PATCH] libceph: print more info for short message header
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>
- Re: 5GB object limit in the RADOS Gateway
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: 5GB object limit in the RADOS Gateway
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- 5GB object limit in the RADOS Gateway
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Jörn Engel <joern@xxxxxxxxx>
- Re: radosgw: Files left over after deletion (even after the gc period/process)
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: Mojette Transform implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Mojette Transform implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: ceph gets stopped but never started...
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 0/2] librados: Add RADOS locks to the C/C++ API
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 2/2] Add RADOS API lock tests
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- [PATCH 1/2] Add RADOS lock mechanism to the librados C/C++ API.
- From: Filippos Giannakos <philipgian@xxxxxxxx>
- Re: ceph gets stopped but never started...
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: ceph gets stopped but never started...
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- ceph gets stopped but never started...
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- radosgw: Files left over after deletion (even after the gc period/process)
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Speed up 'rbd rm'
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Jörn Engel <joern@xxxxxxxxx>
- leveldb compaction overhead
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Andi Kleen <andi@xxxxxxxxxxxxxx>
- [PATCH] libceph: print more info for short message header
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: protect against duplicate client creation
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: clean up a few things in the refresh path
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Andi Kleen <andi@xxxxxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: Speed up 'rbd rm'
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Fscache support for Ceph
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Simo Sorce <simo@xxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Andi Kleen <andi@xxxxxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Simo Sorce <simo@xxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Andi Kleen <andi@xxxxxxxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: Fscache support for Ceph
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [patch] ceph: tidy ceph_mdsmap_decode() a little
- From: Alex Elder <elder@xxxxxxxxxxx>
- Mojette Transform implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- High CPU usage when enabling mon leveldb compression
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Finding out via librados if a cluster is near full
- From: Wido den Hollander <wido@xxxxxxxx>
- Finding out via librados if a cluster is near full
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [patch] ceph: tidy ceph_mdsmap_decode() a little
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Re: [patch] ceph: tidy ceph_mdsmap_decode() a little
- From: walter harms <wharms@xxxxxx>
- [patch] ceph: tidy ceph_mdsmap_decode() a little
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Speed up 'rbd rm'
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- v0.63 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Missing cluster information
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: cuttlefish ceph-fuse writes make for frequent inconsistent pgs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: improve error handling in ceph_mdsmap_decode
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] ceph: improve error handling in ceph_mdsmap_decode
- From: Emil Goode <emilgoode@xxxxxxxxx>
- Re: cuttlefish ceph-fuse writes make for frequent inconsistent pgs
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 23/30] mds: journal backtrace update in EMetaBlob::fullbit
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- cuttlefish ceph-fuse writes make for frequent inconsistent pgs
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: Sage Weil <sage@xxxxxxxxxxx>
- AW: Numerical argument out of domain
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: Numerical argument out of domain
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: pg_missing_t::is_missing semantics
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Numerical argument out of domain
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: pg_missing_t::is_missing semantics
- From: Loic Dachary <loic@xxxxxxxxxxx>
- pg_missing_t::is_missing semantics
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 29/30] mds: open inode by ino
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 27/30] mds: remove old backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 18/30] mds: don't issue Fc cap from replica
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 14/30] mds: export CInode:mds_caps_wanted
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 13/30] mds: export CInode::STATE_NEEDSRECOVER
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [PATCH 07/30] mds: fix straydn race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH] mds: use "open-by-ino" function to open remote link
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph backfilling explained ( maybe )
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- Ceph backfilling explained ( maybe )
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: PGLog.{cc,h} review request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [doc] strange navigation error
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [doc] strange navigation error
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- Re: [doc] strange navigation error
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [doc] strange navigation error
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- Re: qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- qemu-1.5.0 savevm error -95 while writing vm with ceph-rbd as storage-backend
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- libcephfs/Client api changes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH 25/30] mds: bring back old style backtrace handling
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH 27/30] mds: remove old backtrace handling
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- [PATCH 2/2] Enable fscache as an optional feature of ceph.
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 1/2] Fscache glue implementation for Ceph
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Fscache support for Ceph
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <waiman.long@xxxxxx>
- Re: [PATCH] rbd: flush dcache after zeroing page data
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] libceph: add lingering request reference when registered
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] rbd: wait for safe callback for write requests
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- [patch] ceph: remove unneeded truncation
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Re: [PATCH 0/30] mds: lookup-by-ino & fixes
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 18/30] mds: don't issue Fc cap from replica
- From: Sage Weil <sage@xxxxxxxxxxx>
- how to pass mount options in ceph-deploy osd create or prepare commands
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: [PATCH 14/30] mds: export CInode:mds_caps_wanted
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 13/30] mds: export CInode::STATE_NEEDSRECOVER
- From: Sage Weil <sage@xxxxxxxxxxx>
- Ceph newbie questions
- From: Jäger, Philipp <Philipp.Jaeger@xxxxxxx>
- using ceph-deploy, how do i specify the cluster address?
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- ceph-deploy: how to change filesystem type from xfs to btrfs using --fs-type option
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: [PATCH 07/30] mds: fix straydn race
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] rbd: flush dcache after zeroing page data
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: flush dcache after zeroing page data
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] libceph: add lingering request reference when registered
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: wait for safe callback for write requests
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: [PATCH 1/3 v3] dcache: Don't take unnecessary lock in d_count update
- From: "remaper" <yp.fangdong@xxxxxxxxx>
- [PATCH 11/30] mds: unfreeze inode when after rename rollback finishes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 09/30] mds: fix typo in Server::do_rename_rollback
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 08/30] mds: fix import cancel race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 05/30] mds: fix uncommitted master wait
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 06/30] mds: fix slave commit tracking
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 13/30] mds: export CInode::STATE_NEEDSRECOVER
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 12/30] mds: send slave request after target MDS is active
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 10/30] mds: remove buggy cache rejoin code
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 17/30] mds: defer releasing cap if necessary
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 16/30] mds: fix Locker::request_inode_file_caps()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 15/30] mds: notify auth MDS when cap_wanted changes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 14/30] mds: export CInode:mds_caps_wanted
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 07/30] mds: fix straydn race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 30/30] mds: open missing cap inodes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 23/30] mds: journal backtrace update in EMetaBlob::fullbit
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 24/30] mds: rename last_renamed_version to backtrace_version
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 20/30] mds: slient MDCache::trim_non_auth()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 27/30] mds: remove old backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 29/30] mds: open inode by ino
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 28/30] mds: move fetch_backtrace() to class MDCache
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 26/30] mds: update backtraces when unlinking inodes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 25/30] mds: bring back old style backtrace handling
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 22/30] mds: reorder EMetaBlob::add_primary_dentry's parameters
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 21/30] mds: warn on unconnected snap realms
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 18/30] mds: don't issue Fc cap from replica
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 19/30] mds: fix check for base inode discovery
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 02/30] mds: fix underwater dentry cleanup
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 03/30] mds: don't stop at export bounds when journaling dir context
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 04/30] mds: adjust subtree auth if import aborts in PREPPED state
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 01/30] mds: journal new subtrees created by rename
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/30] mds: lookup-by-ino & fixes
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- RE: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [PATCH 0/3 v3] dcache: make it more scalable on large system
- From: Waiman Long <Waiman.Long@xxxxxx>
- [PATCH 3/3 v3] dcache: change rename_lock to a sequence read/write lock
- From: Waiman Long <Waiman.Long@xxxxxx>
- [PATCH 2/3 v3] dcache: introduce a new sequence read/write lock type
- From: Waiman Long <Waiman.Long@xxxxxx>
- [PATCH 1/3 v3] dcache: Don't take unnecessary lock in d_count update
- From: Waiman Long <Waiman.Long@xxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: found some issues on ceph v0.61.2
- From: "ymorita000@xxxxxxxxx" <ymorita000@xxxxxxxxx>
- Re: PGLog.{cc,h} review request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Dzianis Kahanovich <mahatma@xxxxxxxxxxxxxx>
- Re: [ceph-users] scrub error: found clone without head
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Anders Saaby <anders@xxxxxxxxx>
- ceph-deploy errors on CentOS
- From: Isaac Otsiabah <zmoo76b@xxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Anders Saaby <anders@xxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Mike Dawson <mdawson@xxxxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: OSD memory leak when scrubbing [0.56.6]
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- OSD memory leak when scrubbing [0.56.6]
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxxxxxx>
- Re: [ceph-users] mon IO usage
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxxxxxx>
- Segmentation faults in ceph-osd
- From: Emil Renner Berthing <ceph@xxxxxxxx>
- Re: found some issues on ceph v0.61.2
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- found some issues on ceph v0.61.2
- From: "ymorita000@xxxxxxxxx" <ymorita000@xxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: General Protection Fault in 3.8.5
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- fscache, caps do not have share
- From: Milosz Tanski <milosz@xxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- PGLog.{cc,h} review request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Using Ceph and CloudStack? Let us know!
- From: Constantinos Venetsanopoulos <cven@xxxxxxxx>
- RE: windows rbd
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: windows rbd
- From: Alex Elder <elder@xxxxxxxxxxx>
- windows rbd
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: fscache support
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: fscache support
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: John Wilkins <john.wilkins@xxxxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: John Wilkins <john.wilkins@xxxxxxxxxxx>
- fscache support
- From: Milosz Tanski <milosz@xxxxxxxxx>
- auth 151e37f2 subscribe ceph-devel ceph.list@xxxxxxxxx
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- ceph v6.1, rbd-fuse issue,rbd_list: error %d Numerical result out of range
- From: "Sean" <sean_cao@xxxxxxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: John Wilkins <john.wilkins@xxxxxxxxxxx>
- Re: [ceph-users] PG down & incomplete
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [PATCH, v2] rbd: fix cleanup in rbd_add()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH, v2] rbd: drop original request earlier for existence check
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH, v2] rbd: fix cleanup in rbd_add()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] rbd: don't destroy ceph_opts in rbd_add()
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- [PATCH, v2] rbd: drop original request earlier for existence check
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH, v2] rbd: fix cleanup in rbd_add()
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: don't destroy ceph_opts in rbd_add()
- From: Alex Elder <elder@xxxxxxxxxxx>
- teuthology yaml change (read this!)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Seg Fault on rgw 0.61.1 with cluster in 0.61
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] ceph df: fix si units for 'global' stats
- From: Mike Kelly <pioto@xxxxxxxxx>
- Re: [PATCH] libceph: must hold mutex for reset_changed_osds()
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] libceph: must hold mutex for reset_changed_osds()
- From: Sage Weil <sage@xxxxxxxxxxx>
- Using Ceph and CloudStack? Let us know!
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: [PATCH] Fix some little/big endian issues
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v2 2/3] ceph: add missing cpu_to_le32() calls when encoding a reconnect capability
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] libceph: must hold mutex for reset_changed_osds()
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: 0.61 Cuttlefish / ceph-deploy missing in repos
- From: Gary Lowell <gary.lowell@xxxxxxxxxxx>
- Re: 0.61 Cuttlefish / ceph-deploy missing in repos
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- Re: [PATCH v2 3/3] ceph: ceph_pagelist_append might sleep while atomic
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [PATCH v2 3/3] ceph: ceph_pagelist_append might sleep while atomic
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH v2 2/3] ceph: add missing cpu_to_le32() calls when encoding a reconnect capability
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [PATCH v2 1/3] ceph: fix up comment for ceph_count_locks() as to which lock to hold
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH v2 2/3] ceph: add missing cpu_to_le32() calls when encoding a reconnect capability
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- [PATCH v2 3/3] ceph: ceph_pagelist_append might sleep while atomic
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- [PATCH v2 1/3] ceph: fix up comment for ceph_count_locks() as to which lock to hold
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- [PATCH v2 0/3] ceph: fix might_sleep while atomic
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: 0.61 Cuttlefish / ceph-deploy missing
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- [PATCH] Fix some little/big endian issues
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- RE: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: 0.61 Cuttlefish / ceph-deploy missing
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- RE: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: Wales Wang <wormwang@xxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- v0.62 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Re: [ceph] update op added to a waiting queue or discarded (2c57719)
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Fwd: Re: [ceph] update op added to a waiting queue or discarded (2c57719)
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [GIT PULL] Ceph fixes for -rc2
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] libceph: ceph_pagelist_append might sleep while atomic
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [PATCH] libceph: ceph_pagelist_append might sleep while atomic
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH v4 07/20] ceph: use ->invalidatepage() length argument
- From: Lukas Czerner <lczerner@xxxxxxxxxx>
- zero-copy bufferlists
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: pg balancing
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] OSD state flipping when cluster-network in high utilization
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: pg balancing
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: pg balancing
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Seg Fault on rgw 0.61.1 with cluster in 0.61
- From: Faidon Liambotis <paravoid@xxxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Damien Churchill <damoxc@xxxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: QCOW2 to RBD format 2 in one step
- From: Leen Besselink <leen@xxxxxxxxxxxxxxxxx>
- QCOW2 to RBD format 2 in one step
- From: Wido den Hollander <wido@xxxxxxxx>
- v0.61.2 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] rbd: drop original request earlier for existence check
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: drop original request earlier for existence check
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: fix cleanup in rbd_add()
- From: Alex Elder <elder@xxxxxxxxxxx>
- pg balancing
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/5, v2] rbd: get parent info on refresh
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] rbd: fix parent request size assumption
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [ceph-users] shared images
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] shared images
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [ceph-users] shared images
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- [PATCH 1/5, v2] rbd: get parent info on refresh
- From: Alex Elder <elder@xxxxxxxxxxx>
- [PATCH] rbd: fix parent request size assumption
- From: Alex Elder <elder@xxxxxxxxxxx>
- Re: [ceph-users] shared images
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- shared images
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Greg <itooo@xxxxxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Greg <itooo@xxxxxxxxx>
- Re: [ceph-users] e release
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: 0.56.6: MDS asserts on restart
- From: Christopher Kunz <chrislist@xxxxxxxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: question about ceph and FC SAN LUN data IO path
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [ceph-users] RBD vs RADOS benchmark performance
- From: Greg <itooo@xxxxxxxxx>
- question about ceph and FC SAN LUN data IO path
- From: Dennis Chen <xschen@xxxxxxxxxxxxx>
- RBD image format in rbd_stat()
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [ceph-users] monitor upgrade from 0.56.6 to 0.61.1 on squeeze failed!
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Repeatable OSD crash on 0.61.1
- From: Josh West <jsw@xxxxxxx>
- Re: [ceph-users] e release
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: [PATCH] rbd: fix leak of format 2 snapshot context
- From: Josh Durgin <josh.durgin@xxxxxxxxxxxxx>