CEPH Filesystem Development
- Re: [PATCH] reinstate ceph cluster_snap support
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [PATCH v5] Ceph: Punch hole support for kernel client
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v5] Ceph: Punch hole support for kernel client
- From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH 0/2] Fscache cleanup and fix
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 2/2] ceph: fscache cleanup
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 1/2] ceph: Do not leak fscache workqueue
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 0/2] Fscache cleanup and fix
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] reinstate ceph cluster_snap support
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] rbd: fix buffer size for writes to images with snapshots
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: linux-next: Tree for Aug 27 (ceph)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: linux-next: Tree for Aug 27 (ceph)
- From: Randy Dunlap <rdunlap@xxxxxxxxxxxxx>
- Re: [PATCH] rbd: fix I/O error propagation for reads
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: [PATCH] rbd: fix I/O error propagation for reads
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: trivial bug in aio_write
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Ceph + Xen - RBD io hang
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] cleanup: removed last references to g_conf from auth
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] cleanup: removed last references to g_conf from auth
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: is it possible to set up debug env for ceph?
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 02/15] rbd: convert bus code to use bus_groups
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- Re: [PATCH] rbd: fix I/O error propagation for reads
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- trivial bug in aio_write
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH] cleanup: removed last references to g_conf from auth
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- [PATCH] cleanup: removed last references to g_conf from auth
- From: "Roald J. van Loon" <roaldvanloon@xxxxxxxxx>
- [PATCH] cleanup: removed last references to g_conf from auth
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: [PATCH] rbd: fix I/O error propagation for reads
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] rbd: fix I/O error propagation for reads
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- [PATCH] rbd: fix I/O error propagation for reads
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH 3/5] ceph: use fscache as a local presisent cache
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] bucket count limit
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: CEPH Erasure Encoding + OSD Scalability
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: CEPH Erasure Encoding + OSD Scalability
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] reinstate ceph cluster_snap support
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH] mds: remove waiting lock before merging with neighbours
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH] reinstate ceph cluster_snap support
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 02/15] rbd: convert bus code to use bus_groups
- From: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
- Re: [PATCH] mds: remove waiting lock before merging with neighbours
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 3/3] ceph: rework trim caps code
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 2/2] client: trim deleted inode
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 3/3] ceph: rework trim caps code
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 2/2] client: trim deleted inode
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: jerasure-1.2A valgrind error
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: v0.67.2 Dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.67.2 Dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- jerasure-1.2A valgrind error
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [ANN] ceph-deploy 1.2.2 released!
- From: Alfredo Deza <alfredo.deza@xxxxxxxxxxx>
- Re: [PATCH] mds: update backtrace when old format inode is touched
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH] mds: update backtrace when old format inode is touched
- From: Alexandre Oliva <oliva@xxxxxxx>
- [PATCH] mds: update backtrace when old format inode is touched
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- Re: CEPH Erasure Encoding + OSD Scalability
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: CEPH Erasure Encoding + OSD Scalability
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- Re: Jerasure & Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: radosgw-agent testing
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: radosgw-agent testing
- From: christophe courtaut <christophe.courtaut@xxxxxxxxx>
- Re: [PATCH V6] ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: radosgw-agent testing
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: [ceph-users] bucket count limit
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- Re: [ceph-users] bucket count limit
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: bucket count limit
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- bucket count limit
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- bucket count limit
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- Re: [PATCHv4 0/5] ceph: persistent caching with fscache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- radosgw-agent testing
- From: christophe courtaut <christophe.courtaut@xxxxxxxxx>
- [PATCH] reinstate ceph cluster_snap support
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: app design recommendations
- From: Nulik Nol <nuliknol@xxxxxxxxx>
- Re: [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- [PATCH] enable mds rejoin with active inodes' old parent xattrs
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [PATCH] ceph: allow sync_read/write return partial successed size of read/write.
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH] ceph: allow sync_read/write return partial successed size of read/write.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCHv4 0/5] ceph: persistent caching with fscache
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH V6] ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103
- From: Laurent Barbe <laurent@xxxxxxxxxxx>
- [PATCH 2/5] new fscache interface to check cache consistency
- From: Hongyi Jia <milosz@xxxxxxxxx>
- [PATCH 3/5] ceph: use fscache as a local presisent cache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 5/5] ceph: clean PgPrivate2 on returning from readpages
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 4/5] fscache: netfs function for cleanup post readpages
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 1/5] new cachefiles interface to check cache consistency
- From: Hongyi Jia <milosz@xxxxxxxxx>
- [PATCHv4 0/5] ceph: persistent caching with fscache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: subdir-objects
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: subdir-objects
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: subdir-objects
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: subdir-objects
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- subdir-objects
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: still recovery issues with cuttlefish
- From: Yann ROBIN <yann.robin@xxxxxxxxxxxxx>
- Re: [PATCH V6] ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Fwd: app design recommendations
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Matthew Anderson <manderson8787@xxxxxxxxx>
- Jerasure & Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: app design recommendations
- From: Wido den Hollander <wido@xxxxxxxx>
- [PATCH V6] ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- [PATCH] ceph: allow sync_read/write return partial successed size of read/write.
- From: majianpeng <majianpeng@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- do not upgrade bobtail -> dumpling directly until 0.67.2
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: libvirt: Removing RBD volumes with snapshots, auto purge or not?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- app design recommendations
- From: Nulik Nol <nuliknol@xxxxxxxxx>
- Re: libvirt: Removing RBD volumes with snapshots, auto purge or not?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: libvirt: Removing RBD volumes with snapshots, auto purge or not?
- From: Andrey Korolyov <andrey@xxxxxxx>
- libvirt: Removing RBD volumes with snapshots, auto purge or not?
- From: Wido den Hollander <wido@xxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Erasure Code plugin system with an example : review request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- RE: v0.61.8 Cuttlefish released
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [ceph-users] Flapping osd / continuously reported as failed
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: v0.61.8 Cuttlefish released
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: v0.61.8 Cuttlefish released
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- RE: [ceph-users] large memory leak on scrubbing
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- v0.61.8 Cuttlefish released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Flapping osd / continuously reported as failed
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [ceph-users] large memory leak on scrubbing
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: CEPH Erasure Encoding + OSD Scalability
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: [ceph-users] large memory leak on scrubbing
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph-deploy mon create / gatherkeys problems
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph-deploy mon create / gatherkeys problems
- From: Eric Eastman <eric0e@xxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Review request : Erasure Code plugin loader implementation
- From: Sage Weil <sage@xxxxxxxxxxx>
- Review request : Erasure Code plugin loader implementation
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph-deploy mon create / gatherkeys problems
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.67.1 Dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] large memory leak on scrubbing
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Mds lock
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: debugging librbd async
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- RE: debugging librbd async
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: kclient: Missing directories
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: kclient: Missing directories
- From: Milosz Tanski <milosz@xxxxxxxxx>
- kclient: Missing directories
- From: Milosz Tanski <milosz@xxxxxxxxx>
- ceph-deploy mon pitfall..
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Google recruitment spam
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: Google recruitment spam
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: debugging librbd async
- From: Sage Weil <sage@xxxxxxxxxxx>
- large memory leak on scrubbing
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- [ANN] ceph-deploy 1.2.1 released
- From: Alfredo Deza <alfredo.deza@xxxxxxxxxxx>
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- Google recruitment spam
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Frederik Thuysbaert <frederik.thuysbaert@xxxxxxxxx>
- Re: Show outdated diffs on github
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Show outdated diffs on github
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Show outdated diffs on github
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: debugging librbd async
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: debugging librbd async
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: debugging librbd async
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: debugging librbd async
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v5] Ceph: Punch hole support for kernel client
- From: Sage Weil <sage@xxxxxxxxxxx>
- debugging librbd async
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH] Ceph: Remove useless variable revoked_rdcache
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Need some help with the RBD Java bindings
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: Deferred deletion of ObjectContext
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Need some help with the RBD Java bindings
- From: Wido den Hollander <wido@xxxxxxxx>
- Deferred deletion of ObjectContext
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: rbd: format 2 support in rbd.ko ?
- From: Damien Churchill <damoxc@xxxxxxxxx>
- rbd: format 2 support in rbd.ko ?
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- [PATCH] debian/control libgoogle-perftools-dev (>= 2.0-2)
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- [PATCH] allow also curl openssl binding
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- [PATCH] Ceph: Remove useless variable revoked_rdcache
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: yunchuanwen <yunchuanwen@xxxxxxxxxxxxxxx>
- Re: [patch 1/2] libceph: fix error handling in handle_reply()
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] v0.67 Dumpling released
- From: Mikaël Cluseau <mcluseau@xxxxxx>
- [patch 3/2] libceph: create_singlethread_workqueue() doesn't return ERR_PTRs
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- [patch 2/2] libceph: potential NULL dereference in ceph_osdc_handle_map()
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- [patch 1/2] libceph: fix error handling in handle_reply()
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- Re: [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH v5] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH v5] Ceph: Punch hole support for kernel client
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH v4] Ceph: Punch hole support for kernel client
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: radosgw S3 api
- From: Lorieri <lorieri@xxxxxxxxx>
- Re: cephfs set_layout - tuning
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: cephfs set_layout - tuning
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- Re: cephfs set_layout - EINVAL - solved
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: github pull requests, comments and rebase
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: review request : ReplicatedPG::AccessMode::wake removal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RGW blueprint for plugin architecture
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- Re: How to collect coverage with teuthology ?
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: v0.67 Dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- RGW blueprint for plugin architecture
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- How to collect coverage with teuthology ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- review request : ReplicatedPG::AccessMode::wake removal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Frederik Thuysbaert <frederik.thuysbaert@xxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Frederik Thuysbaert <frederik.thuysbaert@xxxxxxxxx>
- Re: v0.67 Dumpling released
- From: James Page <james.page@xxxxxxxxxx>
- Re: teuthology and code coverage
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: v0.67 Dumpling released
- From: Dietmar Maurer <dietmar@xxxxxxxxxxx>
- Re: [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: teuthology and code coverage
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- v0.67 Dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] Ceph-qa: change the fsx.sh to support hole punching test
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: poll/sendmsg problem with 3.5.0-37-generic #58~precise1-Ubuntu
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [ceph-users] Help needed porting Ceph to RSockets
- From: "Hefty, Sean" <sean.hefty@xxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- [no subject]
- teuthology and code coverage
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Frederik Thuysbaert <frederik.thuysbaert@xxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: "Atchley, Scott" <atchleyes@xxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: poll/sendmsg problem with 3.5.0-37-generic #58~precise1-Ubuntu
- From: Luis Henriques <luis.henriques@xxxxxxxxxxxxx>
- Re: Call for participants : teuthology weekly meeting
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: teuthology : ulimit: error
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- poll/sendmsg problem with 3.5.0-37-generic #58~precise1-Ubuntu
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Help needed porting Ceph to RSockets
- From: Matthew Anderson <manderson8787@xxxxxxxxx>
- Re: teuthology : ulimit: error
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Correct usage of rbd_aio_release
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- blueprint follow-up: paper cuts
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Correct usage of rbd_aio_release
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: ceph admin command: debuggging
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: Pages still marked with private_2
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Blueprint: inline data support (step 2)
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: teuthology : ulimit: error
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: teuthology : ulimit: error
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH 0/2] Cleanup invalidate page
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 0/2] Cleanup invalidate page
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/2] Cleanup invalidate page
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Blueprint: inline data support (step 2)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 0/2] Cleanup invalidate page
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Help needed porting Ceph to RSockets
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- [PATCH 2/2] ceph: cleanup the logic in ceph_invalidatepage
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 1/2] ceph: Remove bogus check in invalidatepage
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 0/2] Cleanup invalidate page
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: [PATCH 3/3] ceph: use fscache as a local presisent cache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: cephfs set_layout
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Could we introduce launchpad/gerrit for ceph
- From: James Page <james.page@xxxxxxxxxx>
- Re: RGW getting revoked tokens from keystone not working
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- RE: Could we introduce launchpad/gerrit for ceph
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: cephfs set_layout
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- Re: cephfs set_layout - EINVAL - solved
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Reduce log verbosity
- From: Jean-Daniel BUSSY <silversurfer972@xxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: cephfs set_layout - EINVAL - solved
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- cephfs set_layout - EINVAL
- From: Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Could we introduce launchpad/gerrit for ceph
- From: Sage Weil <sage@xxxxxxxxxxx>
- Could we introduce launchpad/gerrit for ceph
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: blueprint: osd: ceph on zfs
- From: Sage Weil <sage@xxxxxxxxxxx>
- cds: rsockets follow-up
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 3/3] ceph: use fscache as a local presisent cache
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Re: [PATCH TRIVIVAL] ceph: Move the place for EOLDSNAPC handle in ceph_aio_write to easily understand
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Re: [PATCH TRIVIVAL] ceph: Move the place for EOLDSNAPC handle in ceph_aio_write to easily understand
- From: majianpeng <majianpeng@xxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: ceph admin command: debuggging
- From: Sage Weil <sage@xxxxxxxxxxx>
- teuthology : ulimit: error
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: github pull requests, comments and rebase
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph admin command: debuggging
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: [PATCH TRIVIVAL] ceph: Move the place for EOLDSNAPC handle in ceph_aio_write to easily understand
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103
- From: Laurent Barbe <laurent@xxxxxxxxxxx>
- Re: Transcript: Erasure coded storage backend (step 2)
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Oliver Francke <Oliver.Francke@xxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- ceph admin command: debuggging
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- github pull requests, comments and rebase
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH TRIVIVAL] ceph: Move the place for EOLDSNAPC handle in ceph_aio_write to easily understand
- From: majianpeng <majianpeng@xxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- [PATCH 3/3] ceph: use fscache as a local presisent cache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 2/3] new fscache interface to check cache consistency
- From: Hongyi Jia <milosz@xxxxxxxxx>
- [PATCH 1/3] new cachefiles interface to check cache consistency
- From: Hongyi Jia <milosz@xxxxxxxxx>
- [PATCH 0/3] ceph: persistent caching with fscache
- From: Milosz Tanski <milosz@xxxxxxxxx>
- [PATCH 13/22] ceph: Convert to immutable biovecs
- From: Kent Overstreet <kmo@xxxxxxxxxxxxx>
- [PATCH 15/22] rbd: Refactor bio cloning, don't clone biovecs
- From: Kent Overstreet <kmo@xxxxxxxxxxxxx>
- Call for participants : teuthology weekly meeting
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Transcript: Erasure coded storage backend (step 2)
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: /etc/init.d/ceph script will restart stop start the daemon twice
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: Sage Weil <sage@xxxxxxxxxxx>
- /etc/init.d/ceph script will restart stop start the daemon twice
- From: huangjun <hjwsm1989@xxxxxxxxx>
- RE: ocf script for ceph quorum check and fs
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Reduce log verbosity
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: API request
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: API request
- From: David Howells <dhowells@xxxxxxxxxx>
- RE: bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- API request
- From: Milosz Tanski <milosz@xxxxxxxxx>
- RE: ocf script for ceph quorum check and fs
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: Reduce cluster log verbosity
- From: Jean-Daniel BUSSY <silversurfer972@xxxxxxxxx>
- Reduce cluster log verbosity
- From: Jean-Daniel BUSSY <silversurfer972@xxxxxxxxx>
- Reduce log verbosity
- From: Jean-Daniel BUSSY <silversurfer972@xxxxxxxxx>
- Re: ceph-deploy progress and CDS session
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: [ceph-users] compile error on centos 5.9
- From: huangjun <hjwsm1989@xxxxxxxxx>
- RGW getting revoked tokens from keystone not working
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: Rados Protocol
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- Re: [PATCH, REBASED] ceph: fix bugs about handling short-read for sync read mode.
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH] ceph: Update FUSE_USE_VERSION from 26 to 30.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH] ceph: Update FUSE_USE_VERSION from 26 to 30.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Missing Option to give amount of PGs on pool creation with librados
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Wei Liu <wei.liu2@xxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Wei Liu <wei.liu2@xxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Missing Option to give amount of PGs on pool creation with librados
- From: Niklas Goerke <niklas@xxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Pasi Kärkkäinen <pasik@xxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] compile error on centos 5.9
- From: huangjun <hjwsm1989@xxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: Sylvain Munaut <s.munaut@xxxxxxxxxxxxxxxxxxxx>
- [PATCH 3/3] ceph: rework trim caps code
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/3] ceph: fix request max size
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/3] ceph: introduce i_truncate_mutex
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 5/6] mds: change LOCK_SCAN to unstable state
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 6/6] mds: don't issue caps while session is stale
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 4/6] mds: handle "state == LOCK_LOCK_XLOCK" when cancelling xlock
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/6] mds: revoke GSHARED cap when finishing xlock
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 3/6] mds: remove "type != CEPH_LOCK_DN" check in Locker::cancel_locking()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/6] mds: fix cap revoke confirmation
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/6] misc fixes for mds
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: Rados Protocol
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: [PATCH 2/2] ceph: Add pg_name field in struct ceph_ioctl_dataloc.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Session Swap: infiniband / rgw multitenancy
- From: Ross Turk <ross@xxxxxxxxxxx>
- Re: [PATCH 2/2] ceph: Add pg_name field in struct ceph_ioctl_dataloc.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix null pointer dereference
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH] ceph: fix null pointer dereference
- From: Nathaniel Yazdani <n1ght.4nd.d4y@xxxxxxxxx>
- Re: [ceph-users] compile error on centos 5.9
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: blueprint: osd: ceph on zfs
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 2/2] ceph: Add pg_name field in struct ceph_ioctl_dataloc.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH 1/2] libceph: Add a new func ceph_calc_ceph_temp_pg and export it.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH 0/2] print pgname using cephfs.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH] cephfs: Add a function which print pg name using cephfs.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: blueprint: osd: ceph on zfs
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: [PATCH] ceph: fix bugs about handling short-read for sync read mode.
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: blueprint: osd: ceph on zfs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: blueprint: osd: ceph on zfs
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- bug in /etc/init.d/ceph debian
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix bugs about handling short-read for sync read mode.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: Add check returned value on func ceph_calc_ceph_pg.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] ceph-deploy progress and CDS session
- From: Eric Eastman <eric0e@xxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- ceph-deploy progress and CDS session
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- mds sessions at cds
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: rados_clone_range for different pgs
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- rados_clone_range for different pgs
- From: Oleg Krasnianskiy <oleg.krasnianskiy@xxxxxxxxx>
- ocf script for ceph quorum check and fs
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- [PATCH] ceph: Add check returned value on func ceph_calc_ceph_pg.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- Re: Rados Protocol
- From: Niklas Goerke <niklas@xxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: fix bugs about handling short-read for sync read mode.
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Rados Protocol
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Rados Protocol
- From: Niklas Goerke <niklas@xxxxxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.67-rc3 Dumpling release candidate
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] mds: remove waiting lock before merging with neighbours
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: PG Backend Proposal
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- PG Backend Proposal
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: still recovery issues with cuttlefish
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: [PATCH] Add missing buildrequires for Fedora
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: [PATCH] mds: remove waiting lock before merging with neighbours
- From: David Disseldorp <ddiss@xxxxxxx>
- [PATCH V5 2/8] fs/ceph: vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
- From: Sha Zhengju <handai.szj@xxxxxxxxx>
- Re: LFS & Ceph
- From: Chmouel Boudjnah <chmouel@xxxxxxxxxxx>
- still recovery issues with cuttlefish
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- [PATCH] ceph: Update FUSE_USE_VERSION from 26 to 30.
- From: majianpeng <majianpeng@xxxxxxxxx>
- RE: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
- From: James Harper <james.harper@xxxxxxxxxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- RE: Read ahead affect Ceph read performance much
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Read ahead affect Ceph read performance much
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: Blueprint: inline data support (step 2)
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: [ceph-users] Problem with MON after reboot
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- RE: [ceph-users] Problem with MON after reboot
- From: Luke Jing Yuan <jyluke@xxxxxxxx>
- Re: [ceph-users] Problem with MON after reboot
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Problem with MON after reboot
- From: Luke Jing Yuan <jyluke@xxxxxxxx>
- RE: [ceph-users] Problem with MON after reboot
- From: Luke Jing Yuan <jyluke@xxxxxxxx>
- Re: [ceph-users] Problem with MON after reboot
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: 袁冬 <yuandong1222@xxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: 袁冬 <yuandong1222@xxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Blueprint: Add LevelDB support to ceph cluster backend store
- From: Alex Elsayed <eternaleye@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- RE: Read ahead affect Ceph read performance much
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Blueprint: Add LevelDB support to ceph cluster backend store
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Blueprint: inline data support (step 2)
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Blueprint: inline data support (step 2)
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- blueprint : erasure code
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] libceph: fix deadlock in ceph_build_auth()
- From: David Miller <davem@xxxxxxxxxxxxx>
- Re: Fwd: [ceph-users] Small fix for ceph.spec
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- blueprint: cache pool overlay
- From: Sage Weil <sage@xxxxxxxxxxx>
- [RFC] Factors affect CephFS read performance
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: krbd & live resize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Negative degradation?
- From: David McBride <dwm37@xxxxxxxxx>
- [PATCH 8/9] ceph: WQ_NON_REENTRANT is meaningless and going away
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: krbd & live resize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: krbd & live resize
- From: Laurent Barbe <laurent@xxxxxxxxxxx>
- Re: krbd & live resize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: krbd & live resize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: krbd & live resize
- From: Laurent Barbe <laurent@xxxxxxxxxxx>
- Re: mds.0 crashed with 0.61.7
- From: Andreas Friedrich <andreas.friedrich@xxxxxxxxxxxxxx>
- Negative degradation?
- From: Roald van Loon <roaldvanloon@xxxxxxxxx>
- Re: Fwd: [ceph-users] Small fix for ceph.spec
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- [PATCH] Add missing buildrequires for Fedora
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Re: Fwd: [ceph-users] Small fix for ceph.spec
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Re: Fwd: [ceph-users] Small fix for ceph.spec
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- blueprint: rgw multitenancy
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- blueprint: rgw bucket scalability
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- blueprint: librgw
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Fwd: [ceph-users] Small fix for ceph.spec
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- blueprint: rgw quota
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- blueprint: RADOS Object Temperature Monitoring
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- blueprint: rgw multi-region disaster recovery, second phase
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- krbd & live resize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- blueprint: object redirects
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Anyone in NYC next week?
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: mds.0 crashed with 0.61.7
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds.0 crashed with 0.61.7
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: mds.0 crashed with 0.61.7
- From: Sage Weil <sage@xxxxxxxxxxx>
- mds.0 crashed with 0.61.7
- From: Andreas Friedrich <andreas.friedrich@xxxxxxxxxxxxxx>
- [PATCH] mds: remove waiting lock before merging with neighbours
- From: David Disseldorp <ddiss@xxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: Read ahead affect Ceph read performance much
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Read ahead affect Ceph read performance much
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: ObjectContext & PGRegistry API
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Read ahead affect Ceph read performance much
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- blueprint: mds memory efficiency
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH] libceph: fix deadlock in ceph_build_auth()
- From: Alexey Khoroshilov <khoroshilov@xxxxxxxxx>
- Re: blueprint: ceph platform portability
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: LFS & Ceph
- From: Frederic Lepied <frederic.lepied@xxxxxxxxxxxx>
- Re: blueprint: ceph platform portability
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: blueprint: ceph platform portability
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- Re: blueprint: ceph platform portability
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: blueprint: ceph platform portability
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- blueprint: ceph platform portability
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- [PATCH][TRIVIAL] ceph: Modify comments for checkeof.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH][TRIVIAL] ceph: Add comments for ENOENT which returned from osd.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- blueprint: osd: ceph on zfs
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: a few rados blueprints
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: a few rados blueprints
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: a few rados blueprints
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: a few rados blueprints
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- a few rados blueprints
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.61.7 Cuttlefish update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Anyone in NYC next week?
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Flapping osd / continuously reported as failed
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- Re: question about striped_read
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH 1/3] libceph: call r_unsafe_callback when unsafe reply is received
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: LFS & Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: question about striped_read
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Mostowiec Dominik <Dominik.Mostowiec@xxxxxxxxxxxx>
- Re: question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: question about striped_read
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.67-rc2 dumpling release candidate
- From: Sage Weil <sage@xxxxxxxxxxx>
- Anyone in NYC next week?
- From: Sage Weil <sage@xxxxxxxxxxx>
- question about striped_read
- From: majianpeng <majianpeng@xxxxxxxxx>
- LFS & Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Sage Weil <sage@xxxxxxxxxxx>
- sharedptr_registry.hpp unit tests
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] v0.61.6 Cuttlefish update released
- From: Alex Bligh <alex@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Studziński Krzysztof <krzysztof.studzinski@xxxxxxxxxxxx>
- Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Upgrading from 0.61.5 to 0.61.6 ended in disaster
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [PATCH] ceph: fix freeing inode vs removing session caps race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: ceph file system: extended attributes differ between ceph.ko and ceph-fuse
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- [PATCH] ceph: fix freeing inode vs removing session caps race
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- v0.61.6 Cuttlefish update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: Don't use ceph-sync-mode for synchronous-fs.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH] ceph: cleanup types in striped_read()
- From: Sage Weil <sage@xxxxxxxxxxx>
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Studziński Krzysztof <krzysztof.studzinski@xxxxxxxxxxxx>
- Re: ceph file system: extended attributes differ between ceph.ko and ceph-fuse
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [ceph-users] Flapping osd / continuously reported as failed
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: [ceph-users] Flapping osd / continuously reported as failed
- From: Studziński Krzysztof <krzysztof.studzinski@xxxxxxxxxxxx>
- Re: [ceph-users] Flapping osd / continuously reported as failed
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Flapping osd / continuously reported as failed
- From: Studziński Krzysztof <krzysztof.studzinski@xxxxxxxxxxxx>
- Re: [PATCH] ceph: cleanup types in striped_read()
- From: Alex Elder <alex.elder@xxxxxxxxxx>
- [PATCH] ceph: cleanup types in striped_read()
- From: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
- [PATCH] ceph: introduce i_truncate_mutex
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [ceph-users] rgw bucket index
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: [PATCH 0/6] misc fixes for mds
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH] ceph: trim deleted inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH] ceph: trim deleted inode
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [PATCH] ceph: trim deleted inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: [PATCH] ceph: trim deleted inode
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] Optimize Ceph cluster (kernel, osd, rbd)
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- ObjectContext & PGRegistry API
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- [PATCH v3] Ceph: Punch hole support for kernel client
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH] Ceph-qa: change the fsx.sh to support hole punching test
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH v4] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>
- [PATCH 2/2] ceph: wake up writer if vmtruncate work get blocked
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/2] ceph: drop CAP_LINK_SHARED when sending "link" request to MDS
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- rgw bucket index
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- [PATCH 2/2] client: trim deleted inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/2] mds: notify clients about deleted inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH] ceph: trim deleted inode
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- Re: Internal Qemu snapshots with RBD and libvirt
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Internal Qemu snapshots with RBD and libvirt
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Internal Qemu snapshots with RBD and libvirt
- From: Marcus Sorensen <shadowsor@xxxxxxxxx>
- Re: Internal Qemu snapshots with RBD and libvirt
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: Internal Qemu snapshots with RBD and libvirt
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: [PATCH] mon: use first_committed instead of latest_full map if latest_bl.length() == 0
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- optimal values for osd threads
- From: Dominik Mostowiec <dominikmostowiec@xxxxxxxxx>
- Re: [PATCH] mon: use first_committed instead of latest_full map if latest_bl.length() == 0
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- [PATCH] mon: use first_committed instead of latest_full map if latest_bl.length() == 0
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: [ceph-users] v0.61.5 Cuttlefish update released
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: v0.61.5 Cuttlefish update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [ceph-users] ceph -w warning "I don't have pgid 0.2c8"?
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: [ceph-users] ceph -w warning "I don't have pgid 0.2c8"?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- v0.61.5 Cuttlefish update released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Internal Qemu snapshots with RBD and libvirt
- From: Wido den Hollander <wido@xxxxxxxx>
- ceph file system: extended attributes differ between ceph.ko and ceph-fuse
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- 3.10.0 failed paging request from kthread_data
- From: "Jim Schutt" <jaschut@xxxxxxxxxx>
- Re: [ceph-users] ceph -w warning "I don't have pgid 0.2c8"?
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [PATCH v3] Ceph-fuse: Fallocate and punch hole support
- From: Sage Weil <sage@xxxxxxxxxxx>
- [PATCH 6/6] mds: change LOCK_SCAN to unstable state
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 4/6] mds: handle "state == LOCK_LOCK_XLOCK" when cancelling xlock
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 5/6] mds: wake xlock waiter when xlock is done
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 3/6] mds: remove "type != CEPH_LOCK_DN" check in Locker::cancel_locking()
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 2/6] mds: revoke GSHARED cap when finishing xlock
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 1/6] mds: fix cap revoke confirmation
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH 0/6] misc fixes for mds
- From: "Yan, Zheng" <zheng.z.yan@xxxxxxxxx>
- [PATCH v3] Ceph-fuse: Fallocate and punch hole support
- From: Li Wang <liwang@xxxxxxxxxxxxxxx>