CEPH Filesystem Development
- The Async messenger benchmark with latest master
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Possible bug when getting batched xattrs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: questions about erasure coded pool and rados
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: teuthology config file
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: vstart.sh crashes MON with --paxos-propose-interval=0.01 and one MDS
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: Possible bug when getting batched xattrs
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: questions about erasure coded pool and rados
- From: Owen Synge <osynge@xxxxxxxx>
- questions about erasure coded pool and rados
- From: yue longguang <yuelongguang@xxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: [PATCH 2/5] block: add function to issue compare and write
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Possible bug when getting batched xattrs
- From: Joaquim Rocha <joaquim.rocha@xxxxxxx>
- Re: [PATCH 0/5] block/scsi/lio support for COMPARE_AND_WRITE
- From: Hannes Reinecke <hare@xxxxxxx>
- 10/14/2014 Weekly Ceph Performance Meeting
- From: "Zhang, Jian" <jian.zhang@xxxxxxxxx>
- Re: vstart.sh crashes MON with --paxos-propose-interval=0.01 and one MDS
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- Re: kerberos / AD requirements, blueprint
- vstart.sh crashes MON with --paxos-propose-interval=0.01 and one MDS
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: [PATCH 0/5] block/scsi/lio support for COMPARE_AND_WRITE
- From: "Elliott, Robert (Server Storage)" <Elliott@xxxxxx>
- Re: [PATCH 0/5] block/scsi/lio support for COMPARE_AND_WRITE
- From: Douglas Gilbert <dgilbert@xxxxxxxxxxxx>
- delete your branches after merge
- From: Sage Weil <sage@xxxxxxxxxxxx>
- kerberos / AD requirements, blueprint
- From: Sage Weil <sage@xxxxxxxxxxxx>
- capping pgs per osd
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph performance call: buffer
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: ceph performance call: buffer
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph performance call: buffer
- From: Loic Dachary <loic@xxxxxxxxxxx>
- ceph performance call: buffer
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [PATCH 0/5] block/scsi/lio support for COMPARE_AND_WRITE
- From: Douglas Gilbert <dgilbert@xxxxxxxxxxxx>
- Re: [PATCH 1/1] libceph: use memalloc flags for net IO
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- RE: 10/14/2014 Weekly Ceph Performance Meeting
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [PATCH 1/1] libceph: use memalloc flags for net IO
- From: michaelc@xxxxxxxxxxx
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- [PATCH 1/5] block: set the nr of sectors a dev can compare and write atomically
- From: michaelc@xxxxxxxxxxx
- [PATCH 4/5] lio: use REQ_COMPARE_AND_WRITE if supported
- From: michaelc@xxxxxxxxxxx
- [PATCH 2/5] block: add function to issue compare and write
- From: michaelc@xxxxxxxxxxx
- [PATCH 3/5] scsi: add support for COMPARE_AND_WRITE
- From: michaelc@xxxxxxxxxxx
- [PATCH 0/5] block/scsi/lio support for COMPARE_AND_WRITE
- From: michaelc@xxxxxxxxxxx
- [PATCH 5/5] lio iblock: add support for REQ_CMP_AND_WRITE
- From: michaelc@xxxxxxxxxxx
- RE: 10/14/2014 Weekly Ceph Performance Meeting
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Dmitry Borodaenko <dborodaenko@xxxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Alphe Salas <asalas@xxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Dmitry Borodaenko <dborodaenko@xxxxxxxxxxxx>
- 10/15/2014 Weekly Ceph Performance Meeting Recording
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: 10/14/2014 Weekly Ceph Performance Meeting
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Nicheal <zay11022@xxxxxxxxx>
- 10/14/2014 Weekly Ceph Performance Meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Nicheal <zay11022@xxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Alphe Salas <asalas@xxxxxxxxx>
- [GIT PULL] Ceph updates for 3.18-rc1
- From: Sage Weil <sage@xxxxxxxxxxx>
- v0.80.7 Firefly released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH 2/3] rbd: rbd workqueues need a resque worker
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH 1/3] libceph: ceph-msgr workqueue needs a resque worker
- From: Sage Weil <sage@xxxxxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- RE: [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [Ceph-Devel] NO pg created for erasure-coded pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Thomas Lemarchand <thomas.lemarchand@xxxxxxxxxxxxxxxxxx>
- [Ceph-Devel] NO pg created for erasure-coded pool
- From: <ghislain.chevalier@xxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: NEON / SIMD
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Multiple issues with glibc heap management
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: [ceph-users] the state of cephfs in giant
- From: Eric Eastman <eric0e@xxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Wido den Hollander <wido@xxxxxxxx>
- the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: WriteBack Throttle kill the performance of the disk
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] Micro Ceph summit during the OpenStack summit
- From: "Jonathan D. Proulx" <jon@xxxxxxxxxxxxx>
- WriteBack Throttle kill the performance of the disk
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: [ceph-users] Micro Ceph summit during the OpenStack summit
- From: Sebastien Han <sebastien.han@xxxxxxxxxxxx>
- Re: NEON / SIMD
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: qemu drive-mirror to rbd storage : no sparse rbd image
- From: Paolo Bonzini <pbonzini@xxxxxxxxxx>
- Re: qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: qemu drive-mirror to rbd storage : no sparse rbd image
- From: Paolo Bonzini <pbonzini@xxxxxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- NEON / SIMD
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RHEL7 source packages
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RHEL7 source packages
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Fam Zheng <famz@xxxxxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image
- From: Fam Zheng <famz@xxxxxxxxxx>
- hammer blueprints
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: "Duan, Jiangang" <jiangang.duan@xxxxxxxxx>
- CephFS priorities (survey!)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Firefly v0.80.6 issues 9696 and 9732
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Sage Weil <sweil@xxxxxxxxxx>
- ISA plugin tests
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH 3/3] rbd: use a single workqueue for all devices
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 2/3] rbd: rbd workqueues need a resque worker
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 1/3] libceph: ceph-msgr workqueue needs a resque worker
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 0/3] libceph, rbd: don't lockup under memory pressure
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Request for comments
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: why index (collectionIndex) need a lock?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Request for comments
- From: Barclay Jameson <almightybeeij@xxxxxxxxx>
- Re: arm7 gitbuilder ?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: ARM NEON optimisations for gf-complete/jerasure/ceph-erasure
- From: Janne Grunau <j@xxxxxxxxxx>
- arm7 gitbuilder ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: v0.86 released (Giant release candidate)
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: ceph-devel irc channel archive
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: ceph-devel irc channel archive
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-devel irc channel archive
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [PATCH 1/1 linux-next] ceph: fix bool assignments
- From: Fabian Frederick <fabf@xxxxxxxxx>
- Re: [PATCH 1/1 linux-next] ceph: fix bool assignments
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH 1/1 linux-next] ceph: fix bool assignments
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- ceph-devel irc channel archive
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [doc] Adding/Removing OSD is not thorough
- From: deanraccoon <deanraccoon@xxxxxxxxx>
- Re: rados.py: add tmap_to_omap method
- From: Loic Dachary <loic@xxxxxxxxxxx>
- rados.py: add tmap_to_omap method
- From: Alexandre Oliva <oliva@xxxxxxx>
- Re: [PATCH 1/1 linux-next] ceph: fix bool assignments
- From: Dan Mick <dan.mick@xxxxxxxxxxx>
- [PATCH 1/1 linux-next] ceph: fix bool assignments
- From: Fabian Frederick <fabf@xxxxxxxxx>
- [ANN] ceph-deploy 1.5.18 released
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [Ceph-qa] New Defects reported by Coverity Scan for ceph
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- New Defects reported by Coverity Scan for ceph
- From: scan-admin@xxxxxxxxxxxx
- some questions about transactions during write op?
- From: Tim Zhang <cofol1986@xxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Paul Von-Stamwitz <PVonStamwitz@xxxxxxxxxxxxxx>
- [PATCH 1/1 linux-next] ceph: return error code directly.
- From: Fabian Frederick <fabf@xxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: "Duan, Jiangang" <jiangang.duan@xxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: "Duan, Jiangang" <jiangang.duan@xxxxxxxxx>
- RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- 10/7/2014 Weekly Ceph Performance Meeting Recording
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: make check failures
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: make check failures
- From: David Zafman <david.zafman@xxxxxxxxxxx>
- Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: [PATCH] libceph: sync osd op definitions in rados.h
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] libceph: sync osd op definitions in rados.h
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] libceph: separate multiple ops with commas in debugfs output
- From: Sage Weil <sweil@xxxxxxxxxx>
- qemu drive-mirror to rbd storage : no sparse rbd image
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- [PATCH] libceph: separate multiple ops with commas in debugfs output
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH] libceph: sync osd op definitions in rados.h
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- v0.86 Contributor Credits
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- make check failures
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- 10/7/2014 Weekly Ceph Performance Meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- v0.86 released (Giant release candidate)
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Ceph Developer Summit: Hammer
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- in NYC Wednesday for ceph day
- From: Sage Weil <sweil@xxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- 2 questions about bucket index log
- From: Patrycja Szabłowska <szablowska.patrycja@xxxxxxxxx>
- Re: Sorting pull requests per label
- From: Sage Weil <sweil@xxxxxxxxxx>
- Sorting pull requests per label
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: FW: Weekly Ceph Performance Meeting Invitation
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: jerasure: galois_uninit_default_field
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Sage Weil <sweil@xxxxxxxxxx>
- Static code analysis fixes for gf-complete
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Varada Kari <Varada.Kari@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Varada Kari <Varada.Kari@xxxxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: FW: Weekly Ceph Performance Meeting Invitation
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Regarding key/value interface
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Regarding key/value interface
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: issue #8752 (inconsistent PGs on RBD caching pool)
- From: Dmitry Smirnov <onlyjob@xxxxxxxxxx>
- FW: Weekly Ceph Performance Meeting Invitation
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Support the Ada Initiative: a challenge to the open storage community
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: issue #8752 (inconsistent PGs on RBD caching pool)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: issue #8752 (inconsistent PGs on RBD caching pool)
- From: Dmitry Smirnov <onlyjob@xxxxxxxxxx>
- Re: [PATCH v3 0/4] buffer alignment for erasure code SIMD
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: [PATCH 1/1 linux-next] libceph: remove redundant declaration
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Yann Dupont <yd@xxxxxxxxx>
- Re: v0.80.6 Firefly released
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- v0.80.6 Firefly released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: [ceph-users] Ceph Developer Summit: Hammer
- From: Sage Weil <sweil@xxxxxxxxxx>
- New Ceph submodule : ceph-erasure-code-corpus
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Weekly Ceph Performance Meeting Invitation
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- 10/1/2014 Weekly Ceph Performance Meeting Recording
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly Ceph Performance Meeting Invitation
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly Ceph Performance Meeting Invitation
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: FileJournal bug : additional information request
- From: Sheldon Mustard <sheldon.mustard@xxxxxxxxxxx>
- Re: why index (collectionIndex) need a lock?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: FileJournal bug : additional information request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Andreas Bluemle <andreas.bluemle@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: librados AIO problem diagnostic
- From: Sebastien Ponce <sebastien.ponce@xxxxxxx>
- librados AIO problem diagnostic
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Weekly Ceph Performance Meeting Invitation
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: why index (collectionIndex) need a lock?
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: why index (collectionIndex) need a lock?
- From: Noah Watkins <noah.watkins@xxxxxxxxxxx>
- Ceph Developer Summit: Hammer
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- [PATCH 1/1 linux-next] libceph: remove redundant declaration
- From: Fabian Frederick <fabf@xxxxxxxxx>
- Weekly Ceph Performance Meeting Invitation
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: why index (collectionIndex) need a lock?
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: New Defects reported by Coverity Scan for ceph (fwd)
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: New Defects reported by Coverity Scan for ceph (fwd)
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: New Defects reported by Coverity Scan for ceph (fwd)
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: why index (collectionIndex) need a lock?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: FileJournal bug : additional information request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: FileJournal bug : additional information request
- From: Sheldon Mustard <sheldon.mustard@xxxxxxxxxxx>
- FileJournal bug : additional information request
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: design a rados-based distributed kv store support scan op
- From: Sage Weil <sweil@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: design a rados-based distributed kv store support scan op
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- design a rados-based distributed kv store support scan op
- From: Plato Zhang <sangongs@xxxxxxxxx>
- why index (collectionIndex) need a lock?
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: ceph-disk vs keyvaluestore
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- ceph-disk vs keyvaluestore
- From: Sage Weil <sweil@xxxxxxxxxx>
- exposing tiers via librados
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: C++11
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: C++11
- From: Milosz Tanski <milosz@xxxxxxxxx>
- C++11
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH v3 0/4] buffer alignment for erasure code SIMD
- From: Milosz Tanski <milosz@xxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v3 0/4] buffer alignment for erasure code SIMD
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH v3 4/4] ceph_erasure_code_benchmark: use 32-byte aligned input
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v3 1/4] buffer: add an aligned buffer with less alignment than a page
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v3 3/4] erasure code: use 32-byte aligned buffers
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v3 2/4] erasure code: use a function for the chunk mapping index
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v3 0/4] buffer alignment for erasure code SIMD
- From: Janne Grunau <j@xxxxxxxxxx>
- jerasure: galois_uninit_default_field
- From: Loic Dachary <loic@xxxxxxxxxxx>
- rgw create bucket 403 error
- From: Zhao zhiming <zhaozhiming003@xxxxxxxxx>
- RE: Latency Improvement Report for ShardedOpWQ
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: Latency Improvement Report for ShardedOpWQ
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- RE: Latency Improvement Report for ShardedOpWQ
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Latency Improvement Report for ShardedOpWQ
- From: Dong Yuan <yuandong1222@xxxxxxxxx>
- RE: Latency Improvement Report for ShardedOpWQ
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Latency Improvement Report for ShardedOpWQ
- From: Dong Yuan <yuandong1222@xxxxxxxxx>
- Re: Weekly performance meeting
- From: Guang Yang <yguang11@xxxxxxxxxxx>
- Re: RGW URL Parsing
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: storlets
- From: Yehuda Sadeh <yehuda@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Sage Weil <sweil@xxxxxxxxxx>
- storlets
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Weekly performance meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Weekly performance meeting
- From: Loic Dachary <loic@xxxxxxxxxxx>
- RE: Weekly performance meeting
- From: "Zhang, Jian" <jian.zhang@xxxxxxxxx>
- Re: Weekly performance meeting
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Dong Yuan <yuandong1222@xxxxxxxxx>
- RE: Weekly performance meeting
- From: Dror Goldenberg <gdror@xxxxxxxxxxxx>
- RE: Weekly performance meeting
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: Weekly performance meeting
- From: "Zhang, Jian" <jian.zhang@xxxxxxxxx>
- Re: Weekly performance meeting
- From: Dong Yuan <yuandong1222@xxxxxxxxx>
- Re: Weekly performance meeting
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Guang Yang <yguang11@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Weekly performance meeting
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- RE: Weekly performance meeting
- From: Paul Von-Stamwitz <PVonStamwitz@xxxxxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Vu Pham <vuhuong@xxxxxxxxxxxx>
- RE: Weekly performance meeting
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Weekly performance meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Weekly performance meeting
- From: "Matt W. Benjamin" <matt@xxxxxxxxxxxx>
- Weekly performance meeting
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: v0.67.11 dumpling released
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: v0.67.11 dumpling released
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: [ceph-users] v0.67.11 dumpling released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [Ceph-maintainers] v0.67.11 dumpling released
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] v0.67.11 dumpling released
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: v0.67.11 dumpling released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: EC-ISA buffer alignment
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: v0.67.11 dumpling released
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: New Defects reported by Coverity Scan for ceph (fwd)
- From: John Spray <john.spray@xxxxxxxxxx>
- EC-ISA buffer alignment
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- v0.67.11 dumpling released
- From: Sage Weil <sage@xxxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Fwd: question about object replication theory
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Fwd: question about client's cluster aware
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Fwd: question about object replication theory
- From: yue longguang <yuelongguang@xxxxxxxxx>
- Fwd: question about client's cluster aware
- From: yue longguang <yuelongguang@xxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: BlaumRoth with w=7 : what are the consequences ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: [ceph-users] Status of snapshots in CephFS
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: [ceph-users] Status of snapshots in CephFS
- From: Florian Haas <florian.haas@xxxxxxxxxxx>
- BlaumRoth with w=7 : what are the consequences ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Impact of page cache on OSD read performance for SSD
- From: Milosz Tanski <milosz@xxxxxxxxx>
- Impact of page cache on OSD read performance for SSD
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [PATCH 2/2] libceph: resend lingering requests with a new tid
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- question about client's cluster aware
- From: yue longguang <yuelongguang@xxxxxxxxx>
- question about object replication theory
- From: yue longguang <yuelongguang@xxxxxxxxx>
- rgw doc CONFIGURING PRINT CONTINUE
- From: Zhao zhiming <zhaozhiming003@xxxxxxxxx>
- Ceph Day Speaking Slots
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- Re: [PATCH 2/2] libceph: resend lingering requests with a new tid
- From: Alex Elder <elder@xxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: issue 8747 / 9011
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: issue 8747 / 9011
- From: Dmitry Smirnov <onlyjob@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: issue 8747 / 9011
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: issue 8747 / 9011
- From: Dmitry Smirnov <onlyjob@xxxxxxxxxxxxxx>
- issue 8747 / 9011
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: why ZFS on ceph is unstable?
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: why ZFS on ceph is unstable?
- From: Eric Eastman <eric0e@xxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] Status of snapshots in CephFS
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: why ZFS on ceph is unstable?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: David Moreau Simard <dmsimard@xxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: [PATCH v2 2/3] ec: use 32-byte aligned buffers
- From: Loic Dachary <loic@xxxxxxxxxxx>
- why ZFS on ceph is unstable?
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: v2 aligned buffer changes for erasure codes
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: [PATCH] ceph: move ceph_find_inode() outside the s_mutex
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 3/3] ceph: include the initial ACL in create/mkdir/mknod MDS requests
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 2/3] ceph: use pagelist to present MDS request data
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 1/3] libceph: reference counting pagelist
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 1/3] libceph: reference counting pagelist
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: radosgw-admin list users?
- From: Zhao zhiming <zhaozhiming003@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: radosgw-admin list users?
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: radosgw-admin list users?
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- radosgw-admin list users?
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: snap_trimming + backfilling is inefficient with many purged_snaps
- From: Florian Haas <florian@xxxxxxxxxxx>
- RE: v2 aligned buffer changes for erasure codes
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: v2 aligned buffer changes for erasure codes
- From: Janne Grunau <j@xxxxxxxxxx>
- RE: v2 aligned buffer changes for erasure codes
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- Re: v2 aligned buffer changes for erasure codes
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- snap_trimming + backfilling is inefficient with many purged_snaps
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: v2 aligned buffer changes for erasure codes
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: v2 aligned buffer changes for erasure codes
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- RE: v2 aligned buffer changes for erasure codes
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- [PATCH v2 2/3] ec: use 32-byte aligned buffers
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v2 3/3] ceph_erasure_code_benchmark: align the encoding input
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH v2 1/3] buffer: add an aligned buffer with less alignment than a page
- From: Janne Grunau <j@xxxxxxxxxx>
- v2 aligned buffer changes for erasure codes
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: ARM NEON optimisations for gf-complete/jerasure/ceph-erasure
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- How to use radosgw-admin to delete some or all users?
- From: Zhao zhiming <zhaozhiming003@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- RE: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: puzzled with the design pattern of ceph journal, really ruining performance
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [PATCH] ceph: remove redundant code for max file size verification
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- RE: puzzled with the design pattern of ceph journal, really ruining performance
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: puzzled with the design pattern of ceph journal, really ruining performance
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: severe librbd performance degradation in Giant
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- severe librbd performance degradation in Giant
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Re: RadosGW objects to Rados object mapping
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Next Week: Ceph Day San Jose
- From: Ross Turk <ross@xxxxxxxxxx>
- Re: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: RadosGW objects to Rados object mapping
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: puzzled with the design pattern of ceph journal, really ruining performance
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: [ceph-users] Crushmap ruleset for rack aware PG placement
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- RadosGW objects to Rados object mapping
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: puzzled with the design pattern of ceph journal, really ruining performance
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- [PATCH] ceph: remove redundant code for max file size verification
- From: Chao Yu <chao2.yu@xxxxxxxxxxx>
- RE: puzzled with the design pattern of ceph journal, really ruining performance
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: puzzled with the design pattern of ceph journal, really ruining performance
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- [PATCH] ceph: remove redundant io_iter_advance()
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- puzzled with the design pattern of ceph journal, really ruining performance
- From: 姚宁 <zay11022@xxxxxxxxx>
- Re: is function get_net_marked_down right?
- From: Cheng Wei-Chung <freeze.vicente.cheng@xxxxxxxxx>
- Re: is function get_net_marked_down right?
- From: Sage Weil <sweil@xxxxxxxxxx>
- is function get_net_marked_down right?
- From: Cheng Wei-Chung <freeze.vicente.cheng@xxxxxxxxx>
- RE: OSD is crashing during delete operation
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- [PATCH] ceph: move ceph_find_inode() outside the s_mutex
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- New Defects reported by Coverity Scan for ceph (fwd)
- From: Sage Weil <sweil@xxxxxxxxxx>
- [PATCH 3/3] ceph: include the initial ACL in create/mkdir/mknod MDS requests
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 2/3] ceph: use pagelist to present MDS request data
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/3] libceph: reference counting pagelist
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Crushmap ruleset for rack aware PG placement
- From: Daniel Swarbrick <daniel.swarbrick@xxxxxxxxxxxxxxxx>
- RE: Cache tiering slow request issue: currently waiting for rw locks
- From: "Wang, Zhiqiang" <zhiqiang.wang@xxxxxxxxx>
- RE: [PATCH 2/3] ec: make use of added aligned buffers
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- Re: [PATCH 2/3] ec: make use of added aligned buffers
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH] ceph: request xattrs if xattr_version is zero
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- RE: [PATCH 2/3] ec: make use of added aligned buffers
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- Re: [PATCH 2/3] ec: make use of added aligned buffers
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- can we shrink the lock of ThreadPool::worker
- From: star fan <jfanix@xxxxxxxxx>
- RE: [PATCH 2/3] ec: make use of added aligned buffers
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- RE: [PATCH 2/3] ec: make use of added aligned buffers
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: [PATCH 2/3] ec: make use of added aligned buffers
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- RE: OSD is crashing during delete operation
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Crushmap ruleset for rack aware PG placement
- From: Amit Vijairania <amit.vijairania@xxxxxxxxx>
- Re: [PATCH 2/3] ec: make use of added aligned buffers
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH 1/3] buffer: add an aligned buffer with less alignment than a page
- From: Loic Dachary <loic@xxxxxxxxxxx>
- [PATCH 1/3] buffer: add an aligned buffer with less alignment than a page
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH 2/3] ec: make use of added aligned buffers
- From: Janne Grunau <j@xxxxxxxxxx>
- [PATCH 3/3] ceph_erasure_code_benchmark: align the encoding input
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: [RFC]New Message Implementation Based on Event
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Crushmap ruleset for rack aware PG placement
- From: Sage Weil <sweil@xxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- [PATCH] ceph: include the initial ACL in create/mkdir/mknod MDS requests
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Tools and archive to check for non regression of erasure coded content
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Crushmap ruleset for rack aware PG placement
- From: Amit Vijairania <amit.vijairania@xxxxxxxxx>
- RE: Tools and archive to check for non regression of erasure coded content
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- Ceph RBD kernel module support for Cache Tiering
- From: Amit Vijairania <amit.vijairania@xxxxxxxxx>
- Re: [ceph-users] Cache tier unable to auto flush data to storage tier
- From: Jean-Charles LOPEZ <jc.lopez@xxxxxxxxxxx>
- RE: [ceph-users] OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [ceph-users] OpTracker optimization
- From: Sage Weil <sweil@xxxxxxxxxx>
- Tools and archive to check for non regression of erasure coded content
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [ceph-users] OpTracker optimization
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [PATCH 2/2] ceph: make sure request isn't in any waiting list when kicking request.
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 1/2] ceph: protect kick_requests() with mdsc->mutex
- From: Sage Weil <sweil@xxxxxxxxxx>
- reminder: giant vs master
- From: Sage Weil <sweil@xxxxxxxxxx>
- librados Locator API
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- Re: Fwd: S3 API Compatibility support
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: [PATCH] daemons: write pid file even when told not to daemonize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: jerasure buffer misaligned
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: jerasure buffer misaligned
- From: Janne Grunau <j@xxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- RE: FW: CURSH optimization for unbalanced pg distribution
- From: "He, Yujie" <yujie.he@xxxxxxxxx>
- RE: Regarding key/value interface
- From: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
- RE: FW: CURSH optimization for unbalanced pg distribution
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Regarding key/value interface
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: Regarding key/value interface
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Regarding key/value interface
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- [RFC]New Message Implementation Based on Event
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: RBD readahead strategies
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Regarding key/value interface
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- RE: Regarding key/value interface
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Regarding key/value interface
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Regarding key/value interface
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: osd cpu usage is bigger than 100%
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- [GIT PULL] Ceph fixes for 3.17-rc5
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- jerasure buffer misaligned
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: set_alloc_hint old osds
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- set_alloc_hint old osds
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: OpTracker optimization
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH] rbd: do not return -ERANGE on auth failure
- From: Alex Elder <elder@xxxxxxxx>
- [PATCH] rbd: do not return -ERANGE on auth failure
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- RGW threads hung - more logs
- From: Guang Yang <yguang11@xxxxxxxxxxx>
- Re: [PATCH] libceph: fix a memory leak in handle_watch_notify
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH] libceph: fix a memory leak in handle_watch_notify
- From: Alex Elder <elder@xxxxxxxx>
- Re: [PATCH] libceph: fix a memory leak in handle_watch_notify
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- osd cpu usage is bigger than 100%
- From: yue longguang <yuelongguang@xxxxxxxxx>
- [PATCH 2/2] ceph: make sure request isn't in any waiting list when kicking request.
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/2] ceph: protect kick_requests() with mdsc->mutex
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- RE: OpTracker optimization
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [PATCH] libceph: fix a memory leak in handle_watch_notify
- From: Alex Elder <elder@xxxxxxxx>
- [PATCH] libceph: fix a memory leak in handle_watch_notify
- From: roy.qing.li@xxxxxxxxx
- RBD readahead strategies
- From: Adam Crume <adamcrume@xxxxxxxxx>
- Re: OpTracker optimization
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: OpTracker optimization
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: OpTracker optimization
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- RE: OpTracker optimization
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: [PATCH net-next 0/5] net: Convert pr_warning to pr_warn
- From: David Miller <davem@xxxxxxxxxxxxx>
- Re: OpTracker optimization
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: [ceph-users] [ANN] ceph-deploy 1.5.14 released
- From: Scottix <scottix@xxxxxxxxx>
- Re: [PATCH net-next 2/5] ceph: Convert pr_warning to pr_warn
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH net-next 2/5] ceph: Convert pr_warning to pr_warn
- From: Joe Perches <joe@xxxxxxxxxxx>
- [ANN] ceph-deploy 1.5.14 released
- From: Alfredo Deza <alfredo.deza@xxxxxxxxxxx>
- Re: [PATCH 3/3] libceph: do not hard code max auth ticket len
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 2/3] libceph: add process_one_ticket() helper
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 1/3] libceph: gracefully handle large reply messages from the mon
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Question to RWLock & reverse DNS ip=>hostname
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH net-next 2/5] ceph: Convert pr_warning to pr_warn
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [PATCH net-next 2/5] ceph: Convert pr_warning to pr_warn
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: [ceph-users] question about RGW
- From: Sage Weil <sweil@xxxxxxxxxx>
- [PATCH] ceph: trim unused inodes before reconnecting to recovering MDS
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] libceph: fix a use after free issue in osdmap_set_max_osd
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 3/3] libceph: do not hard code max auth ticket len
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 2/3] libceph: add process_one_ticket() helper
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 1/3] libceph: gracefully handle large reply messages from the mon
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- [PATCH 0/3] libceph: #8979 fix (wip-auth-8979)
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- RE: Question to RWLock & reverse DNS ip=>hostname
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- [PATCH net-next 0/5] net: Convert pr_warning to pr_warn
- From: Joe Perches <joe@xxxxxxxxxxx>
- [PATCH net-next 2/5] ceph: Convert pr_warning to pr_warn
- From: Joe Perches <joe@xxxxxxxxxxx>
- RE: FW: CURSH optimization for unbalanced pg distribution
- From: "He, Yujie" <yujie.he@xxxxxxxxx>
- Re: FW: CURSH optimization for unbalanced pg distribution
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- RE: FW: CURSH optimization for unbalanced pg distribution
- From: "Zhang, Jian" <jian.zhang@xxxxxxxxx>
- RE: Cache tiering slow request issue: currently waiting for rw locks
- From: "Wang, Zhiqiang" <zhiqiang.wang@xxxxxxxxx>
- RE: FW: CURSH optimization for unbalanced pg distribution
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] daemons: write pid file even when told not to daemonize
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] daemons: write pid file even when told not to daemonize
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: [PATCH] daemons: write pid file even when told not to daemonize
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Question to RWLock & reverse DNS ip=>hostname
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Question to RWLock & reverse DNS ip=>hostname
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Cache tiering slow request issue: currently waiting for rw locks
- From: Sage Weil <sweil@xxxxxxxxxx>
- Question to RWLock & reverse DNS ip=>hostname
- From: Andreas Joachim Peters <Andreas.Joachim.Peters@xxxxxxx>
- Re: FW: CURSH optimization for unbalanced pg distribution
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Better output of ceph df
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: Cache tiering slow request issue: currently waiting for rw locks
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- RE: Cache tiering slow request issue: currently waiting for rw locks
- From: "Wang, Zhiqiang" <zhiqiang.wang@xxxxxxxxx>
- Upcoming v0.86 contributor list
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- v0.85 contributors credits
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- all my osds are down, but ceph -s tells they are up and in.
- From: yue longguang <yuelongguang@xxxxxxxxx>
- RE: OSD is crashing while running admin socket
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: OSD is crashing while running admin socket
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- RE: OSD is crashing while running admin socket
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: OSD is crashing while running admin socket
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: ceph data locality
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- v0.85 released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: ceph data locality
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Re: ceph data locality
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- RE: Mon gets flooded with log messages for default log level
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Mon gets flooded with log messages for default log level
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- ceph branch status
- From: ceph branch robot <nobody@xxxxxxxx>
- RE: Mon gets flooded with log messages for default log level
- From: Sage Weil <sweil@xxxxxxxxxx>
- RE: Mon gets flooded with log messages for default log level
- From: Aanchal Agrawal <Aanchal.Agrawal@xxxxxxxxxxx>
- Placement Groups : chosing the right pg_num
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Storing cls and erasure code plugins in a pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Storing cls and erasure code plugins in a pool
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>