CEPH Filesystem Development
- mgr dashboard SSL cert and default port
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [PATCH] ceph: fix writeback thread waits on itself
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: fix writeback thread waits on itself
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- OpenStack Summit Vancouver 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- [PATCH] ceph: fix writeback thread waits on itself
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [sepia] [QA/dashboard] Frontend tests on Jenkins
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: [Nfs-ganesha-devel] [nfs-ganesha RFC PATCH v2 10/13] support: add a rados_grace support library
- From: bfields@xxxxxxxxxxxx (J. Bruce Fields)
- Re: Last call for Jewel 10.2.11
- From: Alex Litvak <alexander.v.litvak@xxxxxxxxx>
- Re: rbd tracing efforts using zipkin
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 14/14] mm: turn on vm_fault_t type checking
- From: Christoph Hellwig <hch@xxxxxx>
- Re: vm_fault_t conversion, for real
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 14/14] mm: turn on vm_fault_t type checking
- From: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
- Re: [sepia] [QA/dashboard] Frontend tests on Jenkins
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: [sepia] [QA/dashboard] Frontend tests on Jenkins
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Single DB/WAL volume shareable among multiple BlueStore instances
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: vm_fault_t conversion, for real
- From: Matthew Wilcox <willy@xxxxxxxxxxxxx>
- Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values
- From: Matthew Wilcox <willy@xxxxxxxxxxxxx>
- Re: [PATCH 14/14] mm: turn on vm_fault_t type checking
- From: Christoph Hellwig <hch@xxxxxx>
- Re: vm_fault_t conversion, for real
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 14/14] mm: turn on vm_fault_t type checking
- From: Matthew Wilcox <willy@xxxxxxxxxxxxx>
- Re: vm_fault_t conversion, for real
- From: Matthew Wilcox <willy@xxxxxxxxxxxxx>
- Re: [PATCH 01/14] orangefs: don't return errno values from ->fault
- From: Matthew Wilcox <willy@xxxxxxxxxxxxx>
- Re: [PATCH 06/14] btrfs: separate errno from VM_FAULT_* values
- From: David Sterba <dsterba@xxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: [PATCH 10/14] vgem: separate errno from VM_FAULT_* values
- From: Daniel Vetter <daniel@xxxxxxxx>
- rbd tracing efforts using zipkin
- From: "Chamarthy, Mahati" <mahati.chamarthy@xxxxxxxxx>
- vm_fault_t conversion, for real
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 03/14] dax: make the dax_iomap_fault prototype consistent
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 04/14] mm: remove the unused device_private_entry_fault export
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 05/14] ceph: untangle ceph_filemap_fault
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 06/14] btrfs: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 07/14] ext4: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 10/14] vgem: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 09/14] ubifs: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 11/14] ttm: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 08/14] ocfs2: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 12/14] lustre: separate errno from VM_FAULT_* values
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 13/14] mm: move arch specific VM_FAULT_* flags to mm.h
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 14/14] mm: turn on vm_fault_t type checking
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 02/14] fs: make the filemap_page_mkwrite prototype consistent
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 01/14] orangefs: don't return errno values from ->fault
- From: Christoph Hellwig <hch@xxxxxx>
- CentOS Dojo at CERN
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Coverity: enable SCA again
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Single DB/WAL volume shareable among multiple BlueStore instances
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: Single DB/WAL volume shareable among multiple BlueStore instances
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph User Survey 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Single DB/WAL volume shareable among multiple BlueStore instances
- From: Igor Fedotov <ifedotov@xxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: [sepia] [QA/dashboard] Frontend tests on Jenkins
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: Last call for Jewel 10.2.11
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: [hammer][monitor]change ceph tell mon compact command to run asynchronously
- From: xiangyang yu <penglaiyxy@xxxxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: David Disseldorp <ddiss@xxxxxxx>
- Grafana Dashboards
- From: Paul Cuzner <pcuzner@xxxxxxxxxx>
- Coverity: enable SCA again
- From: Danny Al-Gaaf <danny.al-gaaf@xxxxxxxxx>
- Re: unittest_async_shared_mutex thows a fit....
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: unittest_async_shared_mutex thows a fit....
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: [hammer][monitor]change ceph tell mon compact command to run asynchronously
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: unittest_async_shared_mutex thows a fit....
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- unittest_async_shared_mutex thows a fit....
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- clang builds...
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- [RFC PATCH] OSD and kRBD request expiry (was Re: iSCSI active/active stale io guard)
- From: David Disseldorp <ddiss@xxxxxxx>
- REMINDER: Sepia Lab Downtime Tonight thru Tomorrow Morning
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [ceph-users] RBD Cache and rbd-nbd
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- mset: mstart shell env
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] RBD Cache and rbd-nbd
- From: Marc Schöchlin <ms@xxxxxxxxxx>
- [hammer][monitor]change ceph tell mon compact command to run asynchronously
- From: xiangyang yu <penglaiyxy@xxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: About RADOS level replication
- From: Sage Weil <sweil@xxxxxxxxxx>
- [PATCH 0/6] Transition vfs to 64-bit timestamps
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- [PATCH 3/6] ceph: make inode time prints to be long long
- From: Deepa Dinamani <deepa.kernel@xxxxxxxxx>
- Re: [ceph-users] Open-sourcing GRNET's Ceph-related tooling
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: About RADOS level replication
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- v13.1.0 Mimic (RC) released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [GIT PULL] Ceph fixes for 4.17-rc5
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ceph-users] RBD Cache and rbd-nbd
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH] rbd: interlock object-map/fast-diff features together
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] rbd: interlock object-map/fast-diff features together
- From: Mao Zhongyi <maozy.fnst@xxxxxxxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: [PATCH] ceph: abort osd requests on force umount
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [PATCH] ceph: abort osd requests on force umount
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [ceph-users] RBD Cache and rbd-nbd
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: community performance meeting 5/10/2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- community performance meeting 5/10/2018
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: rgw-multisite: add multipart sync for rgw zones
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: fix rsize/wsize capping in ceph_direct_read_write()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- rgw-multisite: add multipart sync for rgw zones
- From: Xinying Song <songxinying.ftd@xxxxxxxxx>
- Re: Keep dmclock as a git subtree or switch it to a git submodule?
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [PATCH 2/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: object_contexts and SharedLRU in PrimaryPG
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
- From: Julia Kreger <juliaashleykreger@xxxxxxxxx>
- Re: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Keep dmclock as a git subtree or switch it to a git submodule?
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Keep dmclock as a git subtree or switch it to a git submodule?
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Re: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
- From: Matthew Thode <prometheanfire@xxxxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- object_contexts and SharedLRU in PrimaryPG
- From: YuFan Chen <wiz.chen@xxxxxxxxx>
- Re: remove WITH_EMBEDDED option and libcephd support
- From: Bassam Tabbara <bassam@xxxxxxxxxx>
- Re: remove WITH_EMBEDDED option and libcephd support
- From: Dan Mick <dmick@xxxxxxxxxx>
- remove WITH_EMBEDDED option and libcephd support
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: osd assertion failure during scrub
- From: Chris Dunlop <chris@xxxxxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Why librados::IoCtxImpl::notify waits for CEPH_OSD_OP_NOTIFY to complete?
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Why librados::IoCtxImpl::notify waits for CEPH_OSD_OP_NOTIFY to complete?
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: unable to build ceph under Fedora 28
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: Kai Wagner <kwagner@xxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: make aborts while building jewel and luminous on fedora 28
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [PATCH 1/9] rbd: show the rbd options in sysfs
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- make aborts while building jewel and luminous on fedora 28
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: Reed Dier <reed.dier@xxxxxxxxxxx>
- Re: [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 4/9] ceph: show all options in client_options even if option is equal with default value
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- RE: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Re: [PATCH 1/9] rbd: show the rbd options in sysfs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: Kurt Bauer <kurt.bauer@xxxxxxxxxxxx>
- Re: [ceph-users] Show and Tell: Grafana cluster dashboard
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- [PATCH 2/2] SubmittingPatches: fix typo of SubmittingPatches.rst
- From: Mao Zhongyi <maozy.fnst@xxxxxxxxxxxxxx>
- [PATCH 1/2] Rados: fix comments of IoCtx
- From: Mao Zhongyi <maozy.fnst@xxxxxxxxxxxxxx>
- Show and Tell: Grafana cluster dashboard
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [PATCH 7/9] ceph: set the req->r_abort_on_full in ceph_osdc_call when we are writing
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 3/9] rbd: refresh features and set the disk to readonly if there is unsupported bit
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 4/9] ceph: show all options in client_options even if option is equal with default value
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 5/9] rbd: protect flag bit of RBD_DEV_FLAG_BLACKLISTED with lock_rwsem
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 6/9] rbd: wake up waiter in rbd_acquire_lock if we got -EBLACKLISTED
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 8/9] rbd: set req->r_abort_on_full in writing
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 9/9] rbd: try to acquire lock once before going waiting
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 2/9] rbd: return the features to caller even if there is unsupported bits
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 1/9] rbd: show the rbd options in sysfs
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- [PATCH 0/9] rbd: mics improvement for rbd
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: About ersure-code and isal avx512
- From: "Gohad, Tushar" <tushar.gohad@xxxxxxxxx>
- About ersure-code and isal avx512
- From: YuFan Chen <wiz.chen@xxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: mimic is forked
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: mimic is forked
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: About RADOS level replication
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [PATCH 2/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 1/2] libceph: add osd_req_op_extent_osd_data_bvecs()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 2/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 1/2] libceph: add osd_req_op_extent_osd_data_bvecs()
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: internal compiler error while building jewel
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: John Spray <jspray@xxxxxxxxxx>
- [PATCH 2/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/2] libceph: add osd_req_op_extent_osd_data_bvecs()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 0/2] ceph: fix iov_iter issues in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- internal compiler error while building jewel
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: mimic is forked
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: mimic is forked
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: About RADOS level replication
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: Can we have back the ability to create device class before OSDs start?
- From: Sebastien Han <shan@xxxxxxxxxx>
- Re: mimic is forked
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: mimic is forked
- From: Ricardo Dias <rdias@xxxxxxxx>
- mimic is forked
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Read only FS with nfs-ganesha
- From: Muminul Islam Russell <misla011@xxxxxxx>
- [nfs-ganesha RFC PATCH v2 06/13] SAL: add recovery operation to maybe start a grace period
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 13/13] FSAL_CEPH: kill off old session before the mount
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 12/13] SAL: add new clustered RADOS recovery backend
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 07/13] SAL: add new set_enforcing operation
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 09/13] main: add way to stall server until grace is being enforced
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 03/13] main: initialize recovery backend earlier
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 11/13] tools: add new rados_grace manipulation tool
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 10/13] support: add a rados_grace support library
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 08/13] SAL: add a way to check for grace period being enforced cluster-wide
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 04/13] SAL: make some rados_kv symbols public
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 05/13] SAL: add new try_lift_grace recovery operation
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 02/13] reaper: add a way to wake up the reaper immediately
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 01/13] HASHTABLE: add a hashtable_for_each function
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [nfs-ganesha RFC PATCH v2 00/13] experimental rados_cluster recovery backend
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- [PATCH] ceph: show wsize only if non-default
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] ceph: fix rsize/wsize capping in ceph_direct_read_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Scheduled Sepia Lab Maintenance for May 15
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: issue with building vstart ceph cluster from source
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?
- From: David Zafman <dzafman@xxxxxxxxxx>
- Fwd: [ceph-users] Announcing mountpoint, August 27-28, 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: making ceph_volume_client py3 compatible
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- making ceph_volume_client py3 compatible
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: Ceph User Survey 2018
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Ceph User Survey 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: issue with building vstart ceph cluster from source
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: unable to build ceph under Fedora 28
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: issue with building vstart ceph cluster from source
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: unable to build ceph under Fedora 28
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- unable to build ceph under Fedora 28
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: issue with building vstart ceph cluster from source
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: issue with building vstart ceph cluster from source
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- [PATCH] libceph: use MSG_TRUNC for discarding received bytes
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- issue with building vstart ceph cluster from source
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- [ceph-client:testing 4/5] fs/ceph/caps.c:3170:42: sparse: incompatible types in comparison expression (different address spaces)
- From: kbuild test robot <lkp@xxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.17-rc3
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Upstream tracking tool
- From: Mohamad Gebai <mgebai@xxxxxxx>
- [PATCH 3/3] ceph: handle the new nfiles/nsubdirs fields in cap message
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 2/3] ceph: define argument structure for handle_cap_grant
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/3] ceph: update i_files/i_subdirs only when Fs cap is issued
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: Issues installing latest version of ceph from source
- From: Jos Collin <jcollin@xxxxxxxxxx>
- Re: Contributing to OSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Contributing to OSD
- From: Victor Denisov <denisovenator@xxxxxxxxx>
- Ceph Performance Weekly - April 26th 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Can we have back the ability to create device class before OSDs start?
- From: Sebastien Han <shan@xxxxxxxxxx>
- Ceph Tech Talk canceled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: [PATCH] libceph: validate con->state at the top of try_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Issues installing latest version of ceph from source
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: Issues installing latest version of ceph from source
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: Crash during rados put
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: Crash during rados put
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Issues installing latest version of ceph from source
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Issues installing latest version of ceph from source
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Re: Crash during rados put
- From: Elita Lobo <loboelita@xxxxxxxxx>
- Ceph Developer Monthly - May 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Updated arm64 builders
- From: Dan Mick <dmick@xxxxxxxxxx>
- Welcome to Ceph's Outreachy Participants for the Summer 2018 Round
- From: Ali Maredia <amaredia@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: [PATCH] libceph: validate con->state at the top of try_write()
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH] libceph: get rid of more_kvec in try_write()
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: [PATCH 1/2] ceph: use bit flags to define vxattr attributes
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Mohamad Gebai <mgebai@xxxxxxx>
- [PATCH 2/2] ceph: always get rstat from auth mds
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- [PATCH 1/2] ceph: use bit flags to define vxattr attributes
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: logging in seastar-osd
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Integrating XEN Server : Long query time for "rbd ls -l" queries
- From: Marc Schöchlin <ms@xxxxxxxxxx>
- [PATCH] libceph: get rid of more_kvec in try_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] libceph: validate con->state at the top of try_write()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: logging in seastar-osd
- From: kefu chai <tchaikov@xxxxxxxxx>
- backport-create-issue script (was Re: Contributing to OSD)
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Contributing to OSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Contributing to OSD
- From: Victor Denisov <denisovenator@xxxxxxxxx>
- v12.2.5 Luminous released
- From: Abhishek <abhishek@xxxxxxxx>
- Re: logging in seastar-osd
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: logging in seastar-osd
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: logging in seastar-osd
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: logging in seastar-osd
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: [ceph-users] Cephalocon APAC 2018 report, videos and slides
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: logging in seastar-osd
- From: kefu chai <tchaikov@xxxxxxxxx>
- Cephalocon APAC 2018 report, videos and slides
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: logging in seastar-osd
- From: Sage Weil <sage@xxxxxxxxxxxx>
- logging in seastar-osd
- From: kefu chai <tchaikov@xxxxxxxxx>
- mgr module interface change in mimic: config vs. store separation
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Upstream tracking tool
- From: Shengjing Zhu <zsj950618@xxxxxxxxx>
- Re: Upstream tracking tool
- From: Kai Wagner <kwagner@xxxxxxxx>
- Re: Upstream tracking tool
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: what's the meaning of CEPH_OSD_ALLOC_HINT_FLAG_IMMUTABLE flag?
- From: John Spray <jspray@xxxxxxxxxx>
- what's the meaning of CEPH_OSD_ALLOC_HINT_FLAG_IMMUTABLE flag?
- From: "Honggang(Joseph) Yang" <eagle.rtlinux@xxxxxxxxx>
- Re: injectable config values vs non-injectable?
- From: John Spray <jspray@xxxxxxxxxx>
- how to unset recovery_deletes
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: Upstream tracking tool
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: injectable config values vs non-injectable?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Upstream tracking tool
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Upstream tracking tool
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: injectable config values vs non-injectable?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [PATCH 0/2] monc fixes for http://tracker.ceph.com/issues/23537
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- [PATCH 2/2] libceph: reschedule a tick in finish_hunting()
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 1/2] libceph: un-backoff on tick when we have a authenticated session
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH 0/2] monc fixes for http://tracker.ceph.com/issues/23537
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Upstream tracking tool
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Upstream tracking tool
- From: John Spray <jspray@xxxxxxxxxx>
- injectable config values vs non-injectable?
- From: Wyllys Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Upstream tracking tool
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: cephfs client hang issue
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: Boost (1.67) question
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Boost (1.67) question
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Boost (1.67) question
- From: kefu chai <tchaikov@xxxxxxxxx>
- Boost (1.67) question
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: cephfs client hang issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Please Stop Merges to Luminous branch until QE window is over.
- From: Sage Weil <sage@xxxxxxxxxxxx>
- pg merging will be in nautilus
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: Please Stop Merges to Luminous branch until QE window is over.
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: cephfs client hang issue
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: 12.2.5 QE Luminous validation status
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- 12.2.5 QE Luminous validation status
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Please Stop Merges to Luminous branch until QE window is over.
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- [QA/dashboard] Frontend tests on Jenkins
- From: Stephan Müller <smueller@xxxxxxxx>
- cephfs client hang issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: Building only libcephfs
- From: John Spray <jspray@xxxxxxxxxx>
- Building only libcephfs
- From: Martin Palma <martin@xxxxxxxx>
- Re: Branch naming & Backport workflows [Was Re: Asking for approval for 12.2.5 for QE validation]
- From: Nathan Cutler <ncutler@xxxxxxx>
- Ceph Performance Weekly - Apr 19, 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Branch naming & Backport workflows [Was Re: Asking for approval for 12.2.5 for QE validation]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Branch naming & Backport workflows [Was Re: Asking for approval for 12.2.5 for QE validation]
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Create a trigger for rbd trash purge
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Create a trigger for rbd trash purge
- From: Theofilos Mouratidis <mtheofilos@xxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: CephFS get directory size without mounting the fs
- From: Martin Palma <martin@xxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Mimic freeze
- From: Sage Weil <sweil@xxxxxxxxxx>
- [GIT PULL] Ceph fixes for 4.17-rc2
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: new (> 4.16) kernel cephfs clients behaviour on < mimic
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [rbd] cinder multi-attach volume
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: new (> 4.16) kernel cephfs clients behaviour on < mimic
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: new (> 4.16) kernel cephfs clients behaviour on < mimic
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: CephFS get directory size without mounting the fs
- From: John Spray <jspray@xxxxxxxxxx>
- CephFS get directory size without mounting the fs
- From: Martin Palma <martin@xxxxxxxx>
- [rbd] cinder multi-attach volume
- From: Jaze Lee <jazeltq@xxxxxxxxx>
- Re: Transition to Python 3
- From: Tim Serong <tserong@xxxxxxxx>
- Re: new (> 4.16) kernel cephfs clients behaviour on < mimic
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [sepia] Transition to Python 3
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [sepia] Transition to Python 3
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Aleksey Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: [sepia] Transition to Python 3
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [sepia] Transition to Python 3
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Luminous branch freeze & Handing 12.2.5 for QE validation
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Transition to Python 3
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- new (> 4.16) kernel cephfs clients behaviour on < mimic
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: [PATCH] libceph: optimize ceph_msg_new
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] libceph: optimize ceph_msg_new
- From: "cgxu519@xxxxxxx" <cgxu519@xxxxxxx>
- Re: [PATCH] libceph: optimize ceph_msg_new
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Transition to Python 3
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Transition to Python 3
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- [PATCH] libceph: optimize ceph_msg_new
- From: Chengguang Xu <cgxu519@xxxxxxx>
- Re: seastar and 'tame reactor'
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Transition to Python 3
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Monitor 'perf dump' stats in Mgr module
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Transition to Python 3
- From: John Spray <jspray@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Transition to Python 3
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Monitor 'perf dump' stats in Mgr module
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Transition to Python 3
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: Transition to Python 3
- From: Volker Theile <vtheile@xxxxxxxx>
- Transition to Python 3
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: OpenTracing enabling in Ceph?
- From: Yingxin Cheng <yingxincheng@xxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: A few questions about RGW multisite
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: How to check work of mClock QoS?
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- jenkins "make check" and rpm/deb build failure
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Asking for approval for 12.2.5 for QE validation
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: [PATCH] libceph: add error handling for osd_req_op_cls_init
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Asking for approval for 12.2.5 for QE validation
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- A few questions about RGW multisite
- From: Xinying Song <songxinying.ftd@xxxxxxxxx>
- Re: OpenTracing enabling in Ceph?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Monitor 'perf dump' stats in Mgr module
- From: John Spray <jspray@xxxxxxxxxx>
- Monitor 'perf dump' stats in Mgr module
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Cephalocon QA: Test development/individual contributors
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Cephalocon QA: RGW scrubbing
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: High MON cpu usage when cluster is changing
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: High MON cpu usage when cluster is changing
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: High MON cpu usage when cluster is changing
- From: Sage Weil <sweil@xxxxxxxxxx>
- High MON cpu usage when cluster is changing
- From: Xiaoxi Chen <superdebuger@xxxxxxxxx>
- Re: dev branches in ceph.git
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: dev branches in ceph.git
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: dev branches in ceph.git
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: dev branches in ceph.git
- From: John Spray <jspray@xxxxxxxxxx>
- dev branches in ceph.git
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- OpenTracing enabling in Ceph?
- From: Yingxin Cheng <yingxincheng@xxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: some rbd operations question
- From: handong He <hedongho@xxxxxxxxx>
- Re: Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Perf meeting URL for 04/12/2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- How to check work of mClock QoS?
- From: aboutbus <aboutbus@xxxxxxxxx>
- Re: some rbd operations question
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: some rbd operations question
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Perf meeting URL for 04/12/2018
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: some rbd operations question
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Static Analysis
- From: kefu chai <tchaikov@xxxxxxxxx>
- some rbd operations question
- From: handong He <hedongho@xxxxxxxxx>
- RE: Crash during rados put
- From: "Ma, Jianpeng" <jianpeng.ma@xxxxxxxxx>
- Crash during rados put
- From: Myna V <mynaramana@xxxxxxxxx>
- [PATCH] libceph: add error handling for osd_req_op_cls_init
- From: Chengguang Xu <cgxu519@xxxxxxx>
- Static Analysis
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- [ceph-client:wip-jd-testing 2/2] drivers/block/nbd.c:237:15: error: 'bdev' undeclared; did you mean 'cdev'?
- From: kbuild test robot <lkp@xxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: [PATCH v3] rbd: support timeout in rbd_wait_state_locked
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: decluttering redmine issue fields
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decluttering redmine issue fields
- From: John Spray <jspray@xxxxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- decluttering redmine issue fields
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: qos_params and CEPH_FEATURE_QOS_DMC
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- [GIT PULL] Ceph updates for 4.17-rc1
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: ceph-disk vs ceph-volume
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: xiangyang yu <penglaiyxy@xxxxxxxxx>
- ceph-disk vs ceph-volume
- From: Aleksei Gutikov <aleksey.gutikov@xxxxxxxxxx>
- Re: [PATCH v3] rbd: support timeout in rbd_wait_state_locked
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [ceph-users] cephfs snapshot format upgrade
- From: John Spray <jspray@xxxxxxxxxx>
- cephfs snapshot format upgrade
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: John Spray <jspray@xxxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- Re: json, binary data, and ceph config-key
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- json, binary data, and ceph config-key
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH v3] rbd: support timeout in rbd_wait_state_locked
- From: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- RE: mon: validate capabilitys before add auth entity
- From: Sage Weil <sage@xxxxxxxxxxxx>
- RE: mon: validate capabilitys before add auth entity
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- mimic freeze is a week away
- From: Sage Weil <sweil@xxxxxxxxxx>
- qos_params and CEPH_FEATURE_QOS_DMC
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: pad.ceph.com Update
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: Ceph 'brag' Manager Module
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- pad.ceph.com Update
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Ceph Dashboard v2 update
- From: Lenz Grimmer <lgrimmer@xxxxxxxx>
- Re: Cephalocon QA: Test development/individual contributors
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [PATCH v3] rbd: support timeout in rbd_wait_state_locked
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Last call for Jewel 10.2.11
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: Questions about rbd-mirror
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Questions about rbd-mirror
- From: YuShengzuo <yu.shengzuo@xxxxxxxxxxx>
- Re: Asking for PRs to be included in 12.2.5 Luminous
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Cephalocon QA: Test development/individual contributors
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Cephalocon QA: Performance testing in teuthology
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Asking for PRs to be included in 12.2.5 Luminous
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: Shaman builds builds jewel-based branches for trusty only sometimes
- From: kefu chai <tchaikov@xxxxxxxxx>
- aio with cephfs fuse
- From: Muminul Islam Russell <misla011@xxxxxxx>
- Re: Shaman builds builds jewel-based branches for trusty only sometimes
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: creating pools/pgs vs split
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Shaman builds builds jewel-based branches for trusty only sometimes
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Shaman builds builds jewel-based branches for trusty only sometimes
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Shaman builds builds jewel-based branches for trusty only sometimes
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: Cephalocon QA: Performance testing in teuthology
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Contributing to OSD
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [sepia] changes managing "nightlies" schedule
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: creating pools/pgs vs split
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: creating pools/pgs vs split
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: creating pools/pgs vs split
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- Re: creating pools/pgs vs split
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Performance meeting URL
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- creating pools/pgs vs split
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Are MDS pins meant to be persisted?
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Tracing Ceph results
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Are MDS pins meant to be persisted?
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Are MDS pins meant to be persisted?
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Are MDS pins meant to be persisted?
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Cephalocon QA: Test development/individual contributors
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Ceph Dashboard IRC Channel
- From: Kai Wagner <kwagner@xxxxxxxx>
- Re: Cephalocon QA: RGW scrubbing
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: Tracing Ceph results
- From: liuchang0812 <liuchang0812@xxxxxxxxx>
- Re: Contributing to OSD
- From: Victor Denisov <denisovenator@xxxxxxxxx>
- Re: emplace question
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Cephalocon QA: Missing test coverage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Cephalocon QA: RGW scrubbing
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Cephalocon QA: Missing test coverage
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: Cephalocon QA: Teuthology infrastructure/new labs
- From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
- Re: emplace question
- From: Jesse Williamson <jwilliamson@xxxxxxx>
- Re: Cephalocon QA: Performance testing in teuthology
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: Cephalocon QA: Missing test coverage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Cephalocon QA: RGW scrubbing
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: [PATCH] [v2] rbd: avoid Wreturn-type warnings
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Questions about rbd-mirror
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- [PATCH] [v2] rbd: avoid Wreturn-type warnings
- From: Arnd Bergmann <arnd@xxxxxxxx>
- Re: [PATCH] rbd: add missing return statements
- From: Arnd Bergmann <arnd@xxxxxxxx>
- get_mapped_pools
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: Cephalocon QA: Test development/individual contributors
- From: Ricardo Dias <rdias@xxxxxxxx>
- Re: [PATCH] rbd: add missing return statements
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] rbd: add missing return statements
- From: Arnd Bergmann <arnd@xxxxxxxx>
- Cephalocon QA: RGW scrubbing
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Cephalocon QA: Performance testing in teuthology
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Cephalocon QA: Test development/individual contributors
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Cephalocon QA: Teuthology infrastructure/new labs
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Cephalocon QA: Missing test coverage
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Cephalocon QA Meeting
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [sepia] changes managing "nightlies" schedule
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: emplace question
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: emplace question
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Is this month's CDM cancelled?
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- emplace question
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- Ceph Developer Monthly - April 2018
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx>
- changes managing "nightlies" schedule
- From: Yuri Weinstein <yweinste@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: "Maged Mokhtar" <mmokhtar@xxxxxxxxxxx>
- cephalocon APAC videos
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Ceph 'brag' Manager Module
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Is this month's CDM cancelled?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Is this month's CDM cancelled?
- From: Xuehan Xu <xxhdx1985126@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Li Wang <laurence.liwang@xxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: "cgxu519@xxxxxxx" <cgxu519@xxxxxxx>
- Re: iSCSI active/active stale io guard
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: 答复: Reading data before peered to improve performance
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: iSCSI active/active stale io guard
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Contributing to OSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: debugging pg states
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: make check failure of PRs due to pip
- From: Nathan Cutler <ncutler@xxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- make check failure of PRs due to pip
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: xiangyang yu <penglaiyxy@xxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Ning Yao <zay11022@xxxxxxxxx>
- Re: Policy based object tiering in RGW
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Contributing to OSD
- From: Victor Denisov <denisovenator@xxxxxxxxx>
- Re: ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: buffer bluestore deferred io
- From: zengran zhang <z13121369189@xxxxxxxxx>
- [Hamme-r][Simple Msg]Cluster can not work when Accepter::entry quit
- From: xiangyang yu <penglaiyxy@xxxxxxxxx>
- ceph-volume: hard to replace ceps-disk now and not good implemented
- From: Ning Yao <zay11022@xxxxxxxxx>
- Policy based object tiering in RGW
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: [PATCH] rbd: remove VLA usage
- From: "Gustavo A. R. Silva" <gustavo@xxxxxxxxxxxxxx>
- Re: [PATCH] rbd: remove VLA usage
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- [PATCH] rbd: remove VLA usage
- From: "Gustavo A. R. Silva" <gustavo@xxxxxxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Jeff Layton <jlayton@xxxxxxxxxx>
- Re: use dmclock for ceph rgw QoS (resend)
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: buffer bluestore deferred io
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Luis Henriques <lhenriques@xxxxxxxx>
- [GIT PULL] Ceph fix for 4.16-rc8
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Varada Kari <varada.kari@xxxxxxxxx>
- use dmclock for ceph rgw QoS
- From: Will Zhao <zhao6305@xxxxxxxxx>
- buffer bluestore deferred io
- From: zengran zhang <z13121369189@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Li Wang <laurence.liwang@xxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rgwscrub
- From: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: "Yan, Zheng" <zyan@xxxxxxxxxx>
- Re: mds laggy issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- rgwscrub
- From: "Varada Kari (System Engineer)" <varadaraja.kari@xxxxxxxxxxxx>
- Re: mds laggy issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Ordering subscription messages to MonClient vs. command responses
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: Ordering subscription messages to MonClient vs. command responses
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: debugging pg states
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: debugging pg states
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: bug #10915 client: hangs on umount if it had an MDS session evicted
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: [PATCH] ceph: only dirty ITER_IOVEC pages for direct read
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: OSD dump_historic_slow_ops output processing tools
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: OSD dump_historic_slow_ops output processing tools
- From: Mykola Golub <to.my.trociny@xxxxxxxxx>
- Re: mds laggy issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- performance meeting url temporarily changed
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- OSD dump_historic_slow_ops output processing tools
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: Ordering subscription messages to MonClient vs. command responses
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [PATCH v2] block: rbd: update sysfs interface
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: debugging pg states
- From: John Spray <jspray@xxxxxxxxxx>
- Re: some question about seastar merged in Ceph
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: Ceph 'brag' Manager Module
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: mds laggy issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: xiaoyan li <wisher2003@xxxxxxxxx>
- Re: mds laggy issue
- From: 陶冬冬 <tdd21151186@xxxxxxxxx>
- Re: mds laggy issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: mds laggy issue
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- debugging pg states
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mark.a.nelson@xxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Matt Benjamin <mbenjami@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: "Adam C. Emerson" <aemerson@xxxxxxxxxx>
- Re: storing pg logs outside of rocksdb
- From: Mark Nelson <mnelson@xxxxxxxxxx>