Ceph Filesystem Users
- OpTracker optimization
- From: sam.just@xxxxxxxxxxx (Samuel Just)
- OpTracker optimization
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- OpTracker optimization
- From: sam.just@xxxxxxxxxxx (Samuel Just)
- Ceph-deploy bug; CentOS 7, Firefly
- From: piers@xxxxx (Piers Dawson-Damer)
- OpTracker optimization
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- OpTracker optimization
- From: sam.just@xxxxxxxxxxx (Samuel Just)
- OpTracker optimization
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- CephFS roadmap (was Re: NAS on RBD)
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- OpTracker optimization
- From: sam.just@xxxxxxxxxxx (Samuel Just)
- max_bucket limit -- safe to disable?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- why one osd-op from client can get two osd-op-reply?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [ANN] ceph-deploy 1.5.14 released
- From: scottix@xxxxxxxxx (Scottix)
- [ANN] ceph-deploy 1.5.14 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Ceph-deploy bug; CentOS 7, Firefly
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Cache Pool writing too much on ssds, poor performance?
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- question about librbd io
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- osd cpu usage is bigger than 100%
- From: fastsync@xxxxxxx (yuelongguang)
- question about RGW
- From: sweil@xxxxxxxxxx (Sage Weil)
- Ceph on RHEL 7 with multiple OSD's
- From: yrabl@xxxxxxxxxx (yrabl at redhat.com)
- region creation is failing
- From: santhosh.fernandes@xxxxxxxxx (Santhosh Fernandes)
- Best practices on Filesystem recovery on RBD block volume?
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- max_bucket limit -- safe to disable?
- From: daniel.schneller@xxxxxxxxxxxxxxxx (Daniel Schneller)
- Best practices on Filesystem recovery on RBD block volume?
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Ceph on RHEL 7 with multiple OSD's
- From: bglackin@xxxxxxx (BG)
- Best practices on Filesystem recovery on RBD block volume?
- From: keith@xxxxxxxxxxxxxxxxxx (Keith Phua)
- Best practices on Filesystem recovery on RBD block volume?
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- why one osd-op from client can get two osd-op-reply?
- From: fastsync@xxxxxxx (yuelongguang)
- bad performance of leveldb on 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- Problem with customized crush rule for EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- question about RGW
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph-deploy bug; CentOS 7, Firefly
- From: piers@xxxxx (Piers Dawson-Damer)
- Problem with customized crush rule for EC pool
- From: leidong@xxxxxxxxxxxxx (Lei Dong)
- Best practices on Filesystem recovery on RBD block volume?
- From: keith@xxxxxxxxxxxxxxxxxx (Keith Phua)
- osd unexpected error by leveldb
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- CephFS roadmap (was Re: NAS on RBD)
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- FW: FW: CRUSH optimization for unbalanced pg distribution
- From: jian.zhang@xxxxxxxxx (Zhang, Jian)
- ceph data consistency
- From: xiaoxi.chen@xxxxxxxxx (Chen, Xiaoxi)
- NAS on RBD
- From: qgrasso@xxxxxxxxxx (Quenten Grasso)
- OpTracker optimization
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- max_bucket limit -- safe to disable?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Remapped osd at remote restart
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- SSD journal deployment experiences
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph data consistency
- From: sweil@xxxxxxxxxx (Sage Weil)
- SSD journal deployment experiences
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph + Postfix/Zimbra
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- CephFS roadmap (was Re: NAS on RBD)
- From: sweil@xxxxxxxxxx (Sage Weil)
- Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- ceph data consistency
- From: chibi@xxxxxxx (Christian Balzer)
- max_bucket limit -- safe to disable?
- From: daniel.schneller@xxxxxxxxxxxxxxxx (Daniel Schneller)
- NAS on RBD
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- NAS on RBD
- From: mkozanecki@xxxxxxxxxx (Michal Kozanecki)
- question about librbd io
- From: fastsync@xxxxxxx (yuelongguang)
- Problem with customized crush rule for EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Ceph on RHEL 7 with multiple OSD's
- From: marco@xxxxxxxxx (Marco Garcês)
- Ceph on RHEL 7 with multiple OSD's
- From: mkozanecki@xxxxxxxxxx (Michal Kozanecki)
- NAS on RBD
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- Problem with customized crush rule for EC pool
- From: leidong@xxxxxxxxxxxxx (Lei Dong)
- NAS on RBD
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- Problem with customized crush rule for EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Problem with customized crush rule for EC pool
- From: leidong@xxxxxxxxxxxxx (Lei Dong)
- NAS on RBD
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Ceph on RHEL 7 with multiple OSD's
- From: bglackin@xxxxxxx (BG)
- Re: Re: Re: mix ceph version with 0.80.5 and 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- [ceph-users] Re: mix ceph version with 0.80.5 and 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- [ceph-users] Re: mix ceph version with 0.80.5 and 0.85
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- number of PGs (global vs per pool)
- From: chibi@xxxxxxx (Christian Balzer)
- number of PGs (global vs per pool)
- From: wido@xxxxxxxx (Wido den Hollander)
- NAS on RBD
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- number of PGs (global vs per pool)
- From: periquito@xxxxxxxxx (Luis Periquito)
- NAS on RBD
- From: chibi@xxxxxxx (Christian Balzer)
- monitoring tool for monitoring end-user
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- NAS on RBD
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- resizing the OSD
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- heterogeneous set of storage disks as a single storage
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- Is ceph osd reweight always safe to use?
- From: chibi@xxxxxxx (Christian Balzer)
- Is ceph osd reweight always safe to use?
- From: botemout@xxxxxxxxx (JR)
- [ceph-users] Re: mix ceph version with 0.80.5 and 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- ceph cluster inconsistency keyvaluestore
- From: sweil@xxxxxxxxxx (Sage Weil)
- all my osds are down, but ceph -s tells they are up and in.
- From: sweil@xxxxxxxxxx (Sage Weil)
- [ceph-users] Re: mix ceph version with 0.80.5 and 0.85
- From: chn.kei@xxxxxxxxx (Jason King)
- Re: mix ceph version with 0.80.5 and 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- Is ceph osd reweight always safe to use?
- From: botemout@xxxxxxxxx (JR)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- mix ceph version with 0.80.5 and 0.85
- From: Derek@xxxxxxxxx (廖建锋)
- Is ceph osd reweight always safe to use?
- From: chibi@xxxxxxx (Christian Balzer)
- all my osds are down, but ceph -s tells they are up and in.
- From: fastsync@xxxxxxx (yuelongguang)
- SSD journal deployment experiences
- From: qgrasso@xxxxxxxxxx (Quenten Grasso)
- Is ceph osd reweight always safe to use?
- From: chibi@xxxxxxx (Christian Balzer)
- Updating the pg and pgp values
- From: chibi@xxxxxxx (Christian Balzer)
- resizing the OSD
- From: chibi@xxxxxxx (Christian Balzer)
- OSD is crashing while running admin socket
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- OSD is crashing while running admin socket
- From: sweil@xxxxxxxxxx (Sage Weil)
- OSD is crashing while running admin socket
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- OSD is crashing while running admin socket
- From: sam.just@xxxxxxxxxxx (Samuel Just)
- OSD is crashing while running admin socket
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Is ceph osd reweight always safe to use?
- From: botemout@xxxxxxxxx (JR)
- osd crash: trim_object could not find coid
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- osd crash: trim_object could not find coid
- From: francois@xxxxxxxxxxxxx (Francois Deppierraz)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- osd crash: trim_object could not find coid
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Delays while waiting_for_osdmap according to dump_historic_ops
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Updating the pg and pgp values
- From: jshah2005@xxxxxx (JIten Shah)
- Updating the pg and pgp values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Updating the pg and pgp values
- From: jshah2005@xxxxxx (JIten Shah)
- Is ceph osd reweight always safe to use?
- From: botemout@xxxxxxxxx (JR)
- Updating the pg and pgp values
- From: jshah2005@xxxxxx (JIten Shah)
- Updating the pg and pgp values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Updating the pg and pgp values
- From: jshah2005@xxxxxx (JIten Shah)
- resizing the OSD
- From: jshah2005@xxxxxx (JIten Shah)
- Is ceph osd reweight always safe to use?
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph object back up details
- From: yehuda@xxxxxxxxxx (Yehuda Sadeh)
- Is ceph osd reweight always safe to use?
- From: botemout@xxxxxxxxx (JR)
- Remapped osd at remote restart
- From: ekormann@xxxxxxxxx (Eduard Kormann)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Ceph on RHEL 7 with multiple OSD's
- From: bglackin@xxxxxxx (BG)
- Ceph on RHEL 7 with multiple OSD's
- From: bglackin@xxxxxxx (BG)
- Ceph on RHEL 7 with multiple OSD's
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Ceph on RHEL 7 with multiple OSD's
- From: bglackin@xxxxxxx (BG)
- ceph cluster inconsistency keyvaluestore
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- I fail to add a monitor in a ceph cluster
- From: pgs@xxxxxxxxxxxx (Pascal GREGIS)
- I fail to add a monitor in a ceph cluster
- From: pgs@xxxxxxxxxxxx (Pascal GREGIS)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- osd crash: trim_object could not find coid
- From: francois@xxxxxxxxxxxxx (Francois Deppierraz)
- Crush Location
- From: wido@xxxxxxxx (Wido den Hollander)
- delete performance
- From: periquito@xxxxxxxxx (Luis Periquito)
- number of PGs
- From: periquito@xxxxxxxxx (Luis Periquito)
- Crush Location
- From: jakesjohn12345@xxxxxxxxx (Jakes John)
- Performance really drops from 700MB/s to 10MB/s
- From: mr.npp@xxxxxxxxxxxxxxxxxxx (Mr. NPP)
- Ceph object back up details
- From: swamireddy@xxxxxxxxx (M Ranga Swami Reddy)
- Ceph and TRIM on SSD disks
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph and TRIM on SSD disks
- From: alex@xxxxxxxxxx (Alex Moore)
- Delays while waiting_for_osdmap according to dump_historic_ops
- From: alex@xxxxxxxxxx (Alex Moore)
- Ceph and TRIM on SSD disks
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph on RHEL 7 with multiple OSD's
- From: loic@xxxxxxxxxxx (Loic Dachary)
- 'incomplete' PGs: what does it mean?
- From: john@xxxxxxxxxxx (John Morris)
- 'incomplete' PGs: what does it mean?
- From: john@xxxxxxxxxxx (John Morris)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- [ceph-users] Re: Re: Re: Re: ceph osd unexpected error
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Ceph on RHEL 7 with multiple OSD's
- From: yrabl@xxxxxxxxxx (yrabl at redhat.com)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Re: Re: Re: ceph osd unexpected error
- From: Derek@xxxxxxxxx (廖建锋)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Re: Re: ceph osd unexpected error
- From: Derek@xxxxxxxxx (廖建锋)
- [ceph-users] Re: ceph osd unexpected error
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- resizing the OSD
- From: chibi@xxxxxxx (Christian Balzer)
- Re: ceph osd unexpected error
- From: Derek@xxxxxxxxx (廖建锋)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- SSD journal deployment experiences
- From: scott@xxxxxxxxxxx (Scott Laird)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- resizing the OSD
- From: jshah2005@xxxxxx (JIten Shah)
- ceph osd unexpected error
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- SSD journal deployment experiences
- From: scott@xxxxxxxxxxx (Scott Laird)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- resizing the OSD
- From: chibi@xxxxxxx (Christian Balzer)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- ceph osd unexpected error
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- ceph osd unexpected error
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: josef@xxxxxxxxxxx (Josef Johansson)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Good way to monitor detailed latency/throughput
- From: chibi@xxxxxxx (Christian Balzer)
- resizing the OSD
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Ceph Filesystem - Production?
- From: jshah2005@xxxxxx (JIten Shah)
- resizing the OSD
- From: jshah2005@xxxxxx (JIten Shah)
- region creation is failing
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- ceph add flag hashspool
- From: frantisek.drabecky@xxxxxxxxxxxxxxxx (Frantisek Drabecky)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Warren_Wang@xxxxxxxxxxxxxxxxx (Wang, Warren)
- Good way to monitor detailed latency/throughput
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Fwd: Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- Need help : MDS cluster completely dead !
- From: florent@xxxxxxxxxxx (Florent Bautista)
- Need help : MDS cluster completely dead !
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Huge issues with slow requests
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: david@xxxxxxxxxx (David)
- SSD journal deployment experiences
- From: nigel.d.williams@xxxxxxxxx (Nigel Williams)
- region creation is failing
- From: santhosh.fernandes@xxxxxxxxx (Santhosh Fernandes)
- Need help : MDS cluster completely dead !
- From: florent@xxxxxxxxxxx (Florent Bautista)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- [Ceph-community] Ceph Day Paris Schedule Posted
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ceph osd unexpected error
- From: Derek@xxxxxxxxx (廖建锋)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- How to replace an node in ceph?
- From: chibi@xxxxxxx (Christian Balzer)
- Huge issues with slow requests
- From: david@xxxxxxxxxx (David)
- Ceph Day Paris Schedule Posted
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- ceph -s error
- From: santhosh.fernandes@xxxxxxxxx (Santhosh Fernandes)
- How to replace an node in ceph?
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- ceph -s error
- From: santhosh.fernandes@xxxxxxxxx (Santhosh Fernandes)
- ceph -s error
- From: Sahana.Lokeshappa@xxxxxxxxxxx (Sahana Lokeshappa)
- Huge issues with slow requests
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- Fwd: Ceph Filesystem - Production?
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- ceph -s error
- From: santhosh.fernandes@xxxxxxxxx (Santhosh Fernandes)
- SSD journal deployment experiences
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- How to replace an node in ceph?
- From: chibi@xxxxxxx (Christian Balzer)
- How to replace an node in ceph?
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- How to replace an node in ceph?
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- How to replace an node in ceph?
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- Getting error trying to activate the first OSD
- From: jshah2005@xxxxxx (JIten Shah)
- [no subject]
- How to replace an node in ceph?
- From: chn.kei@xxxxxxxxx (Jason King)
- Re: Cache pool and using btrfs for ssd osds
- From: 908429812@xxxxxx (derek)
- Cache pool and using btrfs for ssd osds
- From: andrew@xxxxxxxxxxxxxxxxx (Andrew Thrift)
- osd unexpected error by leveldb
- From: 908429812@xxxxxx (derek)
- SSD journal deployment experiences
- From: chibi@xxxxxxx (Christian Balzer)
- Fwd: Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- Huge issues with slow requests
- From: chibi@xxxxxxx (Christian Balzer)
- Simple Math?
- From: chibi@xxxxxxx (Christian Balzer)
- Fwd: Ceph Filesystem - Production?
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- Simple Math?
- From: Josh.Zojonc@xxxxxxxxxxxxxxx (Zojonc, Josh)
- SSD journal deployment experiences
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- SSD journal deployment experiences
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- SSD journal deployment experiences
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- Map-view of PGs
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Fwd: Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- SSD journal deployment experiences
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Misdirected client messages
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- SSD journal deployment experiences
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe)
- One stuck PG
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- SSD journal deployment experiences
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- SSD journal deployment experiences
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Ceph object back up details
- From: swamireddy@xxxxxxxxx (M Ranga Swami Reddy)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- SSD journal deployment experiences
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Ceph Day Paris Schedule Posted
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- SSD journal deployment experiences
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- How to replace an node in ceph?
- From: chibi@xxxxxxx (Christian Balzer)
- How to replace an node in ceph?
- From: loic@xxxxxxxxxxx (Loic Dachary)
- How to replace an node in ceph?
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- Cache pool and using btrfs for ssd osds
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Cache pool - step by step guide
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Huge issues with slow requests
- From: david@xxxxxxxxxx (David)
- ceph data consistency
- From: xmdxcxz@xxxxxxxxx (池信泽)
- ceph data consistency
- From: 545640272@xxxxxx (=?gb18030?b?0vjW8tChycg=?=)
- Need help : MDS cluster completely dead !
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- One stuck PG
- From: ceph@xxxxxxxxxxxxxxxxx (Erwin Lubbers)
- Cache pool - step by step guide
- From: vadikgo@xxxxxxxxx (Vladislav Gorbunov)
- I fail to add a monitor in a ceph cluster
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Cache pool - step by step guide
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph monitor load, low performance
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Ceph monitor load, low performance
- From: pawel.orzechowski@xxxxxxxxxxx (pawel.orzechowski at budikom.net)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Misdirected client messages
- From: maros.vegh@xxxxxxxxxxxxxxxx (Maros Vegh)
- Misdirected client messages
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Misdirected client messages
- From: maros.vegh@xxxxxxxxxxxxxxxx (Maros Vegh)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- ceph can not repair itself after accidental power down, half of pgs are peering
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Uneven OSD usage
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Fixing mark_unfound_lost revert failure
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Misdirected client messages
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Install from alternate repo
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Install from alternate repo
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Need help : MDS cluster completely dead !
- From: florent@xxxxxxxxxxx (Florent Bautista)
- script for commissioning a node with multiple osds, added to cluster as a whole
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Need help : MDS cluster completely dead !
- From: john.spray@xxxxxxxxxx (John Spray)
- Misdirected client messages
- From: maros.vegh@xxxxxxxxxxxxxxxx (Maros Vegh)
- Rebuilding OSD in firefly
- From: xchenum@xxxxxxxxx (Xu (Simon) Chen)
- ceph cluster inconsistency keyvaluestore
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- docker + coreos + ceph
- From: lorieri@xxxxxxxxx (Lorieri)
- [ceph-calamari] RFC: A preliminary Chinese version of Calamari
- From: gmeno@xxxxxxxxxx (Gregory Meno)
- script for commissioning a node with multiple osds, added to cluster as a whole
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Need help : MDS cluster completely dead !
- From: bautista.florent@xxxxxxxxx (Florent Bautista)
- docker + coreos + ceph
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- docker + coreos + ceph
- From: marco@xxxxxxxxx (Marco Garcês)
- docker + coreos + ceph
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- docker + coreos + ceph
- From: lorieri@xxxxxxxxx (Lorieri)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: j.david.lists@xxxxxxxxx (J David)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: j.david.lists@xxxxxxxxx (J David)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: j.david.lists@xxxxxxxxx (J David)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Warren_Wang@xxxxxxxxxxxxxxxxx (Wang, Warren)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Questions regarding Crush Map
- From: jakesjohn12345@xxxxxxxxx (Jakes John)
- Questions regarding Crush Map
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Questions regarding Crush Map
- From: jakesjohn12345@xxxxxxxxx (Jakes John)
- ceph-deploy with --release (--stable) for dumpling?
- From: Warren_Wang@xxxxxxxxxxxxxxxxx (Wang, Warren)
- Fixing mark_unfound_lost revert failure
- From: loic@xxxxxxxxxxx (Loic Dachary)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Best practices for network settings.
- From: mateusz.skala@xxxxxxxxxxx (Mateusz Skała)
- Questions regarding Crush Map
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Berlin Ceph MeetUp: September 22nd, 2014
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- Questions regarding Crush Map
- From: jakesjohn12345@xxxxxxxxx (Jakes John)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- I fail to add a monitor in a ceph cluster
- From: pgs@xxxxxxxxxxxx (Pascal GREGIS)
- kvm guest with rbd-disks are unaccesible after app. 3h afterwards one OSD node fails
- From: ulembke@xxxxxxxxxxxx (Udo Lembke)
- ceph cluster inconsistency keyvaluestore
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Librbd log and configuration
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- ceph cluster inconsistency keyvaluestore
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- [Ceph-community] Paris Ceph meetup : september 18th, 2014
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- Paris Ceph meetup : september 18th, 2014
- From: loic@xxxxxxxxxxx (Loic Dachary)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- ceph cluster inconsistency keyvaluestore
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- ceph.com 403 forbidden
- From: carrot99@xxxxxxxx (박선규)
- question about monitor and paxos relationship
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- About IOPS num
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Librbd log and configuration
- From: dingdinghua85@xxxxxxxxx (Ding Dinghua)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: amberzhang86@xxxxxxxxx (Jian Zhang)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: amberzhang86@xxxxxxxxx (Jian Zhang)
- About IOPS num
- From: chn.kei@xxxxxxxxx (Jason King)
- Difference between "object rm" and "object unlink" ?
- From: chn.kei@xxxxxxxxx (Jason King)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- question about monitor and paxos relationship
- From: scott@xxxxxxxxxxx (Scott Laird)
- Fixing mark_unfound_lost revert failure
- From: loic@xxxxxxxxxxx (Loic Dachary)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Uneven OSD usage
- From: chibi@xxxxxxx (Christian Balzer)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: chibi@xxxxxxx (Christian Balzer)
- question about monitor and paxos relationship
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- Asked for emperor, got firefly. (You can't take the sky from me?)
- From: j.david.lists@xxxxxxxxx (J David)
- Uneven OSD usage
- From: j.david.lists@xxxxxxxxx (J David)
- question about monitor and paxos relationship
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- question about monitor and paxos relationship
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- question about monitor and paxos relationship
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- question about monitor and paxos relationship
- From: j.david.lists@xxxxxxxxx (J David)
- question about monitor and paxos relationship
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- script for commissioning a node with multiple osds, added to cluster as a whole
- From: olivier.delhomme@xxxxxxxxxxxxxxxxxx (Olivier DELHOMME)
- 'incomplete' PGs: what does it mean?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- script for commissioning a node with multiple osds, added to cluster as a whole
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- script for commissioning a node with multiple osds, added to cluster as a whole
- From: cwseys@xxxxxxxxxxxxxxxx (Chad Seys)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Fwd: Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: matt@xxxxxxxxxxxx (Matt W. Benjamin)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Difference between "object rm" and "object unlink" ?
- From: zhu_qiang_ws@xxxxxxxxxxx (zhu qiang)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- RFC: A preliminary Chinese version of Calamari
- From: liwang@xxxxxxxxxxxxxxx (Li Wang)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: andrey@xxxxxxx (Andrey Korolyov)
- About IOPS num
- From: lixuehui@xxxxxxxxxxxxxxxxx (lixuehui at chinacloud.com.cn)
- Uneven OSD usage
- From: chibi@xxxxxxx (Christian Balzer)
- 'incomplete' PGs: what does it mean?
- From: john@xxxxxxxxxxx (John Morris)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Uneven OSD usage
- From: j.david.lists@xxxxxxxxx (J David)
- question about monitor and paxos relationship
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: matt@xxxxxxxxxxxx (Matt W. Benjamin)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Uneven OSD usage
- From: chibi@xxxxxxx (Christian Balzer)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Fwd: Ceph Filesystem - Production?
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- Fwd: Ceph Filesystem - Production?
- From: fxmulder@xxxxxxxxx (James Devine)
- Uneven OSD usage
- From: j.david.lists@xxxxxxxxx (J David)
- Uneven OSD usage
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Best practice K/M-parameters EC pool
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Best practice K/M-parameters EC pool
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: andrey@xxxxxxx (Andrey Korolyov)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- MSWin CephFS
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- Ceph Filesystem - Production?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- RAID underlying a Ceph config
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- MSWin CephFS
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Best practice K/M-parameters EC pool
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Ceph Filesystem - Production?
- From: bhuffman@xxxxxxxxxxxxxxxxxxx (Brian C. Huffman)
- RAID underlying a Ceph config
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Uneven OSD usage
- From: j.david.lists@xxxxxxxxx (J David)
- [Single OSD performance on SSD] Can't go over 3, 2K IOPS
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: chibi@xxxxxxx (Christian Balzer)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- what does monitor data directory include?
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- Best practice K/M-parameters EC pool
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Unable to create swift type sub user in Rados Gateway :: Ceph Firefly 0.85
- From: karan.singh@xxxxxx (Karan Singh)
- what does monitor data directory include?
- From: fastsync@xxxxxxx (yuelongguang)
- what does monitor data directory include?
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- what does monitor data directory include?
- From: fastsync@xxxxxxx (yuelongguang)
- ceph can not repair itself after accidental power down, half of pgs are peering
- From: fastsync@xxxxxxx (yuelongguang)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Prioritize Heartbeat packets
- From: daniel.swarbrick@xxxxxxxxxxxxxxxx (Daniel Swarbrick)
- how to store radosgw operations logging data to other storage backend?
- From: zhu_qiang_ws@xxxxxxxxxxx (zhu qiang)
- Best practice K/M-parameters EC pool
- From: chibi@xxxxxxx (Christian Balzer)
- Prioritize Heartbeat packets
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Prioritize Heartbeat packets
- From: matt@xxxxxxxxxxxx (Matt W. Benjamin)
- MDS dying on Ceph 0.67.10
- From: tientienminh080590@xxxxxxxxx (MinhTien MinhTien)
- Prioritize Heartbeat packets
- From: sweil@xxxxxxxxxx (Sage Weil)
- Prioritize Heartbeat packets
- From: matt@xxxxxxxxxxxx (Matt W. Benjamin)
- Prioritize Heartbeat packets
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Prioritize Heartbeat packets
- From: sweil@xxxxxxxxxx (Sage Weil)
- Prioritize Heartbeat packets
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- do RGW have billing feature? If have, how do we use it ?
- From: yehuda@xxxxxxxxxx (Yehuda Sadeh)
- do RGW have billing feature? If have, how do we use it ?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Best practice K/M-parameters EC pool
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- 'incomplete' PGs: what does it mean?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- error ioctl(BTRFS_IOC_SNAP_CREATE) failed: (17) File exists
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- Ceph-fuse fails to mount
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Ceph-fuse fails to mount
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Ceph monitor load, low performance
- From: szablowska.patrycja@xxxxxxxxx (Patrycja Szabłowska)
- Ceph monitor load, low performance
- From: pawel.orzechowski@xxxxxxxxxxx (pawel.orzechowski at budikom.net)
- Cephfs: sporadic damages uploaded files
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- Cephfs: sporadic damages uploaded files
- From: michael.kolomiets@xxxxxxxxx (Michael Kolomiets)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Cephfs: sporadic damages uploaded files
- From: michael.kolomiets@xxxxxxxxx (Michael Kolomiets)
- Two osds are spaming dmesg every 900 seconds
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Cephfs: sporadic damages uploaded files
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- Cephfs: sporadic damages uploaded files
- From: michael.kolomiets@xxxxxxxxx (Michael Kolomiets)
- MDS dying on Ceph 0.67.10
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- 'incomplete' PGs: what does it mean?
- From: john@xxxxxxxxxxx (John Morris)
- error ioctl(BTRFS_IOC_SNAP_CREATE) failed: (17) File exists
- From: john@xxxxxxxxxxx (John Morris)
- MDS dying on Ceph 0.67.10
- From: tientienminh080590@xxxxxxxxx (MinhTien MinhTien)
- Best practice K/M-parameters EC pool
- From: chibi@xxxxxxx (Christian Balzer)
- ceph-deploy with --release (--stable) for dumpling?
- From: nigel.d.williams@xxxxxxxxx (Nigel Williams)
- Best practice K/M-parameters EC pool
- From: chibi@xxxxxxx (Christian Balzer)
- do RGW have billing feature? If have, how do we use it ?
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- Fresh Firefly install degraded without modified default tunables
- From: ripal@xxxxxxxxxxx (Ripal Nathuji)
- Ceph-fuse fails to mount
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [Ceph-community] ceph replication and striping
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- MDS dying on Ceph 0.67.10
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Ceph-fuse fails to mount
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Fresh Firefly install degraded without modified default tunables
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Two osds are spaming dmesg every 900 seconds
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- slow read speeds from kernel rbd (Firefly 0.80.4)
- From: sma310@xxxxxxxxxx (Steve Anthony)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Ceph monitor load, low performance
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Best practice K/M-parameters EC pool
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph can not repair itself after accidental power down, half of pgs are peering
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- MDS dying on Ceph 0.67.10
- From: tientienminh080590@xxxxxxxxx (MinhTien MinhTien)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ceph can not repair itself after accidental power down, half of pgs are peering
- From: fastsync@xxxxxxx (yuelongguang)
- enrich ceph test methods, what is your concern about ceph. thanks
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- enrich ceph test methods, what is your concern about ceph. thanks
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- enrich ceph test methods, what is your concern about ceph. thanks
- From: fastsync@xxxxxxx (yuelongguang)
- Ceph monitor load, low performance
- From: pawel.orzechowski@xxxxxxxxxxx (pawel.orzechowski at budikom.net)
- Ceph monitor load, low performance
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- Best practice K/M-parameters EC pool
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph monitor load, low performance
- From: mateusz.skala@xxxxxxxxxxx (Mateusz Skała)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Ceph monitor load, low performance
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- v0.84 released
- From: stijn.deweirdt@xxxxxxxx (Stijn De Weirdt)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- enrich ceph test methods, what is your concern about ceph. thanks
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- Ceph monitor load, low performance
- From: mateusz.skala@xxxxxxxxxxx (Mateusz Skała)
- enrich ceph test methods, what is your concern about ceph. thanks
- From: fastsync@xxxxxxx (yuelongguang)
- question about getting rbd.ko and ceph.ko
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- question about getting rbd.ko and ceph.ko
- From: fastsync@xxxxxxx (yuelongguang)
- ceph-deploy with --release (--stable) for dumpling?
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- ceph-deploy with --release (--stable) for dumpling?
- From: nigel.d.williams@xxxxxxxxx (Nigel Williams)
- rbd export poor performance
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph-deploy with mount option like discard, noatime
- From: dan.mick@xxxxxxxxxxx (Dan Mick)
- ceph-deploy with mount option like discard, noatime
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Best practice K/M-parameters EC pool
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- ceph-deploy with mount option like discard, noatime
- From: dan.mick@xxxxxxxxxxx (Dan Mick)
- ceph-deploy with mount option like discard, noatime
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Ceph-fuse fails to mount
- From: richardnixonshead@xxxxxxxxx (Sean Crosby)
- Ceph-fuse fails to mount
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Fresh Firefly install degraded without modified default tunables
- From: ripal@xxxxxxxxxxx (Ripal Nathuji)
- Two osds are spaming dmesg every 900 seconds
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Installing a ceph cluster from scratch
- From: jshah2005@xxxxxx (JIten Shah)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Fresh Firefly install degraded without modified default tunables
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Installing a ceph cluster from scratch
- From: stephenjahl@xxxxxxxxx (Stephen Jahl)
- Ceph monitor load, low performance
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph monitor load, low performance
- From: mateusz.skala@xxxxxxxxxxx (Mateusz Skała)
- ceph rbd image checksums
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph rbd image checksums
- From: damoxc@xxxxxxxxx (Damien Churchill)
- ceph rbd image checksums
- From: wido@xxxxxxxx (Wido den Hollander)
- rbd clones and export / export-diff functions
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Monitor/OSD report tuning question
- From: chibi@xxxxxxx (Christian Balzer)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- ceph rbd image checksums
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- One Mon log huge and this Mon down often
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Is it safe to enable rbd cache with qemu?
- From: sweil@xxxxxxxxxx (Sage Weil)
- Is it safe to enable rbd cache with qemu?
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Installing a ceph cluster from scratch
- From: jshah2005@xxxxxx (JIten Shah)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Turn of manage function in calamari
- From: yumima@xxxxxxxxx (Yuming Ma (yumima))
- pool with cache pool and rbd export
- From: sweil@xxxxxxxxxx (Sage Weil)
- pool with cache pool and rbd export
- From: sweil@xxxxxxxxxx (Sage Weil)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Non hierarchical privileges between S3 bucket and its underlying keys
- From: clement@xxxxxxxxxxxxxxx (Clement Game)
- Non hierarchical privileges between S3 bucket and its underlying keys
- From: clement@xxxxxxxxxxxxxxx (Clement Game)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang)
- Is it safe to enable rbd cache with qemu?
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Is it safe to enable rbd cache with qemu?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- One Mon log huge and this Mon down often
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang Zhang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- Ceph Cinder Capabilities reports wrong free size
- From: jens-christian.fischer@xxxxxxxxx (Jens-Christian Fischer)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- Ceph Cinder Capabilities reports wrong free size
- From: contacto@xxxxxxxxxxxxxxxxxx (Contacto)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- Question on OSD node failure recovery
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Problem setting tunables for ceph firefly
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Ceph Cinder Capabilities reports wrong free size
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- MON running 'ceph -w' doesn't see OSD's booting
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Hanging ceph client
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Ceph Cinder Capabilities reports wrong free size
- From: jens-christian.fischer@xxxxxxxxx (Jens-Christian Fischer)
- [radosgw] unable to perform any operation using s3 api
- From: onlydebian@xxxxxxxxx (debian Only)
- Problem setting tunables for ceph firefly
- From: gerd@xxxxxxxxxxxxx (Gerd Jakobovitsch)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Hanging ceph client
- From: damoxc@xxxxxxxxx (Damien Churchill)
- Question on OSD node failure recovery
- From: Sean.Noonan@xxxxxxxxxxxx (Sean Noonan)
- Question on OSD node failure recovery
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Ceph + Qemu cache=writethrough
- From: pawel.sadowski@xxxxxxxxx (Paweł Sadowski)
- ceph-users@xxxxxxxxxxxxxx
- From: ceph@xxxxxxxxx (Paweł Sadowski)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- MON running 'ceph -w' doesn't see OSD's booting
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Serious performance problems with small file writes
- From: chibi@xxxxxxx (Christian Balzer)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Serious performance problems with small file writes
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Translating a RadosGW object name into a filename on disk
- From: sweil@xxxxxxxxxx (Sage Weil)
- Translating a RadosGW object name into a filename on disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Best Practice to Copy/Move Data Across Clusters
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Best Practice to Copy/Move Data Across Clusters
- From: larryliugml@xxxxxxxxx (Larry Liu)
- mds isn't working anymore after osd's running full
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Starting Ceph OSD
- From: marcpons@xxxxxxxxxxxxxxxx (Pons)
- Problem when building&running cuttlefish from source on Ubuntu 14.04 Server
- From: notexist@xxxxxxxxx (NotExist)
- some pgs active+remapped, Ceph can not recover itself.
- From: onlydebian@xxxxxxxxx (debian Only)
- mds isn't working anymore after osd's running full
- From: jasper.siero@xxxxxxxxxxxxxxxxx (Jasper Siero)
- how radosgw recycle bucket index object and bucket meta object
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- Deadlock in ceph journal
- From: sweil@xxxxxxxxxx (Sage Weil)
- Translating a RadosGW object name into a filename on disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- how radosgw recycle bucket index object and bucket meta object
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- some pgs active+remapped, Ceph can not recover itself.
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- how radosgw recycle bucket index object and bucket meta object
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- Deadlock in ceph journal
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Deadlock in ceph journal
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- stale+incomplete pgs on new cluster
- From: rbsmith@xxxxxxxxx (Randy Smith)
- stale+incomplete pgs on new cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- stale+incomplete pgs on new cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- stale+incomplete pgs on new cluster
- From: rbsmith@xxxxxxxxx (Randy Smith)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Musings
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Musings
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Musings
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- setfattr ... does not work anymore for pools
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- RadosGW problems
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- v0.84 released
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- mds isn't working anymore after osd's running full
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Problem when building&running cuttlefish from source on Ubuntu 14.04 Server
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- help to confirm if journal includes everything a OP has
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Musings
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Translating a RadosGW object name into a filename on disk
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- rados bench no clean cleanup
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.84 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- policy cache pool
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- what are these files for mon?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.84 released
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- v0.84 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- some pgs active+remapped, Ceph can not recover itself.
- From: onlydebian@xxxxxxxxx (debian Only)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Calamari redirect
- From: mail@xxxxxxxxxxxxxxxxx (Johan Kooijman)
- Calamari redirect
- From: john.spray@xxxxxxxxxx (John Spray)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Calamari redirect
- From: mail@xxxxxxxxxxxxxxxxx (Johan Kooijman)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Fresh Firefly install degraded without modified default tunables
- From: ripal@xxxxxxxxxxx (Ripal Nathuji)
- v0.84 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- [radosgw-admin] bilog list confusion
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- setfattr ... works after 'ceph mds add_data_pool'
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- setfattr ... does not work anymore for pools
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- cephfs set_layout / setfattr ... does not work anymore for pools
- From: sweil@xxxxxxxxxx (Sage Weil)
- cephfs set_layout / setfattr ... does not work anymore for pools
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- v0.84 released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Managing OSDs on twin machines
- From: jharley@xxxxxxxxxx (Jason Harley)
- Managing OSDs on twin machines
- From: pierre@xxxxxxxx (Pierre Jaury)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: sweil@xxxxxxxxxx (Sage Weil)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph-deploy error
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)