CEPH Filesystem Users
- enrich ceph test methods, what is your concern about ceph. thanks
- From: fastsync@xxxxxxx (yuelongguang)
- question about getting rbd.ko and ceph.ko
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- question about getting rbd.ko and ceph.ko
- From: fastsync@xxxxxxx (yuelongguang)
- ceph-deploy with --release (--stable) for dumpling?
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- ceph-deploy with --release (--stable) for dumpling?
- From: nigel.d.williams@xxxxxxxxx (Nigel Williams)
- rbd export poor performance
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph-deploy with mount option like discard, noatime
- From: dan.mick@xxxxxxxxxxx (Dan Mick)
- ceph-deploy with mount option like discard, noatime
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Best practice K/M-parameters EC pool
- From: blair.bethwaite@xxxxxxxxx (Blair Bethwaite)
- ceph-deploy with mount option like discard, noatime
- From: dan.mick@xxxxxxxxxxx (Dan Mick)
- ceph-deploy with mount option like discard, noatime
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Ceph-fuse fails to mount
- From: richardnixonshead@xxxxxxxxx (Sean Crosby)
- Ceph-fuse fails to mount
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Fresh Firefly install degraded without modified default tunables
- From: ripal@xxxxxxxxxxx (Ripal Nathuji)
- Two osds are spaming dmesg every 900 seconds
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Installing a ceph cluster from scratch
- From: jshah2005@xxxxxx (JIten Shah)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Fresh Firefly install degraded without modified default tunables
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Installing a ceph cluster from scratch
- From: stephenjahl@xxxxxxxxx (Stephen Jahl)
- Ceph monitor load, low performance
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph monitor load, low performance
- From: mateusz.skala@xxxxxxxxxxx (Mateusz Skała)
- ceph rbd image checksums
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph rbd image checksums
- From: damoxc@xxxxxxxxx (Damien Churchill)
- ceph rbd image checksums
- From: wido@xxxxxxxx (Wido den Hollander)
- rbd clones and export / export-diff functions
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Monitor/OSD report tuning question
- From: chibi@xxxxxxx (Christian Balzer)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- ceph rbd image checksums
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- One Mon log huge and this Mon down often
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- osd_heartbeat_grace set to 30 but osd's still fail for grace > 20
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Monitor/OSD report tuning question
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Is it safe to enable rbd cache with qemu?
- From: sweil@xxxxxxxxxx (Sage Weil)
- Is it safe to enable rbd cache with qemu?
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Installing a ceph cluster from scratch
- From: jshah2005@xxxxxx (JIten Shah)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Turn of manage function in calamari
- From: yumima@xxxxxxxxx (Yuming Ma (yumima))
- pool with cache pool and rbd export
- From: sweil@xxxxxxxxxx (Sage Weil)
- pool with cache pool and rbd export
- From: sweil@xxxxxxxxxx (Sage Weil)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Non hierarchical privileges between S3 bucket and its underlying keys
- From: clement@xxxxxxxxxxxxxxx (Clement Game)
- Non hierarchical privileges between S3 bucket and its underlying keys
- From: clement@xxxxxxxxxxxxxxx (Clement Game)
- pool with cache pool and rbd export
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- pool with cache pool and rbd export
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang)
- Is it safe to enable rbd cache with qemu?
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Is it safe to enable rbd cache with qemu?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- One Mon log huge and this Mon down often
- From: joao.luis@xxxxxxxxxxx (Joao Eduardo Luis)
- Is it safe to enable rbd cache with qemu?
- From: yufang521247@xxxxxxxxx (Yufang Zhang)
- One Mon log huge and this Mon down often
- From: onlydebian@xxxxxxxxx (debian Only)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- Ceph Cinder Capabilities reports wrong free size
- From: jens-christian.fischer@xxxxxxxxx (Jens-Christian Fischer)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- Ceph Cinder Capabilities reports wrong free size
- From: contacto@xxxxxxxxxxxxxxxxxx (Contacto)
- How to calculate necessary disk amount
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- How to calculate necessary disk amount
- From: idezebi@xxxxxxxxx (idzzy)
- Question on OSD node failure recovery
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Problem setting tunables for ceph firefly
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Ceph Cinder Capabilities reports wrong free size
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- MON running 'ceph -w' doesn't see OSD's booting
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Hanging ceph client
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Ceph Cinder Capabilities reports wrong free size
- From: jens-christian.fischer@xxxxxxxxx (Jens-Christian Fischer)
- [radosgw] unable to perform any operation using s3 api
- From: onlydebian@xxxxxxxxx (debian Only)
- Problem setting tunables for ceph firefly
- From: gerd@xxxxxxxxxxxxx (Gerd Jakobovitsch)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- fail to upload file from RadosGW by Python+S3
- From: onlydebian@xxxxxxxxx (debian Only)
- Hanging ceph client
- From: damoxc@xxxxxxxxx (Damien Churchill)
- Question on OSD node failure recovery
- From: Sean.Noonan@xxxxxxxxxxxx (Sean Noonan)
- Question on OSD node failure recovery
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- Ceph + Qemu cache=writethrough
- From: pawel.sadowski@xxxxxxxxx (Paweł Sadowski)
- ceph-users@xxxxxxxxxxxxxx
- From: ceph@xxxxxxxxx (Paweł Sadowski)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- question about how to incrementally rebuild an image out of cluster
- From: jaychj@xxxxxxxxxx (小杰)
- MON running 'ceph -w' doesn't see OSD's booting
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Serious performance problems with small file writes
- From: chibi@xxxxxxx (Christian Balzer)
- MON running 'ceph -w' doesn't see OSD's booting
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Serious performance problems with small file writes
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Translating a RadosGW object name into a filename on disk
- From: sweil@xxxxxxxxxx (Sage Weil)
- Translating a RadosGW object name into a filename on disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Best Practice to Copy/Move Data Across Clusters
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Best Practice to Copy/Move Data Across Clusters
- From: larryliugml@xxxxxxxxx (Larry Liu)
- mds isn't working anymore after osd's running full
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Serious performance problems with small file writes
- From: h.r.mills@xxxxxxxxxxxxx (Hugo Mills)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Starting Ceph OSD
- From: marcpons@xxxxxxxxxxxxxxxx (Pons)
- Problem when building&running cuttlefish from source on Ubuntu 14.04 Server
- From: notexist@xxxxxxxxx (NotExist)
- some pgs active+remapped, Ceph can not recover itself.
- From: onlydebian@xxxxxxxxx (debian Only)
- mds isn't working anymore after osd's running full
- From: jasper.siero@xxxxxxxxxxxxxxxxx (Jasper Siero)
- how radosgw recycle bucket index object and bucket meta object
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- Deadlock in ceph journal
- From: sweil@xxxxxxxxxx (Sage Weil)
- Translating a RadosGW object name into a filename on disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- how radosgw recycle bucket index object and bucket meta object
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- some pgs active+remapped, Ceph can not recover itself.
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- how radosgw recycle bucket index object and bucket meta object
- From: baijiaruo@xxxxxxx (baijiaruo at 126.com)
- Deadlock in ceph journal
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Deadlock in ceph journal
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- stale+incomplete pgs on new cluster
- From: rbsmith@xxxxxxxxx (Randy Smith)
- stale+incomplete pgs on new cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- stale+incomplete pgs on new cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- stale+incomplete pgs on new cluster
- From: rbsmith@xxxxxxxxx (Randy Smith)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Musings
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- Musings
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Musings
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- setfattr ... does not work anymore for pools
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- RadosGW problems
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- v0.84 released
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- mds isn't working anymore after osd's running full
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Problem when building&running cuttlefish from source on Ubuntu 14.04 Server
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- help to confirm if journal includes everything a OP has
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Musings
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Translating a RadosGW object name into a filename on disk
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- rados bench no clean cleanup
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.84 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- policy cache pool
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- what are these files for mon?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.84 released
- From: James.LaBarre@xxxxxxxxx (LaBarre, James (CTR) A6IT)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- v0.84 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- some pgs active+remapped, Ceph can not recover itself.
- From: onlydebian@xxxxxxxxx (debian Only)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Calamari redirect
- From: mail@xxxxxxxxxxxxxxxxx (Johan Kooijman)
- Calamari redirect
- From: john.spray@xxxxxxxxxx (John Spray)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Calamari redirect
- From: mail@xxxxxxxxxxxxxxxxx (Johan Kooijman)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Fresh Firefly install degraded without modified default tunables
- From: ripal@xxxxxxxxxxx (Ripal Nathuji)
- v0.84 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- v0.84 released
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- [radosgw-admin] bilog list confusion
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- setfattr ... works after 'ceph mds add_data_pool'
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- setfattr ... does not work anymore for pools
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- cephfs set_layout / setfattr ... does not work anymore for pools
- From: sweil@xxxxxxxxxx (Sage Weil)
- cephfs set_layout / setfattr ... does not work anymore for pools
- From: dieter.kasper@xxxxxxxxxxxxxx (Kasper Dieter)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- v0.84 released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Managing OSDs on twin machines
- From: jharley@xxxxxxxxxx (Jason Harley)
- Managing OSDs on twin machines
- From: pierre@xxxxxxxx (Pierre Jaury)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: sweil@xxxxxxxxxx (Sage Weil)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph-deploy error
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- Ceph-Deploy Install Error
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Ceph Days are back with a vengeance!
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- mds isn't working anymore after osd's running full
- From: jasper.siero@xxxxxxxxxxxxxxxxx (Jasper Siero)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- RadosGW problems
- From: Kurt.Bachelder@xxxxxxxxxxxxxxxx (Bachelder, Kurt)
- [radosgw-admin] bilog list confusion
- From: szablowska.patrycja@xxxxxxxxx (Patrycja Szabłowska)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- pools with latest master
- From: Varada.Kari@xxxxxxxxxxx (Varada Kari)
- pools with latest master
- From: Varada.Kari@xxxxxxxxxxx (Varada Kari)
- Cache tiering and target_max_bytes
- From: ceph@xxxxxxxxx (Paweł Sadowski)
- cache pools on hypervisor servers
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- RadosGW problems
- From: linux.chips@xxxxxxxxx (Linux Chips)
- RadosGW problems
- From: Kurt.Bachelder@xxxxxxxxxxxxxxxx (Bachelder, Kurt)
- active+remapped after remove osd via ceph osd out
- From: dominikmostowiec@xxxxxxxxx (Dominik Mostowiec)
- Cache tiering and CRUSH map
- From: michael.kolomiets@xxxxxxxxx (Michael Kolomiets)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- pools with latest master
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- can osd start up if journal is lost and it has not been replayed?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- The Kraken has been released!
- From: dotalton@xxxxxxxxx (Don Talton (dotalton))
- RadosGW problems
- From: marco@xxxxxxxxx (Marco Garcês)
- Dependency issues in fresh ceph/CentOS 7 install
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Issue with OSD Snaps
- From: jacobgodin@xxxxxxxxx (Jacob Godin)
- Dependency issues in fresh ceph/CentOS 7 install
- From: brian.lovett@xxxxxxxxxxxxxx (Brian)
- Dependency issues in fresh ceph/CentOS 7 install
- From: brian.lovett@xxxxxxxxxxxxxx (Brian)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: erik@xxxxxxxxxxxxx (Erik Logtenberg)
- ceph cluster inconsistency?
- From: sweil@xxxxxxxxxx (Sage Weil)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: erik@xxxxxxxxxxxxx (Erik Logtenberg)
- Best practice K/M-parameters EC pool
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Best practice K/M-parameters EC pool
- From: wido@xxxxxxxx (Wido den Hollander)
- delete performance
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- Best practice K/M-parameters EC pool
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Best practice K/M-parameters EC pool
- From: erik@xxxxxxxxxxxxx (Erik Logtenberg)
- CRUSH map advice
- From: chibi@xxxxxxx (Christian Balzer)
- Tracking the system calls for OSD write
- From: xinxin.shu@xxxxxxxxx (Shu, Xinxin)
- How to create multiple OSD's per host?
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- How to create multiple OSD's per host?
- From: chn.kei@xxxxxxxxx (Jason King)
- How to create multiple OSD's per host?
- From: chn.kei@xxxxxxxxx (Jason King)
- How to create multiple OSD's per host?
- From: chn.kei@xxxxxxxxx (Jason King)
- help to confirm if journal includes everything a OP has
- From: fastsync@xxxxxxx (yuelongguang)
- Tracking the system calls for OSD write
- From: xinxin.shu@xxxxxxxxx (Shu, Xinxin)
- can osd start up if journal is lost and it has not been replayed?
- From: fastsync@xxxxxxx (yuelongguang)
- How to create multiple OSD's per host?
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Cache tiering and target_max_bytes
- From: sweil@xxxxxxxxxx (Sage Weil)
- Cache tiering and target_max_bytes
- From: ceph@xxxxxxxxx (Paweł Sadowski)
- ceph --status Missing keyring
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- librados: client.admin authentication error
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Musings
- From: robert@xxxxxxxxxxxxx (Robert LeBlanc)
- CRUSH map advice
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- How to create multiple OSD's per host?
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Performance really drops from 700MB/s to 10MB/s
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Translating a RadosGW object name into a filename on disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- cache pools on hypervisor servers
- From: sweil@xxxxxxxxxx (Sage Weil)
- OSD disk replacement best practise
- From: f.wiessner@xxxxxxxxxxxxxxxxxxxxx (Smart Weblications GmbH - Florian Wiessner)
- cache pools on hypervisor servers
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- Cache tiering and target_max_bytes
- From: sweil@xxxxxxxxxx (Sage Weil)
- rados bench no clean cleanup
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- Cache tiering and target_max_bytes
- From: ceph@xxxxxxxxx (Paweł Sadowski)
- Performance really drops from 700MB/s to 10MB/s
- From: mariusz.gronczewski@xxxxxxxxxxxx (Mariusz Gronczewski)
- osd pool stats
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- running Firefly client (0.80.1) against older version (dumpling 0.67.10) cluster?
- From: sweil@xxxxxxxxxx (Sage Weil)
- OSD disk replacement best practise
- From: yguang11@xxxxxxxxxxx (Guang Yang)
- Problem when building&running cuttlefish from source on Ubuntu 14.04 Server
- From: notexist@xxxxxxxxx (NotExist)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- cache pools on hypervisor servers
- From: Robert.vanLeeuwen@xxxxxxxxxxxxx (Robert van Leeuwen)
- CRUSH map advice
- From: chibi@xxxxxxx (Christian Balzer)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: chibi@xxxxxxx (Christian Balzer)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- Tracking the system calls for OSD write
- From: rajesh.sudarsan@xxxxxxxxx (Sudarsan, Rajesh)
- running Firefly client (0.80.1) against older version (dumpling 0.67.10) cluster?
- From: wido@xxxxxxxx (Wido den Hollander)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: john@xxxxxxxxxxx (John Morris)
- ceph cluster expansion
- From: chibi@xxxxxxx (Christian Balzer)
- running Firefly client (0.80.1) against older version (dumpling 0.67.10) cluster?
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- running Firefly client (0.80.1) against older version (dumpling 0.67.10) cluster?
- From: nigel.d.williams@xxxxxxxxx (Nigel Williams)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: chibi@xxxxxxx (Christian Balzer)
- Fixed all active+remapped PGs stuck forever (but I have no clue why)
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- Ceph with OpenNebula - Down OSD leads to kernel errors
- From: marcpons@xxxxxxxxxxxxxxxx (Pons)
- ceph cluster inconsistency?
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Set erasure as default
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Introductions
- From: mcluseau@xxxxxx (Mikaël Cluseau)
- can osd start up if journal is lost and it has not been replayed?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- Set erasure as default
- From: shayansaeed93@xxxxxxxxx (Shayan Saeed)
- http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm - The requested URL returned error: 404 Not Found
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm - The requested URL returned error: 404 Not Found
- From: Wilson.Ojwang@xxxxxxxxxxxxxxxxxx (Ojwang, Wilson O (Wilson))
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm - The requested URL returned error: 404 Not Found
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- cache pools on hypervisor servers
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph cluster inconsistency?
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm - The requested URL returned error: 404 Not Found
- From: Wilson.Ojwang@xxxxxxxxxxxxxxxxxx (Ojwang, Wilson O (Wilson))
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- Performance really drops from 700MB/s to 10MB/s
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- ceph cluster expansion
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- [ANN] ceph-deploy 1.5.11 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Performance really drops from 700MB/s to 10MB/s
- From: ganders@xxxxxxxxxxxx (German Anders)
- osd objectstore with ceph-deploy
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- ceph cluster expansion
- From: chibi@xxxxxxx (Christian Balzer)
- could you tell the call flow of pg state migration from log
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- howto limit snaphot rollback priority
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Can't export cephfs via nfs
- From: micha@xxxxxxxxxx (Micha Krause)
- Can't export cephfs via nfs
- From: sweil@xxxxxxxxxx (Sage Weil)
- ceph cluster expansion
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- Can't export cephfs via nfs
- From: micha@xxxxxxxxxx (Micha Krause)
- howto limit snaphot rollback priority
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- ceph cluster expansion
- From: chibi@xxxxxxx (Christian Balzer)
- could you tell the call flow of pg state migration from log
- From: fastsync@xxxxxxx (yuelongguang)
- Sometimes Monitors failed to join the cluster
- From: liucheng1@xxxxxxxxxx (Liucheng (B))
- Can't export cephfs via nfs
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- Can't export cephfs via nfs
- From: micha@xxxxxxxxxx (Micha Krause)
- ceph cluster expansion
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- cache pools on hypervisor servers
- From: Robert.vanLeeuwen@xxxxxxxxxxxxx (Robert van Leeuwen)
- Power Outage
- From: hjcho616@xxxxxxxxx (hjcho616)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: Michael.Riederer@xxxxx (Riederer, Michael)
- can osd start up if journal is lost and it has not been replayed?
- From: fastsync@xxxxxxx (yuelongguang)
- Issues with installing 2 node system
- From: Wilson.Ojwang@xxxxxxxxxxxxxxxxxx (Ojwang, Wilson O (Wilson))
- cache pools on hypervisor servers
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Power Outage
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Power Outage
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Power Outage
- From: hjcho616@xxxxxxxxx (hjcho616)
- v0.67.10 Dumpling released
- From: sweil@xxxxxxxxxx (Sage Weil)
- HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- CRUSH map advice
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Issues with installing 2 node system
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.5']' returned non-zero exit status 1
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph-Deploy Install Error
- From: scjoshk@xxxxxxxxx (joshua Kay)
- [Ceph-community] working ceph.conf file?
- From: xarses@xxxxxxxxx (Andrew Woodward)
- ceph-deploy error
- From: scjoshk@xxxxxxxxx (joshua Kay)
- Integrating ceph with cinder-backup
- From: gsushma@xxxxxxxxx (Sushma R)
- OSD Issue
- From: jacobgodin@xxxxxxxxx (Jacob Godin)
- OSD Issue
- From: jacobgodin@xxxxxxxxx (Jacob Godin)
- Issues with installing 2 node system
- From: Wilson.Ojwang@xxxxxxxxxxxxxxxxxx (Ojwang, Wilson O (Wilson))
- OSD Issue
- From: jacobgodin@xxxxxxxxx (Jacob Godin)
- Moving Journal to SSD
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Moving Journal to SSD
- From: dane.elwell@xxxxxxxxx (Dane Elwell)
- [Ceph-community] working ceph.conf file?
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Can't export cephfs via nfs
- From: micha@xxxxxxxxxx (Micha Krause)
- Can't export cephfs via nfs
- From: pierre.blondeau@xxxxxxxxxx (Pierre BLONDEAU)
- Can't export cephfs via nfs
- From: micha@xxxxxxxxxx (Micha Krause)
- ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.5']' returned non-zero exit status 1
- From: willierjyt@xxxxxxxxx (Yitao Jiang)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Fw: external monitoring tools for processes
- From: erik@xxxxxxxxxxxxx (Erik Logtenberg)
- Show IOps per VM/client to find heavy users...
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Fresh deploy of ceph 0.83 has OSD down
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Using Valgrind with Teuthology
- From: 2639431@xxxxxxxxx (Sarang G)
- mounting RBD in linux containers
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- Fw: external monitoring tools for processes
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- Fw: single node installation
- From: abhishek.lekshmanan@xxxxxxxxx (Abhishek L)
- Fw: single node installation
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- Fw: single node installation
- From: abhishek.lekshmanan@xxxxxxxxx (Abhishek L)
- Fw: single node installation
- From: lorieri@xxxxxxxxx (Lorieri)
- Fw: external monitoring tools for processes
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- Fw: single node installation
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- docker + coreos + ceph
- From: lorieri@xxxxxxxxx (Lorieri)
- Introductions
- From: zach@xxxxxxxxxxxxxx (Zach Hill)
- mounting RBD in linux containers
- From: lorieri@xxxxxxxxx (Lorieri)
- [Ceph-community] working ceph.conf file?
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Introductions
- From: mcluseau@xxxxxx (Mikaël Cluseau)
- single node installation
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- external monitoring tools for processes
- From: prag_2648@xxxxxxxxxxx (pragya jain)
- CRUSH map advice
- From: john@xxxxxxxxxxx (John Morris)
- Introductions
- From: onlydebian@xxxxxxxxx (debian Only)
- [Ceph-community] working ceph.conf file?
- From: matt@xxxxxxxxxxx (Matt Harlum)
- Can't start OSD
- From: matt@xxxxxxxxxxx (Matt Harlum)
- Introductions
- From: zach@xxxxxxxxxxxxxx (Zach Hill)
- [Ceph-community] OSD won't restart after system boot
- From: xarses@xxxxxxxxx (Andrew Woodward)
- [Ceph-community] working ceph.conf file?
- From: xarses@xxxxxxxxx (Andrew Woodward)
- PGs stuck creating
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- PGs stuck creating
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Apache on Trusty
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph runs great then falters
- From: ckitzmiller@xxxxxxxxxxxxx (Chris Kitzmiller)
- Can't start OSD
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Can't start OSD
- From: ganders@xxxxxxxxxxxx (German Anders)
- nf_conntrack overflow crashes OSDs
- From: kc@xxxxxxxxxx (Christian Kauhaus)
- Show IOps per VM/client to find heavy users...
- From: wido@xxxxxxxx (Wido den Hollander)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Show IOps per VM/client to find heavy users...
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Can't start OSD
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Show IOps per VM/client to find heavy users...
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Show IOps per VM/client to find heavy users...
- From: wido@xxxxxxxx (Wido den Hollander)
- nf_conntrack overflow crashes OSDs
- From: Robert.vanLeeuwen@xxxxxxxxxxxxx (Robert van Leeuwen)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Show IOps per VM/client to find heavy users...
- From: wido@xxxxxxxx (Wido den Hollander)
- Show IOps per VM/client to find heavy users...
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- nf_conntrack overflow crashes OSDs
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- nf_conntrack overflow crashes OSDs
- From: kc@xxxxxxxxxx (Christian Kauhaus)
- Can't start OSD
- From: karan.singh@xxxxxx (Karan Singh)
- In what scenario, cache tier agent will evict the objetcs.
- From: dongx@xxxxxxxxx (dongx at neunn.com)
- rados bench no clean cleanup
- From: zhu_qiang_ws@xxxxxxxxxxx (zhu qiang)
- Ceph can't seem to forget
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph-deploy activate actually didn't activate the OSD
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Start clients during boot
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Can't start OSD
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Ceph can't seem to forget
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- ceph-deploy activate actually didn't activate the OSD
- From: ganders@xxxxxxxxxxxx (German Anders)
- Is there a Ceph quick reference card?
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Regarding cache tier understanding
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- Openstack Havana root fs resize don't work
- From: Hauke-Bruno.Wollentin@xxxxxxxxxxxxxxx (Hauke Bruno Wollentin)
- Regarding cache tier understanding
- From: sweil@xxxxxxxxxx (Sage Weil)
- What is difference in storing data between rbd and rados ?
- From: sweil@xxxxxxxxxx (Sage Weil)
- Ceph writes stall for long perioids with no disk/network activity
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph writes stall for long perioids with no disk/network activity
- From: mariusz.gronczewski@xxxxxxxxxxxx (Mariusz Gronczewski)
- ceph rbd volume can't remove because image still has watchers
- From: karan.singh@xxxxxx (Karan Singh)
- slow OSD brings down the cluster
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- Regarding cache tier understanding
- From: Somnath.Roy@xxxxxxxxxxx (Somnath Roy)
- What is difference in storing data between rbd and rados ?
- From: onlydebian@xxxxxxxxx (debian Only)
- ceph rbd volume can't remove because image still has watchers
- From: yangwanyuan8861@xxxxxxxxx (杨万元)
- Dependency issues in fresh ceph/CentOS 7 install
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Fresh deploy of ceph 0.83 has OSD down
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Ceph can't seem to forget.
- From: lookcrabs@xxxxxxxxx (Sean Sullivan)
- Ceph can't seem to forget
- From: lookcrabs@xxxxxxxxx (Sean Sullivan)
- Openstack Havana root fs resize don't work
- From: jeremy.hanmer@xxxxxxxxxxxxx (Jeremy Hanmer)
- 10th Anniversary T-Shirts for Contributors
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- ceph-deploy disk activate error msg
- From: ganders@xxxxxxxxxxxx (German Anders)
- ceph-deploy disk activate error msg
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- slow OSD brings down the cluster
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- ceph-deploy disk activate error msg
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- librbd tuning?
- From: tbayly@xxxxxxxxxxxx (Tregaron Bayly)
- ceph-deploy disk activate error msg
- From: ganders@xxxxxxxxxxxx (German Anders)
- librbd tuning?
- From: chibi@xxxxxxx (Christian Balzer)
- librbd tuning?
- From: sweil@xxxxxxxxxx (Sage Weil)
- librbd tuning?
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- librados: client.admin authentication error
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Ceph writes stall for long perioids with no disk/network activity
- From: chibi@xxxxxxx (Christian Balzer)
- slow OSD brings down the cluster
- From: sweil@xxxxxxxxxx (Sage Weil)
- [Ceph-community] Remote replication
- From: sweil@xxxxxxxxxx (Sage Weil)
- ceph --status Missing keyring
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Is possible to use Ramdisk for Ceph journal ?
- From: daniel.swarbrick@xxxxxxxxxxxxxxxx (Daniel Swarbrick)
- Ceph writes stall for long perioids with no disk/network activity
- From: ckitzmiller@xxxxxxxxxxxxx (Chris Kitzmiller)
- Install Ceph nodes without network proxy access
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Is possible to use Ramdisk for Ceph journal ?
- From: onlydebian@xxxxxxxxx (debian Only)
- What is difference in storing data between rbd and rados ?
- From: onlydebian@xxxxxxxxx (debian Only)
- Problems during first install
- From: dennisml@xxxxxxxxxxxx (Dennis Jacobfeuerborn)
- slow OSD brings down the cluster
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- slow OSD brings down the cluster
- From: wido@xxxxxxxx (Wido den Hollander)
- rados bench no clean cleanup
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- slow OSD brings down the cluster
- From: luis.periquito@xxxxxxxxx (Luis Periquito)
- Openstack Havana root fs resize don't work
- From: Hauke-Bruno.Wollentin@xxxxxxxxxxxxxxx (Hauke Bruno Wollentin)
- Problems during first install
- From: chibi@xxxxxxx (Christian Balzer)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Using Crucial MX100 for journals or cache pool
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Using Crucial MX100 for journals or cache pool
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Using Crucial MX100 for journals or cache pool
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- [Ceph-community] Remote replication
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- librbd tuning?
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Openstack Havana root fs resize don't work
- From: jeremy.hanmer@xxxxxxxxxxxxx (Jeremy Hanmer)
- Openstack Havana root fs resize don't work
- From: dinuvlad13@xxxxxxxxx (Dinu Vlad)
- what are these files for mon?
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Install Ceph nodes without network proxy access
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- OSD daemon code in /var/lib/ceph/osd/ceph-2/ "dissapears" after creating pool/rbd -
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Install Ceph nodes without network proxy access
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Install Ceph nodes without network proxy access
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Install Ceph nodes without network proxy access
- From: Daniel.OReilly@xxxxxxxx (O'Reilly, Dan)
- Ceph writes stall for long perioids with no disk/network activity
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Vote for Ceph Talks at OpenStack Paris
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- Erroneous stats output (ceph df) after increasing PG number
- From: sweil@xxxxxxxxxx (Sage Weil)
- Erroneous stats output (ceph df) after increasing PG number
- From: sweil@xxxxxxxxxx (Sage Weil)
- Concurrent database with or on top of librados
- From: wido@xxxxxxxx (Wido den Hollander)
- Erroneous stats output (ceph df) after increasing PG number
- From: kostikas@xxxxxxxx (Konstantinos Tompoulidis)
- v0.83 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- osd disk location - comment field
- From: sweil@xxxxxxxxxx (Sage Weil)
- v0.83 released
- From: sweil@xxxxxxxxxx (Sage Weil)
- Ceph writes stall for long perioids with no disk/network activity
- From: mariusz.gronczewski@xxxxxxxxxxxx (Mariusz Gronczewski)
- Concurrent database with or on top of librados
- From: gergely.horvath@xxxxxxxxxx (Gergely Horváth)
- librbd tuning?
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- osd disk location - comment field
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- librbd tuning?
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Problems during first install
- From: pratik.rupala@xxxxxxxxxxxxxx (Pratik Rupala)
- Placement groups forever in "creating" state and dont map to OSD
- From: ksharma@xxxxxxxx (Kapil Sharma)
- Openstack Havana root fs resize don't work
- From: Hauke-Bruno.Wollentin@xxxxxxxxxxxxxxx (Hauke Bruno Wollentin)
- librbd tuning?
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- v0.83 released
- From: onlydebian@xxxxxxxxx (debian Only)
- Ceph runs great then falters
- From: chibi@xxxxxxx (Christian Balzer)
- Some questions of radosgw
- From: agedosier@xxxxxxxxx (Osier Yang)
- OSD daemon code in /var/lib/ceph/osd/ceph-2/ "dissapears" after creating pool/rbd -
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Some questions of radosgw
- From: yehuda@xxxxxxxxxx (Yehuda Sadeh)
- build ceph from tar.gz proc
- From: ganders@xxxxxxxxxxxx (German Anders)
- Ceph writes stall for long perioids with no disk/network activity
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Ceph writes stall for long perioids with nodisk/network activity
- From: ganders@xxxxxxxxxxxx (German Anders)
- Ceph writes stall for long perioids with no disk/network activity
- From: ckitzmiller@xxxxxxxxxxxxx (Chris Kitzmiller)
- Erroneous stats output (ceph df) after increasing PG number
- From: kostikas@xxxxxxxx (Konstantinos Tompoulidis)
- Firefly OSDs stuck in creating state forever
- From: sweil@xxxxxxxxxx (Sage Weil)
- [no subject]
- Ceph runs great then falters
- From: ckitzmiller@xxxxxxxxxxxxx (Chris Kitzmiller)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- [Ceph-community] Remote replication
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- Firefly OSDs stuck in creating state forever
- From: sweil@xxxxxxxxxx (Sage Weil)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- librbd tuning?
- From: tbayly@xxxxxxxxxxxx (Tregaron Bayly)
- Problems during first install
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Erronous stats output (ceph df) after increasing PG number
- From: sweil@xxxxxxxxxx (Sage Weil)
- cache questions
- From: sweil@xxxxxxxxxx (Sage Weil)
- Using Valgrind with Teuthology
- From: sweil@xxxxxxxxxx (Sage Weil)
- Erronous stats output (ceph df) after increasing PG number
- From: kostikas@xxxxxxxx (Konstantinos Tompoulidis)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Problems during first install
- From: pratik.rupala@xxxxxxxxxxxxxx (Pratik Rupala)
- Problems during first install
- From: cabrillo@xxxxxxxxxxxxxx (Iban Cabrillo)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- Placement groups forever in "creating" state and dont map to OSD
- From: Yogesh_Devi@xxxxxxxx (Yogesh_Devi at Dell.com)
- Placement groups forever in "creating" state and dont map to OSD
- From: ksharma@xxxxxxxx (Kapil Sharma)
- Placement groups forever in "creating" state and dont map to OSD
- From: Yogesh_Devi@xxxxxxxx (Yogesh_Devi at Dell.com)
- Placement groups forever in "creating" state and dont map to OSD
- From: ksharma@xxxxxxxx (Kapil Sharma)
- cache questions
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Placement groups forever in "creating" state and dont map to OSD
- From: Yogesh_Devi@xxxxxxxx (Yogesh_Devi at Dell.com)
- cache pools on hypervisor servers
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Using Valgrind with Teuthology
- From: 2639431@xxxxxxxxx (Sarang G)
- Placement groups forever in "creating" state and dont map to OSD
- From: matt@xxxxxxxxxxx (Matt Harlum)
- Placement groups forever in "creating" state and dont map to OSD
- From: Yogesh_Devi@xxxxxxxx (Yogesh_Devi at Dell.com)
- GPF kernel panics
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- blocked requests question
- From: chibi@xxxxxxx (Christian Balzer)
- what is collection(COLL) and cid
- From: fastsync@xxxxxxx (yuelongguang)
- blocked requests question
- From: duron800@xxxxxx (飞)
- Is possible to use Ramdisk for Ceph journal ?
- From: onlydebian@xxxxxxxxx (debian Only)
- Firefly OSDs stuck in creating state forever
- From: sweil@xxxxxxxxxx (Sage Weil)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- Firefly OSDs stuck in creating state forever
- From: sweil@xxxxxxxxxx (Sage Weil)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: cjo@xxxxxxxxxxxxxx (Christopher O'Connell)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: cjo@xxxxxxxxxxxxxx (Christopher O'Connell)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Ceph and my use case - is it a fit?
- From: dev.matan@xxxxxxxxx (Matan Safriel)
- Ceph runs great then falters
- From: chibi@xxxxxxx (Christian Balzer)
- Firefly OSDs stuck in creating state forever
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: cjo@xxxxxxxxxxxxxx (Christopher O'Connell)
- Firefly OSDs stuck in creating state forever
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: cjo@xxxxxxxxxxxxxx (Christopher O'Connell)
- Firefly OSDs stuck in creating state forever
- From: Bruce.McFarland@xxxxxxxxxxxxxxxx (Bruce McFarland)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Free LinuxCon/CloudOpen Pass
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- Ceph runs great then falters
- From: ckitzmiller@xxxxxxxxxxxxx (Chris Kitzmiller)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: sweil@xxxxxxxxxx (Sage Weil)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Ceph writes stall for long perioids with no disk/network activity
- From: mariusz.gronczewski@xxxxxxxxxxxx (Mariusz Gronczewski)
- Some questions of radosgw
- From: agedosier@xxxxxxxxx (Osier Yang)
- Some questions of radosgw
- From: agedosier@xxxxxxxxx (Osier Yang)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- Instrumenting RADOS with Zipkin + LTTng
- From: marioskogias@xxxxxxxxx (Marios-Evaggelos Kogias)
- cache pool osds crashing when data is evicting to underlying storage pool
- From: sweil@xxxxxxxxxx (Sage Weil)
- Placement groups forever in "creating" state and dont map to OSD
- From: Yogesh_Devi@xxxxxxxx (Yogesh_Devi at Dell.com)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- [ANN] ceph-deploy 1.5.10 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: cjo@xxxxxxxxxxxxxx (Christopher O'Connell)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ganders@xxxxxxxxxxxx (German Anders)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Using Ramdisk wi
- From: onlydebian@xxxxxxxxx (debian Only)
- Using Crucial MX100 for journals or cache pool
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Persistent Error on osd activation
- From: onlydebian@xxxxxxxxx (debian Only)
- Using Crucial MX100 for journals or cache pool
- From: chibi@xxxxxxx (Christian Balzer)
- cache pool osds crashing when data is evicting to underlying storage pool
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Using Crucial MX100 for journals or cache pool
- From: david@xxxxxxxxxx (David)
- Using Crucial MX100 for journals or cache pool
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Fwd: kworker makes fio testing rbd show 0 iops
- From: cofol1986@xxxxxxxxx (Tim Zhang)
- kworker makes fio testing rbd show 0 iops
- From: cofol1986@xxxxxxxxx (Tim Zhang)
- Fwd: kworker makes fio testing rbd show 0 iops
- From: cofol1986@xxxxxxxxx (Tim Zhang)
- GPF kernel panics
- From: bhubbard@xxxxxxxxxx (Brad Hubbard)
- GPF kernel panics
- From: sweil@xxxxxxxxxx (Sage Weil)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ganders@xxxxxxxxxxxx (German Anders)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- 0.80.5-1precise Not Able to Map RBD & CephFS
- From: larryliugml@xxxxxxxxx (Larry Liu)
- New Ceph mirror on the east coast
- From: dmsimard@xxxxxxxx (David Moreau Simard)
- ceph journal - integrity and performance questions
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- question about ApplyManager, SubmitManager and FileJournal
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- ceph journal - integrity and performance questions
- From: xtnega@xxxxxxxxx (David Graham)
- Is possible to use Ramdisk for Ceph journal ?
- From: onlydebian@xxxxxxxxx (debian Only)
- question about ApplyManager, SubmitManager and FileJournal
- From: fastsync@xxxxxxx (yuelongguang)
- Ceph and my use case - is it a fit?
- From: john.spray@xxxxxxxxxx (John Spray)
- cache pool osds crashing when data is evicting to underlying storage pool
- From: sweil@xxxxxxxxxx (Sage Weil)
- cache pool osds crashing when data is evicting to underlying storage pool
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- Problems during first install
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Problems during first install
- From: piiv@xxxxxxx (Vincenzo Pii)
- GPF kernel panics
- From: eric0e@xxxxxxx (Eric Eastman)
- Problems during first install
- From: tijn@xxxxxxxx (Tijn Buijs)
- OSDs for 2 different pools on a single host
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- how ceph store xattr
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- OSDs for 2 different pools on a single host
- From: cd@xxxxxxxxxxxxxxxx (Christian Doering)
- Swift not creating container rados gateway
- From: yamashita@xxxxxxxxxx (山下 良民)
- GPF kernel panics
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- [ceph-users] Throttle pool pg_num/pgp_num increase impact
- From: kostikas@xxxxxxxxx (Konstantinos Tompoulidis)
- kworker makes fio testing rbd show 0 iops
- From: cofol1986@xxxxxxxxx (Tim Zhang)
- Swift not creating container rados gateway
- From: mail.ashishchandra@xxxxxxxxx (Ashish Chandra)
- Swift not creating container rados gateway
- From: mail.ashishchandra@xxxxxxxxx (Ashish Chandra)
- GPF kernel panics
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- GPF kernel panics
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- GPF kernel panics
- From: chibi@xxxxxxx (Christian Balzer)
- GPF kernel panics
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- GPF kernel panics
- From: chibi@xxxxxxx (Christian Balzer)
- GPF kernel panics
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- Swift not creating container rados gateway
- From: yamashita@xxxxxxxxxx (山下 良民)
- GPF kernel panics
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- GPF kernel panics
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- Paddles Setup
- From: ksharma@xxxxxxxx (Kapil Sharma)
- how ceph store xattr
- From: fastsync@xxxxxxx (yuelongguang)
- Adding OSDs without ceph-deploy
- From: mail.ashishchandra@xxxxxxxxx (Ashish Chandra)
- Paddles Setup
- From: 2639431@xxxxxxxxx (Sarang G)
- flashcache from fb and dm-cache??
- From: blacker1981@xxxxxxx (lijian)
- Using Ramdisk wi
- From: chibi@xxxxxxx (Christian Balzer)
- Need help with Ceph Firefly install
- From: almightybeeij@xxxxxxxxx (Barclay Jameson)
- Paddles Setup
- From: ksharma@xxxxxxxx (Kapil Sharma)
- Calamari Goes Open Source
- From: john.spray@xxxxxxxxxx (John Spray)
- Adding OSDs without ceph-deploy
- From: lists@xxxxxxxxxxxx (John Nielsen)
- Radosgw bucket index (bilog) and multi part upload - strange behaviour
- From: yehuda@xxxxxxxxxx (Yehuda Sadeh)
- Using Ramdisk wi
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Ceph and my use case - is it a fit?
- From: dev.matan@xxxxxxxxx (Matan Safriel)
- Radosgw bucket index (bilog) and multi part upload - strange behaviour
- From: szablowska.patrycja@xxxxxxxxx (Patrycja Szabłowska)
- Calamari Goes Open Source
- From: larryliugml@xxxxxxxxx (Larry Liu)
- Using Ramdisk wi
- From: chibi@xxxxxxx (Christian Balzer)
- anti-cephalopod question
- From: chibi@xxxxxxx (Christian Balzer)
- Using Ramdisk wi
- From: ganders@xxxxxxxxxxxx (German Anders)
- flashcache from fb and dm-cache??
- From: konrad.gutkowski@xxxxxx (Konrad Gutkowski)
- flashcache from fb and dm-cache??
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- Calamari Goes Open Source
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- Using Ramdisk wi
- From: chibi@xxxxxxx (Christian Balzer)
- Calamari Goes Open Source
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- flashcache from fb and dm-cache??
- From: ganders@xxxxxxxxxxxx (German Anders)
- Calamari Goes Open Source
- From: ganders@xxxxxxxxxxxx (German Anders)
- Calamari Goes Open Source
- From: larryliugml@xxxxxxxxx (Larry Liu)
- Adding OSDs without ceph-deploy
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Using Ramdisk wi
- From: ganders@xxxxxxxxxxxx (German Anders)
- Concurrent database with or on top of librados
- From: wido@xxxxxxxx (Wido den Hollander)
- Using Ramdisk wi
- From: wido@xxxxxxxx (Wido den Hollander)
- Using Ramdisk wi
- From: ganders@xxxxxxxxxxxx (German Anders)
- Concurrent database with or on top of librados
- From: gergely.horvath@xxxxxxxxxx (Gergely Horváth)
- Adding OSDs without ceph-deploy
- From: alex@xxxxxxxxxxx (Alex Bligh)
- anti-cephalopod question
- From: robertfantini@xxxxxxxxx (Robert Fantini)
- Not able to upload object using Horizon(Openstack Dashboard) to Ceph
- From: mail.ashishchandra@xxxxxxxxx (Ashish Chandra)
- anti-cephalopod question
- From: chibi@xxxxxxx (Christian Balzer)
- v0.83 released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Force CRUSH to select specific osd as primary
- From: szzacher@xxxxxxxxx (Szymon Zacher)
- v0.80.5 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Force CRUSH to select specific osd as primary
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Dependency issues in fresh ceph/CentOS 7 install
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- [SOLVED] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)
- From: olivier.delhomme@xxxxxxxxxxxxxxxxxx (Olivier DELHOMME)
- Force CRUSH to select specific osd as primary
- From: szzacher@xxxxxxxxx (Szymon Zacher)
- [SOLVED] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)
- From: olivier.delhomme@xxxxxxxxxxxxxxxxxx (Olivier DELHOMME)
- Dependency issues in fresh ceph/CentOS 7 install
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Dependency issues in fresh ceph/CentOS 7 install
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- anti-cephalopod question
- From: robertfantini@xxxxxxxxx (Robert Fantini)
- firefly osds stuck in state booting
- From: t10tennn@xxxxxxxxx (10 minus)
- Optimal OSD Configuration for 45 drives?
- From: chibi@xxxxxxx (Christian Balzer)
- anti-cephalopod question
- From: chibi@xxxxxxx (Christian Balzer)
- anti-cephalopod question
- From: robertfantini@xxxxxxxxx (Robert Fantini)
- Deployment scenario with 2 hosts
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- anti-cephalopod question
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Desktop Ceph Cluster up for grabs!
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- Pool size 2 min_size 1 Advisability?
- From: erhvks@xxxxxxx (Edward Huyer)
- fs as btrfs and ceph journal
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Pool size 2 min_size 1 Advisability?
- From: greg@xxxxxxxxxxx (Gregory Farnum)