CEPH Filesystem Users
- Re: ceph inconsistent pg missing ec object
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- RBD on ec pool with compression.
- From: Cassiano Pilipavicius <cassiano@xxxxxxxxxxx>
- Re: Slow requests
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Slow requests
- From: J David <j.david.lists@xxxxxxxxx>
- Re: Two CEPHFS Issues
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Two CEPHFS Issues
- From: Daniel Pryor <dpryor@xxxxxxxxxxxxx>
- Ceph Upstream @The Pub in Prague
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: Slow requests
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Not able to start OSD
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: ceph inconsistent pg missing ec object
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph iSCSI login failed due to authorization failure
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Ceph iSCSI login failed due to authorization failure
- From: Tyler Bishop <tyler.bishop@xxxxxxxxxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Christian Balzer <chibi@xxxxxxx>
- Re: [filestore][journal][prepare_entry] rebuild data_align is 4086, maybe a bug
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: Backup VM (Base image + snapshot)
- From: Oscar Segarra <oscar.segarra@xxxxxxxxx>
- Re: Erasure code failure
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph delete files and status
- From: nigel davies <nigdav007@xxxxxxxxx>
- Re: Ceph delete files and status
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Erasure code failure
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Erasure code failure
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph delete files and status
- From: nigel davies <nigdav007@xxxxxxxxx>
- Re: Erasure code failure
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Re: Erasure code failure
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Erasure code failure
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: RBD-image permissions
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Not able to start OSD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- RBD-image permissions
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Bluestore compression and existing CephFS filesystem
- From: Michael Sudnick <michael.sudnick@xxxxxxxxx>
- Re: Not able to start OSD
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Not able to start OSD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Luminous can't seem to provision more than 32 OSDs per server
- From: Sean Sullivan <lookcrabs@xxxxxxxxx>
- Re: auth error with ceph-deploy on jewel to luminous upgrade
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Slow requests
- From: J David <j.david.lists@xxxxxxxxx>
- Re: Ceph delete files and status
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: Ceph delete files and status
- From: Jamie Fargen <jfargen@xxxxxxxxxx>
- Ceph delete files and status
- From: nigel davies <nigdav007@xxxxxxxxx>
- Re: Erasure code settings
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: how does recovery work
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- Re: Erasure code settings
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Is it possible to recover from block.db failure?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Erasure code settings
- From: Denes Dolhay <denke@xxxxxxxxxxxx>
- Re: Is it possible to recover from block.db failure?
- From: Caspar Smit <casparsmit@xxxxxxxxxxx>
- Erasure code settings
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Is it possible to recover from block.db failure?
- From: David Turner <drakonstein@xxxxxxxxx>
- Is it possible to recover from block.db failure?
- From: Caspar Smit <casparsmit@xxxxxxxxxxx>
- PG's stuck unclean active+remapped
- From: Roel de Rooy <RdeRooy@xxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Ana Aviles <ana@xxxxxxxxxxxx>
- how does recovery work
- From: Dennis Benndorf <dennis.benndorf@xxxxxxxxxxxxxx>
- Re: Slow requests
- From: Ольга Ухина <olga.uhina@xxxxxxxxx>
- Re: ceph inconsistent pg missing ec object
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- Re: Slow requests
- From: Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- [filestore][journal][prepare_entry] rebuild data_align is 4086, maybe a bug
- From: zhaomingyue <zhao.mingyue@xxxxxxx>
- Re: Luminous can't seem to provision more than 32 OSDs per server
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Luminous can't seem to provision more than 32 OSDs per server
- From: Sean Sullivan <lookcrabs@xxxxxxxxx>
- Re: Thick provisioning
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: [Jewel] Crash Osd with void Hit_set_trim
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: ceph inconsistent pg missing ec object
- From: Denes Dolhay <denke@xxxxxxxxxxxx>
- Re: ceph inconsistent pg missing ec object
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Thick provisioning
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: cephfs ceph-fuse performance
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: auth error with ceph-deploy on jewel to luminous upgrade
- From: Gary Molenkamp <molenkam@xxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Mart van Santen <mart@xxxxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- auth error with ceph-deploy on jewel to luminous upgrade
- From: Gary molenkamp <molenkam@xxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- [Jewel] Crash Osd with void Hit_set_trim
- From: "pascal.pucci@xxxxxxxxxxxxxxx" <pascal.pucci@xxxxxxxxxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Sage Weil <sage@xxxxxxxxxxxx>
- ceph inconsistent pg missing ec object
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Wido den Hollander <wido@xxxxxxxx>
- Slow requests
- From: Ольга Ухина <olga.uhina@xxxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: Help with full osd and RGW not responsive
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Ana Aviles <ana@xxxxxxxxxxxx>
- Re: Thick provisioning
- From: Samuel Soulard <samuel.soulard@xxxxxxxxx>
- Re: High mem with Luminous/Bluestore
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: OSD are marked as down after jewel -> luminous upgrade
- From: Daniel Carrasco <d.carrasco@xxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Joao Eduardo Luis <joao@xxxxxxx>
- High mem with Luminous/Bluestore
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Joao Eduardo Luis <joao@xxxxxxx>
- cephfs ceph-fuse performance
- From: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: "Marco Baldini - H.S. Amiata" <mbaldini@xxxxxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Thick provisioning
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Jamie Fargen <jfargen@xxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Mart van Santen <mart@xxxxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Re: Luminous : 3 clients failing to respond to cache pressure
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Efficient storage of small objects / bulk erasure coding
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: How to increase the size of requests written to a ceph image
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- How to increase the size of requests written to a ceph image
- From: Russell Glaue <rglaue@xxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Jamie Fargen <jfargen@xxxxxxxxxx>
- Re: Help with full osd and RGW not responsive
- From: Bryan Banister <bbanister@xxxxxxxxxxxxxxx>
- Re: Help with full osd and RGW not responsive
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Re: How to stop using (unmount) a failed OSD with BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: To check RBD cache enabled
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- To check RBD cache enabled
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Efficient storage of small objects / bulk erasure coding
- From: Jiri Horky <jiri.horky@xxxxxxxxx>
- Re: Thick provisioning
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Help with full osd and RGW not responsive
- From: Bryan Banister <bbanister@xxxxxxxxxxxxxxx>
- Re: OSD are marked as down after jewel -> luminous upgrade
- From: Daniel Carrasco <d.carrasco@xxxxxxxxx>
- Re: OSD crashed while reparing inconsistent PG luminous
- From: Cassiano Pilipavicius <cassiano@xxxxxxxxxxx>
- OSD crashed while reparing inconsistent PG luminous
- From: Ana Aviles <ana@xxxxxxxxxxxx>
- Re: OSD are marked as down after jewel -> luminous upgrade
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- OSD are marked as down after jewel -> luminous upgrade
- From: Daniel Carrasco <d.carrasco@xxxxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Luminous : 3 clients failing to respond to cache pressure
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Luminous : 3 clients failing to respond to cache pressure
- From: Wido den Hollander <wido@xxxxxxxx>
- Luminous : 3 clients failing to respond to cache pressure
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Unstable clock
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Unstable clock
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Unstable clock
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: cephfs: some metadata operations take seconds to complete
- From: Tyanko Aleksiev <tyanko.alexiev@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Retrieve progress of volume flattening using RBD python library
- From: Xavier Trilla <xavier.trilla@xxxxxxxxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: Rbd resize, refresh rescan
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: "Marco Baldini - H.S. Amiata" <mbaldini@xxxxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: How to get current min-compat-client setting
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Dejan Lesjak <dejan.lesjak@xxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Dejan Lesjak <dejan.lesjak@xxxxxx>
- Re: 答复: assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Anthony Verevkin <anthony@xxxxxxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: cephfs: some metadata operations take seconds to complete
- From: Linh Vu <vul@xxxxxxxxxxxxxx>
- Re: rados export/import fail
- From: Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx>
- Re: osd max scrubs not honored?
- From: J David <j.david.lists@xxxxxxxxx>
- Re: rados export/import fail
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: How to get current min-compat-client setting
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- Re: rados export/import fail
- From: Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: Anthony Verevkin <anthony@xxxxxxxxxxx>
- cephfs: some metadata operations take seconds to complete
- From: Tyanko Aleksiev <tyanko.alexiev@xxxxxxxxx>
- How to stop using (unmount) a failed OSD with BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- Re: Osd FAILED assert(p.same_interval_since)
- From: Dejan Lesjak <dejan.lesjak@xxxxxx>
- Re: Ceph not recovering after osd/host failure
- From: Anthony Verevkin <anthony@xxxxxxxxxxx>
- Thick provisioning
- Re: rados export/import fail
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Bluestore OSD_DATA, WAL & DB
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: rados export/import fail
- From: Wido den Hollander <wido@xxxxxxxx>
- Osd FAILED assert(p.same_interval_since)
- From: Dejan Lesjak <dejan.lesjak@xxxxxx>
- [ocata] [cinder] cinder-volume causes high cpu load
- From: Eugen Block <eblock@xxxxxx>
- Re: rados export/import fail
- From: Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx>
- Re: rados export/import fail
- From: John Spray <jspray@xxxxxxxxxx>
- rados export/import fail
- From: Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx>
- Re: How to get current min-compat-client setting
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Bluestore "separate" WAL and DB
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Backup VM (Base image + snapshot)
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- Re: Ceph not recovering after osd/host failure
- From: Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx>
- Re: Ceph not recovering after osd/host failure
- From: Matteo Dacrema <mdacrema@xxxxxxxx>
- Ceph not recovering after osd/host failure
- From: Matteo Dacrema <mdacrema@xxxxxxxx>
- Re: list admin issues
- From: Christian Balzer <chibi@xxxxxxx>
- Re: list admin issues
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: Creating a custom cluster name using ceph-deploy
- From: Bogdan SOLGA <bogdan.solga@xxxxxxxxx>
- Re: list admin issues
- From: Christian Balzer <chibi@xxxxxxx>
- Re: list admin issues
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: list admin issues
- From: Christian Balzer <chibi@xxxxxxx>
- list admin issues
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: Backup VM (Base image + snapshot)
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: Creating a custom cluster name using ceph-deploy
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: Creating a custom cluster name using ceph-deploy
- From: Erik McCormick <emccormick@xxxxxxxxxxxxxxx>
- Creating a custom cluster name using ceph-deploy
- From: Bogdan SOLGA <bogdan.solga@xxxxxxxxx>
- Re: Backup VM (Base image + snapshot)
- From: Oscar Segarra <oscar.segarra@xxxxxxxxx>
- Re: Backup VM (Base image + snapshot)
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: osd max scrubs not honored?
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: osd max scrubs not honored?
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: osd max scrubs not honored?
- From: J David <j.david.lists@xxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: Denes Dolhay <denke@xxxxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: Ceph iSCSI login failed due to authorization failure
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: Ceph iSCSI login failed due to authorization failure
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Backup VM (Base image + snapshot)
- From: Oscar Segarra <oscar.segarra@xxxxxxxxx>
- Ceph iSCSI login failed due to authorization failure
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- 答复: assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: zhaomingyue <zhao.mingyue@xxxxxxx>
- Re: osd max scrubs not honored?
- From: David Turner <drakonstein@xxxxxxxxx>
- [JEWEL] OSD Crash - Tier Cache
- From: "pascal.pucci@xxxxxxxxxxxxxxx" <pascal.pucci@xxxxxxxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: using Bcache on blueStore
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Questions about bluestore
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Questions about bluestore
- From: Mario Giammarco <mgiammarco@xxxxxxxxx>
- Re: How dead is my ec pool?
- From: Brady Deetz <bdeetz@xxxxxxxxx>
- How dead is my ec pool?
- From: Brady Deetz <bdeetz@xxxxxxxxx>
- Re: osd max scrubs not honored?
- From: J David <j.david.lists@xxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: using Bcache on blueStore
- From: Kjetil Joergensen <kjetil@xxxxxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Mehmet <ceph@xxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: "Gerhard W. Recher" <gerhard.recher@xxxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: objects degraded higher than 100%
- From: David Zafman <dzafman@xxxxxxxxxx>
- Re: Brand new cluster -- pg is stuck inactive
- From: Michael Kuriger <mk7193@xxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Florian Haas <florian@xxxxxxxxxxx>
- Brand new cluster -- pg is stuck inactive
- From: dE <de.techno@xxxxxxxxx>
- Re: assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: How to get current min-compat-client setting
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: windows server 2016 refs3.1 veeam syntetic backup with fast block clone
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: CephFS metadata pool to SSDs
- From: Reed Dier <reed.dier@xxxxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: CephFs kernel client metadata caching
- From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: CephFs kernel client metadata caching
- From: Denes Dolhay <denke@xxxxxxxxxxxx>
- Re: CephFs kernel client metadata caching
- From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- CephFs kernel client metadata caching
- From: Denes Dolhay <denke@xxxxxxxxxxxx>
- Re: assertion error trying to start mds server
- From: John Spray <jspray@xxxxxxxxxx>
- How to get current min-compat-client setting
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert
- From: zhaomingyue <zhao.mingyue@xxxxxxx>
- Re: cephx
- From: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
- windows server 2016 refs3.1 veeam syntetic backup with fast block clone
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: cephx
- From: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
- Re: cephx
- From: Shinobu Kinjo <skinjo@xxxxxxxxxx>
- cephx
- From: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
- Re: assertion error trying to start mds server
- From: Bill Sharer <bsharer@xxxxxxxxxxxxxx>
- Re: using Bcache on blueStore
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: using Bcache on blueStore
- From: Marek Grzybowski <marek.grzybowski@xxxxxxxxx>
- Re: Flattening loses sparseness
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: CephFS metadata pool to SSDs
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: CephFS metadata pool to SSDs
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Erasure coding with RBD
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Flattening loses sparseness
- From: "Massey, Kevin" <kmassey@xxxxxxxxxx>
- Re: Erasure coding with RBD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Erasure coding with RBD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- CephFS metadata pool to SSDs
- From: Reed Dier <reed.dier@xxxxxxxxxxx>
- Re: Erasure coding with RBD
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Erasure coding with RBD
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Erasure coding with RBD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- using Bcache on blueStore
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Erasure coding with RBD
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: BlueStore Cache Ratios
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Erasure coding with RBD
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Erasure coding with RBD
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: min_size & hybrid OSD latency
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: MGR Dahhsboard hostname missing
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- MGR Dahhsboard hostname missing
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Re : general protection fault: 0000 [#1] SMP
- From: Luis Henriques <lhenriques@xxxxxxxx>
- Re: Ceph-ISCSI
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Cephalocon 2018?
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- FOSDEM Call for Participation: Software Defined Storage devroom
- From: Jan Fajerski <jfajerski@xxxxxxxx>
- Re: general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: objects degraded higher than 100%
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: ceph auth doesn't work on cephfs?
- From: Frank Yu <flyxiaoyu@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: ceph auth doesn't work on cephfs?
- From: John Spray <jspray@xxxxxxxxxx>
- ceph auth doesn't work on cephfs?
- From: Frank Yu <flyxiaoyu@xxxxxxxxx>
- Re: assertion error trying to start mds server
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph-ISCSI
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: Ceph-ISCSI
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re : general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: general protection fault: 0000 [#1] SMP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Crush Map for test lab
- From: Stefan Kooman <stefan@xxxxxx>
- Crush Map for test lab
- From: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Samuel Soulard <samuel.soulard@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: assertion error trying to start mds server
- From: Bill Sharer <bsharer@xxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: RGW flush_read_list error
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: David Turner <drakonstein@xxxxxxxxx>
- Luminous 12.2.1 - RadosGW Multisite doesnt replicate multipart uploads
- From: Enrico Kern <enrico.kern@xxxxxxxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Samuel Soulard <samuel.soulard@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Samuel Soulard <samuel.soulard@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Ceph-ISCSI
- From: Samuel Soulard <samuel.soulard@xxxxxxxxx>
- Re: RGW flush_read_list error
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Ceph-ISCSI
- From: David Disseldorp <ddiss@xxxxxxx>
- Re: ceph osd disk full (partition 100% used)
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Re: ceph osd disk full (partition 100% used)
- From: David Turner <drakonstein@xxxxxxxxx>
- ceph osd disk full (partition 100% used)
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Jake Young <jak3kaj@xxxxxxxxx>
- Re: min_size & hybrid OSD latency
- From: Reed Dier <reed.dier@xxxxxxxxxxx>
- Re: min_size & hybrid OSD latency
- From: David Turner <drakonstein@xxxxxxxxx>
- general protection fault: 0000 [#1] SMP
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: advice on number of objects per OSD
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: right way to recover a failed OSD (disk) when using BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: right way to recover a failed OSD (disk) when using BlueStore ?
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: advice on number of objects per OSD
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: BlueStore Cache Ratios
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: BlueStore Cache Ratios
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Ceph-ISCSI
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: BlueStore Cache Ratios
- From: Mohamad Gebai <mgebai@xxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: assertion error trying to start mds server
- From: John Spray <jspray@xxxxxxxxxx>
- Ceph-ISCSI
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: <ian.johnson@xxxxxxxxxx>
- Re: A new SSD for journals - everything sucks?
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- A new SSD for journals - everything sucks?
- From: Josef Zelenka <josef.zelenka@xxxxxxxxxxxxxxxx>
- Re: All replicas of pg 5.b got placed on the same host - how to correct?
- From: Konrad Riedel <it@xxxxxxxxxxxxxx>
- assertion error trying to start mds server
- From: Bill Sharer <bsharer@xxxxxxxxxxxxxx>
- Re: min_size & hybrid OSD latency
- From: Christian Balzer <chibi@xxxxxxx>
- RGW flush_read_list error
- From: Travis Nielsen <Travis.Nielsen@xxxxxxxxxxx>
- min_size & hybrid OSD latency
- From: Jack <ceph@xxxxxxxxxxxxxx>
- Re: right way to recover a failed OSD (disk) when using BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- Re: Unable to restrict a CephFS client to a subdirectory
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Unable to restrict a CephFS client to a subdirectory
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: All replicas of pg 5.b got placed on the same host - how to correct?
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: All replicas of pg 5.b got placed on the same host - how to correct?
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- All replicas of pg 5.b got placed on the same host - how to correct?
- From: Konrad Riedel <it@xxxxxxxxxxxxxx>
- Re: Ceph-mgr summarize recovery counters
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph-mgr summarize recovery counters
- From: Benjeman Meekhof <bmeekhof@xxxxxxxxx>
- Re: rgw resharding operation seemingly won't end
- From: Ryan Leimenstoll <rleimens@xxxxxxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: how to debug (in order to repair) damaged MDS (rank)?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Willem Jan Withagen <wjw@xxxxxxxxxxx>
- Re: how to debug (in order to repair) damaged MDS (rank)?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: how to debug (in order to repair) damaged MDS (rank)?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- BlueStore Cache Ratios
- From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
- Re: Unable to restrict a CephFS client to a subdirectory
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Unable to restrict a CephFS client to a subdirectory
- From: John Spray <jspray@xxxxxxxxxx>
- Re: how to debug (in order to repair) damaged MDS (rank)?
- From: John Spray <jspray@xxxxxxxxxx>
- advice on number of objects per OSD
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- 1 MDSs behind on trimming (was Re: clients failing to advance oldest client/flush tid)
- From: John Spray <jspray@xxxxxxxxxx>
- how to debug (in order to repair) damaged MDS (rank)?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- Re: ceph-volume: migration and disk partition support
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: ceph-volume: migration and disk partition support
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Unable to restrict a CephFS client to a subdirectory
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: clients failing to advance oldest client/flush tid
- From: Nigel Williams <nigel.williams@xxxxxxxxxxx>
- Re: installing specific version of ceph-common
- From: Ben Hines <bhines@xxxxxxxxx>
- Unable to restrict a CephFS client to a subdirectory
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Snapshot space
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Snapshot space
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: rgw resharding operation seemingly won't end
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Snapshot space
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Snapshot space
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- rgw resharding operation seemingly won't end
- From: Ryan Leimenstoll <rleimens@xxxxxxxxxxxxxx>
- Re: Ceph mirrors
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: clients failing to advance oldest client/flush tid
- From: John Spray <jspray@xxxxxxxxxx>
- Re: clients failing to advance oldest client/flush tid
- From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
- Re: Ceph cache pool full
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: clients failing to advance oldest client/flush tid
- From: John Spray <jspray@xxxxxxxxxx>
- killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Snapshot space
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: cephfs: how to repair damaged mds rank?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- Re: cephfs: how to repair damaged mds rank?
- From: John Spray <jspray@xxxxxxxxxx>
- clients failing to advance oldest client/flush tid
- From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
- Re: cephfs: how to repair damaged mds rank?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- cephfs: how to repair damaged mds rank?
- From: Daniel Baumann <daniel.baumann@xxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: kefu chai <tchaikov@xxxxxxxxx>
- Re: [MONITOR SEGFAULT] Luminous cluster stuck when adding monitor
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: [CLUSTER STUCK] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Snapshot space
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- blustore - howto remove object that is crashing osd
- From: Marek Grzybowski <marek.grzybowski@xxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: [CLUSTER STUCK] Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Дробышевский, Владимир <vlad@xxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: Configuring Ceph using multiple networks
- From: Sinan Polat <sinan@xxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Sinan Polat <sinan@xxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Дробышевский, Владимир <vlad@xxxxxxxxxx>
- PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)
- From: Peter Linder <peter.linder@xxxxxxxxxxxxxx>
- Configuring Ceph using multiple networks
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- Re: Real disk usage of clone images
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Real disk usage of clone images
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- Re: Ceph cache pool full
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: what does associating ceph pool to application do?
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: Ceph cache pool full
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: what does associating ceph pool to application do?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- v10.2.10 Jewel released
- From: Abhishek Lekshmanan <abhishek@xxxxxxxx>
- ceph-volume: migration and disk partition support
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: "ceph osd status" fails
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: what does associating ceph pool to application do?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph cache pool full
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: Ceph cache pool full
- From: Christian Balzer <chibi@xxxxxxx>
- Re: what does associating ceph pool to application do?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- what does associating ceph pool to application do?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re: Ceph cache pool full
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- "ceph osd status" fails
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- Re: Ceph cache pool full
- From: Luis Periquito <periquito@xxxxxxxxx>
- Re: Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- Re : Re : Re : bad crc/signature errors
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: Ceph cache pool full
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: What's about release-note for 10.2.10?
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Re: Ceph mgr influx module on luminous
- From: John Spray <jspray@xxxxxxxxxx>
- What's about release-note for 10.2.10?
- From: ulembke@xxxxxxxxxxxx
- Re: Ceph mirrors
- From: Sander Smeenk <ssmeenk@xxxxxxxxxxxx>
- Re: Ceph mirrors
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Ceph cache pool full
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: Ceph cache pool full
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Ceph cache pool full
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: 1 osd Segmentation fault in test cluster
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: RBD Mirror between two separate clusters named ceph
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: RBD Mirror between two separate clusters named ceph
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Ceph cache pool full
- From: Shawfeng Dong <shaw@xxxxxxxx>
- RBD Mirror between two separate clusters named ceph
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Ceph mgr influx module on luminous
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph stuck creating pool
- From: Guilherme Lima <guilherme.lima@xxxxxxxxxxxx>
- Re: Ceph manager documentation missing from network config reference
- From: John Spray <jspray@xxxxxxxxxx>
- Ceph manager documentation missing from network config reference
- From: Stefan Kooman <stefan@xxxxxx>
- Re: bad crc/signature errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Ceph mirrors
- From: Stefan Kooman <stefan@xxxxxx>
- Re: _committed_osd_maps shutdown OSD via async signal, bug or feature?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: _committed_osd_maps shutdown OSD via async signal, bug or feature?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: erasure-coded with overwrites versus erasure-coded with cache tiering
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re : Re : bad crc/signature errors
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: bad crc/signature errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Ceph mirrors
- From: Stefan Kooman <stefan@xxxxxx>
- Re: TLS for tracker.ceph.com
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: bad crc/signature errors
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- TLS for tracker.ceph.com
- From: Stefan Kooman <stefan@xxxxxx>
- _committed_osd_maps shutdown OSD via async signal, bug or feature?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph monitoring
- From: Lenz Grimmer <lenz@xxxxxxxxxxx>
- Re: Ceph-mgr summarize recovery counters
- From: John Spray <jspray@xxxxxxxxxx>
- Re: [Ceph-maintainers] Mimic timeline
- From: Lars Marowsky-Bree <lmb@xxxxxxxx>
- Re: Ceph monitoring
- From: Jasper Spaans <ceph-users@xxxxxxxxxxxxxxxxx>
- Re : Re : bad crc/signature errors
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: Ceph monitoring
- From: Lenz Grimmer <lenz@xxxxxxxxxxx>
- Re : bad crc/signature errors
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: bad crc/signature errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: bad crc/signature errors
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Xen & Ceph bad crc
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: tunable question
- From: mj <lists@xxxxxxxxxxxxx>
- Xen & Ceph bad crc
- From: Osama Hasebou <osama.hasebou@xxxxxx>
- Re : bad crc/signature errors
- From: Olivier Bonvalet <ceph.list@xxxxxxxxx>
- Re: inconsistent pg on erasure coded pool
- From: Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
- Re: bad crc/signature errors
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Mimic timeline
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Mimic timeline
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- ceph multi active mds and failover with ceph version 12.2.1
- From: "Pavan, Krish" <Krish.Pavan@xxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: bad crc/signature errors
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: bad crc/signature errors
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: inconsistent pg on erasure coded pool
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Luminous cluster stuck when adding monitor
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph-mgr summarize recovery counters
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Ceph-mgr summarize recovery counters
- From: Benjeman Meekhof <bmeekhof@xxxxxxxxx>
- bad crc/signature errors
- From: Josy <josy@xxxxxxxxxxxxxxxxxxxxx>
- inconsistent pg on erasure coded pool
- From: Kenneth Waegeman <kenneth.waegeman@xxxxxxxx>
- Luminous cluster stuck when adding monitor
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: MDS crashes shortly after startup while trying to purge stray files.
- From: Micha Krause <micha@xxxxxxxxxx>
- Re: why sudden (and brief) HEALTH_ERR
- From: lists <lists@xxxxxxxxxxxxx>
- Re: why sudden (and brief) HEALTH_ERR
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: why sudden (and brief) HEALTH_ERR
- From: Piotr Dałek <piotr.dalek@xxxxxxxxxxxx>
- why sudden (and brief) HEALTH_ERR
- From: lists <lists@xxxxxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: 1 osd Segmentation fault in test cluster
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: How to use rados_aio_write correctly?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: radosgw notify on creation/deletion of file in bucket
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Ceph stuck creating pool
- From: Guilherme Lima <guilherme.lima@xxxxxxxxxxxx>
- Re: radosgw notify on creation/deletion of file in bucket
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph stuck creating pool
- From: David Turner <drakonstein@xxxxxxxxx>
- radosgw notify on creation/deletion of file in bucket
- From: Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx>
- Re: tunable question
- From: lists <lists@xxxxxxxxxxxxx>
- Re: Ceph stuck creating pool
- From: Guilherme Lima <guilherme.lima@xxxxxxxxxxxx>
- Re: Ceph stuck creating pool
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Ceph stuck creating pool
- From: Guilherme Lima <guilherme.lima@xxxxxxxxxxxx>
- Re: BlueStore questions about workflow and performance
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: tunable question
- From: Jake Young <jak3kaj@xxxxxxxxx>
- Re: BlueStore questions about workflow and performance
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: decreasing number of PGs
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: BlueStore questions about workflow and performance
- From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
- Re: decreasing number of PGs
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: tunable question
- From: lists <lists@xxxxxxxxxxxxx>
- How to use rados_aio_write correctly?
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: Ceph monitoring
- From: John Spray <jspray@xxxxxxxxxx>
- Re: Ceph monitoring
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: RGW how to delete orphans
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Re: zone, zonegroup and resharding bucket on luminous
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Ceph monitoring
- From: Jasper Spaans <ceph-users@xxxxxxxxxxxxxxxxx>
- Re: Discontiune of cn.ceph.com
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: RGW how to delete orphans
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Ceph on ARM meeting canceled
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: BlueStore questions about workflow and performance
- From: Sam Huracan <nowitzki.sammy@xxxxxxxxx>
- Re: MDS crashes shortly after startup while trying to purge stray files.
- From: Patrick Donnelly <pdonnell@xxxxxxxxxx>
- Re: decreasing number of PGs
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: decreasing number of PGs
- From: Jack <ceph@xxxxxxxxxxxxxx>
- Re: Ceph monitoring
- From: Reed Dier <reed.dier@xxxxxxxxxxx>
- decreasing number of PGs
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Ceph monitoring
- From: Erik McCormick <emccormick@xxxxxxxxxxxxxxx>
- Re: Ceph monitoring
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: Ceph monitoring
- From: German Anders <ganders@xxxxxxxxxxxx>
- Discontiune of cn.ceph.com
- From: Shengjing Zhu <i@xxxxxxx>
- Re: Ceph monitoring
- From: David <dclistslinux@xxxxxxxxx>
- Re: 1 osd Segmentation fault in test cluster
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph OSD on Hardware RAID
- From: Vincent Godin <vince.mlist@xxxxxxxxx>
- Re: RGW how to delete orphans
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Re: [Ceph-announce] Luminous v12.2.1 released
- From: Fabian Grünbichler <f.gruenbichler@xxxxxxxxxxx>
- Ceph monitoring
- From: Osama Hasebou <osama.hasebou@xxxxxx>
- BlueStore questions about workflow and performance
- From: Sam Huracan <nowitzki.sammy@xxxxxxxxx>
- Re: tunable question
- From: Manuel Lausch <manuel.lausch@xxxxxxxx>
- Re: zone, zonegroup and resharding bucket on luminous
- From: Orit Wasserman <owasserm@xxxxxxxxxx>
- Re: Ceph luminous repo not working on Ubuntu xenial
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph luminous repo not working on Ubuntu xenial
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- Re: rados_read versus rados_aio_read performance
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: nfs-ganesha / cephfs issues
- From: David <dclistslinux@xxxxxxxxx>
- Re: rados_read versus rados_aio_read performance
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: erasure-coded with overwrites versus erasure-coded with cache tiering
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: erasure-coded with overwrites versus erasure-coded with cache tiering
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: right way to recover a failed OSD (disk) when using BlueStore ?
- From: David Turner <drakonstein@xxxxxxxxx>
- right way to recover a failed OSD (disk) when using BlueStore ?
- From: Alejandro Comisario <alejandro@xxxxxxxxxxx>
- nfs-ganesha / cephfs issues
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Re: erasure-coded with overwrites versus erasure-coded with cache tiering
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: erasure-coded with overwrites versus erasure-coded with cache tiering
- From: David Turner <drakonstein@xxxxxxxxx>
- erasure-coded with overwrites versus erasure-coded with cache tiering
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- 1 osd Segmentation fault in test cluster
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- (no subject)
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Re: Backup VM images stored in ceph to another datacenter
- From: Jack <ceph@xxxxxxxxxxxxxx>
- Backup VM images stored in ceph to another datacenter
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- Re: osd create returns duplicate ID's
- From: "Anthony D'Atri" <aad@xxxxxxxxxxxxxx>
- Re: Ceph OSD on Hardware RAID
- From: "Anthony D'Atri" <aad@xxxxxxxxxxxxxx>
- Re: Bareos and libradosstriper works only for 4M sripe_unit size
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Get rbd performance stats
- From: Matthew Stroud <mattstroud@xxxxxxxxxxxxx>
- Re: Large amount of files - cephfs?
- From: Josef Zelenka <josef.zelenka@xxxxxxxxxxxxxxxx>
- Re: Get rbd performance stats
- From: Jason Dillaman <jdillama@xxxxxxxxxx>
- Re: Get rbd performance stats
- From: Matthew Stroud <mattstroud@xxxxxxxxxxxxx>
- Re: Get rbd performance stats
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Get rbd performance stats
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: Ceph OSD on Hardware RAID
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: Get rbd performance stats
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: New OSD missing from part of osd crush tree
- From: Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx>
- Re: Ceph OSD on Hardware RAID
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Ceph OSD get blocked and start to make inconsistent pg from time to time
- From: David Turner <drakonstein@xxxxxxxxx>
- Ceph OSD on Hardware RAID
- From: Hauke Homburg <hhomburg@xxxxxxxxxxxxxx>
- Get rbd performance stats
- From: Matthew Stroud <mattstroud@xxxxxxxxxxxxx>
- Re: osd max scrubs not honored?
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- zone, zonegroup and resharding bucket on luminous
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Cephfs : security questions?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Objecter and librados logs on rbd image operations
- From: "Chamarthy, Mahati" <mahati.chamarthy@xxxxxxxxx>
- Re: Cephfs : security questions?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: rados_read versus rados_aio_read performance
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph luminous repo not working on Ubuntu xenial
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: Ceph luminous repo not working on Ubuntu xenial
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- Re: ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: osd create returns duplicate ID's
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: RGW how to delete orphans
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- rados_read versus rados_aio_read performance
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: Cephfs : security questions?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: osd create returns duplicate ID's
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: osd create returns duplicate ID's
- From: Luis Periquito <periquito@xxxxxxxxx>
- Ceph OSD get blocked and start to make inconsistent pg from time to time
- From: Gonzalo Aguilar Delgado <gaguilar@xxxxxxxxxxxxxxxxxx>
- Re: osd max scrubs not honored?
- From: Stefan Kooman <stefan@xxxxxx>
- Bareos and libradosstriper works only for 4M sripe_unit size
- From: Alexander Kushnirenko <kushnirenko@xxxxxxxxx>
- Re: Cephfs : security questions?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: osd create returns duplicate ID's
- From: Adrian Saul <Adrian.Saul@xxxxxxxxxxxxxxxxx>
- Re: Cephfs : security questions?
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- osd create returns duplicate ID's
- From: Luis Periquito <periquito@xxxxxxxxx>
- Re: Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- Re: Cephfs : security questions?
- From: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
- Cephfs : security questions?
- From: Yoann Moulin <yoann.moulin@xxxxxxx>
- OpenStack Sydney Forum - Ceph BoF proposal
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: Ceph luminous repo not working on Ubuntu xenial
- From: Stefan Kooman <stefan@xxxxxx>
- Re: osd max scrubs not honored?
- From: Christian Balzer <chibi@xxxxxxx>
- Re: ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Ceph 12.2.0 on 32bit?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: osd max scrubs not honored?
- From: David Turner <drakonstein@xxxxxxxxx>
- Re: osd max scrubs not honored?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: RGW how to delete orphans
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: MDS crashes shortly after startup while trying to purge stray files.
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Ceph luminous repo not working on Ubuntu xenial
- From: Kashif Mumtaz <kashif.mumtaz@xxxxxxxxx>
- Re: RGW how to delete orphans
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- Re: Power outages!!! help!
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: Power outages!!! help!
- From: hjcho616 <hjcho616@xxxxxxxxx>
- Re: RGW how to delete orphans
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Luminous v12.2.1 released
- From: Abhishek <abhishek@xxxxxxxx>
- Openstack (pike) Ceilometer-API deprecated. RadosGW stats?
- From: "magicboiz@xxxxxxxxx" <magicboiz@xxxxxxxxx>
- Re: ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)
- From: Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx>
- Re: RGW how to delete orphans
- From: Webert de Souza Lima <webert.boss@xxxxxxxxx>
- RGW how to delete orphans
- From: Andreas Calminder <andreas.calminder@xxxxxxxxxx>
- Re: MDS crashes shortly after startup while trying to purge stray files.
- From: Micha Krause <micha@xxxxxxxxxx>
- ceph/systemd startup bug (was Re: Some OSDs are down after Server reboot)
- From: Matthew Vernon <mv3@xxxxxxxxxxxx>
- Re: "ceph fs" commands hang forever and kill monitors
- From: John Spray <jspray@xxxxxxxxxx>
- Re: "ceph fs" commands hang forever and kill monitors
- From: Shinobu Kinjo <skinjo@xxxxxxxxxx>
- Re: "ceph fs" commands hang forever and kill monitors
- From: Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>
- Re: RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3
- From: "Gerhard W. Recher" <gerhard.recher@xxxxxxxxxxx>
- Re: tunable question
- From: mj <lists@xxxxxxxxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Joao Eduardo Luis <joao@xxxxxxx>
- Re: Ceph Developers Monthly - October
- From: Shinobu Kinjo <skinjo@xxxxxxxxxx>
- Re: Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD
- From: Eric van Blokland <ericvanblokland@xxxxxxxxx>
- Re: PG in active+clean+inconsistent, but list-inconsistent-obj doesn't show it
- From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
- Re: tunable question
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- PG in active+clean+inconsistent, but list-inconsistent-obj doesn't show it
- From: Olivier Migeot <olivier.migeot@xxxxxxxxx>
- tunable question
- From: mj <lists@xxxxxxxxxxxxx>
- Re: Large amount of files - cephfs?
- From: Henrik Korkuc <lists@xxxxxxxxx>
- Ceph Developers Monthly - October
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Ceph Tech Talk - September
- From: Leonardo Vaz <lvaz@xxxxxxxxxx>
- Re: RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3
- From: Haomai Wang <haomai@xxxxxxxx>
- Re: Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD
- From: David Turner <drakonstein@xxxxxxxxx>
- Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD
- From: Eric van Blokland <ericvanblokland@xxxxxxxxx>
- Re: Large amount of files - cephfs?
- From: Deepak Naidu <dnaidu@xxxxxxxxxx>