CEPH Filesystem Users
- cephadm host maintenance
- From: Steven Goodliff <Steven.Goodliff@xxxxxxxxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: size=1 min_size=0 any way to set?
- From: "huxiaoyu@xxxxxxxxxxxx" <huxiaoyu@xxxxxxxxxxxx>
- Re: size=1 min_size=0 any way to set?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: size=1 min_size=0 any way to set?
- From: "huxiaoyu@xxxxxxxxxxxx" <huxiaoyu@xxxxxxxxxxxx>
- MGR permissions question
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- size=1 min_size=0 any way to set?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: CephFS snapshots with samba shadowcopy
- From: Sebastian Knust <sknust@xxxxxxxxxxxxxxxxxxxxxxx>
- Re: rados df vs ls
- From: "stuart.anderson" <anderson@xxxxxxxxxxxxxxxx>
- CephFS snapshots with samba shadowcopy
- From: "Bailey Allison" <ballison@xxxxxxxxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Andras Pataki <apataki@xxxxxxxxxxxxxxxxxxxxx>
- RGW error Couldn't init storage provider (RADOS)
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: Moving MGR from a node to another
- From: Malte Stroem <malte.stroem@xxxxxxxxx>
- Re: Quincy recovery load
- From: Sridhar Seshasayee <sseshasa@xxxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Igor Fedotov <igor.fedotov@xxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: Quincy recovery load
- From: Chris Palmer <chris.palmer@xxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Igor Fedotov <igor.fedotov@xxxxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Frank Schilder <frans@xxxxxx>
- Moving MGR from a node to another
- From: Aristide Bekroundjo <bekroundjo@xxxxxxx>
- Re: Status occurring several times a day: CEPHADM_REFRESH_FAILED
- From: E Taka <0etaka0@xxxxxxxxx>
- "Low-hanging-fruit" trackers wanted for Grace Hopper Open Source Day, 2022
- From: Laura Flores <lflores@xxxxxxxxxx>
- Re: OSD not created after replacing failed disk
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Quincy recovery load
- From: Chris Palmer <chris.palmer@xxxxxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Stefan Kooman <stefan@xxxxxx>
- Re: ceph-fs crashes on getfattr
- From: Stefan Kooman <stefan@xxxxxx>
- ceph-fs crashes on getfattr
- From: Frank Schilder <frans@xxxxxx>
- Re: Quincy recovery load
- From: Chris Palmer <chris.palmer@xxxxxxxxx>
- Ceph / Debian 11 guest / corrupted file system
- From: Hermann Himmelbauer <hermann@xxxxxxx>
- Re: version inconsistency after migrating to cephadm from 16.2.9 package-based
- From: Stéphane Caminade <stephane.caminade@xxxxxxxxxxxxx>
- Re: OSD not created after replacing failed disk
- From: Malte Stroem <malte.stroem@xxxxxxxxx>
- Re: 50% performance drop after disk failure
- From: Michael Eichenberger <michael.eichenberger@xxxxxxxxxxxxxxxxx>
- Re: version inconsistency after migrating to cephadm from 16.2.9 package-based
- From: Adam King <adking@xxxxxxxxxx>
- Re: CephPGImbalance: deviates by more than 30%
- From: Christophe BAILLON <cb@xxxxxxx>
- Re: rbd live migration recovery
- From: koukou73gr <koukou73gr@xxxxxxxxx>
- rbd live migration recovery
- From: koukou73gr <koukou73gr@xxxxxxxxx>
- Re: 50% performance drop after disk failure
- From: Michael Eichenberger <michael.eichenberger@xxxxxxxxxxxxxxxxx>
- Re: 50% performance drop after disk failure
- From: Joachim Kraftmayer <joachim.kraftmayer@xxxxxxxxx>
- Re: 50% performance drop after disk failure
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: 50% performance drop after disk failure
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- 50% performance drop after disk failure
- From: Michael Eichenberger <michael.eichenberger@xxxxxxxxxxxxxxxxx>
- Re: OSD not created after replacing failed disk
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- ceph orch device ls extents
- From: Curt <lightspd@xxxxxxxxx>
- Re: runaway mon DB
- Re: OSD not created after replacing failed disk
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: version inconsistency after migrating to cephadm from 16.2.9 package-based
- From: Stéphane Caminade <stephane.caminade@xxxxxxxxxxxxx>
- Can't setup Basic Ceph Client
- From: Jean-Marc FONTANA <jean-marc.fontana@xxxxxxx>
- Re: cephfs mounting multiple filesystems
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: Status occurring several times a day: CEPHADM_REFRESH_FAILED
- From: Adam King <adking@xxxxxxxxxx>
- Re: cephfs mounting multiple filesystems
- From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- cephfs mounting multiple filesystems
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: MDS daemons failing
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- MDS daemons failing
- From: Santhosh Alugubelly <spamsanthosh219@xxxxxxxxx>
- Status occurring several times a day: CEPHADM_REFRESH_FAILED
- From: E Taka <0etaka0@xxxxxxxxx>
- Re: OSD not created after replacing failed disk
- From: Malte Stroem <malte.stroem@xxxxxxxxx>
- OSD not created after replacing failed disk
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: [ext] Re: snap_schedule MGR module not available after upgrade to Quincy
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
- From: Thomas Hukkelberg <thomas@xxxxxxxxxxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Marc <Marc@xxxxxxxxxxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: pacific doesn't defer small writes for pre-pacific hdd osds
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: [ext] Re: snap_schedule MGR module not available after upgrade to Quincy
- From: Andreas Teuchert <a.teuchert@xxxxxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Hans van den Bogert <hansbogert@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- which tools can test compression performance
- From: "Feng, Hualong" <hualong.feng@xxxxxxxxx>
- Re: snap_schedule MGR module not available after upgrade to Quincy
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
- From: Vinh Nguyen Duc <vinhducnguyen1708@xxxxxxxxx>
- Ceph Leadership Team Meeting Minutes (2022-07-06)
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Marc <Marc@xxxxxxxxxxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Marc <Marc@xxxxxxxxxxxxxxxxx>
- rados df vs ls
- From: "stuart.anderson" <anderson@xxxxxxxxxxxxxxxx>
- Re: [ext] Re: snap_schedule MGR module not available after upgrade to Quincy
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: Quincy recovery load
- From: Jimmy Spets <jimmy@xxxxxxxxx>
- Get filename from oid?
- From: Toby Darling <toby@xxxxxxxxxxxxxxxxx>
- Re: snap_schedule MGR module not available after upgrade to Quincy
- From: Andreas Teuchert <a.teuchert@xxxxxxxxxxxx>
- Re: Quincy recovery load
- From: Jimmy Spets <jimmy@xxxxxxxxx>
- Re: Quincy recovery load
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: Performance in Proof-of-Concept cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: Quincy recovery load
- From: Sridhar Seshasayee <sseshasa@xxxxxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Rasha Shoaib <rshoaib@xxxxxxxxxxx>
- Quincy recovery load
- From: Jimmy Spets <jimmy@xxxxxxxxx>
- Performance in Proof-of-Concept cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: CephPGImbalance: deviates by more than 30%
- From: Joffrey <joff.au@xxxxxxxxx>
- Possible customer impact on resharding radosgw bucket indexes?
- From: Boris Behrens <bb@xxxxxxxxx>
- Re: MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
- From: Alexander Sporleder <asporleder@xxxxxxxxxx>
- Re: MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
- From: Frank Schilder <frans@xxxxxx>
- Re: MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
- From: Frank Schilder <frans@xxxxxx>
- Re: CephPGImbalance: deviates by more than 30%
- From: Tatjana Dehler <tdehler@xxxxxxxx>
- Re: CephPGImbalance: deviates by more than 30%
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: CephPGImbalance: deviates by more than 30%
- From: Joffrey <joff.au@xxxxxxxxx>
- Re: multi-site replication not syncing metadata
- From: Michael Gugino <michael.gugino@xxxxxxxxxx>
- CephPGImbalance: deviates by more than 30%
- From: Joffrey <joff.au@xxxxxxxxx>
- Re: CephFS Mirroring Extended ACL/Attribute Support
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: "norman.kern" <norman.kern@xxxxxxx>
- Re: multi-site replication not syncing metadata
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- CephFS Mirroring Extended ACL/Attribute Support
- From: "Austin Axworthy" <aaxworthy@xxxxxxxxxxxx>
- Re: Next (last) octopus point release
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Ali Akil <ali-akil@xxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: Broken PTR record for new Ceph Redmine IP
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
- Re: Is Ceph with rook ready for production?
- From: Marc <Marc@xxxxxxxxxxxxxxxxx>
- Is Ceph with rook ready for production?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: bunch of " received unsolicited reservation grant from osd" messages in log
- From: Ronen Friedman <rfriedma@xxxxxxxxxx>
- Any known bugs on Luminous 12.2.12 multisite replication
- From: "huxiaoyu@xxxxxxxxxxxx" <huxiaoyu@xxxxxxxxxxxx>
- Re: SWAP or not to swap
- From: Frank Schilder <frans@xxxxxx>
- Re: SWAP or not to swap
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: Next (last) octopus point release
- From: Laura Flores <lflores@xxxxxxxxxx>
- Re: Conversion to Cephadm
- From: "Brent Kennedy" <bkennedy@xxxxxxxxxx>
- Re: Next (last) octopus point release
- From: Laura Flores <lflores@xxxxxxxxxx>
- SWAP or not to swap
- From: Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx>
- Re: Broken PTR record for new Ceph Redmine IP
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: Broken PTR record for new Ceph Redmine IP
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ext] Re: cephadm orch thinks hosts are offline
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- snap_schedule MGR module not available after upgrade to Quincy
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Broken PTR record for new Ceph Redmine IP
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: Orchestrator informations wrong and outdated
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: bunch of " received unsolicited reservation grant from osd" messages in log
- From: Denis Polom <denispolom@xxxxxxxxx>
- Quincy upgrade note - comments
- From: Chris Palmer <chris.palmer@xxxxxxxxx>
- Re: persistent write-back cache and qemu
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- persistent write-back cache and qemu
- From: Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>
- Re: Ceph mon cannot join to cluster during upgrade
- From: Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>
- Error opening missing snapshot from missing (deleted) rbd image.
- From: Luke Hall <luke@xxxxxxxxxxxxxxxxxxxxx>
- Re: snapshot delete after upgrade from nautilus to octopus/pacific
- From: Manuel Lausch <manuel.lausch@xxxxxxxx>
- Re: Ceph mon cannot join to cluster during upgrade
- From: Eugen Block <eblock@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph mon cannot join to cluster during upgrade
- From: Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: ceph nfs-ganesha - Unable to mount Ceph cluster
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- ceph nfs-ganesha - Unable to mount Ceph cluster
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: [ext] Re: cephadm orch thinks hosts are offline
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: unknown daemon type cephadm-exporter
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: Ceph mon cannot join to cluster during upgrade
- From: Eugen Block <eblock@xxxxxx>
- Re: scrubbing+deep+repair PGs since Upgrade
- From: Stefan Kooman <stefan@xxxxxx>
- Ceph mon cannot join to cluster during upgrade
- From: Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- CephFS, ACLs, NFS and SMB
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Best value for "mds_cache_memory_limit" for large (more than 10 Po) cephfs
- From: Robert Gallop <robert.gallop@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Re: Best value for "mds_cache_memory_limit" for large (more than 10 Po) cephfs
- From: Eugen Block <eblock@xxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Recommended number of mons in a cluster
- From: Stefan Kooman <stefan@xxxxxx>
- Re: cephadm orch thinks hosts are offline
- From: Thomas Roth <t.roth@xxxxxx>
- Re: Recommended number of mons in a cluster
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: All older OSDs corrupted after Quincy upgrade
- From: Hector Martin <marcan@xxxxxxxxx>
- Re: Difficulty with fixing an inconsistent PG/object
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Difficulty with fixing an inconsistent PG/object
- From: Lennart van Gijtenbeek | Routz <lennart.vangijtenbeek@xxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Re: All older OSDs corrupted after Quincy upgrade
- From: Stefan Kooman <stefan@xxxxxx>
- Orchestrator informations wrong and outdated
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Ceph FS outage after blocked_op + mk_snap
- From: Frank Schilder <frans@xxxxxx>
- version inconsistency after migrating to cephadm from 16.2.9 package-based
- From: Stéphane Caminade <stephane.caminade@xxxxxxxxxxxxx>
- All older OSDs corrupted after Quincy upgrade
- From: Hector Martin <marcan@xxxxxxxxx>
- Re: recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Robert Gallop <robert.gallop@xxxxxxxxx>
- Re: unknown daemon type cephadm-exporter
- From: Adam King <adking@xxxxxxxxxx>
- unknown daemon type cephadm-exporter
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Recommended number of mons in a cluster
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Florian Jonas <florian.jonas@xxxxxxx>
- Refill snaptrim queue after triggering bug #54396
- From: Kári Bertilsson <karibertils@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Eugen Block <eblock@xxxxxx>
- Re: recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Florian Jonas <florian.jonas@xxxxxxx>
- Re: recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Eugen Block <eblock@xxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- calling ceph command from a crush_location_hook - fails to find sys.stdin.isatty()
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Balancer problems with Erasure Coded pool
- From: Thomas Hukkelberg <thomas@xxxxxxxxxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Robert Gallop <robert.gallop@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: bunch of " received unsolicited reservation grant from osd" messages in log
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Conversion to Cephadm
- From: Eugen Block <eblock@xxxxxx>
- runaway mon DB
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Set device-class via service specification file
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: scrubbing+deep+repair PGs since Upgrade
- From: Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx>
- Set device-class via service specification file
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Multiple subnet single cluster
- From: Tahder Xunil <codbla@xxxxxxxxx>
- Re: cephadm orch thinks hosts are offline
- From: Thomas Roth <t.roth@xxxxxx>
- recommended Linux distro for Ceph Pacific small cluster
- From: Bobby <italienisch1987@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Frank Schilder <frans@xxxxxx>
- Re: scrubbing+deep+repair PGs since Upgrade
- From: Stefan Kooman <stefan@xxxxxx>
- Re: scrubbing+deep+repair PGs since Upgrade
- From: Marcus Müller <mueller.marcus@xxxxxxxxx>
- recovery from catastrophic mon and mds failure after reboot and ip address change
- From: Florian Jonas <florian.jonas@xxxxxxx>
- Re: Conversion to Cephadm
- From: Redouane Kachach Elhichou <rkachach@xxxxxxxxxx>
- Re: scrubbing+deep+repair PGs since Upgrade
- From: Stefan Kooman <stefan@xxxxxx>
- scrubbing+deep+repair PGs since Upgrade
- From: Marcus Müller <mueller.marcus@xxxxxxxxx>
- Re: Cephadm: how to perform BlueStore repair?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Cephadm: how to perform BlueStore repair?
- From: Stefan Kooman <stefan@xxxxxx>
- multisite bucket sync after rename doesn't work
- From: Christopher Durham <caduceus42@xxxxxxx>
- Conversion to Cephadm
- From: "Brent Kennedy" <bkennedy@xxxxxxxxxx>
- Re: CephFS snaptrim bug?
- From: Kári Bertilsson <karibertils@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: Ceph recovery network speed
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Ceph recovery network speed
- From: Curt <lightspd@xxxxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Laura Flores <lflores@xxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Yaarit Hatuka <yhatuka@xxxxxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- Cephadm: how to perform BlueStore repair?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Yaarit Hatuka <yhatuka@xxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: bunch of " received unsolicited reservation grant from osd" messages in log
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific [EXT]
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific [EXT]
- From: Dave Holland <dh3@xxxxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: cephadm permission denied when extending cluster
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>
- Re: How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- How to remove TELEMETRY_CHANGED( Telemetry requires re-opt-in) message
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- v17.2.1 Quincy released
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: use ceph rbd for windows cluster "scsi-3 persistent reservation"
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Re: use ceph rbd for windows cluster "scsi-3 persistent reservation"
- From: Stefan Kooman <stefan@xxxxxx>
- Ceph Zabbix manager module
- From: Stephen Smith6 <esmith@xxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: cephadm permission denied when extending cluster
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>
- Re: cephadm permission denied when extending cluster
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- Re: cephadm permission denied when extending cluster
- From: Adam King <adking@xxxxxxxxxx>
- Re: cephadm orch thinks hosts are offline
- From: Adam King <adking@xxxxxxxxxx>
- cephadm orch thinks hosts are offline
- From: Thomas Roth <t.roth@xxxxxx>
- Re: lifecycle config minimum time
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: cephfs client permission restrictions?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: cephfs client permission restrictions?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- cephadm permission denied when extending cluster
- From: Robert Reihs <robert.reihs@xxxxxxxxx>
- cephfs client permission restrictions?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: lifecycle config minimum time
- From: Daniel Gryniewicz <dang@xxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Ansgar Jazdzewski <a.jazdzewski@xxxxxxxxxxxxxx>
- Re: Inconsistent PGs after upgrade to Pacific
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: [ext] Re: Rename / change host names set with `ceph orch host add`
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: [ext] Re: Rename / change host names set with `ceph orch host add`
- From: Adam King <adking@xxxxxxxxxx>
- Re: Tuning for cephfs backup client?
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Ernesto Puerta <epuertat@xxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: [ext] Re: Rename / change host names set with `ceph orch host add`
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: Ceph Stretch Cluster - df pool size (Max Avail)
- From: Kilian Ries <mail@xxxxxxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Tuning for cephfs backup client?
- From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- force file system read-only
- From: "Jose V. Carrion" <burcarjo@xxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Josh Durgin <jdurgin@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Travis Nielsen <tnielsen@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Laura Flores <lflores@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Laura Flores <lflores@xxxxxxxxxx>
- Inconsistent PGs after upgrade to Pacific
- From: Pascal Ehlert <pascal@xxxxxxxxxxxx>
- Re: Ceph Stretch Cluster - df pool size (Max Avail)
- From: Clyso GmbH - Ceph Foundation Member <joachim.kraftmayer@xxxxxxxxx>
- Ceph Stretch Cluster - df pool size (Max Avail)
- From: Kilian Ries <mail@xxxxxxxxxxxxxx>
- How to Compact, repair, reshard OSD in (docker) container?
- From: Stefan Kooman <stefan@xxxxxx>
- use ceph rbd for windows cluster "scsi-3 persistent reservation"
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: Recovery of OMAP keys
- From: Benoît Knecht <bknecht@xxxxxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: ceph-container: docker restart, mon's unable to join
- From: Kilian Ries <mail@xxxxxxxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Best value for "mds_cache_memory_limit" for large (more than 10 Po) cephfs
- From: Arnaud M <arnaud.meauzoone@xxxxxxxxx>
- Re: librbd leaks memory on crushmap updates
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Stefan Kooman <stefan@xxxxxx>
- librbd leaks memory on crushmap updates
- From: Peter Lieven <pl@xxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Ronen Friedman <rfriedma@xxxxxxxxxx>
- Recovery of OMAP keys
- From: Benoît Knecht <bknecht@xxxxxxxxxxxxx>
- Re: lifecycle config minimum time
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- lifecycle config minimum time
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Frank Schilder <frans@xxxxxx>
- Re: Correct procedure to replace RAID0 OSD
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- Correct procedure to replace RAID0 OSD
- From: Oliver Weinmann <oliver.weinmann@xxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- multi-site replication not syncing metadata
- From: Matthew Darwin <bugs@xxxxxxxxxx>
- Re: Suggestion to build ceph storage
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: Suggestion to build ceph storage
- From: Satish Patel <satish.txt@xxxxxxxxx>
- Re: Suggestion to build ceph storage
- From: Satish Patel <satish.txt@xxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Suggestion to build ceph storage
- From: Christophe BAILLON <cb@xxxxxxx>
- Re: Suggestion to build ceph storage
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Suggestion to build ceph storage
- From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
- Re: ceph-container: docker restart, mon's unable to join
- From: Stefan Kooman <stefan@xxxxxx>
- Re: ceph-container: docker restart, mon's unable to join
- From: Kilian Ries <mail@xxxxxxxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: "Joachim Kraftmayer (Clyso GmbH)" <joachim.kraftmayer@xxxxxxxxx>
- Re: rbd resize thick provisioned image
- From: Frank Schilder <frans@xxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Stefan Kooman <stefan@xxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Xiubo Li <xiubli@xxxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Stefan Kooman <stefan@xxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Arnaud M <arnaud.meauzoone@xxxxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Frank Schilder <frans@xxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Arnaud M <arnaud.meauzoone@xxxxxxxxx>
- Re: What is the max size of cephfs (filesystem)
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- What is the max size of cephfs (filesystem)
- From: Arnaud M <arnaud.meauzoone@xxxxxxxxx>
- Re: Suggestion to build ceph storage
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Suggestion to build ceph storage
- From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
- Re: active+undersized+degraded due to OSD size differences?
- From: Tyler Stachecki <stachecki.tyler@xxxxxxxxx>
- active+undersized+degraded due to OSD size differences?
- From: Thomas Roth <t.roth@xxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Re: RFC: (deep-)scrub manager module
- From: Stefan Kooman <stefan@xxxxxx>
- Suggestion to build ceph storage
- From: Satish Patel <satish.txt@xxxxxxxxx>
- ceph-container: docker restart, mon's unable to join
- From: Kilian Ries <mail@xxxxxxxxxxxxxx>
- in v16.2.9 NFS service changes backend port - thus "TCP Port(s) '2049' required for nfs already in use"
- From: Uwe Richter <uwe.richter@xxxxxxxxxxx>
- RFC: (deep-)scrub manager module
- From: Stefan Kooman <stefan@xxxxxx>
- snapshot delete after upgrade from nautilus to octopus/pacific
- From: Manuel Lausch <manuel.lausch@xxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Adam King <adking@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: rfc: Accounts in RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: rfc: Accounts in RGW
- From: Daniel Gryniewicz <dang@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: rfc: Accounts in RGW
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- host disk used by osd container
- From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Venky Shankar <vshankar@xxxxxxxxxx>
- Re: rbd resize thick provisioned image
- From: Frank Schilder <frans@xxxxxx>
- Re: rbd resize thick provisioned image
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rbd resize thick provisioned image
- From: Frank Schilder <frans@xxxxxx>
- Re: rbd resize thick provisioned image
- From: Eugen Block <eblock@xxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Casey Bodley <cbodley@xxxxxxxxxx>
- Re: quincy v17.2.1 QE Validation status
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: Multi-active MDS cache pressure
- From: Eugen Block <eblock@xxxxxx>
- rbd resize thick provisioned image
- From: Frank Schilder <frans@xxxxxx>
- MDS error handle_find_ino_reply failed with -116
- From: Denis Polom <denispolom@xxxxxxxxx>
- ceph.pub not persistent over reboots?
- From: Thomas Roth <t.roth@xxxxxx>
- Re: Announcing go-ceph v0.16.0
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: Announcing go-ceph v0.16.0
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Re: OSD crash with "no available blob id" and check for Zombie blobs
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Upgrade and Conversion Issue ( cephadm )
- From: "Brent Kennedy" <bkennedy@xxxxxxxxxx>
- Re: [EXTERNAL] RGW Bucket Notifications and http push-endpoint
- From: Mark Selby <mselby@xxxxxxxxxx>
- RGW Bucket Notifications and http push-endpoint
- From: Mark Selby <mselby@xxxxxxxxxx>
- Re: File access issue with root_squashed fs client
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Re: Announcing go-ceph v0.16.0
- From: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
- Announcing go-ceph v0.16.0
- From: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
- Re: Ceph on RHEL 9
- From: "Robert W. Eckert" <rob@xxxxxxxxxxxxxxx>
- Re: Ceph Octopus RGW - files vanished from rados while still in bucket index
- From: Boris Behrens <bb@xxxxxxxxx>
- Re: Possible to recover deleted files from CephFS?
- From: Michael Sherman <shermanm@xxxxxxxxxxxx>
- Re: Possible to recover deleted files from CephFS?
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Ceph Octopus RGW - files vanished from rados while still in bucket index
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Re: How suitable is CEPH for....
- From: Mark Lehrer <lehrer@xxxxxxxxx>
- Possible to recover deleted files from CephFS?
- From: Michael Sherman <shermanm@xxxxxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- How suitable is CEPH for....
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool
- From: Frank Schilder <frans@xxxxxx>
- set configuration options in the cephadm age
- From: Thomas Roth <t.roth@xxxxxx>
- Re: something wrong with my monitor database ?
- From: Eric Le Lay <eric.lelay@xxxxxxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Eugen Block <eblock@xxxxxx>
- Re: OSD crash with "no available blob id" and check for Zombie blobs
- From: tao song <alansong1023@xxxxxxxxx>
- Re: OSD crash with "no available blob id" and check for Zombie blobs
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- OSD crash with "no available blob id" and check for Zombie blobs
- From: tao song <alansong1023@xxxxxxxxx>
- Re: Copying and renaming pools
- From: Eugen Block <eblock@xxxxxx>
- error: _ASSERT_H not a pointer
- From: renjianxinlover <renjianxinlover@xxxxxxx>
- Re: ceph-users Digest, Vol 113, Issue 36
- From: renjianxinlover <renjianxinlover@xxxxxxx>
- Re: Ceph Octopus RGW - files vanished from rados while still in bucket index
- From: Boris Behrens <bb@xxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: Ceph Octopus RGW - files vanished from rados while still in bucket index
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Ceph Octopus RGW - files vanished from rados while still in bucket index
- From: Boris Behrens <bb@xxxxxxxxx>
- Copying and renaming pools
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Re: My cluster is down. Two osd:s on different hosts uses all memory on boot and then crashes.
- From: Stefan <slissm@xxxxxxxxxxxxxx>
- Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Eugen Block <eblock@xxxxxx>
- Changes to Crush Weight Causing Degraded PGs instead of Remapped
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Re: something wrong with my monitor database ?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: something wrong with my monitor database ?
- From: Eric Le Lay <eric.lelay@xxxxxxxxxxxxx>
- Re: something wrong with my monitor database ?
- From: Eric Le Lay <eric.lelay@xxxxxxxxxxxxx>
- Experience with scrub tunings?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Feedback/questions regarding cephfs-mirror
- From: Andreas Teuchert <a.teuchert@xxxxxxxxxxxx>
- Re: My cluster is down. Two osd:s on different hosts uses all memory on boot and then crashes.
- From: Mara Sophie Grosch <littlefox@xxxxxxxxxx>
- My cluster is down. Two osd:s on different hosts uses all memory on boot and then crashes.
- From: Stefan <slissm@xxxxxxxxxxxxxx>
- Re: Strange drops in ceph_pool_bytes_used metric
- From: Benoît Knecht <bknecht@xxxxxxxxxxxxx>
- Strange drops in ceph_pool_bytes_used metric
- From: Benoît Knecht <bknecht@xxxxxxxxxxxxx>
- Ceph add-repo Unable to find a match epel-release
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- snap-schedule reappearing
- From: "Stolte, Felix" <f.stolte@xxxxxxxxxxxxx>
- Re: Degraded data redundancy: 32 pgs undersized
- From: Stefan Kooman <stefan@xxxxxx>
- Degraded data redundancy: 32 pgs undersized
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- virtual_ips
- From: Dmitriy Trubov <DmitriyT@xxxxxxxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Mara Sophie Grosch <littlefox@xxxxxxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: Flemming Frandsen <dren.dk@xxxxxxxxx>
- Re: Feedback/questions regarding cephfs-mirror
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Multisite upgrade ordering
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: Ceph on RHEL 9
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Multisite upgrade ordering
- From: Danny Webb <Danny.Webb@xxxxxxxxxxxxxxx>
- Re: Luminous to Pacific Upgrade with Filestore OSDs
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Re: Bug with autoscale-status in 17.2.0 ?
- From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: Ceph pool set min_write_recency_for_promote not working
- From: Eugen Block <eblock@xxxxxx>
- Re: Bug with autoscale-status in 17.2.0 ?
- From: Maximilian Hill <max@xxxxxxxxxx>
- Bug with autoscale-status in 17.2.0 ?
- From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Marius Leustean <marius.leus@xxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Eugen Block <eblock@xxxxxx>
- Re: something wrong with my monitor database ?
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Luminous to Pacific Upgrade with Filestore OSDs
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- something wrong with my monitor database ?
- From: Eric Le Lay <eric.lelay@xxxxxxxxxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: 胡 玮文 <huww98@xxxxxxxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: Flemming Frandsen <dren.dk@xxxxxxxxx>
- Re: RBD clone size check
- From: Eugen Block <eblock@xxxxxx>
- Re: Generation of systemd units after nuking /etc/systemd/system
- From: 胡 玮文 <huww98@xxxxxxxxxxx>
- Ceph pool set min_write_recency_for_promote not working
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Generation of systemd units after nuking /etc/systemd/system
- From: Flemming Frandsen <dren.dk@xxxxxxxxx>
- RBD clone size check
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: Error adding lua packages to rgw
- From: Koldo Aingeru <koldo.aingeru@xxxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- Re: Ceph on RHEL 9
- From: "Robert W. Eckert" <rob@xxxxxxxxxxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Mara Sophie Grosch <littlefox@xxxxxxxxxx>
- Re: radosgw multisite sync - how to fix data behind shards?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Ceph User + Dev Monthly June Meetup
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: radosgw multisite sync - how to fix data behind shards?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: radosgw multisite sync - how to fix data behind shards?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Luminous to Pacific Upgrade with Filestore OSDs
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Re: Error adding lua packages to rgw
- From: Yuval Lifshitz <ylifshit@xxxxxxxxxx>
- Re: OpenStack Swift on top of CephFS
- From: Kees Meijs | Nefos <kees@xxxxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: Eugen Block <eblock@xxxxxx>
- Re: OpenStack Swift on top of CephFS
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: radosgw multisite sync - how to fix data behind shards?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: OpenStack Swift on top of CephFS
- From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
- Error adding lua packages to rgw
- From: Koldo Aingeru <koldo.aingeru@xxxxxxxxxx>
- Re: OpenStack Swift on top of CephFS
- From: Kees Meijs | Nefos <kees@xxxxxxxx>
- Re: OpenStack Swift on top of CephFS
- From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
- OpenStack Swift on top of CephFS
- From: Kees Meijs | Nefos <kees@xxxxxxxx>
- Re: Luminous to Pacific Upgrade with Filestore OSDs
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: Redouane Kachach Elhichou <rkachach@xxxxxxxxxx>
- Re: radosgw multisite sync - how to fix data behind shards?
- From: "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: "Zach Heise (SSCC)" <heise@xxxxxxxxxxxx>
- radosgw multisite sync - how to fix data behind shards?
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: Eugen Block <eblock@xxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: "Zach Heise (SSCC)" <heise@xxxxxxxxxxxx>
- Re: Troubleshooting cephadm - not deploying any daemons
- From: Dhairya Parmar <dparmar@xxxxxxxxxx>
- Troubleshooting cephadm - not deploying any daemons
- From: "Zach Heise (SSCC)" <heise@xxxxxxxxxxxx>
- Luminous to Pacific Upgrade with Filestore OSDs
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Re: Crashing MDS
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Crashing MDS
- From: Dave Schulz <dschulz@xxxxxxxxxxx>
- Re: Crashing MDS
- From: Can Özyurt <acozyurt@xxxxxxxxx>
- Crashing MDS
- From: Dave Schulz <dschulz@xxxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm
- From: Christophe BAILLON <cb@xxxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Eugen Block <eblock@xxxxxx>
- Re: OSDs getting OOM-killed right after startup
- From: Eugen Block <eblock@xxxxxx>
- Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm
- From: Eugen Block <eblock@xxxxxx>
- Re: rbd deep copy in Luminous
- From: Eugen Block <eblock@xxxxxx>
- Feedback/questions regarding cephfs-mirror
- From: Andreas Teuchert <a.teuchert@xxxxxxxxxxxx>
- rbd deep copy in Luminous
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- 270.98 GB was requested for block_db_size, but only 270.98 GB can be fulfilled
- From: Torkil Svensgaard <torkil@xxxxxxxx>
- Re: ceph orch: list of scheduled tasks
- From: Adam King <adking@xxxxxxxxxx>
- Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm
- From: Christophe BAILLON <cb@xxxxxxx>
- Re: not so empty bucket
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- Re: unknown object
- From: "J. Eric Ivancich" <ivancich@xxxxxxxxxx>
- ceph orch: list of scheduled tasks
- From: Patrick Vranckx <patrick.vranckx@xxxxxxxxxxxx>
- Ceph config database and comments
- From: Luis Domingues <luis.domingues@xxxxxxxxx>
- Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm
- From: Eugen Block <eblock@xxxxxx>
- Re: Convert existing folder on cephfs into subvolume
- From: Milind Changire <mchangir@xxxxxxxxxx>
- Convert existing folder on cephfs into subvolume
- From: "Stolte, Felix" <f.stolte@xxxxxxxxxxxxx>
- Module 'restful' has failed dependency: module 'typing' has no attribute 'Collection'
- From: "Pukropski, Christine" <cpukrops@xxxxxxxxxx>
- OSDs getting OOM-killed right after startup
- From: Mara Sophie Grosch <littlefox@xxxxxxxxxx>
- Re: v17.2.0 Quincy released
- From: Jozef Rebjak <jozefrebjak@xxxxxxxxxx>
- Many errors about PG deviate more than 30% on a new cluster deployed by cephadm
- From: Christophe BAILLON <cb@xxxxxxx>
- io_uring (bdev_ioring) unstable on newer kernels ?
- From: phandaal <phandaal@xxxxxxxxxxxx>
- unknown object
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: MDS stuck in replay
- From: Magnus HAGDORN <Magnus.Hagdorn@xxxxxxxx>
- Re: MDS stuck in replay
- From: Ramana Venkatesh Raja <rraja@xxxxxxxxxx>
- Re: MDS stuck in replay
- From: Ramana Venkatesh Raja <rraja@xxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>
- Re: radosgw multisite sync /admin/log requests overloading system.
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>
- csi helm installation complains about TokenRequest endpoints
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- Re: Octopus client for Nautilus OSD/MON
- From: Jiatong Shen <yshxxsjt715@xxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Unable to deploy new manager in octopus
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: Slow delete speed through the s3 API
- From: J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>
- Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool
- From: Stefan Kooman <stefan@xxxxxx>
- Help needed picking the right amount of PGs for (Cephfs) metadata pool
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: rbd-mirror stops replaying journal on primary cluster
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool
- From: Ramana Venkatesh Raja <rraja@xxxxxxxxxx>
- Help needed picking the right amount of PGs for (Cephfs) metadata pool
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Octopus client for Nautilus OSD/MON
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- Unable to deploy new manager in octopus
- From: Patrick Vranckx <patrick.vranckx@xxxxxxxxxxxx>
- rbd-mirror stops replaying journal on primary cluster
- From: Josef Johansson <josef@xxxxxxxxxxx>
- OSD_FULL raised when osd was not full (octopus 15.2.16)
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Newer linux kernel cephfs clients is more trouble?
- From: Stefan Kooman <stefan@xxxxxx>
- Octopus client for Nautilus OSD/MON
- From: Jiatong Shen <yshxxsjt715@xxxxxxxxx>
- Re: Moving rbd-images across pools?
- From: Jan-Philipp Litza <jpl@xxxxxxxxx>
- Multi-active MDS cache pressure
- From: Eugen Block <eblock@xxxxxx>
- Re: MDS stuck in replay
- From: Magnus HAGDORN <Magnus.Hagdorn@xxxxxxxx>
- Re: MDS stuck in replay
- From: Ramana Venkatesh Raja <rraja@xxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: Rishabh Dave <ridave@xxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: David Galloway <dgallowa@xxxxxxxxxx>
- radosgw multisite sync /admin/log requests overloading system.
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Moving rbd-images across pools?
- From: Angelo Hongens <angelo@xxxxxxxxxx>
- Error CephMgrPrometheusModuleInactive
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: Degraded data redundancy and too many PGs per OSD
- From: Eugen Block <eblock@xxxxxx>
- Re: Adding 2nd RGW zone using cephadm - fail.
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Error deploying iscsi service through cephadm
- From: Heiner Hardt <hhardt1912@xxxxxxxxx>
- Logs in /var/log/messages despite log_to_stderr=false, log_to_file=true
- From: Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>
- Adding 2nd RGW zone using cephadm - fail.
- From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
- Re: Problem with ceph-volume
- From: Christophe BAILLON <cb@xxxxxxx>
- Problem with ceph-volume
- From: Christophe BAILLON <cb@xxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: [ext] Recover from "Module 'progress' has failed"
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: rgw crash when use swift api
- From: Daniel Gryniewicz <dang@xxxxxxxxxx>
- Re: RGW data pool for multiple zones
- From: Dmitry Kvashnin <dm.kvashnin@xxxxxxxxx>
- RGW data pool for multiple zones
- From: Dmitry Kvashnin <dm.kvashnin@xxxxxxxxx>
- Re: Containerized radosgw crashes randomly at startup
- From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
- "outed" 10+ OSDs, recovery was fast (300+Mbps) until it wasn't (<1Mbps)
- From: David Young <davidy@xxxxxxxxxxxxxxxxxx>
- large removed snaps queue
- From: Denis Polom <denispolom@xxxxxxxxx>
- Containerized radosgw crashes randomly at startup
- From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
- Re: Ceph IRC channel linked to Slack
- From: Alvaro Soto <alsotoes@xxxxxxxxx>
- Re: IO of hell with snaptrim
- From: Paul Emmerich <emmerich@xxxxxxxxxx>
- MDS stuck in replay
- From: Magnus HAGDORN <Magnus.Hagdorn@xxxxxxxx>
- Re: IO of hell with snaptrim
- From: Aaron Lauterer <a.lauterer@xxxxxxxxxxx>
- Re: MDS stuck in rejoin
- From: Stefan Kooman <stefan@xxxxxx>
- Re: Maintenance mode?
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- Re: Release Index and Docker Hub images outdated
- From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
- MDS stuck in rejoin
- From: Dave Schulz <dschulz@xxxxxxxxxxx>
- ceph upgrade bug
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- multi write in block device
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Degraded data redundancy and too many PGs per OSD
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: Ceph IRC channel linked to Slack
- From: Alvaro Soto <alsotoes@xxxxxxxxx>
- Re: Maintenance mode?
- From: Jeremy Hansen <jeremy@xxxxxxxxxx>
- Re: Maintenance mode?
- From: Jeremy Hansen <jeremy@xxxxxxxxxx>
- Re: Maintenance mode?
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- IO of hell with snaptrim
- From: Ml Ml <mliebherr99@xxxxxxxxxxxxxx>
- Re: Release Index and Docker Hub images outdated
- From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
- Re: Release Index and Docker Hub images outdated
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Release Index and Docker Hub images outdated
- From: Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>
- Re: "Pending Backport" without "Backports" field
- From: Konstantin Shalygin <k0ste@xxxxxxxx>
- "Pending Backport" without "Backports" field
- From: Jan-Philipp Litza <jpl@xxxxxxxxx>
- Re: Maintenance mode?
- From: Dhairya Parmar <dparmar@xxxxxxxxxx>
- Re: Maintenance mode?
- From: Jeremy Hansen <jeremy@xxxxxxxxxx>
- Re: Maintenance mode?
- From: Tyler Stachecki <stachecki.tyler@xxxxxxxxx>
- Maintenance mode?
- From: Jeremy Hansen <jeremy@xxxxxxxxxx>
- All 'ceph orch' commands hanging
- From: Rémi Rampin <remirampin@xxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Stefan Kooman <stefan@xxxxxx>
- osd latency but disks do not seem busy
- From: Ml Ml <mliebherr99@xxxxxxxxxxxxxx>
- Ceph's mgr/prometheus module is not available
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: Rebalance after draining - why?
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- Re: Rebalance after draining - why?
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- HEALTH_ERR MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'opcpmfpsbpp0101' does not exist retval: -2
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Ceph on RHEL 9
- From: "Robert W. Eckert" <rob@xxxxxxxxxxxxxxx>
- Re: Rebalance after draining - why?
- From: denispolom@xxxxxxxxx
- Re: Rebalance after draining - why?
- From: Michael Thomas <wart@xxxxxxxxxxx>
- Rebalance after draining - why?
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- Pacific documentation
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- Removing the cephadm OSD deployment service when not needed any more
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- Documentation on activating an osd on a new node with cephadm
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- TLS certificates for services using cephadm
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- Container image versions
- From: 7ba335c6-fb20-4041-8c18-1b00efb7824c@xxxxxxxxxxx
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Bogdan Adrian Velica <vbogdan@xxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- ceph df reporting incorrect used space after pg reduction
- From: David Alfano <dalfano@xxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eugen Block <eblock@xxxxxx>
- 2 pools - 513 pgs 100.00% pgs unknown - working cluster
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: cannot assign requested address
- From: Redouane Kachach Elhichou <rkachach@xxxxxxxxxx>
- cannot assign requested address
- From: Dmitriy Trubov <DmitriyT@xxxxxxxxxxxxxx>
- Re: Replacing OSD with DB on shared NVMe
- From: Edward R Huyer <erhvks@xxxxxxx>
- Re: Replacing OSD with DB on shared NVMe
- From: David Orman <ormandj@xxxxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- Replacing OSD with DB on shared NVMe
- From: Edward R Huyer <erhvks@xxxxxxx>
- Re: Error deploying iscsi service through cephadm
- From: Teoman Onay <tonay@xxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: Neha Ojha <nojha@xxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Tim Olow <tim@xxxxxxxx>
- Re: Error deploying iscsi service through cephadm
- From: Heiner Hardt <hhardt1912@xxxxxxxxx>
- Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?
- From: Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxxx>
- OSDs won't boot after host restart
- From: Andrew Cowan <awc34@xxxxxxxxxxx>
- Ceph Leadership Team Meeting
- From: Ernesto Puerta <epuertat@xxxxxxxxxx>
- Re: rbd command hangs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: v17.2.0 Quincy released
- From: Thomas Roth <t.roth@xxxxxx>
- Re: cephadm error mgr not available and ERROR: Failed to add host
- From: Eugen Block <eblock@xxxxxx>
- Re: rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- Re: rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: Laura Flores <lflores@xxxxxxxxxx>
- Re: Ceph Repo Branch Rename - May 24
- From: David Galloway <dgallowa@xxxxxxxxxx>
- cephadm error mgr not available and ERROR: Failed to add host
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: rbd command hangs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- Re: Connecting to multiple filesystems from kubernetes
- From: "Fox, Kevin M" <Kevin.Fox@xxxxxxxx>
- Re: rbd command hangs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- Upgrade paths beyond octopus on Centos7
- From: Gary Molenkamp <molenkam@xxxxxx>
- Re: rbd command hangs
- From: Ilya Dryomov <idryomov@xxxxxxxxx>
- Re: rbd command hangs
- From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
- rbd command hangs
- From: "Sopena Ballesteros Manuel" <manuel.sopena@xxxxxxx>
- RGW error s3 api
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- RGW error s3 api
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: HDD disk for RGW and CACHE tier for giving better performance
- From: Boris <bb@xxxxxxxxx>
- HDD disk for RGW and CACHE tier for giving better performance
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- disaster in many of osd disk
- From: farhad kh <farhad.khedriyan@xxxxxxxxx>
- Re: rbd mirroring - journal growing and snapshot high io load
- From: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@xxxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: Dieter Roels <dieter.roels@xxxxxx>
- Connecting to multiple filesystems from kubernetes
- From: Sigurd Kristian Brinch <sigurd.k.brinch@xxxxxx>
- Usage after upgrade to Mimic
- From: Herbert Alexander Faleiros <herbert@xxxxxxxxxxx>
- Re: rbd mirroring - journal growing and snapshot high io load
- From: "ronny.lippold" <ceph@xxxxxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- orphaned journal_data objects on pool after disabling rbd mirror
- From: Denis Polom <denispolom@xxxxxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: Frank Schilder <frans@xxxxxx>
- Re: Dashboard: SSL error in the Object gateway menu only
- From: E Taka <0etaka0@xxxxxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- Re: Dashboard: SSL error in the Object gateway menu only
- From: Eugen Block <eblock@xxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
- Dashboard: SSL error in the Object gateway menu only
- From: E Taka <0etaka0@xxxxxxxxx>
- Re: 3-node Ceph with DAS storage and multipath
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- 3-node Ceph with DAS storage and multipath
- From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx>
- Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: Denis Polom <denispolom@xxxxxxxxx>
- Re: prometheus retention
- From: Eugen Block <eblock@xxxxxx>
- Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: denispolom@xxxxxxxxx
- Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: Dan van der Ster <dvanders@xxxxxxxxx>
- Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: denispolom@xxxxxxxxx
- Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
- Re: [ext] Re: Rename / change host names set with `ceph orch host add`
- From: Adam King <adking@xxxxxxxxxx>
- Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients
- From: Denis Polom <denispolom@xxxxxxxxx>
- Re: [ext] Re: Rename / change host names set with `ceph orch host add`
- From: "Kuhring, Mathias" <mathias.kuhring@xxxxxxxxxxxxxx>
- Re: Ceph RBD pool copy?
- From: Janne Johansson <icepic.dz@xxxxxxxxx>
- rgw crash when use swift api
- From: <zhou-jielei@xxxxxx>
- Error deploying iscsi service through cephadm
- From: Heiner Hardt <hhardt1912@xxxxxxxxx>
- Re: Ceph RBD pool copy?
- From: Eugen Block <eblock@xxxxxx>
- Ceph Repo Branch Rename - May 24
- From: David Galloway <dgallowa@xxxxxxxxxx>
- Ceph RBD pool copy?
- From: Pardhiv Karri <meher4india@xxxxxxxxx>
- Re: Rename / change host names set with `ceph orch host add`
- From: Adam King <adking@xxxxxxxxxx>