CEPH Filesystem Users
- Re: half performance with keyvalue backend in 0.87
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Is this situation about data loss?
- From: Cheng Wei-Chung <freeze.vicente.cheng@xxxxxxxxx>
- Re: Is this situation about data loss?
- From: Cheng Wei-Chung <freeze.vicente.cheng@xxxxxxxxx>
- Re: Re: half performance with keyvalue backend in 0.87
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Re: half performance with keyvalue backend in 0.87
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Re: half performance with keyvalue backend in 0.87
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Re: half performance with keyvalue backend in 0.87
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Re: half performance with keyvalue backend in 0.87
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: half performance with keyvalue backend in 0.87
- From: 廖建锋 <Derek@xxxxxxxxx>
- half performance with keyvalue backend in 0.87
- From: 廖建锋 <Derek@xxxxxxxxx>
- Question about logging
- From: Robert LeBlanc <robert@xxxxxxxxxxxxx>
- Re: Negative number of objects degraded
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Re: Negative number of objects degraded
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- CDS Hammer Videos Posted
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- issue with activating an osd in ceph with a newly created partition
- From: Subhadip Bagui <i.bagui@xxxxxxxxx>
- OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- Re: Adding a monitor to
- From: Patrick Darley <patrick.darley@xxxxxxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: "Michael J. Kidd" <michael.kidd@xxxxxxxxxxx>
- Re: Attention CephFS users: issue with giant FUSE client vs. firefly MDS
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Attention CephFS users: issue with giant FUSE client vs. firefly MDS
- From: Luis Periquito <periquito@xxxxxxxxx>
- Re: osd 100% cpu, very slow writes
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Attention CephFS users: issue with giant FUSE client vs. firefly MDS
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Negative number of objects degraded
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- questions about rgw, multiple zones
- From: yuelongguang <fastsync@xxxxxxx>
- Re: Negative number of objects degraded
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- osd 100% cpu, very slow writes
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: Redundant Power Supplies
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- ceph-deploy and cache tier ssds
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Ceph Giant not fixed ReplicatedPG::NotTrimming?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Redundant Power Supplies
- From: "O'Reilly, Dan" <Daniel.OReilly@xxxxxxxx>
- Re: Redundant Power Supplies
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Is this situation about data loss?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Negative number of objects degraded
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: the state of cephfs in giant
- From: John Spray <john.spray@xxxxxxxxxx>
- Redundant Power Supplies
- From: Nick Fisk <nick@xxxxxxxxxx>
- Admin Node Best Practices
- From: Massimiliano Cuttini <max@xxxxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: "Michael J. Kidd" <michael.kidd@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: SCHAER Frederic <frederic.schaer@xxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Hunter Nield <hunter@xxxxxxxx>
- Negative number of objects degraded
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Re: Crash with rados cppool and snapshots
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Delete pools with low priority?
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: the state of cephfs in giant
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Is this situation about data loss?
- From: Cheng Wei-Chung <freeze.vicente.cheng@xxxxxxxxx>
- Re: Delete pools with low priority?
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Delete pools with low priority?
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Hunter Nield <hunter@xxxxxxxx>
- Re: Crash with rados cppool and snapshots
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: where to download 0.87 debs?
- From: JF Le Fillatre <jean-francois.lefillatre@xxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: where to download 0.87 debs?
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- where to download 0.87 debs?
- From: Jon Kåre Hellan <jon.kare.hellan@xxxxxxxxxx>
- Re: where to download 0.87 RPMS?
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: where to download 0.87 RPMS?
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- Ceph Giant not fixed ReplicatedPG::NotTrimming?
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: v0.87 Giant released
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: Adding a monitor to
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: use ZFS for OSDs
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Micro Ceph and OpenStack Design Summit November 3rd, 2014 11:40am
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Clone field from rados df command
- From: Mallikarjun Biradar <mallikarjuna.biradar@xxxxxxxxx>
- survey: Ceph integration into auth security frameworks (AD/kerberos/etc.)
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: v0.87 Giant released
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Anyone deploying Ceph on Docker?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Anyone deploying Ceph on Docker?
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: radosgw issues
- From: yuelongguang <fastsync@xxxxxxx>
- Re: Crash with rados cppool and snapshots
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: where to download 0.87 RPMS?
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Re: Delete pools with low priority?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: radosgw issues
- From: yuelongguang <fastsync@xxxxxxx>
- Re: journal on entire ssd device
- From: Christian Balzer <chibi@xxxxxxx>
- where to download 0.87 RPMS?
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: v0.87 Giant released
- From: Nigel Williams <nigel.williams@xxxxxxxxxxx>
- Re: v0.87 Giant released
- From: Christian Balzer <chibi@xxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: Adding a monitor to
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: v0.87 Giant released
- From: Nigel Williams <nigel.williams@xxxxxxxxxxx>
- Re: Rbd cache severely inhibiting read performance (Giant)
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: When will Ceph 0.72.3 be released?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Rbd cache severely inhibiting read performance (Giant)
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- v0.87 Giant released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: journal on entire ssd device
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- journal on entire ssd device
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- how to check real rados read speed
- From: VELARTIS Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>
- Re: use ZFS for OSDs
- From: Michal Kozanecki <mkozanecki@xxxxxxxxxx>
- Re: use ZFS for OSDs
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- Re: OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- Re: use ZFS for OSDs
- From: Michal Kozanecki <mkozanecki@xxxxxxxxxx>
- Micro Ceph and OpenStack Design Summit November 3rd, 2014 11:40am
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: "Michael J. Kidd" <michael.kidd@xxxxxxxxxxx>
- ceph status 104 active+degraded+remapped 88 creating+incomplete
- From: Thomas Alrin <alrin@xxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- CDS Hammer (Day 1) Videos Posted
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- ERROR: error converting store /var/lib/ceph/osd/ceph-176: (28) No space left on device
- From: David Z <david.z1003@xxxxxxxxx>
- Re: HTTP Get returns 404 Not Found for Swift API
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: HTTP Get returns 404 Not Found for Swift API
- From: Pedro Miranda <potter737@xxxxxxxxx>
- ceph-announce list
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: use ZFS for OSDs
- From: Michal Kozanecki <mkozanecki@xxxxxxxxxx>
- Re: Delete pools with low priority?
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: use ZFS for OSDs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD process exhausting server memory
- From: "Michael J. Kidd" <michael.kidd@xxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Crash with rados cppool and snapshots
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- failed to add another rgw
- From: yuelongguang <fastsync@xxxxxxx>
- OSD process exhausting server memory
- From: Lukáš Kubín <lukas.kubin@xxxxxxxxx>
- Re: Object Storage Statistics
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- RHEL6.6 upgrade (selinux-policy-targeted) triggers slow requests
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- use ZFS for OSDs
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- Re: Fwd: Error zapping the disk
- From: "Sakhi Hadebe" <SHadebe@xxxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Fwd: Error zapping the disk
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- Fwd: Error zapping the disk
- From: "Sakhi Hadebe" <shadebe@xxxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Ceph MeetUp Berlin: Performance
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Use 2 osds to create a cluster but health check displays "active+degraded"
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- When will Ceph 0.72.3 be released?
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Christian Balzer <chibi@xxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Setting ceph username for rbd fuse
- From: Xavier Trilla <xavier.trilla@xxxxxxxxxxxxxxxx>
- HTTP Get returns 404 Not Found for Swift API
- From: Pedro Miranda <potter737@xxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: error when executing ceph osd pool set foo-hot cache-mode writeback
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: Christopher Spearman <neromaverick@xxxxxxxxx>
- Re: Adding a monitor to
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Gregory Farnum <gfarnum@xxxxxxxxxx>
- Re: can we deploy multi-rgw on one ceph cluster?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: Mike Christie <mchristi@xxxxxxxxxx>
- Re: Adding a monitor to
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: error when executing ceph osd pool set foo-hot cache-mode writeback
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Adding a monitor to
- From: Patrick Darley <patrick.darley@xxxxxxxxxxxxxxx>
- Poor RBD performance as LIO iSCSI target
- From: Christopher Spearman <neromaverick@xxxxxxxxx>
- Re: can we deploy multi-rgw on one ceph cluster?
- From: yuelongguang <fastsync@xxxxxxx>
- Ceph tries to install to root on OSDs
- From: Support - Avantek <support@xxxxxxxxxxxxx>
- Re: Scrub process, IO performance
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Scrub process, IO performance
- From: Mateusz Skała <mateusz.skala@xxxxxxxxxxx>
- error when executing ceph osd pool set foo-hot cache-mode writeback
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Mariusz Gronczewski <mariusz.gronczewski@xxxxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Mariusz Gronczewski <mariusz.gronczewski@xxxxxxxxxxxx>
- Re: Filestore throttling
- From: GuangYang <yguang11@xxxxxxxxxxx>
- Re: Scrub process, IO performance
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Christian Balzer <chibi@xxxxxxx>
- Scrub process, IO performance
- From: Mateusz Skała <mateusz.skala@xxxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Robert van Leeuwen <Robert.vanLeeuwen@xxxxxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Christian Balzer <chibi@xxxxxxx>
- Re: All SSD storage and journals
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: "Chen, Xiaoxi" <xiaoxi.chen@xxxxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: Christopher Spearman <neromaverick@xxxxxxxxx>
- Re: Poor RBD performance as LIO iSCSI target
- From: Nick Fisk <Nick.Fisk@xxxxxxxxxxxxx>
- Poor RBD performance as LIO iSCSI target
- From: Christopher Spearman <neromaverick@xxxxxxxxx>
- Re: journals relabeled by OS, symlinks broken
- From: Steve Anthony <sma310@xxxxxxxxxx>
- Re: journals relabeled by OS, symlinks broken
- From: Scott Laird <scott@xxxxxxxxxxx>
- Re: journals relabeled by OS, symlinks broken
- From: Steve Anthony <sma310@xxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: can we deploy multi-rgw on one ceph cluster?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: get/put files with radosgw once the MDS crashes
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: journals relabeled by OS, symlinks broken
- From: Mariusz Gronczewski <mariusz.gronczewski@xxxxxxxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: RBD getting unmapped every time the server reboots
- From: Laurent Barbe <laurent@xxxxxxxxxxx>
- What is the maximum theoretical and practical capacity of a Ceph cluster?
- From: Mike <mike.almateia@xxxxxxxxx>
- Re: Change port of Mon
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Change port of Mon
- From: Wido den Hollander <wido@xxxxxxxx>
- Change port of Mon
- From: Daniel Takatori Ohara <dtohara@xxxxxxxxxxxxx>
- Re: All SSD storage and journals
- From: Sebastien Han <sebastien.han@xxxxxxxxxxxx>
- Re: Ceph and hadoop
- From: John Spray <john.spray@xxxxxxxxxx>
- [ceph 0.72.2] PGs are in incomplete status after some OSDs are out of the cluster
- From: "Meng, Chen" <chen.meng@xxxxxxxxx>
- Re: get/put files with radosgw once the MDS crashes
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: can we deploy multi-rgw on one ceph cluster?
- From: yuelongguang <fastsync@xxxxxxx>
- Re: RBD getting unmapped every time the server reboots
- From: Vickey Singh <vickey.singh22693@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: RBD getting unmapped every time the server reboots
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- RBD getting unmapped every time the server reboots
- From: Vickey Singh <vickey.singh22693@xxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: journals relabeled by OS, symlinks broken
- From: Scott Laird <scott@xxxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- RadosGW does not create all pools
- From: Florent B <florent@xxxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- journals relabeled by OS, symlinks broken
- From: Steve Anthony <sma310@xxxxxxxxxx>
- Re: librados crash in nova-compute
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Re: librados crash in nova-compute
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Re: Fio rbd stalls during 4M reads
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Can't start osd - one osd is always down.
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: can we deploy multi-rgw on one ceph cluster?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: get/put files with radosgw once the MDS crashes
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- How to recover Incomplete PGs from "lost time" symptom?
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- Ceph and hadoop
- From: Matan Safriel <dev.matan@xxxxxxxxx>
- Re: Object Storage Statistics
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: librados crash in nova-compute
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: Fio rbd stalls during 4M reads
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Fio rbd stalls during 4M reads
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: RGW Federated Gateways and Apache 2.4 problems
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Lost monitors in a multi mon cluster
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Lost monitors in a multi mon cluster
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: RGW Federated Gateways and Apache 2.4 problems
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Sergey Nazarov <natarajaya@xxxxxxxxx>
- Lost monitors in a multi mon cluster
- From: HURTEVENT VINCENT <vincent.hurtevent@xxxxxxxxxxxxx>
- librados crash in nova-compute
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Object Storage Statistics
- From: Dane Elwell <dane.elwell@xxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: Continuous OSD crash with kv backend (firefly)
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Continuous OSD crash with kv backend (firefly)
- From: Andrey Korolyov <andrey@xxxxxxx>
- get/put files with radosgw once the MDS crashes
- From: 廖建锋 <Derek@xxxxxxxxx>
- can we deploy multi-rgw on one ceph cluster?
- From: yuelongguang <fastsync@xxxxxxx>
- All SSD storage and journals
- From: Christian Balzer <chibi@xxxxxxx>
- beware of jumbo frames
- From: Nigel Williams <nigel.d.williams@xxxxxxxxx>
- Can't start osd - one osd is always down.
- From: Ta Ba Tuan <tuantb@xxxxxxxxxx>
- Re: Filestore throttling
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Filestore throttling
- From: GuangYang <yguang11@xxxxxxxxxxx>
- can we deploy multi-rgw on one ceph cluster?
- From: yuelongguang <fastsync@xxxxxxx>
- Re: Fio rbd stalls during 4M reads
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Fio rbd stalls during 4M reads
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- RGW Federated Gateways and Apache 2.4 problems
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: mon_osd_down_out_subtree_limit stuck at "rack"
- From: "Ricardo J. Barberis" <ricardo@xxxxxxxxxxxxx>
- Re: ls hangs
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: recovery process stops
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- ls hangs
- From: Daniel Takatori Ohara <dtohara@xxxxxxxxxxxxx>
- Re: Filestore throttling
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- cannot start radosgw
- From: Geoff Galitz <ggalitz@xxxxxxxxxxxxxxxx>
- Re: Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: osd_disk_thread_ioprio_class/_priority ignored?
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: osd_disk_thread_ioprio_class/_priority ignored?
- From: Paweł Sadowski <ceph@xxxxxxxxx>
- Re: Small Cluster Re-IP process
- From: Rein Remmel <rein.remmel@xxxxxxxx>
- Re: Small Cluster Re-IP process
- From: Christian Kauhaus <kc@xxxxxxxxxx>
- Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- Fwd: Re: Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
- osd_disk_thread_ioprio_class/_priority ignored?
- From: Paweł Sadowski <ceph@xxxxxxxxx>
- non-default pool after restart
- From: farpost <kriulkin@xxxxx>
- Re: Filestore throttling
- From: GuangYang <yguang11@xxxxxxxxxxx>
- Re: Filestore throttling
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Filestore throttling
- From: GuangYang <yguang11@xxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: mon_osd_down_out_subtree_limit stuck at "rack"
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- Delete pools with low priority?
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Weight of new OSD
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Ceph RPM spec removes /etc/ceph
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Re: mon_osd_down_out_subtree_limit stuck at "rack"
- From: "Ricardo J. Barberis" <ricardo@xxxxxxxxxxxxx>
- Ceph RPM spec removes /etc/ceph
- From: Dmitry Borodaenko <dborodaenko@xxxxxxxxxxxx>
- Fwd: Latest firefly: osd not joining cluster after re-creation
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Small Cluster Re-IP process
- From: Dan Geist <dan@xxxxxxxxxx>
- Re: ceph-deploy problem on centos6
- From: "Sanders, Bill" <Bill.Sanders@xxxxxxxxxxxx>
- Re: Troubleshooting Incomplete PGs
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: pgs stuck in 'incomplete' state, blocked ops, query command hangs
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Ceph Developer Summit - Oct 28-29
- From: Patrick McGarry <patrick@xxxxxxxxxxx>
- Puppet module for CephFS
- From: JIten Shah <jshah2005@xxxxxx>
- os-prober breaks ceph cluster on osd write assert failure
- From: Laurent GUERBY <laurent@xxxxxxxxxx>
- Re: RADOS pool snaps and RBD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Few questions.
- From: Leszek Master <keksior@xxxxxxxxx>
- Weight of new OSD
- From: Paweł Sadowski <ceph@xxxxxxxxx>
- Error zapping the disk
- From: "Sakhi Hadebe" <shadebe@xxxxxxxxxx>
- mon_osd_down_out_subtree_limit stuck at "rack"
- From: Christian Balzer <chibi@xxxxxxx>
- Troubleshooting Incomplete PGs
- From: Chris Kitzmiller <ckitzmiller@xxxxxxxxxxxxx>
- Re: Few questions.
- From: Christian Balzer <chibi@xxxxxxx>
- Re: RADOS pool snaps and RBD
- From: Xavier Trilla <xavier.trilla@xxxxxxxxxxxxxxxx>
- Re: recovery process stops
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Sergey Nazarov <natarajaya@xxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Sergey Nazarov <natarajaya@xxxxxxxxx>
- Re: recovery process stops
- From: Robert LeBlanc <robert@xxxxxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: Question/idea about performance problems with a few overloaded OSDs
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Question/idea about performance problems with a few overloaded OSDs
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Sergey Nazarov <natarajaya@xxxxxxxxx>
- Re: Extremely slow small-file rewrite performance
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Question/idea about performance problems with a few overloaded OSDs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Extremely slow small-file rewrite performance
- From: Sergey Nazarov <natarajaya@xxxxxxxxx>
- OSDs will not come up
- From: tsuraan <tsuraan@xxxxxxxxx>
- Question/idea about performance problems with a few overloaded OSDs
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: pgs stuck in 'incomplete' state, blocked ops, query command hangs
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: pgs stuck in 'incomplete' state, blocked ops, query command hangs
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: Few questions.
- From: Robert LeBlanc <robert@xxxxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Same rbd mount from multiple servers
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: recovery process stops
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Chad Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: Giant release schedule
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- pgs stuck in 'incomplete' state, blocked ops, query command hangs
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: Giant release schedule
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: real beginner question
- From: Dan Geist <dan@xxxxxxxxxx>
- Re: real beginner question
- From: Christian Balzer <chibi@xxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: real beginner question
- From: Ranju Upadhyay <Ranju.Upadhyay@xxxxxxx>
- Giant release schedule
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- ceph-deploy problem on centos6
- From: Luca Mazzaferro <luca.mazzaferro@xxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Nicheal <zay11022@xxxxxxxxx>
- About conf parameter mon_initial_members
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: About conf parameter mon_initial_members
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: RADOS pool snaps and RBD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph counters
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Use case: one-way RADOS "replication" between two clusters by time period
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Ceph counters
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: RADOS pool snaps and RBD
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: Ceph counters
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Use case: one-way RADOS "replication" between two clusters by time period
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: radosGW balancer best practices
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: RADOS pool snaps and RBD
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: how to resolve: start mon assert == 0
- From: minchen <minchen@xxxxxxxxxxxxxxx>
- Re: OSD (and probably other settings) not being picked up outside of the [global] section
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Use case: one-way RADOS "replication" between two clusters by time period
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Ceph RBD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: urgent - object unfound
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: OSD (and probably other settings) not being picked up outside of the [global] section
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: Ceph RBD
- From: Fred Yang <frederic.yang@xxxxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: 池信泽 <xmdxcxz@xxxxxxxxx>
- Re: recovery process stops
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: Ceph RBD
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: recovery process stops
- From: Leszek Master <keksior@xxxxxxxxx>
- RADOS pool snaps and RBD
- From: Xavier Trilla <xavier.trilla@xxxxxxxxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: recovery process stops
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: recovery process stops
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: why doesn't the erasure code pool support random write?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: real beginner question
- From: Dan Geist <dan@xxxxxxxxxx>
- why doesn't the erasure code pool support random write?
- From: 池信泽 <xmdxcxz@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: recovery process stops
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: recovery process stops
- From: Leszek Master <keksior@xxxxxxxxx>
- Re: How to calculate file size when mounting a block device from an rbd image
- From: Benedikt Fraunhofer <given.to.lists.ceph-users.ceph.com.toasta.001@xxxxxxxxxx>
- recovery process stops
- From: Harald Rößler <Harald.Roessler@xxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: real beginner question
- From: Ashish Chandra <mail.ashishchandra@xxxxxxxxx>
- Re: Reweight a host
- From: Lei Dong <leidong@xxxxxxxxxxxxx>
- Re: Reweight a host
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- slow requests - what is causing them?
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: How to calculate file size when mounting a block device from an rbd image
- From: Wido den Hollander <wido@xxxxxxxx>
- real beginner question
- From: Ranju Upadhyay <Ranju.Upadhyay@xxxxxxx>
- How to calculate file size when mounting a block device from an rbd image
- From: Vickie CH <mika.leaf666@xxxxxxxxx>
- Few questions.
- From: Leszek Master <keksior@xxxxxxxxx>
- Re: how to resolve: start mon assert == 0
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: how to resolve: start mon assert == 0
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: Same rbd mount from multiple servers
- From: Mihály Árva-Tóth <mihaly.arva-toth@xxxxxxxxxxxxxxxxxxxxxx>
- Re: Same rbd mount from multiple servers
- From: Sean Redmond <Sean.Redmond@xxxxxxxxxxxx>
- Same rbd mount from multiple servers
- From: Mihály Árva-Tóth <mihaly.arva-toth@xxxxxxxxxxxxxxxxxxxxxx>
- how to resolve: start mon assert == 0
- From: minchen <minchen@xxxxxxxxxxxxxxx>
- how to resolve: start mon assert == 0
- From: minchen <minchen@xxxxxxxxxxxxxxx>
- Re: Reweight a host
- From: Lei Dong <leidong@xxxxxxxxxxxxx>
- Reweight a host
- From: Erik Logtenberg <erik@xxxxxxxxxxxxx>
- Documentation Problem
- From: Mustafa Muhammad <mustafaa.alhamdaani@xxxxxxxxx>
- Re: Error deploying Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph storage pool definition with KVM/libvirt
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: OSD (and probably other settings) not being picked up outside of the [global] section
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Debugging RadosGW
- From: Georg Höllrigl <georg.hoellrigl@xxxxxxxxxx>
- question about erasure coded pool and rados
- From: yuelongguang <fastsync@xxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Icehouse & Ceph -- live migration fails?
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Error deploying Ceph
- From: Support - Avantek <support@xxxxxxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: monitoring tool for monitoring end-users
- From: pragya jain <prag_2648@xxxxxxxxxxx>
- [radosgw] object copy implementation
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- OSD (and probably other settings) not being picked up outside of the [global] section
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: "Shu, Xinxin" <xinxin.shu@xxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Sebastien Han <sebastien.han@xxxxxxxxxxxx>
- Re: Ceph storage pool definition with KVM/libvirt
- From: Dan Geist <dan@xxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Chad Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Error deploying Ceph
- From: Ian Colle <icolle@xxxxxxxxxx>
- slow requests - what is causing them?
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Performance doesn't scale well on a full ssd cluster.
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Performance doesn't scale well on a full ssd cluster.
- From: Mark Wu <wudx05@xxxxxxxxx>
- Re: Error deploying Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: rados gateway pools for users
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Usage of journal on balance operations
- From: Emmanuel Lacour <elacour@xxxxxxxxxxxxxxx>
- Re: Error deploying Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: mkcephfs error
- From: Wido den Hollander <wido@xxxxxxxx>
- Error deploying Ceph
- From: Support - Avantek <support@xxxxxxxxxxxxx>
- mkcephfs error
- From: "Sakhi Hadebe" <shadebe@xxxxxxxxxx>
- Re: pool size/min_size has no effect on an erasure-coded pool, right?
- From: yuelongguang <fastsync@xxxxxxx>
- Re: the state of cephfs in giant
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: ssh; cannot resolve hostname errors
- From: Support - Avantek <support@xxxxxxxxxxxxx>
- Re: ssh; cannot resolve hostname errors
- From: Marco Garcês <marco@xxxxxxxxx>
- Re: Usage of journal on balance operations
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph storage pool definition with KVM/libvirt
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Usage of journal on balance operations
- From: Emmanuel Lacour <elacour@xxxxxxxxxxxxxxx>
- Re: Ceph storage pool definition with KVM/libvirt
- From: Dan Geist <dan@xxxxxxxxxx>
- Re: urgent - object unfound
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Ceph storage pool definition with KVM/libvirt
- From: "Dan Ryder (daryder)" <daryder@xxxxxxxxx>
- urgent - object unfound
- From: Ta Ba Tuan <tuantaba@xxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>
- pool size/min_size has no effect on an erasure-coded pool, right?
- From: yuelongguang <fastsync@xxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Use case: one-way RADOS "replication" between two clusters by time period
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- rados gateway pools for users
- From: Shashank Puntamkar <spuntamkar@xxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: ssh; cannot resolve hostname errors
- From: JIten Shah <jshah2005@xxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Dmitry Borodaenko <dborodaenko@xxxxxxxxxxxx>
- Re: Radosgw refusing to even attempt to use keystone auth
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs
- From: Mike Dawson <mike.dawson@xxxxxxxxxxxx>
- Ceph storage pool definition with KVM/libvirt
- From: Dan Geist <dan@xxxxxxxxxx>
- converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>
- (no subject)
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Daniel Schwager <Daniel.Schwager@xxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Chad Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Chad Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Dan van der Ster <daniel.vanderster@xxxxxxx>
- Re: the state of cephfs in giant
- From: Alphe Salas <asalas@xxxxxxxxx>
- Re: Firefly maintenance release schedule
- From: Dmitry Borodaenko <dborodaenko@xxxxxxxxxxxx>
- Re: CRUSH depends on host + OSD?
- From: Mariusz Gronczewski <mariusz.gronczewski@xxxxxxxxxxxx>
- CRUSH depends on host + OSD?
- From: Chad Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Replacing a disk: Best practices?
- From: Daniel Schwager <Daniel.Schwager@xxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Replacing a disk: Best practices?
- From: Bryan Wright <bkw1a@xxxxxxxxxxxx>
- Re: ssh; cannot resolve hostname errors
- From: Wido den Hollander <wido@xxxxxxxx>
- ssh; cannot resolve hostname errors
- From: Support - Avantek <support@xxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: new installation
- From: Roman <intrasky@xxxxxxxxx>
- Re: new installation
- From: Pascal Morillon <pascal.morillon@xxxxxxxx>
- Re: new installation
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: new installation
- From: Roman <intrasky@xxxxxxxxx>
- Re: new installation
- From: Pascal Morillon <pascal.morillon@xxxxxxxx>
- new installation
- From: Roman <intrasky@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Ceph installation error
- From: "Sakhi Hadebe" <shadebe@xxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Stijn De Weirdt <stijn.deweirdt@xxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Alphe Salas <asalas@xxxxxxxxx>
- v0.80.7 Firefly released
- From: Sage Weil <sage@xxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Handling of network failures in the cluster network
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- radosGW balancer best practices
- From: Simone Spinelli <simone.spinelli@xxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Misconfigured caps on client.admin key, any way to recover from EACCES denied?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Amon Ott <a.ott@xxxxxxxxxxxx>
- Re: Icehouse & Ceph -- live migration fails?
- From: samuel <samu60@xxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Thomas Lemarchand <thomas.lemarchand@xxxxxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Misconfigured caps on client.admin key, any way to recover from EACCES denied?
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Ceph OSD very slow startup
- From: Lionel Bouton <lionel-subscription@xxxxxxxxxxx>
- Ceph OSD very slow startup
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: Misconfigured caps on client.admin key, any way to recover from EACCES denied?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Misconfigured caps on client.admin key, any way to recover from EACCES denied?
- From: Wido den Hollander <wido@xxxxxxxx>
- Misconfigured caps on client.admin key, any way to recover from EACCES denied?
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Ceph counters
- From: Jakes John <jakesjohn12345@xxxxxxxxx>
- Re: the state of cephfs in giant
- From: Jeff Bailey <bailey@xxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Handling of network failures in the cluster network
- From: Martin Mailand <martin@xxxxxxxxxxxx>
- Re: Handling of network failures in the cluster network
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Eric Eastman <eric0e@xxxxxxx>
- Handling of network failures in the cluster network
- From: Martin Mailand <martin@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: the state of cephfs in giant
- From: Wido den Hollander <wido@xxxxxxxx>
- the state of cephfs in giant
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: "Jonathan D. Proulx" <jon@xxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Basic Ceph questions
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: Basic Ceph questions
- From: Marcus White <roastedseaweed.k@xxxxxxxxx>
- Re: Basic Ceph questions
- From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
- Re: fresh cluster - can't create keys?
- From: Marc <mail@xxxxxxxxxx>
- fresh cluster - can't create keys?
- From: Marc <mail@xxxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Sebastien Han <sebastien.han@xxxxxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: Ceph packages being blocked by epel packages on Centos6
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Ceph packages being blocked by epel packages on Centos6
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Ceph packages being blocked by epel packages on Centos6
- From: Marco Garcês <marco@xxxxxxxxx>
- Using ceph-deploy to configure a public AND a cluster network
- From: Harald Hartlieb <Harald.Hartlieb@xxxxxxxxxxx>
- Ceph packages being blocked by epel packages on Centos6
- From: 10 minus <t10tennn@xxxxxxxxx>
- Re: python ceph-deploy problem
- From: Roman <intrasky@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Ceph counters
- From: Jakes John <jakesjohn12345@xxxxxxxxx>
- Re: Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Giant: only 1 default pool (rbd) created, no data or metadata
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Giant: only 1 default pool (rbd) created, no data or metadata
- From: Wido den Hollander <wido@xxxxxxxx>
- Giant: only 1 default pool (rbd) created, no data or metadata
- From: Anthony Alba <ascanio.alba7@xxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph tell osd.6 version: hang
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Basic Ceph questions
- From: Marcus White <roastedseaweed.k@xxxxxxxxx>
- ceph tell osd.6 version: hang
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Pg splitting
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Help required for Ceph object gateway, multiple pools to multiple users
- From: Shashank Puntamkar <spuntamkar@xxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Pg splitting
- From: Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Re: scrub error with keyvalue backend
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- CephFS priorities (survey!)
- From: Sage Weil <sweil@xxxxxxxxxx>
- Micro Ceph summit during the OpenStack summit
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Firefly v0.80.6 issues 9696 and 9732
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph at "Universite de Lorraine"
- From: Serge van Ginderachter <serge@xxxxxxxxxxxxxxxxxx>
- Re: mds isn't working anymore after osds running full
- From: Jasper Siero <jasper.siero@xxxxxxxxxxxxxxxxx>
- Re: Basic Ceph questions
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- Re: Basic Ceph questions
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: Regarding Primary affinity configuration
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Re: max_bucket limit -- safe to disable?
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: ceph at "Universite de Lorraine"
- From: Lionel Bouton <lionel-subscription@xxxxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- Re: ceph at "Universite de Lorraine"
- From: Lionel Bouton <lionel+ceph@xxxxxxxxxxx>
- Re: ceph at "Universite de Lorraine"
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph at "Universite de Lorraine"
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- ceph at "Universite de Lorraine"
- From: Stéphane DUGRAVOT <stephane.dugravot@xxxxxxxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: SCHAER Frederic <frederic.schaer@xxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: v0.86 released (Giant release candidate)
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: v0.86 released (Giant release candidate)
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: SCHAER Frederic <frederic.schaer@xxxxxx>
- Re: Re: scrub error with keyvalue backend
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: scrub error with keyvalue backend
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- scrub error with keyvalue backend
- From: 廖建锋 <Derek@xxxxxxxxx>
- Re: Basic Ceph questions
- From: Marcus White <roastedseaweed.k@xxxxxxxxx>
- Re: Blueprints
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Regarding Primary affinity configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Regarding Primary affinity configuration
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Blueprints
- From: Robert LeBlanc <robert@xxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Regarding Primary affinity configuration
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Openstack keystone with Radosgw
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- Re: Monitor segfaults when updating the crush map
- From: Stephen Jahl <stephenjahl@xxxxxxxxx>
- Re: Monitor segfaults when updating the crush map
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Re: Monitor segfaults when updating the crush map
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- [ANN] ceph-deploy 1.5.18 released
- From: Alfredo Deza <adeza@xxxxxxxxxx>
- Re: Monitor segfaults when updating the crush map
- From: Stephen Jahl <stephenjahl@xxxxxxxxx>
- Re: Monitor segfaults when updating the crush map
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Regarding Primary affinity configuration
- From: "Johnu George (johnugeo)" <johnugeo@xxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Monitor segfaults when updating the crush map
- From: Stephen Jahl <stephenjahl@xxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: M Ranga Swami Reddy <swamireddy@xxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: Rados Gateway and Swift create containers/buckets that cannot be opened
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Monitor segfaults when updating the crush map
- From: Stephen Jahl <stephenjahl@xxxxxxxxx>
- Re: accept: got bad authorizer
- From: Nathan Stratton <nathan@xxxxxxxxxxxx>
- Re: Ceph RBD map debug: error -22 on auth protocol 2 init
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: ceph-disk prepare: UUID=00000000-0000-0000-0000-000000000000
- From: Loic Dachary <loic@xxxxxxxxxxx>