CEPH Filesystem Users
- Privileges for read-only CephFS access?
- From: Oliver Schulz <oschulz@xxxxxxxxxx>
- Re: ceph-osd pegging CPU on giant, no snapshots involved this time
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: PG stuck degraded, undersized, unclean
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: ceph-osd pegging CPU on giant, no snapshots involved this time
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- metrics to monitor for performance bottlenecks?
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Re: PG stuck degraded, undersized, unclean
- From: Florian Haas <florian@xxxxxxxxxxx>
- FreeBSD on RBD (KVM)
- From: Logan Barfield <lbarfield@xxxxxxxxxxxxx>
- ceph-osd pegging CPU on giant, no snapshots involved this time
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: PG stuck degraded, undersized, unclean
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: PG stuck degraded, undersized, unclean
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Updating monmap
- From: LOPEZ Jean-Charles <jelopez@xxxxxxxxxx>
- 12 March - Ceph Day San Francisco
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- PG stuck degraded, undersized, unclean
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Updating monmap
- From: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>
- Re: ceph-giant installation error on centos 6.6
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: ceph-giant installation error on centos 6.6
- From: Wenxiao He <wenxiao@xxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Unexpectedly low number of concurrent backfills
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: federico@xxxxxxxxxxxxx
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Tyler Brekke <tbrekke@xxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: ceph-giant installation error on centos 6.6
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Unexpectedly low number of concurrent backfills
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Unexpectedly low number of concurrent backfills
- From: Florian Haas <florian@xxxxxxxxxxx>
- ceph-giant installation error on centos 6.6
- From: Wenxiao He <wenxiao@xxxxxxxxx>
- Re: Ceph Block Device
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Ceph Block Device
- From: "Garg, Pankaj" <Pankaj.Garg@xxxxxxxxxxxxxxxxxx>
- Re: Ceph Block Device
- From: Brad Hubbard <bhubbard@xxxxxxxxxx>
- Re: Happy New Chinese Year!
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Happy New Chinese Year!
- Ceph Block Device
- From: "Garg, Pankaj" <Pankaj.Garg@xxxxxxxxxxxxxxxxxx>
- Re: Unexpectedly low number of concurrent backfills
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Help needed
- From: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>
- Re: Help needed
- From: Alan Johnson <alanj@xxxxxxxxxxxxxx>
- Re: Help needed
- From: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Federico Lucifredi <flucifredi@xxxxxxx>
- Re: Help needed
- From: "Weeks, Jacob (RIS-BCT)" <Jacob.Weeks@xxxxxxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Stephen Hindle <shindle@xxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Karan Singh <karan.singh@xxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Stephen Hindle <shindle@xxxxxxxx>
- Help needed
- From: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>
- Unexpectedly low number of concurrent backfills
- From: Florian Haas <florian@xxxxxxxxxxx>
- Re: CephFS and data locality?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- CephFS and data locality?
- From: Jake Kugel <jkugel@xxxxxxxxxx>
- Re: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- My PG is UP and Acting, yet it is unclean
- From: "Bahaa A. L." <bahaa@xxxxxxxxxxxx>
- Re: Dedicated disks for monitor and mds?
- From: John Spray <jspray@xxxxxxxxxx>
- Re: CentOS7 librbd1-devel problem.
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Mike <mike.almateia@xxxxxxxxx>
- Re: Power failure recovery woes
- From: Michal Kozanecki <mkozanecki@xxxxxxxxxx>
- Re: Power failure recovery woes
- From: Michal Kozanecki <mkozanecki@xxxxxxxxxx>
- Re: Power failure recovery woes
- From: Jeff <jeff@xxxxxxxxxxxxxxxxxxx>
- Re: Power failure recovery woes
- From: Jeff <jeff@xxxxxxxxxxxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Vivek Varghese Cherian <vivekcherian@xxxxxxxxx>
- My PG is UP and Acting, yet it is unclean
- From: B L <super.iterator@xxxxxxxxx>
- Re: "store is getting too big" on monitors
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: Dedicated disks for monitor and mds?
- From: Francois Lafont <flafdivers@xxxxxxx>
- Re: Dedicated disks for monitor and mds?
- From: Francois Lafont <flafdivers@xxxxxxx>
- Re: Power failure recovery woes
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Power failure recovery woes
- From: Jeff <jeff@xxxxxxxxxxxxxxxxxxx>
- CentOS7 librbd1-devel problem.
- From: Leszek Master <keksior@xxxxxxxxx>
- Re: Dedicated disks for monitor and mds?
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Concurrent access of the object via Rados API...
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Concurrent access of the object via Rados API...
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Christian Balzer <chibi@xxxxxxx>
- Re: OSD turned itself off
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: initially conf calamari to know about my Ceph cluster(s)
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- Re: OSD turned itself off
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: Calamari build in vagrants
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- Re: Installation failure
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Mykola Golub <to.my.trociny@xxxxxxxxx>
- Re: OSD turned itself off
- From: Greg Farnum <gfarnum@xxxxxxxxxx>
- Dedicated disks for monitor and mds?
- From: Francois Lafont <flafdivers@xxxxxxx>
- Re: Installation failure
- From: "HEWLETT, Paul (Paul)** CTR **" <paul.hewlett@xxxxxxxxxxxxxxxxxx>
- Re: Installation failure
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: Installation failure
- From: "HEWLETT, Paul (Paul)** CTR **" <paul.hewlett@xxxxxxxxxxxxxxxxxx>
- Re: Installation failure
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: does "ceph auth caps" support multiple pools?
- From: Wido den Hollander <wido@xxxxxxxx>
- does "ceph auth caps" support multiple pools?
- From: Mingfai <mingfai.ma@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Hannes Landeholm <hannes@xxxxxxxxxxxxxx>
- Re: "store is getting too big" on monitors
- From: Joao Eduardo Luis <joao@xxxxxxxxxx>
- "store is getting too big" on monitors
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Installation failure
- From: "HEWLETT, Paul (Paul)** CTR **" <paul.hewlett@xxxxxxxxxxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: arm cluster install
- From: Yann Dupont - Veille Techno <veilletechno-irts@xxxxxxxxxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Mykola Golub <to.my.trociny@xxxxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- initially conf calamari to know about my Ceph cluster(s)
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- arm cluster install
- From: hp cre <hpcre1@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Francois Lafont <flafdivers@xxxxxxx>
- Re: ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Mykola Golub <to.my.trociny@xxxxxxxxx>
- Re: Calamari build in vagrants
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Having problem to start Radosgw
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Scott Laird <scott@xxxxxxxxxxx>
- Re: CRUSHMAP for chassis balance
- From: Luke Kao <Luke.Kao@xxxxxxxxxxxxx>
- Having problem to start Radosgw
- From: B L <super.iterator@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Calamari build in vagrants
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: ceph Performance with SSD journal
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: OSD turned itself off
- From: Josef Johansson <josef@xxxxxxxxxxx>
- Re: Issus with device-mapper drive partition names.
- From: Stephen Hindle <shindle@xxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Random OSDs respawning continuously
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: CRUSHMAP for chassis balance
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Any suggestions on the best way to migrate / fix my cluster configuration
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Re: UGRENT: add mon failed and ceph monitor refreshlog crazily
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: UGRENT: add mon failed and ceph monitor refreshlog crazily
- From: "minchen" <minchen@xxxxxxxxxxxxxxx>
- Re: UGRENT: add mon failed and ceph monitor refresh log crazily
- From: Sage Weil <sweil@xxxxxxxxxx>
- Any suggestions on the best way to migrate / fix my cluster configuration
- From: Carl J Taylor <cjtaylor@xxxxxxxxx>
- Issus with device-mapper drive partition names.
- From: Tyler Bishop <tyler.bishop@xxxxxxxxxxxxxxxxx>
- CRUSHMAP for chassis balance
- From: Luke Kao <Luke.Kao@xxxxxxxxxxxxx>
- Re: Random OSDs respawning continuously
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: ceph mds zombie
- From: "981163874@xxxxxx" <981163874@xxxxxx>
- Re: ceph Performance with SSD journal
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Status of SAMBA VFS
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Re: Question about ceph exclusive object?
- From: Kim Vandry <vandry@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: David <david@xxxxxxxxxx>
- Question about ceph exclusive object?
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Frank Yu <flyxiaoyu@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Karan Singh <karan.singh@xxxxxx>
- Re: OSD slow requests causing disk aborts in KVM
- From: Krzysztof Nowicki <krzysztof.a.nowicki@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: certificate of `ceph.com' is not trusted!
- From: Dietmar Maurer <dietmar@xxxxxxxxxxx>
- Re: ceph Performance with SSD journal
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: OSD slow requests causing disk aborts in KVM
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: OSD slow requests causing disk aborts in KVM
- From: Krzysztof Nowicki <krzysztof.a.nowicki@xxxxxxxxx>
- Re: certificate of `ceph.com' is not trusted!
- From: Daniel Schwager <Daniel.Schwager@xxxxxxxx>
- certificate of `ceph.com' is not trusted!
- From: Dietmar Maurer <dietmar@xxxxxxxxxxx>
- Re: ceph mds zombie
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Frank Yu <flyxiaoyu@xxxxxxxxx>
- Re: re: Upgrade 0.80.5 to 0.80.8 --the VM's read requestbecome too slow
- From: 杨万元 <yangwanyuan8861@xxxxxxxxx>
- Re: ceph Performance with SSD journal
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- UGRENT: add mon failed and ceph monitor refresh log crazily
- From: minchen <minchen@xxxxxxxxxxxxxxx>
- UGRENT: ceph monitor refresh log crazily
- From: minchen <minchen@xxxxxxxxxxxxxxx>
- Re: wider rados namespace support?
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: mongodb on top of rbd volumes (through krbd) ?
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Calamari build in vagrants
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- Re: wider rados namespace support?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Can't add RadosGW keyring to the cluster
- From: B L <super.iterator@xxxxxxxxx>
- Re: CephFS removal.
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: CephFS removal.
- From: <warren.jeffs@xxxxxxxxxx>
- Re: CephFS removal.
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Calamari build in vagrants
- From: Steffen Winther <ceph.user@xxxxxxxxxx>
- Re: OSD slow requests causing disk aborts in KVM
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: OSD slow requests causing disk aborts in KVM
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: re: Upgrade 0.80.5 to 0.80.8 --the VM's read requestbecome too slow
- From: "killingwolf" <killingwolf@xxxxxx>
- ceph mds zombie
- From: "kenmasida" <981163874@xxxxxx>
- OSD slow requests causing disk aborts in KVM
- From: Krzysztof Nowicki <krzysztof.a.nowicki@xxxxxxxxx>
- Re: RGW put file question
- From: Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx>
- Internal performance counters in Ceph
- From: Alyona Kiselyova <akiselyova@xxxxxxxxxxxx>
- CephFS removal.
- From: <warren.jeffs@xxxxxxxxxx>
- Re: OSD capacity variance ?
- From: LOPEZ Jean-Charles <jelopez@xxxxxxxxxx>
- 400 Errors uploadig files
- From: Eduard Kormann <ekormann@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Mark Nelson <mnelson@xxxxxxxxxx>
- Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Karan Singh <karan.singh@xxxxxx>
- Random OSDs respawning continuously
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: re: Upgrade 0.80.5 to 0.80.8 --the VM's read requestbecome too slow
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Upgrade 0.80.5 to 0.80.8 --the VM's read request become too slow
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Cache Tier 1 vs. Journal
- From: Mike <mike.almateia@xxxxxxxxx>
- Re: re: Upgrade 0.80.5 to 0.80.8 --the VM's read requestbecome too slow
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- re: Upgrade 0.80.5 to 0.80.8 --the VM's read requestbecome too slow
- From: "killingwolf" <killingwolf@xxxxxx>
- Re: mongodb on top of rbd volumes (through krbd) ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: mongodb on top of rbd volumes (through krbd) ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: combined ceph roles
- From: André Gemünd <andre.gemuend@xxxxxxxxxxxxxxxxxx>
- mongodb on top of rbd volumes (through krbd) ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: ceph Performance with SSD journal
- From: Chris Hoy Poy <chris@xxxxxxxx>
- Upgrade 0.80.5 to 0.80.8 --the VM's read request become too slow
- From: 杨万元 <yangwanyuan8861@xxxxxxxxx>
- ceph Performance with SSD journal
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Re: combined ceph roles
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: combined ceph roles
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: combined ceph roles
- From: Stephen Hindle <shindle@xxxxxxxx>
- Call for Ceph Day Speakers (SF + Amsterdam)
- From: Patrick McGarry <pmcgarry@xxxxxxxxxx>
- Re: Cache pressure fail
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Are EC pools ready for production use ?
- From: Florent B <florent@xxxxxxxxxxx>
- Re: Ceph vs Hardware RAID: No battery backed cache
- From: Thomas Güttler <guettliml@xxxxxxxxxxxxxxxxxx>
- Re: combined ceph roles
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: Cache pressure fail
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Re: Are EC pools ready for production use ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Cache pressure fail
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Cache pressure fail
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Are EC pools ready for production use ?
- From: Florent B <florent@xxxxxxxxxxx>
- Re: wider rados namespace support?
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Re: wider rados namespace support?
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Too few pgs per osd - Health_warn for EC pool
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: ceph Performance vs PG counts
- From: Vikhyat Umrao <vumrao@xxxxxxxxxx>
- wider rados namespace support?
- From: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
- Update 0.80.5 to 0.80.8 --the VM's read request become too slow
- From: 杨万元 <yangwanyuan8861@xxxxxxxxx>
- Re: ceph Performance vs PG counts
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: stuck with dell perc 710p / (aka mega raid 2208?)
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: 答复: Re: can not add osd
- From: Alan Johnson <alanj@xxxxxxxxxxxxxx>
- Re: combined ceph roles
- From: Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>
- cannot obtain keys from the nodes : [ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph-vm01']
- From: Konstantin Khatskevich <home@xxxxxxxx>
- combined ceph roles
- From: David Graham <xtnega@xxxxxxxxx>
- Re: stuck with dell perc 710p / (aka mega raid 2208?)
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Vikhyat Umrao <vumrao@xxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Micha Kersloot <micha@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Vikhyat Umrao <vumrao@xxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Owen Synge <osynge@xxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: stuck with dell perc 710p / (aka mega raid 2208?)
- From: Daniel Swarbrick <daniel.swarbrick@xxxxxxxxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Too few pgs per osd - Health_warn for EC pool
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: ISCSI LIO hang after 2-3 days of working
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- stuck with dell perc 710p / (aka mega raid 2208?)
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in
- From: B L <super.iterator@xxxxxxxxx>
- Re: Ceph vs Hardware RAID: No battery backed cache
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Ceph vs Hardware RAID: No battery backed cache
- From: Thomas Güttler <guettliml@xxxxxxxxxxxxxxxxxx>
- Re: requests are blocked > 32 sec woes
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Compilation problem
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Compilation problem
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: Compilation problem
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- Re: requests are blocked > 32 sec woes
- From: Matthew Monaco <matt@xxxxxxxxx>
- Re: kernel crash after 'ceph: mds0 caps stale' and 'mds0 hung' -- issue with timestamps or HVM virtualization on EC2?
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: kernel crash after 'ceph: mds0 caps stale' and 'mds0 hung' -- issue with timestamps or HVM virtualization on EC2?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: kernel crash after 'ceph: mds0 caps stale' and 'mds0 hung' -- issue with timestamps or HVM virtualization on EC2?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: journal placement for small office?
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- kernel crash after 'ceph: mds0 caps stale' and 'mds0 hung' -- issue with timestamps or HVM virtualization on EC2?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: Compilation problem
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: requests are blocked > 32 sec woes
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: [rbd] Ceph RBD kernel client using with cephx
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- [rbd] Ceph RBD kernel client using with cephx
- From: Vikhyat Umrao <vumrao@xxxxxxxxxx>
- ceph-deploy does not create the keys
- From: Konstantin Khatskevich <home@xxxxxxxx>
- Re: journal placement for small office?
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: crush tunables : optimal : upgrade from firefly to hammer behaviour ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: crush tunables : optimal : upgrade from firefly to hammer behaviour ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: crush tunables : optimal : upgrade from firefly to hammer behaviour ?
- From: Sage Weil <sage@xxxxxxxxxxxx>
- crush tunables : optimal : upgrade from firefly to hammer behaviour ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- requests are blocked > 32 sec woes
- From: Matthew Monaco <matt@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Scott Laird <scott@xxxxxxxxxxx>
- Applied crush rules to pool but not working.
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: ceph Performance vs PG counts
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- ceph Performance vs PG counts
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- ct_target_max_mem_mb 1000000
- From: "Aquino, Ben O" <ben.o.aquino@xxxxxxxxx>
- Mount CEPH RBD devices into OpenSVC service
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Vickey Singh <vickey.singh22693@xxxxxxxxx>
- Re: CEPH RBD and OpenStack
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: CEPH RBD and OpenStack
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Re: cephfs not mounting on boot
- From: Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>
- Re: Problem mapping RBD images with v0.92
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Problem mapping RBD images with v0.92
- From: Raju Kurunkad <Raju.Kurunkad@xxxxxxxxxxx>
- Problem mapping RBD images with v0.92
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Cache Settings
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: Cache Settings
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Cache Settings
- From: LOPEZ Jean-Charles <jelopez@xxxxxxxxxx>
- Cache Settings
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: CEPH RBD and OpenStack
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Re: CEPH RBD and OpenStack
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Fwd: Multi-site deployment RBD and Federated Gateways
- From: lakshmi k s <lux_ks@xxxxxxxxx>
- CEPH RBD and OpenStack
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Re: parsing ceph -s and how much free space, really?
- From: John Spray <john.spray@xxxxxxxxxx>
- Compilation problem
- From: "David J. Arias López M." <david.arias@xxxxxxxxxx>
- Re: Status of SAMBA VFS
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- replica or erasure coding for small office?
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Replacing an OSD Drive
- From: Gaylord Holder <gholder@xxxxxxxxxxxxx>
- journal placement for small office?
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: Status of SAMBA VFS
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Re: Status of SAMBA VFS
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Status of SAMBA VFS
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- parsing ceph -s and how much free space, really?
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: Status of SAMBA VFS
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Hector Martin <hector@xxxxxxxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Status of SAMBA VFS
- From: "Dennis Kramer (DT)" <dennis@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Mohamed Pakkeer <mdfakkeer@xxxxxxxxx>
- Re: updation of container and account while using Swift API
- From: Abhishek L <abhishek.lekshmanan@xxxxxxxxx>
- Re: Introducing "Learning Ceph" : The First ever Book on Ceph
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Stephan Seitz <s.seitz@xxxxxxxxxxxxxxxxxxx>
- Introducing "Learning Ceph" : The First ever Book on Ceph
- From: Karan Singh <karan.singh@xxxxxx>
- 0.80.8 ReplicationPG Fail
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Hector Martin <hector@xxxxxxxxxxxxxx>
- updation of container and account while using Swift API
- From: pragya jain <prag_2648@xxxxxxxxxxx>
- Re: ISCSI LIO hang after 2-3 days of working
- From: Mike Christie <mchristi@xxxxxxxxxx>
- Re: How to notify an object watched by client via ceph class API
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: RBD deprecated?
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: RBD deprecated?
- From: Don Doerner <dondoerner@xxxxxxxxxxxxx>
- Re: RBD deprecated?
- From: Ken Dreyer <kdreyer@xxxxxxxxxx>
- RBD deprecated?
- From: Don Doerner <dondoerner@xxxxxxxxxxxxx>
- PG stuck unclean for long time
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: OSD down
- From: Steve Anthony <sma310@xxxxxxxxxx>
- ceph-osd - No Longer Creates osd.X upon Launch - Bug ?
- From: Ron Allred <rallred@xxxxxxxxxxxxx>
- Re: OSD down
- From: Daniel Takatori Ohara <dtohara@xxxxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: OSD down
- From: Alexis KOALLA <alexis.koalla@xxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Andrey Korolyov <andrey@xxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Sage Weil <sage@xxxxxxxxxxxx>
- ISCSI LIO hang after 2-3 days of working
- From: reistlin87 <79026480913@xxxxxxxxx>
- OSD down
- From: Daniel Takatori Ohara <dtohara@xxxxxxxxxxxxx>
- Re: How to notify an object watched by client via ceph class API
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: How to notify an object watched by client via ceph class API
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Christian Eichelmann <christian.eichelmann@xxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Sahana Lokeshappa <Sahana.Lokeshappa@xxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: How to notify an object watched by client via ceph class API
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- How to notify an object watched by client via ceph class API
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Re: command to flush rbd cache?
- From: Dan Mick <dmick@xxxxxxxxxx>
- Re: command to flush rbd cache?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: command to flush rbd cache?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: command to flush rbd cache?
- From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
- Re: command to flush rbd cache?
- From: Dan Mick <dmick@xxxxxxxxxx>
- command to flush rbd cache?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: RGW put file question
- From: "baijiaruo@xxxxxxx" <baijiaruo@xxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Christian Balzer <chibi@xxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- RGW put file question
- From: "baijiaruo@xxxxxxx" <baijiaruo@xxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Daniel Schwager <Daniel.Schwager@xxxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: PG to pool mapping?
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: PG to pool mapping?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: snapshoting on btrfs vs xfs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- PG to pool mapping?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- snapshoting on btrfs vs xfs
- From: Cristian Falcas <cristi.falcas@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Christian Eichelmann <christian.eichelmann@xxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: Question about output message and object update for ceph class
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Question about output message and object update for ceph class
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Colombo Marco <Marco.Colombo@xxxxxxxx>
- rbd recover tool for stopped ceph cluster
- From: "minchen" <minchen@xxxxxxxxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Christian Balzer <chibi@xxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: v0.92 released
- From: Samuel Just <sam.just@xxxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: client unable to access files after caching pool addition
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- client unable to access files after caching pool addition
- From: J-P Methot <jpmethot@xxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: cephfs-fuse: set/getfattr, change pools
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Stephen Jahl <stephenjahl@xxxxxxxxx>
- Re: .Health Warning : .rgw.buckets has too few pgs
- From: Stephen Hindle <shindle@xxxxxxxx>
- .Health Warning : .rgw.buckets has too few pgs
- From: Shashank Puntamkar <spuntamkar@xxxxxxxxx>
- Re: cephfs not mounting on boot
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Colombo Marco <Marco.Colombo@xxxxxxxx>
- v0.92 released
- From: Sage Weil <sweil@xxxxxxxxxx>
- Re: Ceph Supermicro hardware recommendation
- From: Nick Fisk <nick@xxxxxxxxxx>
- Ceph Supermicro hardware recommendation
- From: Colombo Marco <Marco.Colombo@xxxxxxxx>
- method to verify replica's actually exist on disk ?
- From: Stephen Hindle <shindle@xxxxxxxx>
- Re: ceph reports 10x actuall available space
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- cephfs-fuse: set/getfattr, change pools
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: features of the next stable release
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Monitor Restart triggers half of our OSDs marked down
- From: Andrey Korolyov <andrey@xxxxxxx>
- Monitor Restart triggers half of our OSDs marked down
- From: Christian Eichelmann <christian.eichelmann@xxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Reduce pg_num
- From: Mateusz Skała <mateusz.skala@xxxxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: Reduce pg_num
- From: John Spray <john.spray@xxxxxxxxxx>
- Reduce pg_num
- From: Mateusz Skała <mateusz.skala@xxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Ritesh Raj Sarraf <rrs@xxxxxxxxxxxxxx>
- Re: ssd OSD and disk controller limitation
- From: mad Engineer <themadengin33r@xxxxxxxxx>
- Re: features of the next stable release
- From: mad Engineer <themadengin33r@xxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Question about CRUSH rule set parameter "min_size" "max_size"
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: Question about CRUSH rule set parameter "min_size" "max_size"
- From: Sahana Lokeshappa <Sahana.Lokeshappa@xxxxxxxxxxx>
- Question about CRUSH rule set parameter "min_size" "max_size"
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: Selecting between multiple public networks
- From: "Nick @ Deltaband" <nick@xxxxxxxxxxxxx>
- Re: ceph reports 10x actuall available space
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Nicheal <zay11022@xxxxxxxxx>
- ceph reports 10x actuall available space
- From: pixelfairy <pixelfairy@xxxxxxxxx>
- Re: features of the next stable release
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Bruce McFarland <Bruce.McFarland@xxxxxxxxxxxxxxxx>
- Re: features of the next stable release
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Bruce McFarland <Bruce.McFarland@xxxxxxxxxxxxxxxx>
- Re: Selecting between multiple public networks
- From: Nicheal <zay11022@xxxxxxxxx>
- Re: Selecting between multiple public networks
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Rbd device on RHEL 6.5
- From: "Nick @ Deltaband" <nick@xxxxxxxxxxxxx>
- Selecting between multiple public networks
- From: "Nick @ Deltaband" <nick@xxxxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Bruce McFarland <Bruce.McFarland@xxxxxxxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- POC doc
- From: Hoc Phan <quanghoc@xxxxxxxxx>
- Re: features of the next stable release
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: features of the next stable release
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Ritesh Raj Sarraf <rrs@xxxxxxxxxxxxxx>
- Re: ssd OSD and disk controller limitation
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: ssd OSD and disk controller limitation
- From: mad Engineer <themadengin33r@xxxxxxxxx>
- Re: features of the next stable release
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: ssd OSD and disk controller limitation
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: filestore_fiemap and other ceph tweaks
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: filestore_fiemap and other ceph tweaks
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- Re: filestore_fiemap and other ceph tweaks
- From: J-P Methot <jpmethot@xxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- Re: Update 0.80.7 to 0.80.8 -- Restart Order
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: CEPH BackUPs
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- JCloud on Ceph
- From: Alexis KOALLA <alexis.koalla@xxxxxxxxxx>
- Re: filestore_fiemap and other ceph tweaks
- From: Haomai Wang <haomaiwang@xxxxxxxxx>
- filestore_fiemap and other ceph tweaks
- From: J-P Methot <jpmethot@xxxxxxxxxx>
- Update 0.80.7 to 0.80.8 -- Restart Order
- From: Daniel Schneller <daniel.schneller@xxxxxxxxxxxxxxxx>
- CacheCade to cache pool - worth it?
- From: mailinglist@xxxxxxxxxxxxxxxxxxx
- Re: Repetitive builds for Ceph
- From: John Spray <john.spray@xxxxxxxxxx>
- features of the next stable release
- From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Ritesh Raj Sarraf <rrs@xxxxxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Ritesh Raj Sarraf <rrs@xxxxxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Repetitive builds for Ceph
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Repetitive builds for Ceph
- From: Ritesh Raj Sarraf <rrs@xxxxxxxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Re: [Solved] No auto-mount of OSDs after server reboot
- From: Alexis KOALLA <alexis.koalla@xxxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- ssd OSD and disk controller limitation
- From: mad Engineer <themadengin33r@xxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Sudarshan Pathak <sushan.pth@xxxxxxxxx>
- Fwd: error opening rbd image
- From: Aleksey Leonov <nazarianin@xxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Sudarshan Pathak <sushan.pth@xxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- OSD can't start After server restart
- From: wsnote <wsnote@xxxxxxx>
- error opening rbd image
- From: Aleksey Leonov <nazarianin@xxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- RBD snap unprotect need ACLs on all pools ?
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Mykola Golub <to.my.trociny@xxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- Re: estimate the impact of changing pg_num
- From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: erasure code : number of chunks for a small cluster ?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- estimate the impact of changing pg_num
- From: "Xu (Simon) Chen" <xchenum@xxxxxxxxx>
- erasure code : number of chunks for a small cluster ?
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: cephfs: from a file name determine the objects name
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Jean-Charles Lopez <jelopez@xxxxxxxxxx>
- Re: OSD capacity variance ?
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Arbitrary OSD Number Assignment
- From: Ron Allred <rallred@xxxxxxxxxxxxx>
- cephfs: from a file name determine the objects name
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: Question about primary OSD of a pool
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: ceph Performance random write is more then sequential
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- ceph Performance random write is more then sequential
- From: Sumit Gaur <sumitkgaur@xxxxxxxxx>
- Question about primary OSD of a pool
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- Re: Moving a Ceph cluster (to a new network)
- From: François Petit <francois.petit@xxxxxxxxxxxxxxxx>
- Re: OSD capacity variance ?
- From: Sudarshan Pathak <sushan.pth@xxxxxxxxx>
- OSD capacity variance ?
- From: Howard Thomson <hat@xxxxxxxxxxxxxx>
- Rbd device on RHEL 6.5
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- POC Test Plan
- From: Amir Kazemi <Amir.Kazemi@xxxxxxxx>
- Cache tiering writeback mode, object in cold and hot pool ?
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Move objects from one pool to other
- From: Shashank Puntamkar <spuntamkar@xxxxxxxxx>
- Re: cephfs - disabling cache on client and on OSDs
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: btrfs backend with autodefrag mount option
- From: Luke Kao <Luke.Kao@xxxxxxxxxxxxx>
- Re: No auto-mount of OSDs after server reboot
- From: "Anthony D'Atri" <aad@xxxxxxxxxxxxxx>
- Re: cephfs - disabling cache on client and on OSDs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- calamari server error 503 detail rpc error lost remote after 10s heartbeat
- From: Tony <unixfly@xxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Bruce McFarland <Bruce.McFarland@xxxxxxxxxxxxxxxx>
- Re: RBD caching on 4K reads???
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- RBD caching on 4K reads???
- From: Bruce McFarland <Bruce.McFarland@xxxxxxxxxxxxxxxx>
- Re: btrfs backend with autodefrag mount option
- From: Lionel Bouton <lionel-subscription@xxxxxxxxxxx>
- Moving a Ceph cluster (to a new network)
- From: Don Doerner <dondoerner@xxxxxxxxxxxxx>
- Re: error in sys.exitfunc
- From: Travis Rhoden <trhoden@xxxxxxxxx>
- Re: btrfs backend with autodefrag mount option
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- Re: btrfs backend with autodefrag mount option
- From: Mark Nelson <mark.nelson@xxxxxxxxxxx>
- btrfs backend with autodefrag mount option
- From: Luke Kao <Luke.Kao@xxxxxxxxxxxxx>
- cephfs - disabling cache on client and on OSDs
- From: Mudit Verma <mudit.f2004912@xxxxxxxxx>
- Re: No auto-mount of OSDs after server reboot
- From: James Eckersall <james.eckersall@xxxxxxxxx>
- Re: No auto-mount of OSDs after server reboot
- From: Alexis KOALLA <alexis.koalla@xxxxxxxxxx>
- Re: mon leveldb loss
- From: Sebastien Han <sebastien.han@xxxxxxxxxxxx>
- Re: radosgw (0.87) and multipart upload (result object size = 0)
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: radosgw (0.87) and multipart upload (result object size = 0)
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: radosgw (0.87) and multipart upload (result object size = 0)
- From: Dong Yuan <yuandong1222@xxxxxxxxx>
- Re: CEPH BackUPs
- From: Christian Balzer <chibi@xxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- CEPH BackUPs
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Help:mount error
- From: 于泓海 <foxconn-etc@xxxxxxx>
- Re: keyvaluestore backend metadata overhead
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Cephfs: Read Errors
- From: Mathias Ewald <mathias.ewald@xxxxxxxxxxxxx>
- Re: error in sys.exitfunc
- From: "Blake, Karl D" <karl.d.blake@xxxxxxxxx>
- Deploying ceph using Dell equallogic storage arrays
- From: Imran Khan <khan.imran2591@xxxxxxxxx>
- Re: error in sys.exitfunc
- From: "Blake, Karl D" <karl.d.blake@xxxxxxxxx>
- mon leveldb loss
- From: Mike Winfield <mike.winfield@xxxxxxxxxxxxxxxxxx>
- Question about ceph class usage
- From: Dennis Chen <kernel.org.gnu@xxxxxxxxx>
- error in sys.exitfunc
- From: "Blake, Karl D" <karl.d.blake@xxxxxxxxx>
- radosgw (0.87) and multipart upload (result object size = 0)
- From: Gleb Borisov <borisov.gleb@xxxxxxxxx>
- keyvaluestore backend metadata overhead
- From: Chris Pacejo <cpacejo@xxxxxxxxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: No auto-mount of OSDs after server reboot
- From: Daniel Swarbrick <daniel.swarbrick@xxxxxxxxxxxxxxxx>
- Re: Sizing SSD's for ceph
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- radosgw + s3 + keystone + Browser-Based POST problem
- From: Valery Tschopp <valery.tschopp@xxxxxxxxx>
- Re: No auto-mount of OSDs after server reboot
- From: Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>
- No auto-mount of OSDs after server reboot
- From: Alexis KOALLA <alexis.koalla@xxxxxxxxxx>
- Re: Sizing SSD's for ceph
- From: Christian Balzer <chibi@xxxxxxx>
- Is this ceph issue ? snapshot freeze on save state
- From: Zeeshan Ali Shah <zashah@xxxxxxxxxx>
- Re: RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Re: Help:mount error
- From: 于泓海 <foxconn-etc@xxxxxxx>
- Sizing SSD's for ceph
- From: "Ramakrishna Nishtala (rnishtal)" <rnishtal@xxxxxxxxx>
- RGW region metadata sync prevents writes to non-master region
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Survey re journals on SSD vs co-located on spinning rust
- From: "Anthony D'Atri" <aad@xxxxxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Ceph Testing
- From: Lincoln Bryant <lincolnb@xxxxxxxxxxxx>
- Re: Ceph Testing
- From: "Jeripotula, Shashiraj" <shashiraj.jeripotula@xxxxxxxxxxx>
- Re: cephfs modification time
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: John Spray <john.spray@xxxxxxxxxx>
- OSDs not getting mounted back after reboot
- From: J-P Methot <jpmethot@xxxxxxxxxx>
- Re: cephfs modification time
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Ceph hunting for monitor on load
- From: Erwin Lubbers <ceph@xxxxxxxxxxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Help:mount error
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: RBD over cache tier over EC pool: rbd rm doesn't remove objects
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Help:mount error
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: RBD over cache tier over EC pool: rbd rm doesn't remove objects
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: RBD over cache tier over EC pool: rbd rm doesn't remove objects
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: chattr +i not working with cephfs
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Mike Christie <mchristi@xxxxxxxxxx>
- Health warning : .rgw.buckets has too few pgs
- From: Shashank Puntamkar <spuntamkar@xxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: Help:mount error
- From: Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>
- Re: Help:mount error
- From: 于泓海 <foxconn-etc@xxxxxxx>
- Re: CEPH I/O Performance with OpenStack
- From: Robert van Leeuwen <Robert.vanLeeuwen@xxxxxxxxxxxxx>
- Re: Help:mount error
- From: 王亚洲 <breboel@xxxxxxx>
- Help:mount error
- From: 于泓海 <foxconn-etc@xxxxxxx>
- Re: Consumer Grade SSD Clusters
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Mike Christie <mchristi@xxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Mike Christie <mchristi@xxxxxxxxxx>
- chattr +i not working with cephfs
- From: Eric Eastman <eric.eastman@xxxxxxxxxxxxxx>
- Re: Consumer Grade SSD Clusters
- From: Quenten Grasso <qgrasso@xxxxxxxxxx>
- Re: cephfs modification time
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: ceph as a primary storage for owncloud
- From: Steve Anthony <sma310@xxxxxxxxxx>
- Ceph Testing
- From: "Jeripotula, Shashiraj" <shashiraj.jeripotula@xxxxxxxxxxx>
- 85% of the cluster won't start, or how I learned why to use disk UUIDs
- From: Steve Anthony <sma310@xxxxxxxxxx>
- Re: verifying tiered pool functioning
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- cache pool and storage pool: possible to remove storage pool?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Ceph and btrfs - disable copy-on-write?
- From: Christopher Armstrong <chris@xxxxxxxxxxxx>
- Re: CEPH I/O Performance with OpenStack
- From: Ramy Allam <linux@xxxxxxxxxxxxx>
- Re: slow read-performance inside the vm
- From: Udo Lembke <ulembke@xxxxxxxxxxxx>
- Re: RBD over cache tier over EC pool: rbd rm doesn't remove objects
- From: Sage Weil <sage@xxxxxxxxxxxx>
- Re: Ceph File System Question
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Re: Ceph File System Question
- From: John Spray <john.spray@xxxxxxxxxx>
- Re: CEPH I/O Performance with OpenStack
- From: Robert van Leeuwen <Robert.vanLeeuwen@xxxxxxxxxxxxx>
- Re: slow read-performance inside the vm
- From: Patrik Plank <patrik@xxxxxxxx>
- CEPH I/O Performance with OpenStack
- From: Ramy Allam <linux@xxxxxxxxxxxxx>
- Re: How to do maintenance without falling out of service?
- From: J David <j.david.lists@xxxxxxxxx>
- ceph as a primary storage for owncloud
- From: Simone Spinelli <simone.spinelli@xxxxxxxx>
- Re: Appending to a rados object with feedback
- From: Kim Vandry <vandry@xxxxxxxxx>
- Re: Total number PGs using multiple pools
- From: Luis Periquito <periquito@xxxxxxxxx>
- Re: Appending to a rados object with feedback
- From: Kim Vandry <vandry@xxxxxxxxx>
- Re: about command "ceph osd map" can display non-existent object
- From: Wido den Hollander <wido@xxxxxxxx>
- about command "ceph osd map" can display non-existent object
- From: Vickie ch <mika.leaf666@xxxxxxxxx>
- Re: RBD over cache tier over EC pool: rbd rm doesn't remove objects
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Part 2: ssd osd fails often with "FAILED assert(soid < scrubber.start || soid >= scrubber.end)"
- From: Irek Fasikhov <malmyzh@xxxxxxxxx>
- Re: Appending to a rados object with feedback
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: verifying tiered pool functioning
- From: "Zhang, Jian" <jian.zhang@xxxxxxxxx>
- Appending to a rados object with feedback
- From: Kim Vandry <vandry@xxxxxxxxx>
- Re: OSD removal rebalancing again
- From: Christian Balzer <chibi@xxxxxxx>
- Re: OSD removal rebalancing again
- From: Quenten Grasso <qgrasso@xxxxxxxxxx>
- Re: OSD removal rebalancing again
- From: Christian Balzer <chibi@xxxxxxx>
- Re: OSD removal rebalancing again
- From: Quenten Grasso <qgrasso@xxxxxxxxxx>
- Re: OSD removal rebalancing again
- From: Christian Balzer <chibi@xxxxxxx>
- OSD removal rebalancing again
- From: Quenten Grasso <qgrasso@xxxxxxxxxx>
- Ceph File System Question
- From: "Jeripotula, Shashiraj" <shashiraj.jeripotula@xxxxxxxxxxx>
- Re: pg_num not being set to ceph.conf default when creating pool via python librados
- From: Christian Balzer <chibi@xxxxxxx>
- Tengine SSL proxy and Civetweb
- From: Ben <b@benjackson.email>
- Re: osd crush create-or-move doesn't move things?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- osd crush create-or-move doesn't move things?
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: pg_num not being set to ceph.conf default when creating pool via python librados
- From: Jason Anderson <Jason.Anderson@xxxxxxxxxxxxxxxx>
- RGW removed objects and rados pool
- From: Italo Santos <okdokk@xxxxxxxxx>
- Re: Total number PGs using multiple pools
- From: Italo Santos <okdokk@xxxxxxxxx>
- Re: pg_num not being set to ceph.conf default when creating pool via python librados
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: pg_num not being set to ceph.conf default when creating pool via python librados
- From: Jason Anderson <Jason.Anderson@xxxxxxxxxxxxxxxx>
- Re: pg_num not being set to ceph.conf default when creating pool via python librados
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- pg_num not being set to ceph.conf default when creating pool via python librados
- From: Jason Anderson <Jason.Anderson@xxxxxxxxxxxxxxxx>
- Re: remote storage
- From: Eneko Lacunza <elacunza@xxxxxxxxx>
- Re: CEPH Expansion
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Re: RBD client & STRIPINGV2 support
- From: Ilya Dryomov <ilya.dryomov@xxxxxxxxxxx>
- Re: RBD client & STRIPINGV2 support
- From: Florent MONTHEL <fmonthel@xxxxxxxxxxxxx>
- Re: Consumer Grade SSD Clusters
- From: Wido den Hollander <wido@xxxxxxxx>
- Re: Consumer Grade SSD Clusters
- From: Nick Fisk <nick@xxxxxxxxxx>
- Consumer Grade SSD Clusters
- From: Quenten Grasso <qgrasso@xxxxxxxxxx>
- Re: CEPH Expansion
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Re: CEPH Expansion
- From: Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Zoltan Arnold Nagy <zoltan@xxxxxxxxxxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Jake Young <jak3kaj@xxxxxxxxx>
- Re: CEPH Expansion
- From: Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx>
- Ceph with IB and ETH
- From: German Anders <ganders@xxxxxxxxxxxx>
- Re: Having an issue with: 7 pgs stuck inactive; 7 pgs stuck unclean; 71 requests are blocked > 32
- From: Jean-Charles Lopez <jc.lopez@xxxxxxxxxxx>
- Having an issue with: 7 pgs stuck inactive; 7 pgs stuck unclean; 71 requests are blocked > 32
- From: Glen Aidukas <GAidukas@xxxxxxxxxxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Jake Young <jak3kaj@xxxxxxxxx>
- Re: RGW Enabling non default region on existing cluster - data migration
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Nick Fisk <nick@xxxxxxxxxx>
- Re: RGW Enabling non default region on existing cluster - data migration
- From: Yehuda Sadeh <yehuda@xxxxxxxxxx>
- Re: Different flavors of storage?
- From: Don Doerner <dondoerner@xxxxxxxxxxxxx>
- Re: Ceph, LIO, VMWARE anyone?
- From: Zoltan Arnold Nagy <zoltan@xxxxxxxxxxxxxxxxxx>
- Re: RBD backup and snapshot
- From: Frank Yu <flyxiaoyu@xxxxxxxxx>
- remote storage
- From: Robert Duncan <Robert.Duncan@xxxxxxxx>
- Re: Different flavors of storage?
- From: Luis Periquito <periquito@xxxxxxxxx>
- Re: Different flavors of storage?
- From: Jason King <chn.kei@xxxxxxxxx>
- how to remove storage tier
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: erasure coded pool why ever k>1?
- From: Loic Dachary <loic@xxxxxxxxxxx>
- Re: erasure coded pool why ever k>1?
- From: Chad William Seys <cwseys@xxxxxxxxxxxxxxxx>
- Re: 4 GB mon database?
- From: Brian Rak <brak@xxxxxxxxxxxxxxx>
- Re: Journals on all SSD cluster
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: Journals on all SSD cluster
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Journals on all SSD cluster
- From: Andrew Thrift <andrew@xxxxxxxxxxxxxxxxx>
- multiple osd failure
- From: Rob Antonello <RobA@xxxxxxxxxxxxxxxxx>
- Re: Journals on all SSD cluster
- From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
- Re: How to do maintenance without falling out of service?
- From: Luke Kao <Luke.Kao@xxxxxxxxxxxxx>
- rbd loaded 100%
- From: Никитенко Виталий <v1t83@xxxxxxxxx>
- RGW Enabling non default region on existing cluster - data migration
- From: Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>
- Installation of 2 radosgw, ceph username and instance
- From: Francois Lafont <flafdivers@xxxxxxx>