CEPH Filesystem Users
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- RBD for ephemeral
- From: michael.kidd@xxxxxxxxxxx (Michael J. Kidd)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Subscribe
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Working at RedHat & Ceph User Committee
- From: karan.singh@xxxxxx (Karan Singh)
- Erasure coding
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Erasure coding
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Erasure coding
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- metadata pool : size growing
- From: florent@xxxxxxxxxxx (Florent B)
- metadata pool : size growing
- From: wido@xxxxxxxx (Wido den Hollander)
- metadata pool : size growing
- From: florent@xxxxxxxxxxx (Florent B)
- Working at RedHat & Ceph User Committee
- From: wido@xxxxxxxx (Wido den Hollander)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ' rbd username specified but secret not found' error, virsh live migration on rbd
- From: calanchue@xxxxxxxxx (JinHwan Hwang)
- Ceph booth at http://www.solutionslinux.fr/
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Working at RedHat & Ceph User Committee
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Various file lengths while uploading the same file
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Various file lengths while uploading the same file
- From: arthurtumanyan@xxxxxxxxx (Arthur Tumanyan)
- How to point custom domains to a bucket and set default page and error page
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- ERROR: modinfo: could not find module rbd
- From: xanpeng@xxxxxxxxx (xan.peng)
- Error while initializing OSD directory
- From: xanpeng@xxxxxxxxx (xan.peng)
- RBD for ephemeral
- From: yumima@xxxxxxxxx (Yuming Ma (yumima))
- mon create error
- From: reistlin87@xxxxxxxxx (reistlin87)
- can i change the ruleset for the default pools (data, metadata, rbd)?
- From: xanpeng@xxxxxxxxx (xan.peng)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Sergey Motovilovets)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Sergey Motovilovets)
- mon create error
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- CephFS parallel reads from multiple replicas ?
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- mon create error
- From: reistlin87@xxxxxxxxx (reistlin87)
- "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- How to point custom domains to a bucket and set default page and error page
- From: wsnote@xxxxxxx (wsnote)
- Problem with radosgw and some file name characters
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Journal SSD durability
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- visualizing a ceph cluster automatically
- From: dotalton@xxxxxxxxx (Don Talton (dotalton))
- Alternate pools for RGW
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Problem with radosgw and some file name characters
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- visualizing a ceph cluster automatically
- From: christian.eichelmann@xxxxxxxx (Christian Eichelmann)
- active+degraded cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- active+degraded cluster
- From: ignaziocassano@xxxxxxxxx (Ignazio Cassano)
- Berlin MeetUp
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- Advanced CRUSH map rules
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Storage Multi Tenancy
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Problem with ceph_filestore_dump, possibly stuck in a loop
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- visualizing a ceph cluster automatically
- From: sergking@xxxxxxxxx (Sergey Korolev)
- Not specifically related to ceph but 6tb sata drives on Dell Poweredge servers
- From: drew.weaver@xxxxxxxxxx (Drew Weaver)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- visualizing a ceph cluster automatically
- From: drew.weaver@xxxxxxxxxx (Drew Weaver)
- Alternate pools for RGW
- From: Ilya_Storozhilov@xxxxxxxx (Ilya Storozhilov)
- raid levels (Information needed)
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- raid levels (Information needed)
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- raid levels (Information needed)
- From: jerker@xxxxxxxxxxxx (Jerker Nyberg)
- Does CEPH rely on any multicasting?
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- Does CEPH rely on any multicasting?
- From: dwm37@xxxxxxxxx (David McBride)
- [ceph-users] "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- [ceph-users] "ceph pg dump summary -f json" question
- From: xanpeng@xxxxxxxxx (xan.peng)
- [ceph-users] "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Information needed
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- [ceph-users] "ceph pg dump summary -f json" question
- From: xanpeng@xxxxxxxxx (xan.peng)
- "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- osd down/autoout problem
- From: yguang11@xxxxxxxxx (Guang)
- help to subscribe to this email address
- From: sean_cao@xxxxxxxxxxxx (Sean Cao)
- PCI-E SSD Journal for SSD-OSD Disks
- From: chibi@xxxxxxx (Christian Balzer)
- mkcephfs questions
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- OpenStack Icehouse and ephemeral disks created from image
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- PCI-E SSD Journal for SSD-OSD Disks
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- PCI-E SSD Journal for SSD-OSD Disks
- From: stephane.boisvert@xxxxxxxxxxxx (Stephane Boisvert)
- PCI-E SSD Journal for SSD-OSD Disks
- From: kupo@xxxxxxxxxxxxxxxx (Tyler Wilson)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- Question about Performance with librados
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Does CEPH rely on any multicasting?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- OpenStack Icehouse and ephemeral disks created from image
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Сергей Мотовиловец)
- Segmentation fault RadosGW
- From: f.zimmermann@xxxxxxxxxxx (Fabian Zimmermann)
- Problem with radosgw and some file name characters
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- OSD crashed
- From: sage@xxxxxxxxxxx (Sage Weil)
- osd down/autoout problem
- From: sage@xxxxxxxxxxx (Sage Weil)
- osd down/autoout problem
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Storage Multi Tenancy
- From: jvleur@xxxxxxx (Jeroen van Leur)
- cephx authentication defaults
- From: sage@xxxxxxxxxxx (Sage Weil)
- OSD crashed
- From: sergking@xxxxxxxxx (Sergey Korolev)
- OpenStack Icehouse and ephemeral disks created from image
- From: macias@xxxxxxxxxxxxxxx (Maciej Gałkiewicz)
- Performance stats
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- librados with java - who is using it?
- From: wido@xxxxxxxx (Wido den Hollander)
- Benchmark for Ceph
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Benchmark for Ceph
- From: cyril.seguin@xxxxxxxxxxxxx (Séguin Cyril)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Pool without Name
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Was the /etc/init.d/ceph bug fixed in firefly?
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Performance stats
- From: jc.lopez@xxxxxxxxxxx (Jean-Charles LOPEZ)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Slow IOPS on RBD compared to journal and backing devices
- From: xanpeng@xxxxxxxxx (xan.peng)
- PCI-E SSD Journal for SSD-OSD Disks
- From: chibi@xxxxxxx (Christian Balzer)
- Flapping OSDs. Safe to upgrade?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- OpenStack Icehouse and ephemeral disks created from image
- From: macias@xxxxxxxxxxxxxxx (Maciej Gałkiewicz)
- PCI-E SSD Journal for SSD-OSD Disks
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Flapping OSDs. Safe to upgrade?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Flapping OSDs. Safe to upgrade?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Move osd disks between hosts
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- PCI-E SSD Journal for SSD-OSD Disks
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- can i change the ruleset for the default pools (data, metadata, rbd)?
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- PCI-E SSD Journal for SSD-OSD Disks
- From: kupo@xxxxxxxxxxxxxxxx (Tyler Wilson)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- simultaneous access to ceph via librados and s3 gw
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- simultaneous access to ceph via librados and s3 gw
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- librados with java - who is using it?
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- simultaneous access to ceph via librados and s3 gw
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- cephx authentication defaults
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Why number of objects increase when a PG is added
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Why number of objects increase when a PG is added
- From: sheshas@xxxxxxxxx (Shesha Sreenivasamurthy)
- Advanced CRUSH map rules
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- sparse copy between pools
- From: andrey@xxxxxxx (Andrey Korolyov)
- Advanced CRUSH map rules
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- Advanced CRUSH map rules
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Advanced CRUSH map rules
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Pool without Name
- From: wido@xxxxxxxx (Wido den Hollander)
- Pool without Name
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- crushmap question
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Move osd disks between hosts
- From: dinuvlad13@xxxxxxxxx (Dinu Vlad)
- Ceph Plugin for Collectd
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Move osd disks between hosts
- From: sage@xxxxxxxxxxx (Sage Weil)
- Move osd disks between hosts
- From: dinuvlad13@xxxxxxxxx (Dinu Vlad)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph Plugin for Collectd
- From: christian.eichelmann@xxxxxxxx (Christian Eichelmann)
- Rados GW Method not allowed
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- client: centos6.4 no rbd.ko
- From: cristi.falcas@xxxxxxxxx (Cristian Falcas)
- sparse copy between pools
- From: ceph@xxxxxxxxxxxxxxxxx (Erwin Lubbers)
- client: centos6.4 no rbd.ko
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- client: centos6.4 no rbd.ko
- From: maoqi1982@xxxxxxx (maoqi1982)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Error while initializing OSD directory
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Monitoring ceph statistics using rados python module
- From: log1024@xxxxxxxx (Kai Zhang)
- Monitoring ceph statistics using rados python module
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Journal SSD durability
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Journal SSD durability
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Rados GW Method not allowed
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Journal SSD durability
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Migrate whole clusters
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Ceph 0.80.1 delete/recreate data/metadata pools
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Ceph 0.80.1 delete/recreate data/metadata pools
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- ceph firefly PGs in active+clean+scrubbing state
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Occasional Missing Admin Sockets
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Occasional Missing Admin Sockets
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Migrate whole clusters
- From: frederic.yang@xxxxxxxxx (Fred Yang)
- Occasional Missing Admin Sockets
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- crushmap question
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Where is the SDK of ceph object storage
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- too slowly upload on ceph object storage
- From: stephen.taylor@xxxxxxxxxxxxxxxx (Stephen Taylor)
- Monitoring ceph statistics using rados python module
- From: dotalton@xxxxxxxxx (Don Talton (dotalton))
- Monitoring ceph statistics using rados python module
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Lost access to radosgw after crash?
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Occasional Missing Admin Sockets
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Monitoring ceph statistics using rados python module
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Migrate whole clusters
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Monitoring ceph statistics
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Ceph with VMWare / XenServer
- From: gilles.mocellin@xxxxxxxxxxxxxx (Gilles Mocellin)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Journal SSD durability
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Rados GW Method not allowed
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Bulk storage use case
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Fwd: What is link and unlink options used for in radosgw-admin
- From: huangwenjun20@xxxxxxxxx (Wenjun Huang)
- Journal SSD durability
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Journal SSD durability
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- crushmap question
- From: ptiernan@xxxxxxxxxxxx (Peter)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- What is link and unlink options used for in radosgw-admin
- From: huangwenjun20@xxxxxxxxx (Wenjun Huang)
- Where is the SDK of ceph object storage
- From: wsnote@xxxxxxx (wsnote)
- How to set selinux for ceph on CentOS
- From: ji.you@xxxxxxxxx (You, Ji)
- v0.80.1 Firefly released
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- ceph firefly PGs in active+clean+scrubbing state
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- How to enable the 'fancy striping' in Ceph
- From: blacker1981@xxxxxxx (lijian)
- How to enable the 'fancy striping' in Ceph
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- v0.80.1 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- CEPH placement groups and pool sizes
- From: pieter.koorts@xxxxxx (Pieter Koorts)
- v0.80 Firefly released
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- CEPH placement groups and pool sizes
- From: Bradley.McNamara@xxxxxxxxxxx (McNamara, Bradley)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- NFS over CEPH - best practice
- From: Bradley.McNamara@xxxxxxxxxxx (McNamara, Bradley)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Tape backup for CEPH
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph with VMWare / XenServer
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Ceph with VMWare / XenServer
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Unable to attach a volume, device is busy
- From: mloza@xxxxxxxxxxxxx (Mark Loza)
- ceph firefly PGs in active+clean+scrubbing state
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Ceph booth at http://www.solutionslinux.fr/
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- List connected clients ?
- From: florent@xxxxxxxxxxx (Florent B)
- Tape backup for CEPH
- From: yguang11@xxxxxxxxx (Guang)
- ceph firefly PGs in active+clean+scrubbing state
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- CEPH placement groups and pool sizes
- From: wido@xxxxxxxx (Wido den Hollander)
- Ceph with VMWare / XenServer
- From: jak3kaj@xxxxxxxxx (Jake Young)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- CEPH placement groups and pool sizes
- From: pieter.koorts@xxxxxx (Pieter Koorts)
- Question about Performance with librados
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- How to enable the 'fancy striping' in Ceph
- From: blacker1981@xxxxxxx (lijian)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- [Query]Monitoring ceph resources
- From: saurav.lahiri@xxxxxxxxxxxxx (Saurav Lahiri)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Don't allow user to create buckets but can read in radosgw
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- [OFF TOPIC] Deep Intellect - Inside the mind of the octopus
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- [OFF TOPIC] Deep Intellect - Inside the mind of the octopus
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- fixing degraded PGs
- From: kei.masumoto@xxxxxxxxx (Kei.masumoto)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph-noarch firefly repodata
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- ceph-noarch firefly repodata
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Info firefly qemu rbd
- From: fiezzi@xxxxxxxx (Federico Iezzi)
- v0.80 Firefly released
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- qemu-img break cloudstack snapshot
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Migrate whole clusters
- From: andrey@xxxxxxx (Andrey Korolyov)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Fwd: Bad performance of CephFS (first use)
- From: chibi@xxxxxxx (Christian Balzer)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- v0.80 Firefly released
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- pgs not mapped to osds, tearing hair out
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Suggestions on new cluster
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- Fwd: Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Low latency values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- issues with ceph
- From: lincolnb@xxxxxxxxxxxx (Lincoln Bryant)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- Migrate whole clusters
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Low latency values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- issues with ceph
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- too slowly upload on ceph object storage
- From: stephen.taylor@xxxxxxxxxxxxxxxx (Stephen Taylor)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- Low latency values
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- Suggestions on new cluster
- From: chibi@xxxxxxx (Christian Balzer)
- Low latency values
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- pgs not mapped to osds, tearing hair out
- From: sage@xxxxxxxxxxx (Sage Weil)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- issues with ceph
- From: lincolnb@xxxxxxxxxxxx (Lincoln Bryant)
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- pgs not mapped to osds, tearing hair out
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- NFS over CEPH - best practice
- From: maciej.bonin@xxxxxxxx (Maciej Bonin)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Migrate whole clusters
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Ceph Not getting into a clean state
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- Help -Ceph deployment in Single node Like Devstack
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Fwd: Bad performance of CephFS (first use)
- From: chibi@xxxxxxx (Christian Balzer)
- ERROR: modinfo: could not find module rbd
- From: easelu@xxxxxxxxx (Ease Lu)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Fwd: Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Suggestions on new cluster
- From: chibi@xxxxxxx (Christian Balzer)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- too slowly upload on ceph object storage
- From: wsnote@xxxxxxx (wsnote)
- subscribe ceph mail list
- From: sean_cao@xxxxxxxxxxxx (Sean Cao)
- NFS over CEPH - best practice
- From: stuartl@xxxxxxxxxx (Stuart Longland)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: lists@xxxxxxxxx (Henrik Korkuc)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- 0.80 binaries?
- From: lists@xxxxxxxxx (Henrik Korkuc)
- Info firefly qemu rbd
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- Ceph Not getting into a clean state
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Info firefly qemu rbd
- From: fiezzi@xxxxxxxx (Federico Iezzi)
- Question about Performance with librados
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- 0.80 binaries?
- From: lesser.evil@xxxxxxxxx (Shawn Edwards)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- 0.67.7 rpms changed today??
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- 0.67.7 rpms changed today??
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Unable to remove RBD volume
- From: jon@xxxxxxxxxxxxxxxx (Jonathan Gowar)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: ulembke@xxxxxxxxxxxx (Udo Lembke)
- Slow IOPS on RBD compared to journal and backing devices
- From: ulembke@xxxxxxxxxxxx (Udo Lembke)
- Suggestions on new cluster
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- 16 osds: 11 up, 16 in
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Unable to remove RBD volume
- From: jon@xxxxxxxxxxxxxxxx (Jonathan Gowar)
- Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- List users not listing users
- From: hypunit@xxxxxxxxx (Punit Dambiwal)
- v0.80 Firefly released
- From: andrey@xxxxxxx (Andrey Korolyov)
- Error while running rados gateway
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Hey, about radosgw , always encounter internal server error .
- From: ptiernan@xxxxxxxxxxxx (Peter)
- Does ceph has impact on imp IO performance
- From: duan.xufeng@xxxxxxxxxx (duan.xufeng at zte.com.cn)
- Does ceph has impact on imp IO performance
- From: duan.xufeng@xxxxxxxxxx (duan.xufeng at zte.com.cn)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Errors while integrating Rados Gateway with Keystone
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Hey, about radosgw , always encounter internal server error .
- From: peng.dev@xxxxxx (peng)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Help -Ceph deployment in Single node Like Devstack
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- Deep-Scrub Scheduling
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- Deep-Scrub Scheduling
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Deep-Scrub Scheduling
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Help -Ceph deployment in Single node Like Devstack
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Deep-Scrub Scheduling
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Slow IOPS on RBD compared to journal and backing devices
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- NFS over CEPH - best practice
- From: gilles.mocellin@xxxxxxxxxxxxxx (Gilles Mocellin)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vladislav Gorbunov)
- 16 osds: 11 up, 16 in
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- 16 osds: 11 up, 16 in
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- 16 osds: 11 up, 16 in
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ovirt
- From: wido@xxxxxxxx (Wido den Hollander)
- [ANN] ceph-deploy 1.5.2 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- 16 osds: 11 up, 16 in
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- v0.80 Firefly released
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- [Ceph-community] How to install CEPH on CentOS 6.3
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- v0.80 Firefly released
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Ovirt
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: lists@xxxxxxxxx (Henrik Korkuc)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: st.uzver@xxxxxxxxx (*sm1Ly)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Ovirt
- From: nathan@xxxxxxxxxxxx (Nathan Stratton)
- v0.80 Firefly released
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- v0.80 Firefly released
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Cache tiering
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: sage@xxxxxxxxxxx (Sage Weil)
- v0.80 Firefly released
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Cache tiering
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Explicit F2FS support (was: v0.80 Firefly released)
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [rados-java] Hi, I am a newer for ceph . And I found rados-java in github, but there are some problems for me .
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vlad Gorbunov)
- v0.80 Firefly released
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- v0.80 Firefly released
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Cache tiering
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Cache tiering
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- NFS over CEPH - best practice
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vlad Gorbunov)
- NFS over CEPH - best practice
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- NFS over CEPH - best practice
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Change size journal's blocks from 4k to another.
- From: mike.almateia@xxxxxxxxx (Mike)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Explicit F2FS support (was: v0.80 Firefly released)
- From: andrey@xxxxxxx (Andrey Korolyov)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- How to install CEPH on CentOS 6.3
- From: easelu@xxxxxxxxx (Ease Lu)
- Delete pool .rgw.bucket and objects within it
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- About ceph.conf
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Ceph OpenStack Integration
- From: derek@xxxxxxxxxxxxxx (Derek Yarnell)
- Open Source Storage Hackathon Before OpenStack Summit
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- cannot revert lost objects
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- some unfound object
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- advice with hardware configuration
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Replace journals disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- advice with hardware configuration
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- advice with hardware configuration
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- RBD on Mac OS X
- From: Jurvis.LaSalle@xxxxxxxxxxxxxxxxxxxxx (LaSalle, Jurvis)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- RBD on Mac OS X
- From: mike@xxxxxxxxxxxxxxxx (Mike Bryant)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Migrate system VMs from local storage to CEPH
- From: wido@xxxxxxxx (Wido den Hollander)
- advice with hardware configuration
- From: wido@xxxxxxxx (Wido den Hollander)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- About ceph.conf
- From: sage@xxxxxxxxxxx (Sage Weil)
- Fwd: Ceph perfomance issue!
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- View or set Policy
- From: spuntamkar@xxxxxxxxx (Shashank Puntamkar)
- Replace journals disk
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- Ceph installation
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Replace journals disk
- From: frederic.yang@xxxxxxxxx (Fred Yang)
- Ceph installation
- From: shadebe@xxxxxxxxxx (Sakhi Hadebe)
- RBD on Mac OS X
- From: andrey@xxxxxxx (Andrey Korolyov)
- RBD on Mac OS X
- From: wogri@xxxxxxxxx (Wolfgang Hennerbichler)
- RBD on Mac OS X
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- About ceph.conf
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Replace journals disk
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Default pool ruleset problem
- From: jc.lopez@xxxxxxxxxxx (Jean-Charles Lopez)
- Default pool ruleset problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- About ceph.conf
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Where does ceph save files?
- From: wsnote@xxxxxxx (wsnote)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Manually mucked up pg, need help fixing
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Manually mucked up pg, need help fixing
- From: jak3kaj@xxxxxxxxx (Jake Young)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Fatigue for XFS
- From: david@xxxxxxxxxxxxx (Dave Chinner)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- some unfound object
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Fatigue for XFS
- From: andrey@xxxxxxx (Andrey Korolyov)
- Fatigue for XFS
- From: david@xxxxxxxxxxxxx (Dave Chinner)
- Manually mucked up pg, need help fixing
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Fatigue for XFS
- From: andrey@xxxxxxx (Andrey Korolyov)
- Rados Gateway pagination
- From: fabricio@xxxxxxxxxxxxxxx (Fabricio Archanjo)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Migrate system VMs from local storage to CEPH
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Ceph RADOS Gateway setup with Apache 2.4.3 and FastCGI 2.4.6 vesions
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- ceph editable failure domains
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- some unfound object
- From: vernon1987@xxxxxxx (vernon1987 at 126.com)
- Rados Gateway pagination
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- [rados-java] Hi, I am a newer for ceph . And I found rados-java in github, but there are some problems for me .
- From: peng.dev@xxxxxx (peng)
- Migrate system VMs from local storage to CEPH
- From: wido@xxxxxxxx (Wido den Hollander)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- Replace OSD drive without remove/re-add OSD
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- mkcephfs questions
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- cannot revert lost objects
- From: kevinhoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Ceph User Committee monthly meeting #2 : executive summary
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Replace OSD drive without remove/re-add OSD
- From: andrey@xxxxxxx (Andrey Korolyov)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- ceph editable failure domains
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph User Committee elections
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ceph mom help
- From: ian.colle@xxxxxxxxxxx (Ian Colle)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- Rados Gateway pagination
- From: fabricio@xxxxxxxxxxxxxxx (Fabricio Archanjo)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- help to tune ceph
- From: matteo.favaro@xxxxxxxxxxxx (Matteo Favaro)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Manual emperor monitor installation hangs at ceph-mon --mkfs
- From: stefan.walter@xxxxxxxxxxx (Stefan U. Walter)
- Replace OSD drive without remove/re-add OSD
- From: lists@xxxxxxxxx (Henrik Korkuc)
- Replace OSD drive without remove/re-add OSD
- From: andrey@xxxxxxx (Andrey Korolyov)
- ceph editable failure domains
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- Fwd: Access denied error
- From: hypunit@xxxxxxxxx (Punit Dambiwal)
- Red Hat to acquire Inktank
- From: Suresh.Sadhu@xxxxxxxxxx (Suresh Sadhu)
- Ceph Object Storage front-end?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph Object Storage front-end?
- From: mandell@xxxxxxxxxxxxxxx (Mandell Degerness)
- Ceph User Committee monthly meeting #2 : May 2nd, 2014
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Ceph unstable when upgrading from emperor (v0.72.2) to firefly (v0.80-rc1-16-g2708c3c)
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- cannot revert lost objects
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Red Hat to acquire Inktank
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (kevin horan)
- [ANN] ceph-deploy 1.5.0 released!
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- v0.67.8 released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Red Hat to acquire Inktank
- From: loic@xxxxxxxxxxx (Loic Dachary)
- "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- how to modify the osd map
- From: duron800@xxxxxx (飞)
- Red Hat to acquire Inktank
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Red Hat to acquire Inktank
- From: danny@xxxxxxxxxxxxxxxxxxxxxx (Danny Luhde-Thompson)
- Red Hat to acquire Inktank
- From: stuartl@xxxxxxxxxx (Stuart Longland)
- Red Hat to acquire Inktank
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- osd can not start
- From: duron800@xxxxxx (飞)
- Red Hat to acquire Inktank
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- Cancel a scrub?
- From: clewis at centraldesktop.com (Craig Lewis)
- ceph 0.78 mon and mds crashing (bus error)
- From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
- write speed issue on RBD image
- From: ganders at despegar.com (German Anders)
- Cancel a scrub?
- From: sage at inktank.com (Sage Weil)
- Cancel a scrub?
- From: clewis at centraldesktop.com (Craig Lewis)
- CentOS radosgw-agent cannot be installed
- From: giorgis at acmac.uoc.gr (Georgios Dimitrakakis)
- ceph 0.78 mon and mds crashing (bus error)
- From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
- Backup & Restore?
- From: clewis at centraldesktop.com (Craig Lewis)
- write speed issue on RBD image
- From: rglaue at cait.org (Russell E. Glaue)
- OpenStack + Ceph Integration
- From: sebastien.han at enovance.com (Sebastien Han)
- write speed issue on RBD image
- From: rglaue at cait.org (Russell E. Glaue)
- Multi-site Implementation
- From: clewis at centraldesktop.com (Craig Lewis)
- ceph 0.78 mon and mds crashing (bus error)
- From: greg at inktank.com (Gregory Farnum)
- ceph 0.78 mon and mds crashing (bus error)
- From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
- cephx key for CephFS access only
- From: trhoden at gmail.com (Travis Rhoden)
- cephx key for CephFS access only
- From: greg at inktank.com (Gregory Farnum)
- cephx key for CephFS access only
- From: trhoden at gmail.com (Travis Rhoden)
- ceph 0.78 mon and mds crashing (bus error)
- From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
- radosgw multipart-uploaded downloads fail
- From: yehuda at inktank.com (Yehuda Sadeh)
- Setting root directory in fstab with Fuse
- From: greg at inktank.com (Gregory Farnum)
- MDS crash when client goes to sleep
- From: greg at inktank.com (Gregory Farnum)
- ceph 0.78 mon and mds crashing (bus error)
- From: greg at inktank.com (Gregory Farnum)
- MDS crash when client goes to sleep
- From: florent at coppint.com (Florent B)
- Setting root directory in fstab with Fuse
- From: florent at coppint.com (Florent B)
- ceph 0.78 mon and mds crashing (bus error)
- From: Kenneth.Waegeman at UGent.be (Kenneth Waegeman)
- radosgw multipart-uploaded downloads fail
- From: given.to.lists.ceph-users.ceph.com.toasta.001 at traced.net (Benedikt Fraunhofer)
- OpenStack + Ceph Integration
- From: tomokazu.hirai at gmail.com (Tomokazu HIRAI)
- rbd map error - numerical result out of range
- From: ilya.dryomov at inktank.com (Ilya Dryomov)
- Backup & Restore?
- From: karan.singh at csc.fi (Karan Singh)
- rbd map error - numerical result out of range
- From: tom at t0mb.net (Tom)
- Backup & Restore?
- From: r.sander at heinlein-support.de (Robert Sander)
- can not find files .asok of osds in the folder /var/run/ceph
- From: thanhtv26 at gmail.com (Thanh Tran)
- Multi-site Implementation
- From: shang at canonical.com (Shang Wu)
- Ceph Multi-site Implementation
- From: shang at ubuntu.com (Shang Wu)
- Could anyone tell me How to remove MDS in cluster? Thanks
- From: duan.xufeng at zte.com.cn (duan.xufeng at zte.com.cn)
- RBD does not load at boot
- From: jeremy.hanmer at dreamhost.com (Jeremy Hanmer)
- RBD does not load at boot
- From: dnk at daterainc.com (Dan Koren)
- Largest Production Ceph Cluster
- From: jeremy.hanmer at dreamhost.com (Jeremy Hanmer)
- can not find files .asok of osds in the folder /var/run/ceph
- From: john.spray at inktank.com (John Spray)
- Multi-site Implementation
- From: shang.wu at canonical.com (Shang Wu)
- ceph 0.78 mon and mds crashing (bus error)
- From: greg at inktank.com (Gregory Farnum)
- ceph 0.78 mon and mds crashing (bus error)
- From: Kenneth.Waegeman at UGent.be (Kenneth Waegeman)
- rbd map error - numerical result out of range
- From: ilya.dryomov at inktank.com (Ilya Dryomov)
- rbd map error - numerical result out of range
- From: tom at t0mb.net (Tom)
- rbd map error - numerical result out of range
- From: ilya.dryomov at inktank.com (Ilya Dryomov)
- rbd map error - numerical result out of range
- From: tom at t0mb.net (Tom)
- Largest Production Ceph Cluster
- From: daniel.vanderster at cern.ch (Dan Van Der Ster)
- ceph 0.78 mon and mds crashing (bus error)
- From: ukernel at gmail.com (Yan, Zheng)
- radosgw multipart-uploaded downloads fail
- From: given.to.lists.ceph-users.ceph.com.toasta.001 at traced.net (Benedikt Fraunhofer)
- ceph 0.78 mon and mds crashing (bus error)
- From: Kenneth.Waegeman at UGent.be (Kenneth Waegeman)
- Largest Production Ceph Cluster
- From: andrey at xdel.ru (Andrey Korolyov)
- radosgw multipart-uploaded downloads fail
- From: yehuda at inktank.com (Yehuda Sadeh)
- RBD as backend for iSCSI SAN Targets
- From: ganders at despegar.com (German Anders)
- ceph 0.78 mon and mds crashing (bus error)
- From: Kenneth.Waegeman at UGent.be (Kenneth Waegeman)
- ceph 0.78 mon and mds crashing (bus error)
- From: joao.luis at inktank.com (Joao Eduardo Luis)
- ceph 0.78 mon and mds crashing (bus error)
- From: Kenneth.Waegeman at UGent.be (Kenneth Waegeman)
- Security Hole?
- From: larryliugml at gmail.com (Larry Liu)
- Largest Production Ceph Cluster
- From: r.sander at heinlein-support.de (Robert Sander)
- Largest Production Ceph Cluster
- From: Karol.Kozubal at elits.com (Karol Kozubal)
- RBD as backend for iSCSI SAN Targets
- From: jianingy.yang at gmail.com (Jianing Yang)
- OSD mystery
- From: reynoldp at b-one.net (Reynold PJ)
- radosgw multipart-uploaded downloads fail
- From: given.to.lists.ceph-users.ceph.com.toasta.001 at traced.net (Benedikt Fraunhofer)
- RBD does not load at boot
- From: igor.laskovy at gmail.com (Igor Laskovy)
- RDO - CEPH
- From: karan.singh at csc.fi (Karan Singh)
- can not find files .asok of osds in the folder /var/run/ceph
- From: thanhtv26 at gmail.com (Thanh Tran)
- OSDs crashing frequently
- From: Aaron Ten Clay <aarontc@xxxxxxxxxxx>
- Re: OSD mystery
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD mystery
- From: Dan Koren <dnk@xxxxxxxxxxxxx>
- Re: OSD mystery
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- OSD mystery
- From: Dan Koren <dnk@xxxxxxxxxxxxx>
- Re: RDO - CEPH
- From: Vilobh Meshram <vilobhmm@xxxxxxxxxxxxx>
- Re: Ceph: Error librbd to create a clone
- From: Jean-Charles Lopez <jc.lopez@xxxxxxxxxxx>
- Re: Mon hangs when started after Emperor upgrade
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Mon hangs when started after Emperor upgrade
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: Mon hangs when started after Emperor upgrade
- From: Jens Kristian Søgaard <jens@xxxxxxxxxxxxxxxxxxxx>
- Re: Security Hole?
- From: Dan Van Der Ster <daniel.vanderster@xxxxxxx>
- Re: How do I know which object takes storage space?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Mon hangs when started after Emperor upgrade
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Security Hole?
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: cephx key for CephFS access only
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Problem with object size in rados bench
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: Backward compatibility of librados in Firefly
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: OSD Restarts cause excessively high load average and "requests are blocked > 32 sec"
- From: Gregory Farnum <greg@xxxxxxxxxxx>
- Re: MDS debugging
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- Re: MDS debugging
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- Re: MDS debugging
- From: Martin B Nielsen <martin@xxxxxxxxxxx>
- Re: MDS debugging
- From: "Yan, Zheng" <ukernel@xxxxxxxxx>
- Re: How do I know which object takes storage space?
- From: Hell <hell@xxxxxxxxxxx>
- MDS debugging
- From: Kenneth Waegeman <Kenneth.Waegeman@xxxxxxxx>
- How do I know which object takes storage space?
- From: Jianing Yang <jianingy.yang@xxxxxxxxx>