CEPH Filesystem Users
- ceph-deploy mon create-initial
- From: martins@xxxxxxxxxx (Mārtiņš Jakubovičs)
- ceph-deploy mon create-initial
- From: wido@xxxxxxxx (Wido den Hollander)
- How to find the disk partitions attached to a OSD
- From: sharmilagovind@xxxxxxxxx (Sharmila Govind)
- ceph-deploy mon create-initial
- From: martins@xxxxxxxxxx (Mārtiņš Jakubovičs)
- ceph-deploy mon create-initial
- From: wido@xxxxxxxx (Wido den Hollander)
- ceph-deploy mon create-initial
- From: martins@xxxxxxxxxx (Mārtiņš Jakubovičs)
- Data still in OSD directories after removing
- From: ceph.list@xxxxxxxxx (Olivier Bonvalet)
- Access denied error for list users
- From: alain.dechorgnat@xxxxxxxxxx (alain.dechorgnat at orange.com)
- rbd watchers
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- 70+ OSD are DOWN and not coming up
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Inter-region data replication through radosgw
- From: wsnote@xxxxxxx (wsnote)
- 70+ OSD are DOWN and not coming up
- From: sage@xxxxxxxxxxx (Sage Weil)
- Questions about zone and disaster recovery
- From: wsnote@xxxxxxx (wsnote)
- rbd watchers
- From: mandell@xxxxxxxxxxxxxxx (Mandell Degerness)
- Quota Management in CEPH
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- 70+ OSD are DOWN and not coming up
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Data still in OSD directories after removing
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- Inter-region data replication through radosgw
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Expanding pg's of an erasure coded pool
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Quota Management in CEPH
- From: vilobhmm@xxxxxxxxxxxxx (Vilobh Meshram)
- Data still in OSD directories after removing
- From: ceph.list@xxxxxxxxx (Olivier Bonvalet)
- RBD cache pool - not cleaning up
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- RBD cache pool - not cleaning up
- From: sage@xxxxxxxxxxx (Sage Weil)
- RBD cache pool - not cleaning up
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Feature request: stable naming for external journals
- From: scott@xxxxxxxxxxxxx (Scott Laird)
- v0.67.9 Dumpling released
- From: sage@xxxxxxxxxxx (Sage Weil)
- CephFS MDS Setup
- From: wido@xxxxxxxx (Wido den Hollander)
- CephFS MDS Setup
- From: scottix@xxxxxxxxx (Scottix)
- How to find the disk partitions attached to a OSD
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- Inter-region data replication through radosgw
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- How to find the disk partitions attached to a OSD
- From: sage@xxxxxxxxxxx (Sage Weil)
- Problem with radosgw and some file name characters
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- How to find the disk partitions attached to a OSD
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Data still in OSD directories after removing
- From: sage@xxxxxxxxxxx (Sage Weil)
- How to find the disk partitions attached to a OSD
- From: sharmilagovind@xxxxxxxxx (Sharmila Govind)
- How to find the disk partitions attached to a OSD
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- How to find the disk partitions attached to a OSD
- From: sharmilagovind@xxxxxxxxx (Sharmila Govind)
- Ceph Firefly on Centos 6.5 cannot deploy osd
- From: ceph@xxxxxxxxxxxxxx (ceph at jack.fr.eu.org)
- Ceph Firefly on Centos 6.5 cannot deploy osd
- From: t10tennn@xxxxxxxxx (10 minus)
- 70+ OSD are DOWN and not coming up
- From: karan.singh@xxxxxx (Karan Singh)
- Access denied error for list users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- Expanding pg's of an erasure coded pool
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Data still in OSD directories after removing
- From: ceph.list@xxxxxxxxx (Olivier Bonvalet)
- Access denied error for list users
- From: alain.dechorgnat@xxxxxxxxxx (alain.dechorgnat at orange.com)
- Problem with ceph_filestore_dump, possibly stuck in a loop
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- PG Selection Criteria for Deep-Scrub
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- PG Selection Criteria for Deep-Scrub
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- Ceph booth in Paris at solutionlinux.fr
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- PG Selection Criteria for Deep-Scrub
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- How do I do deep-scrub manually?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- nginx (tengine) and radosgw
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- nginx (tengine) and radosgw
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- nginx (tengine) and radosgw
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph booth in Paris at solutionlinux.fr
- From: loic@xxxxxxxxxxx (Loic Dachary)
- issues with creating Swift users for radosgw
- From: simonw@xxxxxxxxxx (Simon Weald)
- nginx (tengine) and radosgw
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- Expanding pg's of an erasure coded pool
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- Problem with ceph_filestore_dump, possibly stuck in a loop
- From: david.zafman@xxxxxxxxxxx (David Zafman)
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- 70+ OSD are DOWN and not coming up
- From: sage@xxxxxxxxxxx (Sage Weil)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- [radosgw] unable to perform any operation using s3 api
- From: dererk@xxxxxxxxxxxxxxx (Dererk)
- Ceph User Committee : call for votes
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Access denied error for list users
- From: alain.dechorgnat@xxxxxxxxxx (alain.dechorgnat at orange.com)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- rbd watchers
- From: james.eckersall@xxxxxxxxx (James Eckersall)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Expanding pg's of an erasure coded pool
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Data still in OSD directories after removing
- From: ceph.list@xxxxxxxxx (Olivier Bonvalet)
- How do I do deep-scrub manually?
- From: tuantb@xxxxxxxxxx (Ta Ba Tuan)
- subscrible ceph-users mail list
- From: sean_cao@xxxxxxxxxxxx (Sean Cao)
- 70+ OSD are DOWN and not coming up
- From: karan.singh@xxxxxx (Karan Singh)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Erasure coding
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Access denied error for list users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- mon create error
- From: reistlin87@xxxxxxxxx (reistlin87)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- How do I do deep-scrub manually?
- From: jianingy.yang@xxxxxxxxx (Jianing Yang)
- crushmap for datacenters
- From: vadikgo@xxxxxxxxx (Vladislav Gorbunov)
- Firefly 0.80 rados bench cleanup / object removal broken?
- From: yguang11@xxxxxxxxx (Guang Yang)
- ' rbd username specified but secret not found' error, virsh live migration on rbd
- From: calanchue@xxxxxxxxx (JinHwan Hwang)
- ' rbd username specified but secret not found' error, virsh live migration on rbd
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- crushmap for datacenters
- From: vadikgo@xxxxxxxxx (Vladislav Gorbunov)
- Firefly 0.80 rados bench cleanup / object removal broken?
- From: Matt.Latter@xxxxxxxx (Matt.Latter at hgst.com)
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- Ceph Plugin for Collectd
- From: dwm37@xxxxxxxxx (David McBride)
- is cephfs ready for production ?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- metadata pool : size growing
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- CephFS parallel reads from multiple replicas ?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Looking for ceph consultant
- From: GAidukas@xxxxxxxxxxxxxxxxxx (Glen Aidukas)
- mon create error
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- Web Gateway Start problem after upgrading Emperor to Firefly
- From: julien.calvet@xxxxxxxxxx (Julien Calvet)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Erasure coding
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- is cephfs ready for production ?
- From: ignaziocassano@xxxxxxxxx (Ignazio Cassano)
- RBD for ephemeral
- From: michael.kidd@xxxxxxxxxxx (Michael J. Kidd)
- RBD for ephemeral
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- RBD for ephemeral
- From: michael.kidd@xxxxxxxxxxx (Michael J. Kidd)
- Fwd: "rbd map" command hangs
- From: ilya.dryomov@xxxxxxxxxxx (Ilya Dryomov)
- Fwd: "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Subscribe
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- Working at RedHat & Ceph User Committee
- From: karan.singh@xxxxxx (Karan Singh)
- Erasure coding
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Erasure coding
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Erasure coding
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- metadata pool : size growing
- From: florent@xxxxxxxxxxx (Florent B)
- metadata pool : size growing
- From: wido@xxxxxxxx (Wido den Hollander)
- metadata pool : size growing
- From: florent@xxxxxxxxxxx (Florent B)
- Working at RedHat & Ceph User Committee
- From: wido@xxxxxxxx (Wido den Hollander)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ' rbd username specified but secret not found' error, virsh live migration on rbd
- From: calanchue@xxxxxxxxx (JinHwan Hwang)
- Ceph booth at http://www.solutionslinux.fr/
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Working at RedHat & Ceph User Committee
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Various file lengths while uploading the same file
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Various file lengths while uploading the same file
- From: arthurtumanyan@xxxxxxxxx (Arthur Tumanyan)
- How to point custom domains to a bucket and set default page and error page
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- ERROR: modinfo: could not find module rbd
- From: xanpeng@xxxxxxxxx (xan.peng)
- Error while initializing OSD directory
- From: xanpeng@xxxxxxxxx (xan.peng)
- RBD for ephemeral
- From: yumima@xxxxxxxxx (Yuming Ma (yumima))
- mon create error
- From: reistlin87@xxxxxxxxx (reistlin87)
- can i change the ruleset for the default pools (data, metadata, rbd)?
- From: xanpeng@xxxxxxxxx (xan.peng)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Sergey Motovilovets)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Sergey Motovilovets)
- mon create error
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- CephFS parallel reads from multiple replicas ?
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- mon create error
- From: reistlin87@xxxxxxxxx (reistlin87)
- "rbd map" command hangs
- From: jay.janardhan@xxxxxxxxxx (Jay Janardhan)
- How to point custom domains to a bucket and set default page and error page
- From: wsnote@xxxxxxx (wsnote)
- Problem with radosgw and some file name characters
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Journal SSD durability
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- visualizing a ceph cluster automatically
- From: dotalton@xxxxxxxxx (Don Talton (dotalton))
- Alternate pools for RGW
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Problem with radosgw and some file name characters
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- visualizing a ceph cluster automatically
- From: christian.eichelmann@xxxxxxxx (Christian Eichelmann)
- active+degraded cluster
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- active+degraded cluster
- From: ignaziocassano@xxxxxxxxx (Ignazio Cassano)
- Berlin MeetUp
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- Advanced CRUSH map rules
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Storage Multi Tenancy
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Problem with ceph_filestore_dump, possibly stuck in a loop
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- visualizing a ceph cluster automatically
- From: sergking@xxxxxxxxx (Sergey Korolev)
- Not specifically related to ceph but 6tb sata drives on Dell Poweredge servers
- From: drew.weaver@xxxxxxxxxx (Drew Weaver)
- Journal SSD durability
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- visualizing a ceph cluster automatically
- From: drew.weaver@xxxxxxxxxx (Drew Weaver)
- Alternate pools for RGW
- From: Ilya_Storozhilov@xxxxxxxx (Ilya Storozhilov)
- raid levels (Information needed)
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- raid levels (Information needed)
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- raid levels (Information needed)
- From: jerker@xxxxxxxxxxxx (Jerker Nyberg)
- Does CEPH rely on any multicasting?
- From: r.sander@xxxxxxxxxxxxxxxxxxx (Robert Sander)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- Does CEPH rely on any multicasting?
- From: dwm37@xxxxxxxxx (David McBride)
- [ceph-users] "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- [ceph-users] "ceph pg dump summary -f json" question
- From: xanpeng@xxxxxxxxx (xan.peng)
- [ceph-users] "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Information needed
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- [ceph-users] "ceph pg dump summary -f json" question
- From: xanpeng@xxxxxxxxx (xan.peng)
- "ceph pg dump summary -f json" question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- osd down/autoout problem
- From: yguang11@xxxxxxxxx (Guang)
- help to subscribe to this email address
- From: sean_cao@xxxxxxxxxxxx (Sean Cao)
- PCI-E SSD Journal for SSD-OSD Disks
- From: chibi@xxxxxxx (Christian Balzer)
- mkcephfs questions
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- OpenStack Icehouse and ephemeral disks created from image
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- PCI-E SSD Journal for SSD-OSD Disks
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- PCI-E SSD Journal for SSD-OSD Disks
- From: stephane.boisvert@xxxxxxxxxxxx (Stephane Boisvert)
- PCI-E SSD Journal for SSD-OSD Disks
- From: kupo@xxxxxxxxxxxxxxxx (Tyler Wilson)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: dietmar@xxxxxxxxxxx (Dietmar Maurer)
- Question about Performance with librados
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Does CEPH rely on any multicasting?
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Does CEPH rely on any multicasting?
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- OpenStack Icehouse and ephemeral disks created from image
- From: pierre.grandin@xxxxxxxxxxxxx (Pierre Grandin)
- OpenStack Icehouse and ephemeral disks created from image
- From: motovilovets.sergey@xxxxxxxxx (Сергей Мотовиловец)
- Segmentation fault RadosGW
- From: f.zimmermann@xxxxxxxxxxx (Fabian Zimmermann)
- Problem with radosgw and some file name characters
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Problem with radosgw and some file name characters
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- OSD crashed
- From: sage@xxxxxxxxxxx (Sage Weil)
- osd down/autoout problem
- From: sage@xxxxxxxxxxx (Sage Weil)
- osd down/autoout problem
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Storage Multi Tenancy
- From: jvleur@xxxxxxx (Jeroen van Leur)
- cephx authentication defaults
- From: sage@xxxxxxxxxxx (Sage Weil)
- OSD crashed
- From: sergking@xxxxxxxxx (Sergey Korolev)
- OpenStack Icehouse and ephemeral disks created from image
- From: macias@xxxxxxxxxxxxxxx (Maciej Gałkiewicz)
- Performance stats
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- osd down/autoout problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- librados with java - who is using it?
- From: wido@xxxxxxxx (Wido den Hollander)
- Benchmark for Ceph
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Benchmark for Ceph
- From: cyril.seguin@xxxxxxxxxxxxx (Séguin Cyril)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Pool without Name
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Was the /etc/init.d/ceph bug fixed in firefly?
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Performance stats
- From: jc.lopez@xxxxxxxxxxx (Jean-Charles LOPEZ)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Slow IOPS on RBD compared to journal and backing devices
- From: xanpeng@xxxxxxxxx (xan.peng)
- PCI-E SSD Journal for SSD-OSD Disks
- From: chibi@xxxxxxx (Christian Balzer)
- Flapping OSDs. Safe to upgrade?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- OpenStack Icehouse and ephemeral disks created from image
- From: macias@xxxxxxxxxxxxxxx (Maciej Gałkiewicz)
- PCI-E SSD Journal for SSD-OSD Disks
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Flapping OSDs. Safe to upgrade?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Flapping OSDs. Safe to upgrade?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Move osd disks between hosts
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- PCI-E SSD Journal for SSD-OSD Disks
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- can i change the ruleset for the default pools (data, metadata, rbd)?
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- PCI-E SSD Journal for SSD-OSD Disks
- From: kupo@xxxxxxxxxxxxxxxx (Tyler Wilson)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- simultaneous access to ceph via librados and s3 gw
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- simultaneous access to ceph via librados and s3 gw
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- librados with java - who is using it?
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- simultaneous access to ceph via librados and s3 gw
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- cephx authentication defaults
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Why number of objects increase when a PG is added
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Why number of objects increase when a PG is added
- From: sheshas@xxxxxxxxx (Shesha Sreenivasamurthy)
- Advanced CRUSH map rules
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- sparse copy between pools
- From: andrey@xxxxxxx (Andrey Korolyov)
- Advanced CRUSH map rules
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- Advanced CRUSH map rules
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Advanced CRUSH map rules
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Pool without Name
- From: wido@xxxxxxxx (Wido den Hollander)
- Pool without Name
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- crushmap question
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Move osd disks between hosts
- From: dinuvlad13@xxxxxxxxx (Dinu Vlad)
- Ceph Plugin for Collectd
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Move osd disks between hosts
- From: sage@xxxxxxxxxxx (Sage Weil)
- Move osd disks between hosts
- From: dinuvlad13@xxxxxxxxx (Dinu Vlad)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Slow IOPS on RBD compared to journal and backing devices
- From: ganders@xxxxxxxxxxxx (German Anders)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Ceph Plugin for Collectd
- From: christian.eichelmann@xxxxxxxx (Christian Eichelmann)
- Rados GW Method not allowed
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- client: centos6.4 no rbd.ko
- From: cristi.falcas@xxxxxxxxx (Cristian Falcas)
- sparse copy between pools
- From: ceph@xxxxxxxxxxxxxxxxx (Erwin Lubbers)
- client: centos6.4 no rbd.ko
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- client: centos6.4 no rbd.ko
- From: maoqi1982@xxxxxxx (maoqi1982)
- Slow IOPS on RBD compared to journal and backing devices
- From: s.priebe@xxxxxxxxxxxx (Stefan Priebe - Profihost AG)
- Slow IOPS on RBD compared to journal and backing devices
- From: josef@xxxxxxxxxxx (Josef Johansson)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Error while initializing OSD directory
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Monitoring ceph statistics using rados python module
- From: log1024@xxxxxxxx (Kai Zhang)
- Monitoring ceph statistics using rados python module
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Journal SSD durability
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Journal SSD durability
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Rados GW Method not allowed
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Journal SSD durability
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Migrate whole clusters
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Ceph 0.80.1 delete/recreate data/metadata pools
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Ceph 0.80.1 delete/recreate data/metadata pools
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- ceph firefly PGs in active+clean+scrubbing state
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- Occasional Missing Admin Sockets
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Occasional Missing Admin Sockets
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Migrate whole clusters
- From: frederic.yang@xxxxxxxxx (Fred Yang)
- Occasional Missing Admin Sockets
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- crushmap question
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Where is the SDK of ceph object storage
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- too slowly upload on ceph object storage
- From: stephen.taylor@xxxxxxxxxxxxxxxx (Stephen Taylor)
- Monitoring ceph statistics using rados python module
- From: dotalton@xxxxxxxxx (Don Talton (dotalton))
- Monitoring ceph statistics using rados python module
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Lost access to radosgw after crash?
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Occasional Missing Admin Sockets
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Lost access to radosgw after crash?
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Monitoring ceph statistics using rados python module
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Monitoring ceph statistics using rados python module
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Migrate whole clusters
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Monitoring ceph statistics
- From: adrian@xxxxxxxxxxx (Adrian Banasiak)
- Ceph with VMWare / XenServer
- From: gilles.mocellin@xxxxxxxxxxxxxx (Gilles Mocellin)
- Performance stats
- From: yalla.gnan.kumar@xxxxxxxxxxxxx (yalla.gnan.kumar at accenture.com)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Journal SSD durability
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Rados GW Method not allowed
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Bulk storage use case
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Fwd: What is link and unlink options used for in radosgw-admin
- From: huangwenjun20@xxxxxxxxx (Wenjun Huang)
- Journal SSD durability
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Journal SSD durability
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Journal SSD durability
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- crushmap question
- From: ptiernan@xxxxxxxxxxxx (Peter)
- crushmap question
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- What is link and unlink options used for in radosgw-admin
- From: huangwenjun20@xxxxxxxxx (Wenjun Huang)
- Where is the SDK of ceph object storage
- From: wsnote@xxxxxxx (wsnote)
- How to set selinux for ceph on CentOS
- From: ji.you@xxxxxxxxx (You, Ji)
- v0.80.1 Firefly released
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- ceph firefly PGs in active+clean+scrubbing state
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- How to enable the 'fancy striping' in Ceph
- From: blacker1981@xxxxxxx (lijian)
- How to enable the 'fancy striping' in Ceph
- From: ukernel@xxxxxxxxx (Yan, Zheng)
- v0.80.1 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- CEPH placement groups and pool sizes
- From: pieter.koorts@xxxxxx (Pieter Koorts)
- v0.80 Firefly released
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- CEPH placement groups and pool sizes
- From: Bradley.McNamara@xxxxxxxxxxx (McNamara, Bradley)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- NFS over CEPH - best practice
- From: Bradley.McNamara@xxxxxxxxxxx (McNamara, Bradley)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Tape backup for CEPH
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph with VMWare / XenServer
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Ceph with VMWare / XenServer
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Unable to attach a volume, device is busy
- From: mloza@xxxxxxxxxxxxx (Mark Loza)
- ceph firefly PGs in active+clean+scrubbing state
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Ceph booth at http://www.solutionslinux.fr/
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Ceph booth at http://www.solutionslinux.fr/
- From: loic@xxxxxxxxxxx (Loic Dachary)
- NFS over CEPH - best practice
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- List connected clients ?
- From: florent@xxxxxxxxxxx (Florent B)
- Tape backup for CEPH
- From: yguang11@xxxxxxxxx (Guang)
- ceph firefly PGs in active+clean+scrubbing state
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- ceph firefly PGs in active+clean+scrubbing state
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- CEPH placement groups and pool sizes
- From: wido@xxxxxxxx (Wido den Hollander)
- Ceph with VMWare / XenServer
- From: jak3kaj@xxxxxxxxx (Jake Young)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- CEPH placement groups and pool sizes
- From: pieter.koorts@xxxxxx (Pieter Koorts)
- Question about Performance with librados
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- How to enable the 'fancy striping' in Ceph
- From: blacker1981@xxxxxxx (lijian)
- Ceph with VMWare / XenServer
- From: uwe@xxxxxxxxxxxxx (Uwe Grohnwaldt)
- [Query]Monitoring ceph resources
- From: saurav.lahiri@xxxxxxxxxxxxx (Saurav Lahiri)
- Ceph with VMWare / XenServer
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Don't allow user to create buckets but can read in radosgw
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- [OFF TOPIC] Deep Intellect - Inside the mind of the octopus
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- [OFF TOPIC] Deep Intellect - Inside the mind of the octopus
- From: amit.vijairania@xxxxxxxxx (Amit Vijairania)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- fixing degraded PGs
- From: kei.masumoto@xxxxxxxxx (Kei.masumoto)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- ceph-noarch firefly repodata
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- ceph-noarch firefly repodata
- From: sironside@xxxxxxxxxxxxx (Simon Ironside)
- Info firefly qemu rbd
- From: fiezzi@xxxxxxxx (Federico Iezzi)
- v0.80 Firefly released
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- qemu-img break cloudstack snapshot
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Migrate whole clusters
- From: andrey@xxxxxxx (Andrey Korolyov)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- Fwd: Bad performance of CephFS (first use)
- From: chibi@xxxxxxx (Christian Balzer)
- Bulk storage use case
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- v0.80 Firefly released
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- pgs not mapped to osds, tearing hair out
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Suggestions on new cluster
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- Fwd: Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Low latency values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- issues with ceph
- From: lincolnb@xxxxxxxxxxxx (Lincoln Bryant)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- Migrate whole clusters
- From: kyle.bader@xxxxxxxxx (Kyle Bader)
- Low latency values
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- issues with ceph
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- too slowly upload on ceph object storage
- From: stephen.taylor@xxxxxxxxxxxxxxxx (Stephen Taylor)
- Migrate whole clusters
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- Low latency values
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- Suggestions on new cluster
- From: chibi@xxxxxxx (Christian Balzer)
- Low latency values
- From: haomaiwang@xxxxxxxxx (Haomai Wang)
- pgs not mapped to osds, tearing hair out
- From: sage@xxxxxxxxxxx (Sage Weil)
- Low latency values
- From: daryder@xxxxxxxxx (Dan Ryder (daryder))
- issues with ceph
- From: lincolnb@xxxxxxxxxxxx (Lincoln Bryant)
- issues with ceph
- From: earonesty@xxxxxxxxxxxxxxxxxxxxxx (Aronesty, Erik)
- pgs not mapped to osds, tearing hair out
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- NFS over CEPH - best practice
- From: maciej.bonin@xxxxxxxx (Maciej Bonin)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Bulk storage use case
- From: c.lemarchand@xxxxxxxxxxx (Cédric Lemarchand)
- Migrate whole clusters
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Ceph Not getting into a clean state
- From: martin@xxxxxxxxxxx (Martin B Nielsen)
- Help -Ceph deployment in Single node Like Devstack
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Fwd: Bad performance of CephFS (first use)
- From: chibi@xxxxxxx (Christian Balzer)
- ERROR: modinfo: could not find module rbd
- From: easelu@xxxxxxxxx (Ease Lu)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- Ceph Not getting into a clean state
- From: mark.kirkwood@xxxxxxxxxxxxxxx (Mark Kirkwood)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- Fwd: Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Suggestions on new cluster
- From: chibi@xxxxxxx (Christian Balzer)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: yehuda@xxxxxxxxxxx (Yehuda Sadeh)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- too slowly upload on ceph object storage
- From: wsnote@xxxxxxx (wsnote)
- subscribe ceph mail list
- From: sean_cao@xxxxxxxxxxxx (Sean Cao)
- NFS over CEPH - best practice
- From: stuartl@xxxxxxxxxx (Stuart Longland)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: lists@xxxxxxxxx (Henrik Korkuc)
- 0.80 Firefly Debian/Ubuntu Trusty Packages
- From: michael@xxxxxxxxxxxxxxxxxx (Michael)
- NFS over CEPH - best practice
- From: leen@xxxxxxxxxxxxxxxxx (Leen Besselink)
- 0.80 binaries?
- From: lists@xxxxxxxxx (Henrik Korkuc)
- Info firefly qemu rbd
- From: josh.durgin@xxxxxxxxxxx (Josh Durgin)
- Ceph Not getting into a clean state
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Info firefly qemu rbd
- From: fiezzi@xxxxxxxx (Federico Iezzi)
- Question about Performance with librados
- From: Erik.Lukac@xxxxx (Lukac, Erik)
- 0.80 binaries?
- From: lesser.evil@xxxxxxxxx (Shawn Edwards)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- 0.67.7 rpms changed today??
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Replace journals disk
- From: sage@xxxxxxxxxxx (Sage Weil)
- 0.67.7 rpms changed today??
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Unable to remove RBD volume
- From: jon@xxxxxxxxxxxxxxxx (Jonathan Gowar)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: ulembke@xxxxxxxxxxxx (Udo Lembke)
- Slow IOPS on RBD compared to journal and backing devices
- From: ulembke@xxxxxxxxxxxx (Udo Lembke)
- Suggestions on new cluster
- From: cperez@xxxxxxxxx (Carlos M. Perez)
- 16 osds: 11 up, 16 in
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- Unable to remove RBD volume
- From: jon@xxxxxxxxxxxxxxxx (Jonathan Gowar)
- Bad performance of CephFS (first use)
- From: michal.pazdera@xxxxxxxxx (Michal Pazdera)
- Ceph Not getting into a clean state
- From: georg.hoellrigl@xxxxxxxxxx (Georg Höllrigl)
- List users not listing users
- From: hypunit@xxxxxxxxx (Punit Dambiwal)
- v0.80 Firefly released
- From: andrey@xxxxxxx (Andrey Korolyov)
- Error while running rados gateway
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Hey, about radosgw , always encounter internal server error .
- From: ptiernan@xxxxxxxxxxxx (Peter)
- Does ceph has impact on imp IO performance
- From: duan.xufeng@xxxxxxxxxx (duan.xufeng at zte.com.cn)
- Does ceph has impact on imp IO performance
- From: duan.xufeng@xxxxxxxxxx (duan.xufeng at zte.com.cn)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Errors while integrating Rados Gateway with Keystone
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Hey, about radosgw , always encounter internal server error .
- From: peng.dev@xxxxxx (peng)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Slow IOPS on RBD compared to journal and backing devices
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- Help -Ceph deployment in Single node Like Devstack
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- Deep-Scrub Scheduling
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- Deep-Scrub Scheduling
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Deep-Scrub Scheduling
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Help -Ceph deployment in Single node Like Devstack
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- Deep-Scrub Scheduling
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Slow IOPS on RBD compared to journal and backing devices
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Slow IOPS on RBD compared to journal and backing devices
- From: chibi@xxxxxxx (Christian Balzer)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- NFS over CEPH - best practice
- From: gilles.mocellin@xxxxxxxxxxxxxx (Gilles Mocellin)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vladislav Gorbunov)
- 16 osds: 11 up, 16 in
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- 16 osds: 11 up, 16 in
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- 16 osds: 11 up, 16 in
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ovirt
- From: wido@xxxxxxxx (Wido den Hollander)
- [ANN] ceph-deploy 1.5.2 released
- From: alfredo.deza@xxxxxxxxxxx (Alfredo Deza)
- 16 osds: 11 up, 16 in
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- v0.80 Firefly released
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- [Ceph-community] How to install CEPH on CentOS 6.3
- From: aarontc@xxxxxxxxxxx (Aaron Ten Clay)
- v0.80 Firefly released
- From: mike.dawson@xxxxxxxxxxxx (Mike Dawson)
- Ovirt
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: lists@xxxxxxxxx (Henrik Korkuc)
- health HEALTH_WARN too few pgs per osd (16 < min 20)
- From: st.uzver@xxxxxxxxx (*sm1Ly)
- 16 osds: 11 up, 16 in
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- Ovirt
- From: nathan@xxxxxxxxxxxx (Nathan Stratton)
- v0.80 Firefly released
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: mark.nelson@xxxxxxxxxxx (Mark Nelson)
- v0.80 Firefly released
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Cache tiering
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: sage@xxxxxxxxxxx (Sage Weil)
- v0.80 Firefly released
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Cache tiering
- From: daniel.vanderster@xxxxxxx (Dan van der Ster)
- Explicit F2FS support (was: v0.80 Firefly released)
- From: sage@xxxxxxxxxxx (Sage Weil)
- Cache tiering
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- [rados-java] Hi, I am a newer for ceph . And I found rados-java in github, but there are some problems for me .
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vlad Gorbunov)
- v0.80 Firefly released
- From: aderumier@xxxxxxxxx (Alexandre DERUMIER)
- v0.80 Firefly released
- From: Kenneth.Waegeman@xxxxxxxx (Kenneth Waegeman)
- Cache tiering
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Cache tiering
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- NFS over CEPH - best practice
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- NFS over CEPH - best practice
- From: vadikgo@xxxxxxxxx (Vlad Gorbunov)
- NFS over CEPH - best practice
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- NFS over CEPH - best practice
- From: wido@xxxxxxxx (Wido den Hollander)
- NFS over CEPH - best practice
- From: andrei@xxxxxxxxxx (Andrei Mikhailovsky)
- Bulk storage use case
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- Change size journal's blocks from 4k to another.
- From: mike.almateia@xxxxxxxxx (Mike)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Explicit F2FS support (was: v0.80 Firefly released)
- From: andrey@xxxxxxx (Andrey Korolyov)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- How to install CEPH on CentOS 6.3
- From: easelu@xxxxxxxxx (Ease Lu)
- Delete pool .rgw.bucket and objects within it
- From: malmyzh@xxxxxxxxx (Irek Fasikhov)
- About ceph.conf
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Replace journals disk
- From: indra@xxxxxxxx (Indra Pramana)
- Delete pool .rgw.bucket and objects within it
- From: thanhtv26@xxxxxxxxx (Thanh Tran)
- v0.80 Firefly released
- From: sage@xxxxxxxxxxx (Sage Weil)
- Ceph OpenStack Integration
- From: derek@xxxxxxxxxxxxxx (Derek Yarnell)
- Open Source Storage Hackathon Before OpenStack Summit
- From: patrick@xxxxxxxxxxx (Patrick McGarry)
- cannot revert lost objects
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- some unfound object
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- advice with hardware configuration
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Replace journals disk
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- advice with hardware configuration
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- advice with hardware configuration
- From: dmaziuk@xxxxxxxxxxxxx (Dimitri Maziuk)
- RBD on Mac OS X
- From: Jurvis.LaSalle@xxxxxxxxxxxxxxxxxxxxx (LaSalle, Jurvis)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- RBD on Mac OS X
- From: mike@xxxxxxxxxxxxxxxx (Mike Bryant)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: chibi@xxxxxxx (Christian Balzer)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- advice with hardware configuration
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Migrate system VMs from local storage to CEPH
- From: wido@xxxxxxxx (Wido den Hollander)
- advice with hardware configuration
- From: wido@xxxxxxxx (Wido den Hollander)
- advice with hardware configuration
- From: xelkano@xxxxxxxxxxxx (Xabier Elkano)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- About ceph.conf
- From: sage@xxxxxxxxxxx (Sage Weil)
- Fwd: Ceph perfomance issue!
- From: sebastien.han@xxxxxxxxxxxx (Sebastien Han)
- View or set Policy
- From: spuntamkar@xxxxxxxxx (Shashank Puntamkar)
- Replace journals disk
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- Ceph installation
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- Replace journals disk
- From: frederic.yang@xxxxxxxxx (Fred Yang)
- Ceph installation
- From: shadebe@xxxxxxxxxx (Sakhi Hadebe)
- RBD on Mac OS X
- From: andrey@xxxxxxx (Andrey Korolyov)
- RBD on Mac OS X
- From: wogri@xxxxxxxxx (Wolfgang Hennerbichler)
- RBD on Mac OS X
- From: pasha@xxxxxxxxx (Pavel V. Kaygorodov)
- About ceph.conf
- From: john.wilkins@xxxxxxxxxxx (John Wilkins)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: daniel.vanderster@xxxxxxx (Dan Van Der Ster)
- Replace journals disk
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Replace journals disk
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Replace journals disk
- From: gandalf.corvotempesta@xxxxxxxxx (Gandalf Corvotempesta)
- Default pool ruleset problem
- From: jc.lopez@xxxxxxxxxxx (Jean-Charles Lopez)
- Default pool ruleset problem
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- About ceph.conf
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Where does ceph save files?
- From: wsnote@xxxxxxx (wsnote)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Manually mucked up pg, need help fixing
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Manually mucked up pg, need help fixing
- From: jak3kaj@xxxxxxxxx (Jake Young)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Fatigue for XFS
- From: david@xxxxxxxxxxxxx (Dave Chinner)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- some unfound object
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Fatigue for XFS
- From: andrey@xxxxxxx (Andrey Korolyov)
- Fatigue for XFS
- From: david@xxxxxxxxxxxxx (Dave Chinner)
- Manually mucked up pg, need help fixing
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- Fatigue for XFS
- From: andrey@xxxxxxx (Andrey Korolyov)
- Rados Gateway pagination
- From: fabricio@xxxxxxxxxxxxxxx (Fabricio Archanjo)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Migrate system VMs from local storage to CEPH
- From: brak@xxxxxxxxxxxxxxx (Brian Rak)
- Ceph RADOS Gateway setup with Apache 2.4.3 and FastCGI 2.4.6 vesions
- From: sragolu@xxxxxxxxxx (Srinivasa Rao Ragolu)
- ceph editable failure domains
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- List users not listing users
- From: xielesshanil@xxxxxxxxx (Shanil S)
- some unfound object
- From: vernon1987@xxxxxxx (vernon1987 at 126.com)
- Rados Gateway pagination
- From: hell@xxxxxxxxxxx (Sergey Malinin)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- [rados-java] Hi, I am a newer for ceph . And I found rados-java in github, but there are some problems for me .
- From: peng.dev@xxxxxx (peng)
- Migrate system VMs from local storage to CEPH
- From: wido@xxxxxxxx (Wido den Hollander)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- Replace OSD drive without remove/re-add OSD
- From: cedric@xxxxxxxxxxx (Cedric Lemarchand)
- mkcephfs questions
- From: buddy.cao@xxxxxxxxx (Cao, Buddy)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- cannot revert lost objects
- From: kevinhoran@xxxxxxxxxxxxxxxxxxxx (Kevin Horan)
- Manually mucked up pg, need help fixing
- From: jbachtel@xxxxxxxxxxxxxxxxxxxxxx (Jeff Bachtel)
- Ceph User Committee monthly meeting #2 : executive summary
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Replace OSD drive without remove/re-add OSD
- From: andrey@xxxxxxx (Andrey Korolyov)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- ceph editable failure domains
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph User Committee elections
- From: loic@xxxxxxxxxxx (Loic Dachary)
- ceph mom help
- From: ian.colle@xxxxxxxxxxx (Ian Colle)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- Rados Gateway pagination
- From: fabricio@xxxxxxxxxxxxxxx (Fabricio Archanjo)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- ceph mom help
- From: jlu@xxxxxxxxxxxxx (Jimmy Lu)
- help to tune ceph
- From: matteo.favaro@xxxxxxxxxxxx (Matteo Favaro)
- Migrate system VMs from local storage to CEPH
- From: andrija.panic@xxxxxxxxx (Andrija Panic)
- Manual emperor monitor installation hangs at ceph-mon --mkfs
- From: stefan.walter@xxxxxxxxxxx (Stefan U. Walter)
- Replace OSD drive without remove/re-add OSD
- From: lists@xxxxxxxxx (Henrik Korkuc)
- Replace OSD drive without remove/re-add OSD
- From: andrey@xxxxxxx (Andrey Korolyov)
- ceph editable failure domains
- From: fabrizio.ventola@xxxxxxxx (Fabrizio G. Ventola)
- Replace OSD drive without remove/re-add OSD
- From: indra@xxxxxxxx (Indra Pramana)
- Fwd: Access denied error
- From: hypunit@xxxxxxxxx (Punit Dambiwal)
- Red Hat to acquire Inktank
- From: Suresh.Sadhu@xxxxxxxxxx (Suresh Sadhu)
- Ceph Object Storage front-end?
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Ceph Object Storage front-end?
- From: mandell@xxxxxxxxxxxxxxx (Mandell Degerness)
- Ceph User Committee monthly meeting #2 : May 2nd, 2014
- From: loic@xxxxxxxxxxx (Loic Dachary)
- Ceph unstable when upgrading from emperor (v0.72.2) to firefly (v0.80-rc1-16-g2708c3c)
- From: greg@xxxxxxxxxxx (Gregory Farnum)
- cannot revert lost objects
- From: clewis@xxxxxxxxxxxxxxxxxx (Craig Lewis)
- Red Hat to acquire Inktank
- From: neil.levine@xxxxxxxxxxx (Neil Levine)
- cannot revert lost objects
- From: khoran@xxxxxxxxxxxxxxxxxxxx (kevin horan)