Re: RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

We are using Ceph (Hammer) on CentOS 7 and RHEL 7.1 successfully.

One key point is to ensure that the disk is cleaned before running the
ceph-disk command. Because GPT tables are used, you must run the
'sgdisk -Z' command to purge the disk of all partition tables. We
usually issue this command in the Red Hat kickstart file.
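A minimal sketch of that cleanup, suitable for the body of a kickstart %pre section or a pre-install script. The device list is an assumption for illustration; it echoes instead of wiping unless explicitly armed, so verify the devices against your own hardware first:

```shell
# Purge GPT and MBR data structures from the future OSD disks so that
# ceph-disk starts from a clean slate. Echo-only by default; set
# DO_ZAP=1 to actually wipe the listed disks. The device names below
# are examples only.
zap_disk() {
    if [ "${DO_ZAP:-0}" = "1" ]; then
        sgdisk -Z "$1"     # destroy all GPT and MBR data structures
        partprobe "$1"     # have the kernel re-read the now-empty table
    else
        echo "would zap: $1"
    fi
}

for dev in /dev/sdb /dev/sdc; do
    zap_disk "$dev"
done
```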

The second trick is not to run the mount command explicitly (as shown in
your post below); ceph-disk handles mounting itself.

The 'ceph-disk prepare' command should automatically start the OSD.
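The workflow above, sketched as shell. The device names and cluster UUID are copied from the thread below and are examples only; the invocation is composed as a string first so it can be reviewed before anything touches a disk:

```shell
# Compose the ceph-disk prepare invocation. ceph-disk itself partitions
# the data disk, creates the xfs filesystem, mounts it, and (via udev)
# triggers activation -- no manual mkdir, mount, or 'ceph-osd --mkfs'
# is needed.
build_prepare_cmd() {
    local data_dev=$1 journal_part=$2 fsid=$3
    echo "ceph-disk prepare --cluster ceph --cluster-uuid $fsid --fs-type xfs $data_dev $journal_part"
}

# Example values from the thread -- substitute your own devices and UUID.
cmd=$(build_prepare_cmd /dev/sdc /dev/sdb1 b2c2e866-ab61-4f80-b116-20fa2ea2ca94)
echo "$cmd"    # review the command, then run it, e.g. with: eval "$cmd"
```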

Paul

On 29/06/2015 20:19, "Bruce McFarland" <Bruce.McFarland@xxxxxxxxxxxxxxxx>
wrote:

>Do these issues occur in Centos 7 also?
>
>> -----Original Message-----
>> From: Bruce McFarland
>> Sent: Monday, June 29, 2015 12:06 PM
>> To: 'Loic Dachary'; 'ceph-users@xxxxxxxxxxxxxx'
>> Subject: RE:  RHEL 7.1 ceph-disk failures creating OSD with
>>ver
>> 0.94.2
>> 
>> Using the "manual" method of creating an OSD on RHEL 7.1 with Ceph 94.2
>> turns up an issue with the ondisk fsid of the journal device. From a
>>quick
>> web search I've found reference to this exact same issue from earlier
>>this
>> year. Is there a version of Ceph that works with RHEL 7.1???
>> 
>> [root@ceph0 ceph]# ceph-disk-prepare --cluster ceph --cluster-uuid
>> b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs /dev/sdc /dev/sdb1
>> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
>> same device as the osd data The operation has completed successfully.
>> partx: /dev/sdc: error adding partition 1
>> meta-data=/dev/sdc1              isize=2048   agcount=4,
>>agsize=244188597
>> blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=0        finobt=0
>> data     =                       bsize=4096   blocks=976754385,
>>imaxpct=5
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> log      =internal log           bsize=4096   blocks=476930, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> The operation has completed successfully.
>> partx: /dev/sdc: error adding partition 1
>> [root@ceph0 ceph]# mkdir /var/lib/ceph/osd/ceph-0
>> [root@ceph0 ceph]# ll /var/lib/ceph/osd/ total 0 drwxr-xr-x. 2 root
>>root 6
>> Jun 29 12:01 ceph-0
>> [root@ceph0 ceph]# mount -t xfs /dev/sdc1 /var/lib/ceph/osd/ceph-0/
>> [root@ceph0 ceph]# mount
>> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) sysfs on /sys
>>type
>> sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>> devtmpfs on /dev type devtmpfs
>> (rw,nosuid,seclabel,size=57648336k,nr_inodes=14412084,mode=755)
>> securityfs on /sys/kernel/security type securityfs
>> (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs
>> (rw,nosuid,nodev,seclabel) devpts on /dev/pts type devpts
>> (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>> tmpfs on /sys/fs/cgroup type tmpfs
>> (rw,nosuid,nodev,noexec,seclabel,mode=755)
>> cgroup on /sys/fs/cgroup/systemd type cgroup
>> 
>>(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/sys
>> temd-cgroups-agent,name=systemd)
>> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
>> cgroup on /sys/fs/cgroup/cpuset type cgroup
>> (rw,nosuid,nodev,noexec,relatime,cpuset)
>> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
>> (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
>> cgroup on /sys/fs/cgroup/memory type cgroup
>> (rw,nosuid,nodev,noexec,relatime,memory)
>> cgroup on /sys/fs/cgroup/devices type cgroup
>> (rw,nosuid,nodev,noexec,relatime,devices)
>> cgroup on /sys/fs/cgroup/freezer type cgroup
>> (rw,nosuid,nodev,noexec,relatime,freezer)
>> cgroup on /sys/fs/cgroup/net_cls type cgroup
>> (rw,nosuid,nodev,noexec,relatime,net_cls)
>> cgroup on /sys/fs/cgroup/blkio type cgroup
>> (rw,nosuid,nodev,noexec,relatime,blkio)
>> cgroup on /sys/fs/cgroup/perf_event type cgroup
>> (rw,nosuid,nodev,noexec,relatime,perf_event)
>> cgroup on /sys/fs/cgroup/hugetlb type cgroup
>> (rw,nosuid,nodev,noexec,relatime,hugetlb)
>> configfs on /sys/kernel/config type configfs (rw,relatime)
>> /dev/mapper/rhel_ceph0-root on / type xfs
>> (rw,relatime,seclabel,attr2,inode64,noquota)
>> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
>> systemd-1 on /proc/sys/fs/binfmt_misc type autofs
>> (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
>> debugfs on /sys/kernel/debug type debugfs (rw,relatime) mqueue on
>> /dev/mqueue type mqueue (rw,relatime,seclabel) hugetlbfs on
>> /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>> /dev/mapper/rhel_ceph0-home on /home type xfs
>> (rw,relatime,seclabel,attr2,inode64,noquota)
>> /dev/sda2 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>> binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
>> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
>> /dev/sdc1 on /var/lib/ceph/osd/ceph-0 type xfs
>> (rw,relatime,seclabel,attr2,inode64,noquota)
>> [root@ceph0 ceph]# ceph-osd -i=0 --mkfs
>> 2015-06-29 12:02:47.702808 7f2fb4625880 -1 journal FileJournal::_open:
>> disabling aio for non-block journal.  Use journal_force_aio to force
>>use of aio
>> anyway
>> 2015-06-29 12:02:47.702851 7f2fb4625880 -1 journal check: ondisk fsid
>> 00000000-0000-0000-0000-000000000000 doesn't match expected
>> 7e792d5e-a5c6-40cd-a361-0457875ea92c, invalid (someone else's?) journal
>> 2015-06-29 12:02:47.702876 7f2fb4625880 -1
>> filestore(/var/lib/ceph/osd/ceph-0) mkjournal error creating journal on
>> /var/lib/ceph/osd/ceph-0/journal: (22) Invalid argument
>> 2015-06-29 12:02:47.702890 7f2fb4625880 -1 OSD::mkfs: ObjectStore::mkfs
>> failed with error -22
>> 2015-06-29 12:02:47.702928 7f2fb4625880 -1  ** ERROR: error creating
>> empty object store in /var/lib/ceph/osd/ceph-0: (22) Invalid argument
>> [root@ceph0 ceph]#
>> 
>> > -----Original Message-----
>> > From: Bruce McFarland
>> > Sent: Monday, June 29, 2015 11:39 AM
>> > To: 'Loic Dachary'; ceph-users@xxxxxxxxxxxxxx
>> > Subject: RE:  RHEL 7.1 ceph-disk failures creating OSD
>> > with ver
>> > 0.94.2
>> >
>> > It doesn't appear to be related to using wwn's for the drive id. The
>> > verbose output shows ceph converting from wwn to sd letter. I ran with
>> > verbose on and used sd letters for the data drive and the journal and
>> > get the same failures. I'm attempting to create OSD's manually now.
>> >
>> > [root@ceph0 ceph]# ceph-disk -v prepare --cluster ceph --cluster-uuid
>> > b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs --zap-disk /dev/sdc
>> > /dev/sdb1 DEBUG:ceph-disk:Zapping partition table on /dev/sdc
>> > INFO:ceph- disk:Running command: /usr/sbin/sgdisk --zap-all --
>> > /dev/sdc
>> > Caution: invalid backup GPT header, but valid main header;
>> > regenerating backup header from main header.
>> >
>> > Warning! Main and backup partition tables differ! Use the 'c' and 'e'
>> > options on the recovery & transformation menu to examine the two
>>tables.
>> >
>> > Warning! One or more CRCs don't match. You should repair the disk!
>> >
>> >
>> **************************************************************
>> > **************
>> > Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT,
>> > but disk verification and recovery are STRONGLY recommended.
>> >
>> **************************************************************
>> > **************
>> > GPT data structures destroyed! You may now partition the disk using
>> > fdisk or other utilities.
>> > INFO:ceph-disk:Running command: /usr/sbin/sgdisk --clear --mbrtogpt --
>> > /dev/sdc Creating new GPT entries.
>> > The operation has completed successfully.
>> > INFO:ceph-disk:calling partx on zapped device /dev/sdc
>> > INFO:ceph-disk:re- reading known partitions will display errors
>> > INFO:ceph-disk:Running
>> > command: /usr/sbin/partx -d /dev/sdc
>> > partx: specified range <1:0> does not make sense
>> > INFO:ceph-disk:Running
>> > command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup
>> > osd_mkfs_options_xfs INFO:ceph-disk:Running command: /usr/bin/ceph-
>> > conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>> > INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --
>> > name=osd. --lookup osd_mount_options_xfs INFO:ceph-disk:Running
>> > command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup
>> > osd_fs_mount_options_xfs INFO:ceph-disk:Running command:
>> > /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
>> > INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --
>> > name=osd. --lookup osd_cryptsetup_parameters INFO:ceph-disk:Running
>> > command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup
>> > osd_dmcrypt_key_size INFO:ceph-disk:Running command: /usr/bin/ceph-
>> > conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type DEBUG:ceph-
>> > disk:Journal is file /dev/sdb1 WARNING:ceph-disk:OSD will not be hot-
>> > swappable if journal is not the same device as the osd data
>> > DEBUG:ceph- disk:Creating osd partition on /dev/sdc INFO:ceph-
>> disk:Running command:
>> > /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data
>> > --partition- guid=1:6d05612e-5cc0-422c-9228-4e53ee0f27ac
>> > --typecode=1:89c57f98- 2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc The
>> > operation has completed successfully.
>> > INFO:ceph-disk:calling partx on created device /dev/sdc
>> > INFO:ceph-disk:re- reading known partitions will display errors
>> > INFO:ceph-disk:Running
>> > command: /usr/sbin/partx -a /dev/sdc
>> > partx: /dev/sdc: error adding partition 1 INFO:ceph-disk:Running
>> command:
>> > /usr/bin/udevadm settle DEBUG:ceph-disk:Creating xfs fs on /dev/sdc1
>> > INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048
>> > --
>> > /dev/sdc1
>> > meta-data=/dev/sdc1              isize=2048   agcount=4,
>>agsize=244188597
>> > blks
>> >          =                       sectsz=512   attr=2, projid32bit=1
>> >          =                       crc=0        finobt=0
>> > data     =                       bsize=4096   blocks=976754385,
>>imaxpct=5
>> >          =                       sunit=0      swidth=0 blks
>> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> > log      =internal log           bsize=4096   blocks=476930, version=2
>> >          =                       sectsz=512   sunit=0 blks,
>>lazy-count=1
>> > realtime =none                   extsz=4096   blocks=0, rtextents=0
>> > DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.DQ8nOj
>> > with options noatime,inode64 INFO:ceph-disk:Running command:
>> > /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1
>> > /var/lib/ceph/tmp/mnt.DQ8nOj DEBUG:ceph-disk:Preparing osd data dir
>> > /var/lib/ceph/tmp/mnt.DQ8nOj DEBUG:ceph-disk:Creating symlink
>> > /var/lib/ceph/tmp/mnt.DQ8nOj/journal -> /dev/sdb1 DEBUG:ceph-
>> > disk:Unmounting /var/lib/ceph/tmp/mnt.DQ8nOj INFO:ceph-disk:Running
>> > command: /bin/umount -- /var/lib/ceph/tmp/mnt.DQ8nOj INFO:ceph-
>> > disk:Running command: /usr/sbin/sgdisk
>> > --typecode=1:4fbd7e29-9d25-41b8- afd0-062c0ceff05d -- /dev/sdc The
>> operation has completed successfully.
>> > INFO:ceph-disk:calling partx on prepared device /dev/sdc
>> > INFO:ceph-disk:re- reading known partitions will display errors
>> > INFO:ceph-disk:Running
>> > command: /usr/sbin/partx -a /dev/sdc
>> > partx: /dev/sdc: error adding partition 1
>> > [root@ceph0 ceph]#
>> >
>> > > -----Original Message-----
>> > > From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>> > > Sent: Saturday, June 27, 2015 1:08 AM
>> > > To: Bruce McFarland; ceph-users@xxxxxxxxxxxxxx
>> > > Subject: Re:  RHEL 7.1 ceph-disk failures creating OSD
>> > >
>> > > Hi Bruce,
>> > >
>> > > I think the problem comes from using /dev/disk/by-id/wwn-
>> > > 0x500003959bd02f56 instead of /dev/sdw for the data disk, because
>> > > ceph- disk has a device name parsing logic that works with /dev/XXX.
>> > > Could you run the ceph-disk prepare command again with --verbose to
>> > > confirm ? If that's the case there should be an error instead of
>> > > what appears to be something that only does part of the work.
>> > >
>> > > Cheers
>> > >
>> > > On 26/06/2015 18:56, Bruce McFarland wrote:
>> > > > Loic,
>> > > > Thank you very much for the partprobe workaround. I rebuilt the
>> > > > cluster
>> > > using 94.2.
>> > > >
>> > > > I've created partitions on the journal SSDs with parted and then
>> > > > use
>> > > > ceph-
>> > > disk prepare as below. I'm not seeing all of the disks with the tmp
>> > > mounts when I check 'mount' but I also don't see any of the mount
>> > > directory mount points at /var/lib/ceph/osd. I'm seeing the following
>> > > output from prepare.
>> > > When I attempt to 'activate' it errors out saying the devices don't
>>exist.
>> > > >
>> > > > ceph-disk prepare --cluster ceph --cluster-uuid
>> > > > b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs --zap-disk
>> > > > /dev/disk/by-id/wwn-0x500003959bd02f56
>> > > > /dev/disk/by-id/wwn-0x500080d91010024b-part1
>> > > > Caution: invalid backup GPT header, but valid main header;
>> > > > regenerating backup header from main header.
>> > > >
>> > > >
>> > >
>> >
>> **************************************************************
>> > > ********
>> > > > ******
>> > > > Caution: Found protective or hybrid MBR and corrupt GPT. Using
>> > > > GPT, but disk verification and recovery are STRONGLY recommended.
>> > > >
>> > >
>> >
>> **************************************************************
>> > > ********
>> > > > ****** GPT data structures destroyed! You may now partition the
>> > > > disk using fdisk or other utilities.
>> > > > Creating new GPT entries.
>> > > > The operation has completed successfully.
>> > > > partx: specified range <1:0> does not make sense WARNING:ceph-
>> > > disk:OSD
>> > > > will not be hot-swappable if journal is not the same device as the
>> > > > osd data WARNING:ceph-disk:Journal /dev/disk/by-id/wwn-
>> > > 0x500080d91010024b-part1 was not prepared with ceph-disk. Symlinking
>> > > directly.
>> > > > The operation has completed successfully.
>> > > > partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding
>> > > > partition
>> > 1
>> > > > meta-data=/dev/sdw1              isize=2048   agcount=4,
>> > agsize=244188597
>> > > blks
>> > > >          =                       sectsz=512   attr=2,
>>projid32bit=1
>> > > >          =                       crc=0        finobt=0
>> > > > data     =                       bsize=4096   blocks=976754385,
>>imaxpct=5
>> > > >          =                       sunit=0      swidth=0 blks
>> > > > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> > > > log      =internal log           bsize=4096   blocks=476930,
>>version=2
>> > > >          =                       sectsz=512   sunit=0 blks,
>>lazy-count=1
>> > > > realtime =none                   extsz=4096   blocks=0,
>>rtextents=0
>> > > > The operation has completed successfully.
>> > > > partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding
>> > > > partition
>> > > > 1
>> > > >
>> > > >
>> > > > [root@ceph0 ceph]# ceph -v
>> > > > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
>> > > > [root@ceph0 ceph]# rpm -qa | grep ceph
>> > > > ceph-radosgw-0.94.2-0.el7.x86_64
>> > > > libcephfs1-0.94.2-0.el7.x86_64
>> > > > ceph-common-0.94.2-0.el7.x86_64
>> > > > python-cephfs-0.94.2-0.el7.x86_64
>> > > > ceph-0.94.2-0.el7.x86_64
>> > > > [root@ceph0 ceph]#
>> > > >
>> > > >
>> > > >
>> > > >> -----Original Message-----
>> > > >> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>> > > >> Sent: Friday, June 26, 2015 3:29 PM
>> > > >> To: Bruce McFarland; ceph-users@xxxxxxxxxxxxxx
>> > > >> Subject: Re:  RHEL 7.1 ceph-disk failures creating
>> > > >> OSD
>> > > >>
>> > > >> Hi,
>> > > >>
>> > > >> Prior to firefly v0.80.8 ceph-disk zap did not call partprobe and
>> > > >> that was causing the kind of problems you're experiencing. It was
>> > > >> fixed by
>> > > >>
>> > >
>> >
>> https://github.com/ceph/ceph/commit/e70a81464b906b9a304c29f474e672
>> > > >> 6762b63a7c and is described in more details at
>> > > >> http://tracker.ceph.com/issues/9665. Rebooting the machine
>> > > >> ensures the partition table is up to date and that's what you
>> > > >> probably want to do after that kind of failure. You can however
>> > > >> avoid the failure by
>> > > running:
>> > > >>
>> > > >>  * ceph-disk zap
>> > > >>  * partprobe
>> > > >>  * ceph-disk prepare
>> > > >>
>> > > >> Cheers
>> > > >>
>> > > >> P.S. The "partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error
>> > > >> adding partition 1" can be ignored, it does not actually matter.
>> > > >> A message was added later to avoid confusion with a real error.
>> > > >> .
>> > > >> On 26/06/2015 17:09, Bruce McFarland wrote:
>> > > >>> I have moved storage nodes to RHEL 7.1 and used the basic server
>> > > >>> install. I
>> > > >> installed ceph-deploy and used the ceph.repo/epel.repo for
>> > > >> installation of ceph 80.7. I have tried ceph-disk with issuing
>>"zap"
>> > > >> on the same command line as "prepare" and on a separate command
>> > > line
>> > > >> immediately before the ceph-disk prepare. I consistently run into
>> > > >> the partition errors and am unable to create OSD's on RHEL 7.1.
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> ceph-disk prepare --cluster ceph --cluster-uuid
>> > > >>> 373a09f7-2070-4d20-8504-
>> > > >> c8653fb6db80 --fs-type xfs --zap-disk /dev/disk/by-id/wwn-
>> > > >> 0x500003959ba80a4e /dev/disk/by-id/wwn-0x500080d9101001d6-
>> > part1
>> > > >>>
>> > > >>> Caution: invalid backup GPT header, but valid main header;
>> > > >>> regenerating
>> > > >>>
>> > > >>> backup header from main header.
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>
>> > >
>> >
>> **************************************************************
>> > > >> **************
>> > > >>>
>> > > >>> Caution: Found protective or hybrid MBR and corrupt GPT. Using
>> > > >>> GPT, but
>> > > >> disk
>> > > >>>
>> > > >>> verification and recovery are STRONGLY recommended.
>> > > >>>
>> > > >>>
>> > > >>
>> > >
>> >
>> **************************************************************
>> > > >> **************
>> > > >>>
>> > > >>> GPT data structures destroyed! You may now partition the disk
>> > > >>> using fdisk
>> > > >> or
>> > > >>>
>> > > >>> other utilities.
>> > > >>>
>> > > >>> The operation has completed successfully.
>> > > >>>
>> > > >>> WARNING:ceph-disk:OSD will not be hot-swappable if journal is
>> > > >>> not the
>> > > >> same device as the osd data
>> > > >>>
>> > > >>> The operation has completed successfully.
>> > > >>>
>> > > >>> meta-data=/dev/sdc1              isize=2048   agcount=4,
>> > > agsize=244188597
>> > > >> blks
>> > > >>>
>> > > >>>          =                       sectsz=512   attr=2,
>>projid32bit=1
>> > > >>>
>> > > >>>          =                       crc=0        finobt=0
>> > > >>>
>> > > >>> data     =                       bsize=4096   blocks=976754385,
>>imaxpct=5
>> > > >>>
>> > > >>>          =                       sunit=0      swidth=0 blks
>> > > >>>
>> > > >>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> > > >>>
>> > > >>> log      =internal log           bsize=4096   blocks=476930,
>>version=2
>> > > >>>
>> > > >>>          =                       sectsz=512   sunit=0 blks,
>>lazy-count=1
>> > > >>>
>> > > >>> realtime =none                   extsz=4096   blocks=0,
>>rtextents=0
>> > > >>>
>> > > >>> The operation has completed successfully.
>> > > >>>
>> > > >>> partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error adding
>> > > >>> partition 1
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> _______________________________________________
>> > > >>> ceph-users mailing list
>> > > >>> ceph-users@xxxxxxxxxxxxxx
>> > > >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> > > >>>
>> > > >>
>> > > >> --
>> > > >> Loïc Dachary, Artisan Logiciel Libre
>> > > >
>> > >
>> > > --
>> > > Loïc Dachary, Artisan Logiciel Libre
>
