Re: Very frustrated with Ceph!

On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> 
> Hi Mark,
> 
> I have been very impressed with ceph's overall philosophy, but it seems there
> are many loose ends. Many times the presence of "/etc/ceph" is assumed, as is
> evident from the error/log messages, but is not documented in the

This is a bit of a canary.  /etc/ceph is created by the deb or rpm at 
install time, so if you *ever* see that it is missing it is because an 
install step was missed/skipped somewhere on that machine.  e.g.,

 ceph-deploy install HOST

We deliberately avoid creating it in other places to avoid seeing more 
confusing errors later down the line.  (Probably we should have more 
helpful/informative error messages when it's not found, though!)
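
A quick way to confirm this on any node (assuming the RPM-based setup described
below) is to ask rpm directly:

 rpm -q ceph
 rpm -qf /etc/ceph

If either of those reports the package as not installed or the directory as not
owned by any package, the install step never completed on that box.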

> official documentation; other times ceph-deploy throws errors that
> apparently ain't a big deal, and then the logs are not that detailed and leave
> you in the lurch. What took me 3 hours? Maybe since I am behind a proxy: getting
> 4 nodes ready, making sure networking is all set up on these nodes, and then
> installing ceph only to get this error in the end.  Anyway, I am pasting a log of my

BTW, making the private network or proxy installs work is the next major 
item for us to tackle for ceph-deploy.  You're not the only one to feel 
this pain, but hopefully you'll be one of the last!
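
In the meantime, one workaround (just a sketch; the x86_64 repo URL below is
from memory for dumpling on el6, so please double-check it against ceph.com) is
to pre-install the packages on each node with plain yum, which will honor the
proxy you already set in /etc/yum.conf, and then skip the 'ceph-deploy install'
step.  Alongside the [ceph-noarch] section from your step 14, add something
like:

[ceph]
name=Ceph packages
baseurl=http://ceph.com/rpm-dumpling/el6/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

and then run, on each node:

 sudo yum install ceph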

Hope this helps-
sage



> keystrokes to get everything ready. What do the 2 lines towards the end mean?
> Could you please let me know what I did wrong, based on what I have pasted
> below?
> 
>  
> 
> 1) Create a ceph user. All commands should be executed as the ceph user. This
> should be done on all nodes:
> 
>  
> 
> [root@ceph-node1-mon-centos-6-4 ~]# sudo useradd -d /home/ceph -m ceph
> 
> [root@ceph-node1-mon-centos-6-4 ~]# passwd ceph
> 
> Changing password for user ceph.
> 
> New password:
> 
> Retype new password:
> 
> passwd: all authentication tokens updated successfully.
> 
> [root@ceph-node1-mon-centos-6-4 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" |
> sudo tee /etc/sudoers.d/ceph
> 
> ceph ALL = (root) NOPASSWD:ALL
> 
> [root@ceph-node1-mon-centos-6-4 ~]# sudo chmod 0440 /etc/sudoers.d/ceph
> 
>  
> 
> 2) Log in as ceph and from now on execute the commands as the ceph user. Some
> commands need to be executed as root. Use sudo for them.
> 
>  
> 
> 3) For all the nodes add the following lines to ~/.bash_profile and
> /root/.bash_profile
> 
>  
> 
> http_proxy=http://10.12.132.208:8080
> 
> https_proxy=https://10.12.132.208:8080
> 
>  
> 
> export http_proxy
> 
> export https_proxy
> 
>  
> 
> 4) On all the nodes add the following lines to /root/.rpmmacros (use sudo
> vim):
> 
>  
> 
> %_httpproxy 10.12.132.208
> 
> %_httpport 8080
> 
>  
> 
> 5) On all the nodes add the following line to /root/.curlrc (use sudo vim):
> 
>  
> 
> proxy=http://10.12.132.208:8080
> 
>  
> 
> 6) On all the nodes add the following line to /etc/yum.conf (use sudo vim):
> 
>  
> 
> proxy=http://10.12.132.208:8080
> 
>  
> 
> 7) On all the nodes, add the following lines to /etc/wgetrc (use sudo
> vim):
> 
>  
> 
> http_proxy=http://10.12.132.208:8080
> 
> https_proxy=http://10.12.132.208:8080
> 
> ftp_proxy=http://10.12.132.208:8080
> 
> use_proxy = on (uncomment this one)
> 
>  
> 
> 8) Execute sudo visudo to add the following line to the sudoers file:
> 
>  
> 
> Defaults env_keep += "http_proxy https_proxy"
> 
>  
> 
> 9) In the sudoers file, comment the following line (use sudo visudo):
> 
>  
> 
> Defaults requiretty
> 
>  
> 
> 10) Install ssh on all nodes
> 
>  
> 
> [ceph@ceph-node1-mon-centos-6-4 ~]# sudo yum install openssh-server
> 
> Loaded plugins: fastestmirror, refresh-packagekit, security
> 
> Determining fastest mirrors
> 
> * base: centos.mirror.constant.com
> 
> * extras: bay.uchicago.edu
> 
> * updates: centos.aol.com
> 
> base | 3.7 kB 00:00
> 
> base/primary_db | 4.4 MB 00:05
> 
> extras | 3.4 kB 00:00
> 
> extras/primary_db | 18 kB 00:00
> 
> updates | 3.4 kB 00:00
> 
> updates/primary_db | 5.0 MB 00:07
> 
> Setting up Install Process
> 
> Package openssh-server-5.3p1-84.1.el6.x86_64 already installed and latest
> version
> 
> Nothing to do
> 
>  
> 
> 12) Since the Ceph documentation recommends using hostnames instead of IP
> addresses, for now we need to enter them in the /etc/hosts file of the four nodes:
> 
>  
> 
> admin node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-admin-node-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-admin-node-centos-6-4
> 
> 10.12.0.70 ceph-node1-mon-centos-6-4
> 
> 10.12.0.71 ceph-node2-osd0-centos-6-4
> 
> 10.12.0.72 ceph-node3-osd1-centos-6-4
> 
>  
> 
> monitor node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-node1-mon-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-node1-mon-centos-6-4
> 
> 10.12.0.71 ceph-node2-osd0-centos-6-4
> 
> 10.12.0.72 ceph-node3-osd1-centos-6-4
> 
> 10.12.0.73 ceph-admin-node-centos-6-4
> 
>  
> 
> osd0 node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-node2-osd0-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-node2-osd0-centos-6-4
> 
> 10.12.0.70 ceph-node1-mon-centos-6-4
> 
> 10.12.0.72 ceph-node3-osd1-centos-6-4
> 
> 10.12.0.73 ceph-admin-node-centos-6-4
> 
>  
> 
> osd1 node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-node3-osd1-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-node3-osd1-centos-6-4
> 
> 10.12.0.70 ceph-node1-mon-centos-6-4
> 
> 10.12.0.71 ceph-node2-osd0-centos-6-4
> 
> 10.12.0.73 ceph-admin-node-centos-6-4
> 
>  
> 
> Restart the networking services on all the nodes. After this, the nodes can
> ping each other by hostname.
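
For example, on CentOS 6 something like the following should do it (adjust the
hostname to each of your own nodes):

 sudo service network restart
 ping -c 3 ceph-node1-mon-centos-6-4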
> 
>  
> 
> 13) Below should only be executed on the admin node:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# ssh-keygen
> 
> Generating public/private rsa key pair.
> 
> Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
> 
> Enter passphrase (empty for no passphrase):
> 
> Enter same passphrase again:
> 
> Your identification has been saved in /home/ceph/.ssh/id_rsa.
> 
> Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
> 
> The key fingerprint is:
> 
> 54:dd:e8:9c:74:63:e5:3e:6f:0e:dd:7f:ef:d1:4e:36
> ceph@ceph-node1-mon-centos-6-4
> 
> The key's randomart image is:
> 
> +--[ RSA 2048]----+
> 
> | .. o ..|
> 
> | . + =. |
> 
> | . + + ..|
> 
> | . + . |
> 
> | S ..|
> 
> | .=|
> 
> | .EO|
> 
> | B=|
> 
> | .O|
> 
> +-----------------+
> 
>  
> 
> Now, copy the keys to all nodes for password-less ssh.
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# chmod 600 ~/.ssh/config
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# ssh-copy-id
> ceph@ceph-node1-mon-centos-6-4
> 
> ceph@ceph-node1-mon-centos-6-4's password:
> 
> Now try logging into the machine, with "ssh
> 'ceph@ceph-node1-mon-centos-6-4'", and check in:
> 
>  
> 
> .ssh/authorized_keys
> 
>  
> 
> to make sure we haven't added extra keys that you weren't expecting.
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# ssh-copy-id
> ceph@ceph-node2-osd0-centos-6-4
> 
> ceph@ceph-node2-osd0-centos-6-4's password:
> 
> Now try logging into the machine, with "ssh
> 'ceph@ceph-node2-osd0-centos-6-4'", and check in:
> 
>  
> 
> .ssh/authorized_keys
> 
>  
> 
> to make sure we haven't added extra keys that you weren't expecting.
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# ssh-copy-id
> ceph@ceph-node3-osd1-centos-6-4
> 
> ceph@ceph-node3-osd1-centos-6-4's password:
> 
> Now try logging into the machine, with "ssh
> 'ceph@ceph-node3-osd1-centos-6-4'", and check in:
> 
>  
> 
> .ssh/authorized_keys
> 
>  
> 
> to make sure we haven't added extra keys that you weren't expecting.
> 
>  
> 
> Now, modify the ~/.ssh/config file on your admin node so that it logs in to
> the Ceph nodes as the user you created, i.e. ceph:
> 
>  
> 
> Host ceph-node1-mon-centos-6-4
> 
> Hostname ceph-node1-mon-centos-6-4
> 
> User ceph
> 
>  
> 
> Host ceph-node2-osd0-centos-6-4
> 
> Hostname ceph-node2-osd0-centos-6-4
> 
> User ceph
> 
>  
> 
> Host ceph-node2-osd1-centos-6-4
> 
> Hostname ceph@ceph-node3-osd1-centos-6-4
> 
> User ceph
> 
>  
> 
> 14) On all the nodes, add the Ceph package repository by adding the following
> lines to the /etc/yum.repos.d/ceph.repo file:
> 
>  
> 
> [ceph-noarch]
> 
> name=Ceph noarch packages
> 
> baseurl=http://ceph.com/rpm-dumpling/el6/noarch
> 
> enabled=1
> 
> gpgcheck=1
> 
> type=rpm-md
> 
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> 
>  
> 
> Admin Node Setup:
> 
> Update your repository and install ceph-deploy:
> 
>  
> 
> sudo yum update && yum install ceph-deploy
> 
> This should print the following towards the end:
> 
>  
> 
> ================================================================================
>  Package             Arch      Version         Repository      Size
> ================================================================================
> Installing:
>  ceph-deploy         noarch    1.2.7-0         ceph-noarch     176 k
> Installing for dependencies:
>  pushy               noarch    0.5.3-1         ceph-noarch      75 k
>  python-argparse     noarch    1.2.1-2.el6     ceph-noarch      48 k
>  python-setuptools   noarch    0.6.10-3.el6    base            336 k
>
> Transaction Summary
> ================================================================================
> Install       4 Package(s)
> 
>  
> 
> Total download size: 635 k
> 
> Installed size: 2.6 M
> 
> Is this ok [y/N]:
> 
>  
> 
> Downloading Packages:
> 
> (1/4): ceph-deploy-1.2.7-0.noarch.rpm | 176 kB 00:00
> 
> (2/4): pushy-0.5.3-1.noarch.rpm | 75 kB 00:00
> 
> (3/4): python-argparse-1.2.1-2.el6.noarch.rpm | 48 kB 00:00
> 
> (4/4): python-setuptools-0.6.10-3.el6.noarch.rpm | 336 kB 00:00
> 
> --------------------------------------------------------------------------------
> Total                                            262 kB/s | 635 kB     00:02
> 
> warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 17ed316d:
> NOKEY
> 
> Retrieving key from
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> 
> Importing GPG key 0x17ED316D:
> 
> Userid: "Ceph Release Key <sage@xxxxxxxxxxxx>"
> 
> From : https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> 
> Is this ok [y/N]:
> 
>  
> 
> Storage Cluster Setup:
> 
> In order to create a cluster, issue the following command from the admin
> node (only the initial members of the monitor quorum are the target of the
> ceph-deploy new command):
> 
>  
> 
> Create a directory on the admin node for maintaining the configuration files
> that ceph-deploy generates for the storage cluster we're going to create next:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# mkdir my-cluster
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# cd my-cluster/
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 ~]# ceph-deploy new
> ceph-node1-mon-centos-6-4
> 
>  
> 
> The above command creates a ceph.conf file with the cluster information in
> it. A log file by the name of ceph.log will also be created.
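
For reference, the generated ceph.conf is just a small [global] section,
roughly along these lines (illustrative only; the fsid below is a placeholder
and the exact set of keys varies by ceph-deploy version):

[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = ceph-node1-mon-centos-6-4
mon_host = 10.12.0.70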
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 my-cluster]# ceph-deploy install
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
> 
>  
> 
> This will install ceph on all the nodes.
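
A quick, optional sanity check at this point is to confirm the package landed
on each node, e.g.:

 ssh ceph-node2-osd0-centos-6-4 ceph --version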
> 
>  
> 
> Add a ceph monitor node:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy mon create
> ceph-node1-mon-centos-6-4
> 
>  
> 
> Gather keys:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy gatherkeys
> ceph-node1-mon-centos-6-4
> 
>  
> 
> After gathering keys, make sure the directory has the monitor, admin, OSD, and
> MDS keyrings:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ls
> 
> ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring
> ceph.client.admin.keyring ceph.conf ceph.log ceph.mon.keyring
> 
>  
> 
> Add two OSDs:
> 
>  
> 
> ssh to both OSD nodes, i.e. ceph-node2-osd0-centos-6-4 and
> ceph-node3-osd1-centos-6-4, and create a directory on each to be used by the
> Ceph OSD daemons:
> 
>  
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ssh ceph-node2-osd0-centos-6-4
> 
> Last login: Wed Oct 30 10:51:11 2013 from ceph-admin-node-centos-6-4
> 
> [ceph@ceph-node2-osd0-centos-6-4 ~]$ sudo mkdir /tmp/osd0
> 
> [ceph@ceph-node2-osd0-centos-6-4 ~]$ exit
> 
> logout
> 
> Connection to ceph-node2-osd0-centos-6-4 closed.
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ssh ceph-node3-osd1-centos-6-4
> 
> Last login: Wed Oct 30 10:51:20 2013 from ceph-admin-node-centos-6-4
> 
> [ceph@ceph-node3-osd1-centos-6-4 ~]$ sudo mkdir /tmp/osd1
> 
> [ceph@ceph-node3-osd1-centos-6-4 ~]$ exit
> 
> logout
> 
> Connection to ceph-node3-osd1-centos-6-4 closed.
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$
> 
>  
> 
> Use ceph-deploy to prepare the OSDs:
> 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
> [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
> [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
> [ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
> [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> [ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
> [ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
> [ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> [ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
> 
>  
> 
> What are OSError and GenericError in this case?
> 
>  
> 
> Thanks a lot in advance!
> 
> Narendra
> 
>  
> 
> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx
> [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Mark Nelson
> Sent: Friday, November 01, 2013 8:28 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Very frustrated with Ceph!
> 
>  
> 
> Hey Narendra,
> 
>  
> 
> Sorry to hear you've been having trouble.  Do you mind if I ask what took
> the 3 hours of time?  We definitely don't want the install process to take
> that long.  Unfortunately I'm not familiar with the error you are seeing,
> but the folks that work on ceph-deploy may have some advice.
> 
>   Are you using the newest version of ceph-deploy?
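
A quick way to check, on the admin node:

 ceph-deploy --version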
> 
>  
> 
> Thanks,
> 
> Mark
> 
>  
> 
> On 11/01/2013 08:17 PM, Trivedi, Narendra wrote:
> 
> > I created new VMs and re-installed everything from scratch. Took me 3
> 
> > hours. Executed all the steps religiously all over again in the links:
> 
>
> 
> > http://ceph.com/docs/master/start/quick-start-preflight/
> 
>
> 
> > http://ceph.com/docs/master/start/quick-ceph-deploy/
> 
>
> 
> > When the time came to prepare OSDs after 4 long hours, I get the same
> 
> > weird error:
> 
>
> 
>
> 
> > [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
> > [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
> > [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
> > [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: ceph-node2-osd0-centos-6-4
> > [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote host
> > [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
> > [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> > [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
> > [ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> > [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
> > [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> > [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> > [ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: ceph-node3-osd1-centos-6-4
> > [ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote host
> > [ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
> > [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> > [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
> > [ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> > [ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet, creating one
> > [ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
> > [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> > [ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
> 
>
> 
> > What does it even mean??? It seems ceph is not production ready, with a lot
> > of missing links, error messages that don't make any sense, and a
> > gazillion problems. Very frustrating!!
> 
>
> 
> > Narendra Trivedi | savvis cloud
> 
> 
> 
> This message contains information which may be confidential and/or
> privileged. Unless you are the intended recipient (or authorized to receive
> for the intended recipient), you may not read, use, copy or disclose to
> anyone the message or any information contained in the message. If you have
> received the message in error, please advise the sender by reply e-mail and
> delete the message and any attachment(s) thereto without retaining any
> copies.
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
