Re: Ceph User Teething Problems

On 04/03/2015 20:27, Datatone Lists wrote:
I have been following ceph for a long time. I have yet to put it into
service, and I keep coming back as btrfs improves and ceph reaches
higher version numbers.

I am now trying ceph 0.93 and kernel 4.0-rc1.

Q1) Is it still considered that btrfs is not robust enough, and that
xfs should be used instead? [I am trying with btrfs].
XFS is still the recommended default backend (http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/#filesystems)
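If you do stick with XFS, the relevant settings live in the [osd] section of ceph.conf. A minimal sketch (the option values below are just the commonly suggested ones, adjust for your own hardware):

[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f
osd mount options xfs = rw,noatime,inode64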

I followed the manual deployment instructions on the web site
(http://ceph.com/docs/master/install/manual-deployment/) and I managed
to get a monitor and several osds running and apparently working. The
instructions fizzle out without explaining how to set up mds. I went
back to mkcephfs and got things set up that way. The mds starts.

[Please don't mention ceph-deploy]
This kind of comment isn't very helpful unless there is a specific issue with ceph-deploy that is preventing you from using it and forcing you to resort to manual steps. I happen to find ceph-deploy very useful, so I'm afraid I'm going to mention it anyway :-)

The first thing that I noticed is that (whether I set up mon and osds
by following the manual deployment, or using mkcephfs), the correct
default pools were not created.
This is not a bug. The 'data' and 'metadata' pools are no longer created by default. http://docs.ceph.com/docs/master/cephfs/createfs/
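On a current release you create the pools you want for CephFS yourself, for example (the pool names and pg counts below are only examples, size them for your own cluster):

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64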
I get only 'rbd' created automatically. I deleted this pool, and re-created data, metadata and rbd manually. When doing this, I had to juggle with the pg-num in order to avoid the 'too many pgs per osd' warning. I have three osds running at the moment, but intend to add to these when I have some experience of things working reliably. I am puzzled, because I seem to have to set the pg-num for the pool to a number that makes (N-pools x pg-num)/N-osds come to the right kind of number. So this implies that I can't really expand a set of pools by adding osds at a later date.
You should pick an appropriate number of PGs for the number of OSDs you have at the present time. When you add more OSDs, you can increase the number of PGs. You would not want to create the larger number of PGs initially, as they could exceed the resources available on your initial small number of OSDs.
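When you do add OSDs later, you can raise a pool's placement group count, e.g. (the pool name and the target of 128 are just examples; pgp_num should be bumped after pg_num so the new PGs actually get rebalanced):

ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128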
Q4) Can you give me an idea of what is wrong that causes the mds to not
play properly?
You have to explicitly enable the filesystem now (also http://docs.ceph.com/docs/master/cephfs/createfs/)
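With the pools from the earlier example in place, that amounts to something like (the filesystem name 'cephfs' is arbitrary):

ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat    # the MDS should go from standby to active once the fs exists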
I think that there are some typos on the manual deployment pages, for
example:

ceph-osd id={osd-num}

This is not right. As far as I am aware it should be:

ceph-osd -i {osd-num}
ceph-osd id={osd-num} is an upstart invocation (i.e. it's prefaced with "sudo start" on the manual deployment page). In that context it's correct as far as I know, unless you're finding otherwise?
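To make the distinction concrete (osd 0 below is just an example id):

sudo start ceph-osd id=0    # upstart-managed service on Ubuntu
ceph-osd -i 0               # invoking the daemon binary directly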

John
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



