Hello,

On 17/02/2015 05:55, Christian Balzer wrote:

>> 1. I have read "10 GB per daemon for the monitor". But is
>> disk I/O performance important for a monitor? Is it unreasonable
>> to put the working directory of the monitor on the same partition
>> as the root filesystem (i.e. /)?
>>
> Yes, monitors are quite I/O sensitive, they like their leveldb to be on a
> fast disk, preferably an SSD.
> So if your OS is on SSD(s), no worries.
> If your OS is on plain HDDs w/o any caching controller, you may run into
> problems if your cluster gets busy.

Ok, I see. So, for instance, if I have a server with:

- 4 spinning HDDs of 500 GB, one OSD per disk,
- 2 SSDs for the OSD journals (2 journals per SSD),

then I can put the monitor's working directory on one of the SSDs without
any problem, is that correct? (A rough ceph.conf sketch of this layout is
at the bottom of this mail.)

>> 2. I have exactly the same question for the mds daemon.
>>
> No idea (not running MDS), but I suspect it would be fine as well as long
> as the OS is on SSD(s).

Ok.

>> I'm asking these questions because if these daemons must have
>> dedicated disks, in addition to the OS, it consumes disks that can
>> then no longer be used for OSD daemons.
>>
>> On the off chance, here is my third question:
>>
>> 3. Is there a web site which lists precise examples of hardware
>> "ceph-approved" by "ceph-users", with the kernel and ceph version?
>>
> Searching this mailing list is probably your best bet.
> Never mind that people tend to update things constantly.

Ok. It would be useful to have a centralized page.

> In general you will want the newest stable kernel you can run, from what I
> remember the 3.13 in one Ubuntu version was particularly bad.

Ah? But Ubuntu 14.04 Trusty seems to be well supported and tested by Ceph
(for Firefly, which is the version I use):
http://ceph.com/docs/master/start/os-recommendations/#platforms

Should I use another distribution (using an LTS distribution seemed like a
good idea to me)? Or should I keep Trusty and upgrade the kernel (with
"apt-get install linux-image-3.16.0-30-generic")?

Thanks for your help, Christian.

--
François Lafont
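
Here is a minimal ceph.conf sketch of the layout described above, just to
make the intended placement explicit. The option names (mon data, osd data,
osd journal, osd journal size) are the standard keys, but the mount point
/srv/ssd0 and the device name /dev/sde1 are only placeholders for however
the SSDs end up partitioned and mounted:

    [mon]
        # Assumption: the first SSD is mounted at /srv/ssd0 (placeholder path).
        # Keeps the monitor's leveldb on the SSD instead of a spinning disk.
        mon data = /srv/ssd0/mon/$cluster-$id

    [osd]
        # Journal partitions carved out of the two SSDs, e.g. 5 GB each.
        osd journal size = 5120

    [osd.0]
        # Data stays on the first spinning 500 GB disk (the default path);
        # the journal points at a partition of SSD #1 (/dev/sde1 is a placeholder).
        osd data = /var/lib/ceph/osd/$cluster-$id
        osd journal = /dev/sde1

In practice ceph-disk/ceph-deploy usually sets the journal up as a symlink
inside each OSD's data directory rather than through per-OSD sections like
[osd.0], so that section is only there to show the placement: the monitor
store and the four journals live on the two SSDs, and the four HDDs stay
dedicated to OSD data.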