Re: Dedicated disks for monitor and mds?

Hello,

On Mon, 16 Feb 2015 17:13:40 +0100 Francois Lafont wrote:

> Hi,
> 
> I'm trying to plan the hardware for a little ceph cluster.
> We don't have a lot of financial means. In addition, we will
> have to pay attention to the electric consumption. At first,
> it will probably be a cluster with 3 physical servers and
> on each server will be osd node and monitor node (and maybe
> mds node). It will probably be a ceph cluster of ~ 8 TB (raw size).
> 
> I have read this page :
> 
> http://ceph.com/docs/master/start/hardware-recommendations/
> 
> but I still have questions:
> 
> 1. I have read "10 GB per daemon for the monitor". But is
> disk I/O performance important for a monitor? Is it unreasonable
> to put the working directory of the monitor in the same partition
> as the root filesystem (i.e. /)?
> 
> 
Yes, monitors are quite I/O sensitive; they like their leveldb to be on a
fast disk, preferably an SSD.
So if your OS is on SSD(s), no worries.
If your OS is on plain HDDs without any caching controller, you may run
into problems once your cluster gets busy.
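Since the monitor's leveldb issues lots of small synchronous writes, raw fsync
latency on the intended mon store disk is a reasonable proxy for whether it
will keep up. Here is a rough sketch of such a measurement (the function name
and the example path are mine, not from any Ceph tooling -- point it at
whatever disk you actually plan to use):

```python
import os
import tempfile
import time


def fsync_latency_ms(path, writes=100, size=4096):
    """Average fsync latency (ms) for small writes in `path`.

    Ceph monitors fsync their leveldb store frequently, so a disk
    with high fsync latency will hurt mon performance.
    """
    data = os.urandom(size)
    fname = os.path.join(path, "fsync_test.tmp")
    try:
        with open(fname, "wb") as f:
            start = time.perf_counter()
            for _ in range(writes):
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force the write to stable storage
            elapsed = time.perf_counter() - start
    finally:
        if os.path.exists(fname):
            os.remove(fname)
    return elapsed / writes * 1000.0


# Example: measure wherever the mon working directory would live,
# e.g. /var/lib/ceph/mon (the path below is just a stand-in).
print("avg fsync: %.2f ms" % fsync_latency_ms(tempfile.gettempdir()))
```

Plain HDDs typically show several milliseconds per fsync, while SSDs (or an
HDD behind a battery-backed write cache) come in far lower -- which is the
difference being described above.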

> 2. I have exactly the same question for the mds daemon.
> 
No idea (not running MDS), but I suspect it would be fine as well, as long
as the OS is on SSD(s).

> I'm asking these questions because if these daemons must have
> dedicated disks, with the OS too, it consumes disks which could
> not be used for osd daemons.
> 
> Off chance, here is my third question:
> 
> 3. Is there a web site which lists precise examples of hardwares
> "ceph-approved" by "ceph-users" with the kernel and ceph version?
>
Searching this mailing list is probably your best bet, though bear in mind
that people tend to update things constantly.

In general you will want the newest stable kernel you can run; from what I
remember, the 3.13 kernel in one Ubuntu version was particularly bad.
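As a quick sanity check before deploying, you can print the running kernel
and flag the series mentioned above (the 3.13 match is just an example of
the kind of check meant here, not an exhaustive blacklist):

```shell
#!/bin/sh
# Print the running kernel and warn on a known-problematic series.
kernel=$(uname -r)
echo "running kernel: $kernel"
case "$kernel" in
  3.13.*) echo "warning: 3.13 kernels were reported as problematic with Ceph" ;;
esac
```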

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



