Re: Small cluster for VMs hosting

On 07/11/17 13:16, Gandalf Corvotempesta wrote:
> Hi to all
> I've been away from Ceph for a couple of years (CephFS was still unstable)
> 
> I would like to test it again; some questions for a production cluster for VM hosting:
> 
> 1. Is CephFS stable?

Yes, CephFS is stable and safe, though it can have performance issues around creating and removing files if your layout puts very large numbers of files in a single directory.

> 2. Can I spin up a 3-node cluster with mons, MDS and OSDs on the same machines?

Recommended practice is not to co-locate OSDs with other Ceph daemons, but realistically lots of people do this (me included) and it works fine - just don't overload your nodes. In recent versions (kraken, luminous) there's the new ceph-mgr daemon to keep in mind too.
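For what it's worth, a rough sketch of a minimal ceph.conf for a 3-node cluster with everything co-located might look like this (hostnames, addresses and the fsid placeholder are made up - adjust to your environment):

    [global]
    fsid = <your cluster uuid>                  # generated at deploy time
    mon initial members = node1, node2, node3   # one mon per node
    mon host = 192.168.1.1, 192.168.1.2, 192.168.1.3
    osd pool default size = 3                   # one replica per node
    osd pool default min size = 2               # still writable with one node down

Three mons gives you quorum with one node down, which is about the minimum you want for production.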

> 3. Hardware suggestions?

Depends quite a lot on your budget and what performance you need. Ceph is relatively CPU-heavy as these storage solutions go, so good CPUs are advised; I understand that single-threaded performance probably matters more than having lots of cores if you're dealing with very fast OSDs (like on NVMe). Default memory requirements are 1GB per HDD OSD and 3GB per SSD OSD when using the bluestore backend, but add maybe 50% for overhead due to fragmentation etc., plus the resource cost of your other daemons.
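Those 1GB/3GB figures correspond to the bluestore cache settings, which you can override in ceph.conf; the values below are my understanding of the stock luminous defaults, so take them with a grain of salt:

    [osd]
    bluestore cache size hdd = 1073741824   # 1GB cache per HDD-backed OSD (default)
    bluestore cache size ssd = 3221225472   # 3GB cache per SSD-backed OSD (default)

As a worked example, a node with 8 HDD OSDs comes out to roughly 8 x 1GB x 1.5 = 12GB for the OSDs alone, before you account for the mon/MDS/mgr daemons and the OS itself.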

> 4. How can I understand the ceph health status output, in detail? I've not seen any docs about this

Read up on http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/ and http://docs.ceph.com/docs/master/rados/operations/pg-states/ - understanding the different states PGs and OSDs can be in should be enough for you to grok the ceph status output.
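A few stock ceph commands worth knowing while you work through those pages:

    ceph health detail          # expand HEALTH_WARN/HEALTH_ERR into per-item messages
    ceph pg dump_stuck unclean  # list PGs stuck in an unclean state
    ceph osd tree               # up/down/in/out state of OSDs, laid out by CRUSH hierarchy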

> 5. How can I know if the cluster is fully synced or if any background operation (scrubbing, replication, ...) is running?

"ceph status" ("ceph -s" for short) will give you a point in time report of your cluster state including PG states. If things are scrubbing or whatever that will represented in the PG states. "ceph -w" will give you status and then a rolling output of status changes/reports if the cluster does anything interesting. One of the functions available in the newer ceph-mgr daemon is an http dashboard giving you a quick overview of cluster health.
 
> 6. Is 10G Ethernet mandatory? Currently I only have 4 gigabit NICs (2 for public traffic, 2 for cluster traffic)

It's not mandatory, but generally the more bandwidth you can throw at Ceph, the happier it is. If you expect relatively lightweight usage I wouldn't worry - but if performance became an issue and the nodes were otherwise healthy, the 1G links would be the first bottleneck I'd check.
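With your 2+2 gigabit NICs you'd presumably bond each pair and then split traffic in ceph.conf along these lines (the subnets are made up):

    [global]
    public network = 10.0.1.0/24    # client and mon traffic over one bonded pair
    cluster network = 10.0.2.0/24   # replication/recovery traffic over the other pair

That keeps replication traffic from competing with client I/O on the same links.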

You seem interested in CephFS, but you mention you're looking at Ceph as a backend for VM hosting - is that coincidental, or are you intending to store disk images as files in CephFS? If so, using RBD would be a much more sensible idea.
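A rough sketch of the RBD route (pool name, PG count and image size are just placeholders):

    ceph osd pool create vms 128               # pick pg_num to suit your OSD count
    ceph osd pool application enable vms rbd   # luminous: tag the pool for rbd use
    rbd create vms/vm0-disk0 --size 20480      # 20GB image (size is in MB)
    # then point qemu/libvirt at rbd:vms/vm0-disk0 instead of a file on cephfs

qemu and libvirt have native RBD support, so the VM talks to the cluster directly rather than going through a filesystem layer.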

-- 
Rich

