Best Practices for Managing Multiple Pools

I'm wondering if anyone has tips for managing several different types
of pools, each of which falls on a different type of OSD.

Right now, I have a small cluster running with two kinds of OSD nodes:
one set with spinning disks (and SSD journals) and another with all
SATA SSDs.  I'm currently running cache tiering and looking to move
away from that.
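
For what it's worth, when I get around to tearing the cache tier down,
my understanding is the sequence is roughly the following (pool names
are placeholders, and the cache-mode step has changed across releases,
so treat this as a sketch rather than a recipe):

    # stop new objects landing in the cache, then flush everything back
    ceph osd tier cache-mode hot-cache proxy
    rados -p hot-cache cache-flush-evict-all

    # detach the tier from the backing pool
    ceph osd tier remove-overlay cold-rbd
    ceph osd tier remove cold-rbd hot-cache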

My end goal is to have a general-purpose block storage pool on the
spinning disks, alongside object storage.  Then I'd like to run a
separate pool of low-latency block storage on the SSD nodes.
Finally, I'd like to add a third node type with a high number of
spinning disks and no SSD journals, running object storage on an EC
pool.  This final pool would be for backups.
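
To frame the question: on Luminous or later, my understanding is that
device classes make this layout fairly mechanical, with no hand-editing
of the crushmap.  A rough sketch, with placeholder names, PG counts,
and k/m values:

    # replicated rules pinned to each device class
    ceph osd crush rule create-replicated hdd-rule default host hdd
    ceph osd crush rule create-replicated ssd-rule default host ssd

    # general-purpose and low-latency block pools
    ceph osd pool create rbd-general 512 512 replicated hdd-rule
    ceph osd pool create rbd-fast 256 256 replicated ssd-rule

    # EC profile and pool for the backup tier on the dense HDD nodes
    ceph osd erasure-code-profile set backup-profile k=4 m=2 \
        crush-failure-domain=host crush-device-class=hdd
    ceph osd pool create backup-ec 256 256 erasure backup-profile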

I can envision running all of these in the same cluster with a
crushmap that allocates each pool to the correct OSDs.  However, I'm
concerned about the blast radius of running all these different use
cases on a single cluster.

I have, for example, had an instance where a single full OSD caused
the entire cluster to stop accepting writes, which affected all the
pools in the cluster, regardless of whether those pools had PGs on the
affected OSD.
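
The only mitigation I know of is to watch per-OSD fill and keep the
warning thresholds well below the hard stop, so there's runway before
writes are blocked.  Something like this on Luminous (older releases
spell these as "ceph pg set_full_ratio" and friends; the numbers are
just examples):

    # per-OSD utilisation at a glance
    ceph osd df tree

    # warn earlier and leave headroom before the cluster stops writes
    ceph osd set-nearfull-ratio 0.80
    ceph osd set-backfillfull-ratio 0.85
    ceph osd set-full-ratio 0.90

    # shed data off an OSD filling faster than its peers
    ceph osd reweight 12 0.9    # 12 = example OSD id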

It's simple enough to run separate clusters for these, but then I'd
be faced with that complexity as well, including a set of mons for
each.  I'm wondering if I'm overstating the risks, and the benefits,
of having a single crushmap; e.g., instead of cache tiering, I could
put the primary replica on SSD and the secondary and tertiary replicas
on spinning disk.
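
That primary-on-SSD layout should be expressible as one CRUSH rule
with two take/emit blocks, so reads are served from the SSD copy.  An
untested sketch of the decompiled rule (Luminous-style device classes;
the id and names are placeholders):

    rule hybrid {
        id 5
        type replicated
        min_size 1
        max_size 10
        # first replica chosen becomes the primary: an SSD host
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        # remaining replicas go to HDD hosts
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
    }

applied via the usual round trip:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt    # add the rule here
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    ceph osd pool set rbd-general crush_rule hybrid

The catch, as I understand it, is that writes still wait on the
slowest replica, so this only buys read latency.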

Any thoughts and experiences on this topic would be welcome.


-H