Re: some ceph general questions about the design

> 
> 1. Should I use a RAID controller and create, for example, a RAID 5 with all disks on each OSD server? Or should I pass through all disks to Ceph OSDs?
> 
> If your OSD servers have HDDs, buy a good RAID controller with a battery-backed write cache and configure it using multiple RAID-0 volumes (1 physical disk per volume). That way, reads and writes will be accelerated by the cache on the HBA.

I’ve lived this scenario and hated it.  Multiple firmware and manufacturing issues; batteries/supercaps that fail and need their own monitoring; bugs that lost staged data before it was written to disk; another bug that forced replacing the card whenever there was preserved cache for a failed drive, because it would refuse to boot; difficulties in drive monitoring; an HBA monitoring utility that would lock up the HBA or peg the CPU; the list goes on.

For the additional cost of the RAID-on-Chip (RoC), cache RAM, a supercap to (fingers crossed) protect that cache, and all the extra monitoring and remote-hands work … you might find that SATA SSDs on a plain JBOD HBA are no more expensive.
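If you do go the JBOD route, each raw disk simply becomes its own OSD. A minimal sketch of what that looks like, assuming ceph-volume is available on the host and the cluster config/keyring is already in place; the device names are placeholders, not anything from this thread:

    # Hypothetical sketch: create one OSD per passed-through disk with ceph-volume.
    # Device names are placeholders; adjust per host. Assumes ceph-volume is
    # installed and the host already has the cluster config and bootstrap keyring.
    import subprocess

    DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example JBOD devices

    for dev in DISKS:
        # "ceph-volume lvm create" prepares and activates a single OSD on the device.
        subprocess.run(["ceph-volume", "lvm", "create", "--data", dev], check=True)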

> 3. If I have a 3-node OSD cluster, do I need 5 physical MONs?
> No. 3 MONs are enough.

If you have good hands and spares, sure.  If your cluster is on a different continent and the colo hands can’t find their own butts … it’s nice to be able to survive a double failure.
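For context on why 5 can matter: MONs need a majority quorum, so 3 MONs only tolerate one being down, while 5 tolerate two. A quick back-of-the-envelope check (plain majority-quorum arithmetic, nothing Ceph-specific):

    # Majority quorum: floor(n/2) + 1 monitors must be up.
    for n in (3, 5):
        quorum = n // 2 + 1
        tolerated = n - quorum
        print(f"{n} MONs: quorum={quorum}, tolerated failures={tolerated}")
    # 3 MONs: quorum=2, tolerated failures=1
    # 5 MONs: quorum=3, tolerated failures=2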

ymmv



