Re: Hardware difference in the same Rack

On Thu, Feb 21, 2019 at 03:22:56PM -0300, Fabio Abreu wrote:
:Hi Everybody,
:
:Is it recommended to mix different hardware in the same rack?

This is based on my somewhat subjective experience rather than rigorous
testing or a deep understanding of the code base, so your results may
vary, but ...

Physical location probably doesn't matter too much here, but pool
performance is generally limited by the lowest-performing set of
disks in the pool (it's a bit more complex, but that's my experience).

I'm violating that recommendation now as I replace older HDDs with
SSDs.  We've written CRUSH rules to require one replica on SSD and the
rest on HDD for the mixed pools, and then set primary affinity so
that the SSDs are always the primary replica.  This improves read
performance, and the HDDs are already using SSD WAL, so writes also
all hit SSD.  But that's a very specific use case, and long term we are
going all SSD, just not all at once.
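
For what it's worth, a CRUSH rule of the kind described above (first
replica from the ssd device class, the rest from hdd) looks roughly
like this -- the rule name and id here are illustrative, not from our
actual map:

    rule mixed_ssd_hdd {
        id 5
        type replicated
        min_size 1
        max_size 10
        # first replica from an SSD host
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        # remaining replicas (pool size - 1) from HDD hosts
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
    }

Primary affinity is then set per OSD, e.g. something like
"ceph osd primary-affinity osd.12 1.0" for the SSDs and
"ceph osd primary-affinity osd.3 0" for the HDDs, so reads land on
the SSD copy.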

If you have one set of hardware that's causing performance problems,
it's probably best to create a pool that contains only the low-performance
disks and use it for less demanding applications if you can.
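
With device classes that segregation is a couple of commands --
the rule and pool names below are just examples:

    # replicated rule that only selects OSDs in the hdd device class
    ceph osd crush rule create-replicated slow-hdd-rule default host hdd
    # pool for less demanding workloads, backed only by the slow disks
    ceph osd pool create slow-pool 128 128 replicated slow-hdd-rule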

-Jon

:For example, I have a SATA rack with Apollo 4200 storage, and I will get
:another hardware type, HP 380 Gen10, to expand this rack.
:
:I ran a lot of tests to understand the performance; these new disks
:show 100% utilization in my environment, and cluster recovery is
:worse than on the other hardware.
:
:Can someone recommend a best practice or configuration for this scenario? I
:raise this issue because if these disks don't perform as hoped, I will
:configure separate pools for my OpenStack, and that may not make sense,
:because I would have to split the nova processes on the compute nodes if I
:have two pools.
:
:
:Regards,
:Fabio Abreu

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
