Re: CEPH 16.2.x: disappointing I/O performance

These are valid points, thank you for the input!

/Z

On Wed, Oct 6, 2021 at 11:39 AM Stefan Kooman <stefan@xxxxxx> wrote:

> On 10/6/21 09:23, Zakhar Kirpichenko wrote:
> > Hi,
> >
> > Indeed, that's a lot of CPU and RAM, the idea was to put sufficient
> > resources in case we want to expand the nodes with more storage and do
> > EC. I guess having excessive resources shouldn't hurt performance? :-)
>
> That was also my take. Until an (Oracle) DBA explained to me that in
> cases where you don't use / need the cache, it can hurt performance, as
> it still costs resources to manage the cache. When using Linux (with the
> buffer cache enabled), the RAM probably won't go to waste.
>
> My modus operandi is "no better kill than overkill" ... but on the other
> hand, you seem to want to optimize performance, not necessarily fill
> both CPU sockets. For a Ceph storage node I would rather have more OSDs
> than more memory / CPU (as long as there is plenty of both). Not sure if
> you will run into any NUMA issues here, but a single-socket system with,
> say, anywhere between 16 and 64 cores seems like a better fit. If you're
> going to attach a JBOD and add way more disks, then sure, you might need
> all these resources. But in that case I would still wonder whether more
> nodes with fewer resources would be a better fit. Just thinking out loud
> here; this is only indirectly related to performance and doesn't
> directly answer your disappointing I/O performance thread.
>
> Gr. Stefan
>
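For anyone weighing RAM/CPU against OSD count per node, here is a rough
back-of-the-envelope sketch. The per-OSD figures (Ceph's default
osd_memory_target of 4 GiB, a few cores per NVMe OSD, some fixed headroom
for the OS and recovery) are common rules of thumb rather than hard
requirements, so treat the output as a starting point only.

    # Back-of-the-envelope OSD node sizing (rules of thumb only).

    def node_estimate(num_osds,
                      osd_memory_target_gib=4.0,  # Ceph default osd_memory_target is 4 GiB per OSD
                      cores_per_osd=4,            # assumption: ~4 cores per NVMe OSD
                      base_overhead_gib=16.0):    # assumption: headroom for OS, mon/mgr, page cache
        """Return (approx. RAM in GiB, approx. CPU cores) for one OSD node."""
        # 1.5x on the memory target leaves room for BlueStore spikes and recovery.
        ram_gib = num_osds * osd_memory_target_gib * 1.5 + base_overhead_gib
        cores = num_osds * cores_per_osd
        return ram_gib, cores

    if __name__ == "__main__":
        for osds in (6, 12, 24):
            ram, cores = node_estimate(osds)
            print(f"{osds:2d} OSDs per node: ~{ram:.0f} GiB RAM, ~{cores} cores")

With these assumptions, a dozen NVMe OSDs land at roughly 90 GiB of RAM and
~48 cores, i.e. within a single socket, which is in line with Stefan's
suggestion above.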