Re: Ceph 16.2.x: disappointing I/O performance

Hi,

Indeed, that's a lot of CPU and RAM. The idea was to provision sufficient
resources up front in case we want to expand the nodes with more storage
and switch to EC. I guess having spare resources shouldn't hurt
performance? :-)

/Z

On Wed, Oct 6, 2021 at 9:26 AM Stefan Kooman <stefan@xxxxxx> wrote:

> On 10/5/21 17:06, Zakhar Kirpichenko wrote:
> > Hi,
> >
> > I built a Ceph 16.2.x cluster with relatively fast and modern hardware,
> > and its performance is kind of disappointing. I would very much
> > appreciate any advice and/or pointers :-)
> >
> > The hardware is 3 x Supermicro SSG-6029P nodes, each equipped with:
> >
> > 2 x Intel(R) Xeon(R) Gold 5220R CPUs
> > 384 GB RAM
> > 2 x boot drives
> > 2 x 1.6 TB Micron 7300 MTFDHBE1T6TDG drives (DB/WAL)
> > 2 x 6.4 TB Micron 7300 MTFDHBE6T4TDG drives (storage tier)
> > 9 x Toshiba MG06SCA10TE 10TB HDDs, write cache off (storage tier)
> > 2 x Intel XL710 NICs connected to a pair of 40/100GE switches
>
> That's a lot of CPU cores and a lot of RAM for mostly spinners. Why is
> that? Will this be a hyperconverged solution?
>
> Can you repeat the tests with fio, using rados and rbd as ioengines, on
> a bare-metal host?
>
> Gr. Stefan
>
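
For reference, fio ships with rbd and rados ioengines that exercise the
cluster directly through librbd/librados, with no filesystem or kernel
block layer in the path. A minimal sketch of such a run, assuming a pool
named "rbd", an image named "fio_test", and the default client.admin
keyring (these names and workload parameters are placeholders, not from
the thread):

  # create a test image first (the size here is arbitrary)
  rbd create --pool rbd --size 10G fio_test

  # 4k random writes against the image via librbd
  fio --name=rbd-write --ioengine=rbd --pool=rbd --rbdname=fio_test \
      --clientname=admin --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based --group_reporting

  # the same workload against bare RADOS objects via librados
  fio --name=rados-write --ioengine=rados --pool=rbd --clientname=admin \
      --rw=randwrite --bs=4k --iodepth=32 --size=1G \
      --runtime=60 --time_based --group_reporting

Note that fio must be built with RBD/RADOS support for these engines to
be available; most distribution packages include it.
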
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


