Re: Performance impact of Heterogeneous environment


 



+1 to this; great article and great research. It's something we've been keeping a very close eye on ourselves.

Overall we've mostly settled on the old keep-it-simple-stupid methodology, with good results, especially as the benefits have diminished with more recent Ceph versions, and we've been running a single OSD per NVMe. As always, though, everything is workload dependent and there is sometimes a need for doubling up 😊
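For anyone who does want to try doubling up, a rough sketch of the usual approach (device paths and counts below are just placeholders, not a recommendation):

    # ceph-volume: split one NVMe into two OSDs at creation time
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

    # or, with cephadm, an OSD service spec along these lines:
    service_type: osd
    service_id: two_osds_per_nvme
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 0
      osds_per_device: 2

Going back to a single OSD per device later means draining and redeploying those OSDs, so it's worth benchmarking against your own workload before committing either way.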

Regards,

Bailey


> -----Original Message-----
> From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
> Sent: January 17, 2024 4:59 PM
> To: Mark Nelson <mark.nelson@xxxxxxxxx>; ceph-users@xxxxxxx
> Subject: Re: Performance impact of Heterogeneous environment
> 
> Very informative article, Mark.
> 
> IMHO if you find yourself with a very high per-OSD core count, it may be logical
> to just pack/add more NVMes per host; you'd be getting the best price per unit of
> performance and capacity.
> 
> /Maged
> 
> 
> On 17/01/2024 22:00, Mark Nelson wrote:
> > It's a little tricky.  In the upstream lab we don't strictly see an
> > IOPS or average latency advantage with heavy parallelism by running
> > multiple OSDs per NVMe drive until per-OSD core counts get very high.
> > There does seem to be a fairly consistent tail latency advantage even
> > at moderately low core counts however.  Results are here:
> >
> > https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/
> >
> > Specifically for jitter, there is probably an advantage to using 2
> > cores per OSD unless you are very CPU starved, but how much that
> > actually helps in practice for a typical production workload is
> > questionable imho.  You do pay some overhead for running 2 OSDs per
> > NVMe as well.
> >
> >
> > Mark
> >
> >
> > On 1/17/24 12:24, Anthony D'Atri wrote:
> >> Conventional wisdom is that with recent Ceph releases there is no
> >> longer a clear advantage to this.
> >>
> >>> On Jan 17, 2024, at 11:56, Peter Sabaini <peter@xxxxxxxxxx> wrote:
> >>>
> >>> One thing that I've heard people do but haven't done personally with
> >>> fast NVMes (not familiar with the IronWolf so not sure if they
> >>> qualify) is partition them up so that they run more than one OSD
> >>> (say 2 to 4) on a single NVMe to better utilize the NVMe bandwidth.
> >>> See
> >>> https://ceph.com/community/bluestore-default-vs-tuned-performance-comparison/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



