Re: crimson-osd vs legacy-osd: should the perf difference be already noticeable?


Put another way, consider the savings:

o Today you may have to deploy dual-socket servers just to get enough cycles, especially with dmcrypt and during recovery.
o Tomorrow you could deploy cost-effective servers with much less expensive single-socket CPUs (and thus freedom from NUMA hassles).

Alternatively:

o Denser servers with more OSDs per node but the same number of sockets / cores

For larger clusters the CapEx efficiency gain will be substantial.  A cluster that economics previously limited to HDDs might now be viable as all-SSD.
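
To make that concrete (and to illustrate the computational-efficiency point Radek makes below), here is a minimal back-of-envelope sketch in Python.  Every number in it (per-OSD IOPS, per-OSD core consumption, the per-node core budget) is an illustrative placeholder rather than a measurement; the point is only that IOPS per core, not IOPS per OSD, determines how much hardware you end up buying.

# Back-of-envelope: per-node IOPS under a fixed CPU-core budget.
# All inputs are illustrative placeholders, not benchmark results.

CORES_PER_NODE = 32          # assumed single-socket core budget

profiles = {
    # iops_per_osd / cores_per_osd are hypothetical, for illustration only
    "legacy-osd":  {"iops_per_osd": 80_000, "cores_per_osd": 6.0},
    "crimson-osd": {"iops_per_osd": 60_000, "cores_per_osd": 1.0},
}

for name, p in profiles.items():
    iops_per_core = p["iops_per_osd"] / p["cores_per_osd"]
    osds_per_node = int(CORES_PER_NODE // p["cores_per_osd"])
    node_iops = osds_per_node * p["iops_per_osd"]
    print(f"{name:12s} {iops_per_core:10,.0f} IOPS/core "
          f"{osds_per_node:3d} OSDs/node {node_iops:12,.0f} IOPS/node")

(In practice the OSD count per node is also capped by drive bays and network bandwidth, but the core-budget arithmetic is the part that changes with crimson.)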

— aad


> 
>> I do not understand.  I'm talking about a simple comparison metric for
>> any storage application - IOPS.  Since both storage applications
>> (legacy-osd, crimson-osd) share exactly the same Ceph spec, that is a
>> fair choice.
> 
> That way you're actually thinking about IOPS from an OSD instance
> *while disregarding how many hardware resources it consumes* to serve
> your workload. This comparison ignores an absolutely fundamental
> difference in architecture:
> 
>  * crimson-osd is single-threaded at the moment. It won't eat more
>    than 1 CPU core. That's by design.
>  * ceph-osd is multi-threaded. By default a single instance has up to 16
>    `tp_osd_tp` and 3 `msgr-worker-n` threads. This translates into an
>    upper, theoretical bound of 19 CPU cores. In practice it's of course
>    much lower, but still far above that of crimson-osd.
> 
> Both implementations share the same constraint: the amount invested in
> hardware resources to run the cluster. How many IOPS you will get from
> it is determined by the OSD's *computational efficiency*.
> The goal is to maximize IOPS from a fixed set of hardware or, rephrased,
> to minimize the hardware resources needed to deliver a given number
> of IOPS.
> 
> The problem is awfully similar to the performance-per-watt metric and
> a CPU's power efficiency. Electrical / cooling power is a scarce
> resource, just like the number of CPU cores is in a Ceph cluster.
> 
> Regards,
> Radek
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



