Re: NVMe's

I would put that data on the ceph.com website, e.g. a performance/test 
page for every release, compared against the previous release. It could 
cover some default fio tests like the ones you now have in the 
spreadsheet, and maybe some io patterns that relate to real-world use 
cases such as databases, similar to how the people at storagereview 
publish their results [1].
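
To illustrate what I mean by a database-like io pattern: a single fio 
invocation along these lines could be part of such a default test set. 
(This is only a sketch; the pool and image names are placeholders, and 
the 8k 70/30 random read/write mix is just my assumption of a typical 
OLTP-style workload.)

    # roughly database-like io pattern against an rbd image via librbd
    # pool/image names below are placeholders
    fio --name=oltp-like --ioengine=rbd --clientname=admin \
        --pool=rbd-bench --rbdname=fio-test \
        --rw=randrw --rwmixread=70 --bs=8k \
        --iodepth=32 --numjobs=4 \
        --direct=1 --time_based --runtime=300

Run against every release on the same reference hardware, a handful of 
jobs like this would make regressions visible without anyone having to 
redo the measurements themselves.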

I think it is bad that someone needs to ask here on the mailing list 
whether or not they can put MariaDB on an RBD image. I also had to 
search the mailing list to find information about the impact of using 
encrypted OSDs. It would have been great to be able to look at charts 
of a reference cluster from before and after such a change. I assume 
the choice of aes-xts-plain64 was made on the basis of some default 
tests; if those were performed on the reference cluster, the results 
could have been published straight away on the internet, and nobody 
would have needed to ask here.
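
As far as I know the encrypted OSDs are just dmcrypt/LUKS underneath, 
and aes-xts-plain64 is the cryptsetup default cipher, so a before/after 
comparison on a reference cluster should boil down to redeploying the 
same device with and without the --dmcrypt flag and rerunning the same 
fio jobs (the device name is a placeholder):

    # plain OSD on a reference node
    ceph-volume lvm create --data /dev/sdX

    # same device redeployed as an encrypted OSD
    # (dmcrypt/LUKS, which defaults to aes-xts-plain64)
    ceph-volume lvm create --data /dev/sdX --dmcrypt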


[1]
https://www.storagereview.com/review/hgst-4tb-deskstar-nas-hdd-review


-----Original Message-----
Subject: Re:  Re: NVMe's

On 9/23/20 8:05 AM, Marc Roos wrote:

>> I'm curious if you've tried octopus+ yet?
> Why don't you publish results of your test cluster? You cannot expect 
> all new users to buy 4 servers with 40 disks, and try if the 
> performance is ok.
>
> Get a basic cluster and start publishing results, and document changes 
> to the test cluster.
>
>
>

By publish do you mean write up a report based on the spreadsheet I 
linked?  I periodically do that (both internally and externally), but 
there's not enough time in the day to do it for everything I look at if 
I also want to actually fix anything, a la 
https://github.com/ceph/ceph/pull/28597.  If you mean more reference 
architecture validation type work, we have a team at Red Hat that does 
that with RHCS and OCS.  It's one of the benefits you get when you go 
down the RH support path.


Mark


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



