Hi Chip,
Regarding CephFS performance, it really depends on the I/O patterns and
what you are trying to accomplish. Can you talk a little bit more about
what you are seeing?
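In the meantime, a quick way to characterize the workload is to run fio
directly against the CephFS mount. This is only a generic sketch; the mount
point, file names, and sizes below are placeholders to adjust for your setup:

    # Small random writes, the pattern that usually exposes metadata and
    # journal bottlenecks on CephFS (path and sizes are examples only):
    fio --name=randwrite-test --filename=/mnt/cephfs/fio-testfile \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=16 --size=1G --runtime=60 --time_based

    # Large sequential reads for comparison:
    fio --name=seqread-test --filename=/mnt/cephfs/fio-testfile \
        --ioengine=libaio --direct=1 --rw=read --bs=4M \
        --iodepth=4 --size=4G --runtime=60 --time_based

Comparing the two often tells you whether the bottleneck is metadata (MDS)
latency or raw OSD throughput.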
Thanks,
Mark
On 2/16/21 8:42 AM, Schweiss, Chip wrote:
For the past several months I have been building a sizable Ceph cluster that
will grow to 10PB with between 20 and 40 OSD servers this year.
A few weeks ago I was informed that SUSE is shutting down SES and will no
longer be selling it. We haven't licensed our proof-of-concept cluster,
which is currently at 14 OSD nodes, but it looks like SUSE is not going to
be the answer here.
I'm seeking recommendations for consulting help on this project since SUSE
has let me down.
I have Ceph installed and operating; however, I've been struggling to get
the pools configured properly for CephFS and am seeing very poor
performance. The OSD servers have TLC NVMe for the DB and Optane NVMe for
the WAL, so I should be seeing decent performance from the current cluster.
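One common thing to verify in a setup like this is whether the CephFS
metadata pool is actually pinned to the fast devices. A minimal sketch using
a device-class CRUSH rule (the rule name "nvme-only" is made up for
illustration, and "cephfs_metadata" is assumed to be the metadata pool name):

    # Confirm the OSDs report the expected device classes (nvme/ssd/hdd):
    ceph osd crush tree --show-shadow

    # Create a replicated CRUSH rule restricted to the nvme device class,
    # with host as the failure domain:
    ceph osd crush rule create-replicated nvme-only default host nvme

    # Pin the CephFS metadata pool to that rule:
    ceph osd pool set cephfs_metadata crush_rule nvme-only

If the metadata pool is still landing on spinning-disk OSDs, small-file and
directory-heavy workloads will crawl regardless of the WAL/DB placement.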
I'm not opposed to completely switching OS distributions. Ceph on SUSE was
our first SUSE installation. Almost everything else we run is on CentOS,
but that may change thanks to IBM cannibalizing CentOS.
Please reach out to me if you can recommend someone to sell us consulting
hours and/or a support contract.
-Chip Schweiss
chip.schweiss@xxxxxxxxx
Washington University School of Medicine
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx