Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs

On 25/06/2020 5:10 pm, Frank Schilder wrote:
> I was pondering that. The problem is that on CentOS systems it seems to be ignored, it generally does not apply to SAS drives, for example, and there is no working way of configuring which drives to exclude.
>
> For example, while for Ceph data disks we have certain minimum requirements, like functioning power-loss protection, for an OS boot drive I really don't care. Power outages on cheap drives that lose writes have not been a problem since ext4. A few log entries or the contents of swap - who cares. Here, performance is more important than data security on power loss.
>
> I would require a configurable option that works the same way for all protocols - SATA, SAS, NVMe, you name it. At the time of writing, I don't know of any.
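
For reference, the knobs really are different for each transport today. A minimal sketch, assuming a Linux box with hdparm, sdparm and nvme-cli installed; /dev/sdX and /dev/nvme0 are placeholder device names, not a recommendation for any particular layout:

    # SATA: disable the drive's volatile write cache
    hdparm -W 0 /dev/sdX

    # SAS/SCSI: clear the WCE bit in the caching mode page
    sdparm --set WCE=0 --save /dev/sdX

    # NVMe: feature 0x06 is the volatile write cache, 0 = disabled
    nvme set-feature /dev/nvme0 --feature-id=0x06 --value=0

None of these covers all three transports with a single switch, which is exactly the gap described above.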


Yes, I can see that would be an issue for systems more upmarket than mine :) Fortunately my cluster is small potatoes compared to most here, just 34TB across 23 OSDs, all SATA. Given that, it's easy enough to turn write caching off by default for my nodes and re-enable it on the OS drive via a startup script - I presume there are no cache flush issues when turning it on.
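
For what it's worth, a minimal sketch of the kind of startup script I mean, assuming an all-SATA node where the OS boot drive is /dev/sda (the device names and the script itself are illustrative only, adjust for your own layout):

    #!/bin/sh
    # Disable the volatile write cache on all SATA drives at boot,
    # then re-enable it on the OS boot drive only.
    # Assumes the OS boot drive is /dev/sda; adjust as needed.

    OS_DRIVE=/dev/sda

    for dev in /dev/sd[a-z]; do
        if [ "$dev" = "$OS_DRIVE" ]; then
            hdparm -W 1 "$dev"   # keep write caching on for the OS drive
        else
            hdparm -W 0 "$dev"   # write caching off for the OSD data drives
        fi
    done

Something like this could be hooked into rc.local or a systemd oneshot unit so it runs before the OSDs start.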


I did set this for the whole cluster. I can't say I noticed any particular improvement in performance when testing from my VMs, but it certainly didn't degrade it either, and I felt it was the safer choice given the OSD safety issues mentioned earlier.


Everything is on a UPS, but nevertheless, stuff happens - it turns out that in our new office we share the switchboard with the office next door, and the new load from our servers popped the circuit breakers overnight. The neighbour then took it upon himself to let himself in and turn our UPS off, taking our nodes down hard. Fortunately no damage was done, but words were spoken later.

--
Lindsay
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


