Re: Decrepit ceph cluster performance

>> As per recent isdct/intelmas/sst?  The web site?
> 
> Yes.  It's all "Solidigm" now, which has made information harder to
> find and firmware harder to get, but these drives aren't exactly
> getting regular updates at this point.

Exactly.  "isdct" more or less became "intelmas", and after the spin-off to SK hynix, Solidigm offers "sst".
I mention this because I ran into someone at Cephalocon who was puzzled that intelmas wasn't applying the latest known firmware revision: he hadn't known about "sst".

For drives that old, I would think that one intelmas release or the other would contain the blobs.
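If you want to compare what each tool thinks is current, something like the following (untested from memory; drive index 0 is illustrative, and flag spellings can vary by release, so check each tool's help output):

    # List drives with their current firmware; intelmas reports whether
    # a newer revision is bundled with the tool.
    intelmas show -intelssd

    # Apply the newest firmware intelmas knows about to drive index 0.
    intelmas load -intelssd 0

    # The Solidigm tool works much the same way:
    sst show -ssd
    sst load -ssd 0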

> 
>> Newer SSD controllers / models are better than older models at housekeeping over time, so the secure-erase might freshen performance.
> 
> I mean... I don't have much else to try, so I may give it a shot!  My
> only hesitation is that there's not really any problem indicator I
> could check afterward. So I don't know how I would tell if it made a
> difference unless I did them all and then the problem went away.

There's an obscure "localpool" facility (a ceph-mgr module) that creates pools confined to a single failure domain.  Naturally that would be rather inadvisable for production, but you might create one localpool containing, say, 3 secure-erased drives and another containing 3 as-is drives, then run the bench against both; a sketch follows.  That wouldn't take nearly as long.
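Roughly how I'd wire that up; hostnames here are placeholders, and the pool names the module generates default to "by-host-<hostname>":

    # Enable the module; it auto-creates one replicated pool per host,
    # confined to that host's OSDs.  Defaults: size 3, failure domain
    # osd; tunable via the mgr/localpool/* config keys.
    ceph mgr module enable localpool

    # Secure-erase the drives on one host only, then bench it against
    # an untouched host.  --no-cleanup keeps the objects for a read pass.
    rados bench -p by-host-nodeA 60 write -t 16 --no-cleanup
    rados bench -p by-host-nodeA 60 seq -t 16

    rados bench -p by-host-nodeB 60 write -t 16 --no-cleanup
    rados bench -p by-host-nodeB 60 seq -t 16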

It's not unusual for SSD manufacturers to recommend a secure-erase after any firmware update.  Most of the time it probably isn't important, but ya never know.  When I worked with a certain manufacturer (ahem) to resolve a slippery, workload-dependent firmware design flaw, it actually was.  The usual mechanics are sketched below.
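Destructive, obviously, so purge the OSD and let the cluster recover first; device names are placeholders:

    # SATA: the drive must not be security-frozen; check `hdparm -I`.
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX

    # NVMe: --ses=1 requests a user-data erase.
    nvme format /dev/nvmeXn1 --ses=1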

> Which at the speed this thing rebuilds might well be a 3-month
> project. :-/
> 
> Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


