Re: SED drives, poor performance

This is where an understanding of how an encrypting drive actually works comes into play.

An SED always encrypts all data written to it; it uses a long encryption key that is built into the drive.

When you “turn on encryption”, what you are actually doing is setting a PIN that gates access to that encryption key so your data can be read back.
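
For example, if the drive implements the TCG Opal or Enterprise spec, its locking state can usually be inspected from Linux with sedutil (a sketch; sedutil-cli must be installed, and /dev/sdc is a placeholder device name):

    sedutil-cli --scan              # list drives and which SED spec, if any, each supports
    sedutil-cli --query /dev/sdc    # dump the locking/encryption state of one drive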

I would be looking at some of the hdparm parameters to see if there are any differences between the drives, things like drive write caching being on for one drive and off for another.
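
A quick way to compare (a sketch, assuming Linux with sdparm and smartmontools available; /dev/sdc is a placeholder device name, and for SAS drives like these sdparm is the more likely fit than hdparm):

    sdparm --get WCE /dev/sdc       # write cache enable (WCE) bit on SCSI/SAS drives
    smartctl -g wcache /dev/sdc     # the same setting via smartmontools
    hdparm -W /dev/sdc              # write-caching state, if the drive were ATA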

Having tested many different drives over the years, I have never seen a performance difference between SED and non-SED when all other parameters on the drive are the same.

Sent from my iPhone

> On 8 Aug 2020, at 12:12, Edward kalk <ekalk@xxxxxxxxxx> wrote:
> 
> I've been reading up on the Seagate Constellation ES SED and don't see anything saying that can be done. I plan to swap one with a spare non-SED drive I have next week to see if performance returns to normal.
> 
> -Ed
> 
>> On Aug 8, 2020, at 5:56 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>> 
>> 
>> Maybe too obvious a suggestion, but what about disabling SED on one of 
>> these drives?
>> 
>> 
>> -----Original Message-----
>> Cc: ceph-users@xxxxxxx
>> Subject:  SED drives, poor performance
>> 
>> I'm getting poor performance with 5 of my OSDs: Seagate Constellation ES 
>> SED (1) 10k SAS 2TB 3.5" drives.
>> Disk write latency keeps drifting high, 100-230ms on writes, while the 
>> other 30 OSDs are performing well with an average latency of 10-20ms.
>> 
>> We observe the stats via “iostat -xtc 2” on the Ceph server; w_await 
>> shows the write-to-disk latency.
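>> 
>> For reference, a minimal sketch (assuming sysstat's iostat; sdc and sdd 
>> are placeholder names for the suspect SED devices) that restricts the 
>> output to just those disks:
>> 
>>     iostat -x sdc sdd 2    # extended stats every 2 seconds; watch w_await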
>> 
>> Anyone else seeing poor latency on SEDs?
>> 
>> The disks were included with a server we acquired; we didn't know they 
>> would be SEDs.
>> I've read that Seagate SED vs. non-SED shouldn't perform differently, 
>> but these look awful. All the SEDs are in the newly added Ceph node, a 
>> Dell R510 with an H700 RAID card. All disks in the entire cluster are 
>> configured as RAID 0, writing directly to disk with no cache.
>> The new server and disk controller spec out at about 10x the capability 
>> and IOPS of the other servers, so I suspect the SEDs.
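>> 
>> On the H700, the cache policy the controller actually applies can be 
>> double-checked; a minimal sketch, assuming MegaCli is installed (the 
>> -LAll/-aAll selectors mean every virtual disk on every adapter):
>> 
>>     MegaCli -LDGetProp -Cache -LAll -aAll    # WriteThrough vs WriteBack per virtual disk
>>     MegaCli -LDGetProp -DskCache -LAll -aAll # whether each physical disk's own cache is enabled
>> 
>> A WriteBack-vs-WriteThrough or disk-cache mismatch between the new node 
>> and the old ones could look exactly like a drive problem.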
>> 
>> -Ed
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



