RE: RE: which SSD / experiences with Samsung 843T vs. Intel s3700

Hi!

>Meaning you're limited to 360MB/s writes per node at best.

We use Ceph as an OpenNebula RBD datastore for running VMs,
so bandwidth constraints are not as important as IOPS limits.
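
To put rough numbers on that (a back-of-the-envelope sketch in Python;
the ~360 MB/s per-node journal ceiling is from Christian's reply below,
and the 4 KiB write size is just an illustrative assumption):

# Back-of-the-envelope: even a journal-bound node at ~360 MB/s of
# sequential writes has a high small-block ceiling, so the VMs hit
# IOPS limits long before they hit bandwidth limits.
NODE_JOURNAL_BW = 360 * 10**6   # bytes/s, per-node journal ceiling (assumed)
BLOCK_SIZE = 4096               # bytes, illustrative 4 KiB VM write

iops_ceiling = NODE_JOURNAL_BW / BLOCK_SIZE
print("4 KiB write ceiling: %.0f IOPS per node" % iops_ceiling)
# ~88,000 IOPS -- far more than the spinning OSDs behind the journals
# can sustain, which is why IOPS, not bandwidth, is the real limit.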

>I use 1:2 or 1:3 journals and haven't made any dent into 
>my 200GB S3700 yet.

We have started moving nodes one by one from the oldest Firefly
installation to the newer Hammer release and, as part of this process,
removing 2 OSDs from every node, adding a second SSD for journals, and
moving the system volume onto a separate two-disk mirror. So it will be
a 1:5 ratio soon.
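
The effect on per-OSD journal bandwidth is easy to estimate (a rough
sketch; the ~365 MB/s figure assumes the 200GB S3700's datasheet
sequential-write rating, so adjust for the actual model):

# Per-OSD journal bandwidth at different journal-SSD:OSD ratios.
# One journal SSD's sequential write speed is shared by all its OSDs.
SSD_SEQ_WRITE_MB = 365  # MB/s, assumed 200GB DC S3700 datasheet rating

for osds_per_ssd in (2, 3, 5, 12):
    per_osd = SSD_SEQ_WRITE_MB / float(osds_per_ssd)
    print("1:%-2d -> ~%.0f MB/s of journal bandwidth per OSD"
          % (osds_per_ssd, per_osd))
# Moving from 1:12 (~30 MB/s per OSD) to 1:5 (~73 MB/s per OSD) is a
# much better match for a 7200 rpm spinner's sequential write speed.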

>As I wrote a few days ago, unless you go for the 400GB version, the
>200GB S3710 is actually slower (for journal purposes) than the S3700, as
>sequential write speed is the key factor here.

Yes, I know, but our HW supplier doesn't have any 200GB S3700 drives, only the S3710. :|
So we have to use them anyway.
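
For reference, the sequential-write ratings at play, as I recall them
from Intel's datasheets (these figures are assumptions from memory,
not quotes; please verify against ARK before ordering):

# Approximate sequential-write ratings (MB/s); from memory of Intel's
# datasheets -- verify against ARK before relying on them.
seq_write_mb = {
    "DC S3700 200GB": 365,
    "DC S3710 200GB": 300,
    "DC S3710 400GB": 470,
}
for model, mbps in sorted(seq_write_mb.items()):
    print("%s: ~%d MB/s sequential write" % (model, mbps))
# Note the 200GB S3710 is rated below the 200GB S3700 for sequential
# writes -- exactly the metric that matters for journals.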


Megov Igor
CIO, Yuterra



________________________________________
From: Christian Balzer <chibi@xxxxxxx>
Sent: 5 September 2015 5:36
To: ceph-users
Cc: Межов Игорь Александрович
Subject: Re: RE: which SSD / experiences with Samsung 843T vs. Intel s3700

Hello,

On Fri, 4 Sep 2015 22:37:06 +0000 Межов Игорь Александрович wrote:

> Hi!
>
> We have worked with the Intel DC S3700 200GB. Due to budget
> restrictions, one SSD hosts a system volume and 1:12 OSD journals.
> 6 nodes, 120TB raw space.
>
Meaning you're limited to 360MB/s writes per node at best.
But yes, I do understand budget constraints. ^o^

> The cluster serves as RBD storage for ~100 VMs.
>
> Not a single failure in a year of operation; all devices are healthy.
> The remaining resource (per SMART) is ~92%.
>
I use 1:2 or 1:3 journals and haven't made any dent into my 200GB S3700
yet.
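
For anyone who wants to track that wear over time, here is a minimal
Python sketch (assuming smartctl from smartmontools is installed and
the drive exposes Intel's Media_Wearout_Indicator attribute; /dev/sda
is a hypothetical device path):

#!/usr/bin/env python
# Minimal sketch: read the normalized Media_Wearout_Indicator value
# (SMART attribute 233 on Intel SSDs; 100 = new, counts down with wear).
import subprocess

def wearout(device):
    out = subprocess.check_output(["smartctl", "-A", device]).decode()
    for line in out.splitlines():
        if "Media_Wearout_Indicator" in line:
            # smartctl -A columns: ID NAME FLAG VALUE WORST THRESH ...
            return int(line.split()[3])
    return None

if __name__ == "__main__":
    print(wearout("/dev/sda"))  # hypothetical device path; run as root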

>
> Now we're trying to use the DC S3710 for journals.

As I wrote a few days ago, unless you go for the 400GB version, the
200GB S3710 is actually slower (for journal purposes) than the S3700, as
sequential write speed is the key factor here.

Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



