Re: SSD question

On Mon, Oct 21, 2013 at 7:05 PM, Martin Catudal <mcatudal@xxxxxxxxxx> wrote:
> Hi,
>      I have purchased my hardware for my Ceph storage cluster, but I have not
> opened any of my 960GB SSD drive boxes yet since I need to answer my question first.
>
> Here's my hardware.
>
> Three 2U servers, each with dual 6-core Xeons, 8 hot-swap trays, plus 2 SSDs
> mounted internally.
> In each server I will have:
> 2 x Samsung 840 Pro 128 GB SSD in RAID 1 for the OS
> 2 x Samsung 840 Pro SSD for journals
> 4 x 4TB Hitachi 7K4000 7200RPM
> 1 x 960GB Crucial M500 for one fast OSD pool.
>
> Configuration: one SSD journal for two 4TB drives, so if I lose one SSD
> journal I will only lose two OSDs instead of all of the storage on that
> particular node.
>
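A minimal ceph-deploy sketch of that journal layout, using hypothetical
device names (/dev/sdb and /dev/sdc for two of the 4TB drives, /dev/sdf1
and /dev/sdf2 for two partitions on one journal SSD); this assumes the
host:data[:journal] prepare syntax, so check the ceph-deploy docs for
your release:

    # two HDD-backed OSDs sharing one journal SSD, one partition each
    ceph-deploy osd prepare node1:/dev/sdb:/dev/sdf1
    ceph-deploy osd prepare node1:/dev/sdc:/dev/sdf2
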
> I have also bought 3 x 960GB Crucial M500 SSDs to create a fast pool of
> OSDs made from SSDs, so one 960GB drive per server for database
> applications.
> Is it advisable to do that, or is it better to return them and buy 6
> more 4TB Hitachis for the same price?
>
> Since the write acknowledgment is made from the SSD journal, would I
> see a huge improvement by using SSDs as OSDs?
> My goal is to have solid, fast performance for a database ERP and 3D
> modeling of mining galleries running in VMs.

The specifics depend on a lot of factors, but for database
applications you are likely to see better performance with an SSD
pool. Even though the journal can return fast acknowledgements, that
only evens out write bursts; on average, sustained writes are still
limited to the speed of the backing store. A good SSD can generally
do much more than 6x a HDD's random IOPS, so for random database I/O
even one of those M500s should outrun the six extra 4TB drives you
could buy instead.
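
If you keep the M500s, the usual approach to a dedicated SSD pool is a
separate CRUSH hierarchy plus a rule that only draws from it. A rough
sketch, assuming the three SSD OSDs come up as osd.12, osd.13 and osd.14,
and where the bucket, rule and pool names are placeholders (check the
CRUSH docs for your release and adjust weights to taste):

    # separate CRUSH root for the SSD OSDs, with one "host" bucket per server
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket server1-ssd host
    ceph osd crush add-bucket server2-ssd host
    ceph osd crush add-bucket server3-ssd host
    ceph osd crush move server1-ssd root=ssd
    ceph osd crush move server2-ssd root=ssd
    ceph osd crush move server3-ssd root=ssd

    # place one SSD OSD under each of those buckets
    ceph osd crush set osd.12 1.0 root=ssd host=server1-ssd
    ceph osd crush set osd.13 1.0 root=ssd host=server2-ssd
    ceph osd crush set osd.14 1.0 root=ssd host=server3-ssd

    # rule that selects only from the ssd root, replicating across hosts
    ceph osd crush rule create-simple ssd-rule ssd host

    # pool for the database workload, mapped to that rule
    ceph osd pool create fast-db 128 128
    ceph osd pool set fast-db crush_ruleset <ruleset id of ssd-rule>

With one SSD OSD per server the pool only spans three hosts, so choose
its replica count and PG count accordingly.
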
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

>
> Thanks,
> Martin
>
> --
> Martin Catudal
> IT Manager
> Ressources Metanor Inc
> Direct line: (819) 218-2708
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




