Using Crucial MX100 for journals or cache pool

You really do want power-loss protection on your journal SSDs.  Data
centers do have power outages, even with all the redundant grid
connections, UPSes, and diesel generators.

Losing an SSD loses all of the OSDs that are using it as a journal.  If the
data center loses power, you're probably going to lose more than one SSD:
each drive's failure is a matter of probability, so the likelihood of
multiple failures goes up as you add more SSDs.
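
As a rough back-of-the-envelope sketch (the per-drive probability and the
OSDs-per-SSD count below are assumed, illustrative numbers, not measurements
of any particular drive): if each journal SSD without power-loss protection
has an independent chance p of losing its in-flight writes during a
site-wide power cut, the chance that at least one of n such SSDs is hit is
1 - (1 - p)^n, and each hit takes out every OSD journaling to that drive.

# Back-of-the-envelope sketch only: p and osds_per_ssd are assumed,
# illustrative values, not measurements.

def p_at_least_one_loss(p: float, n: int) -> float:
    """Chance that at least one of n independent journal SSDs loses data."""
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    p = 0.02          # assumed chance a non-PLP SSD drops in-flight writes per outage
    osds_per_ssd = 4  # assumed number of OSD journals sharing each SSD
    for n in (1, 5, 10, 20):
        risk = p_at_least_one_loss(p, n)
        print("%2d journal SSDs: P(>=1 lost) = %.1f%%, up to %d OSDs affected"
              % (n, risk * 100, n * osds_per_ssd))

Even with a small per-drive probability, the cluster-wide risk grows quickly
with the number of journal SSDs, which is the point above.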

For me, the possibility of losing data after a sudden power outage isn't
worth the cost savings.




On Fri, Aug 1, 2014 at 1:38 AM, Andrei Mikhailovsky <andrei at arhont.com>
wrote:

> Hello guys,
>
> Was wondering if anyone has tried using the Crucial MX100 SSDs either for
> OSD journals or a cache pool? They seem like a good cost-effective
> alternative to the more expensive drives, and read/write performance is
> very good as well.
>
> Thanks
>
> --
> Andrei Mikhailovsky
> Director
> Arhont Information Security
>
> Web: http://www.arhont.com
> http://www.wi-foo.com
> Tel: +44 (0)870 4431337
> Fax: +44 (0)208 429 3111
> PGP: Key ID - 0x2B3438DE
> PGP: Server - keyserver.pgp.com
>
>

