Not supported at the moment, but it is in the eventual plans, and I think some of the code has been written in a way that will help facilitate that development.

----------------
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Wed, May 20, 2015 at 10:06 PM, Reid Kelley wrote:
> Could be a stupid/bad question: is a three-tier cache/mid/cold setup supported? An example would be:
> 1. Fast NVMe drives -> (write-back) -> 2. Mid-grade MLC SSDs for the primary working set -> (write-back) -> 3. Super-cold EC pool for the cheapest, deepest, oldest data
>
> Theoretically, that middle tier of quality consumer or lower-end enterprise SSDs would be cost-effective to scale to a large, replicated size while still maintaining fast performance. NVMe could absorb heavy writes and the fastest interaction, with the EC pool scaling behind it all.
>
> Thanks
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
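For reference, the single cache tier that Ceph does support today is wired up with the `ceph osd tier` commands. A rough sketch of putting one fast pool in front of a cold EC pool (pool names here are placeholders, not from the thread; a three-tier chain like the one asked about would require stacking these, which is what is not supported):

```shell
# Attach a fast cache pool in front of a cold backing pool.
# "cold-ec-pool" and "nvme-cache-pool" are hypothetical names; both
# pools must already exist.
ceph osd tier add cold-ec-pool nvme-cache-pool

# Put the cache tier into write-back mode, as in the question's diagram.
ceph osd tier cache-mode nvme-cache-pool writeback

# Route client I/O for the backing pool through the cache tier.
ceph osd tier set-overlay cold-ec-pool nvme-cache-pool

# Cache tiering needs hit-set tracking configured to function.
ceph osd pool set nvme-cache-pool hit_set_type bloom
```

Tearing this down and re-pointing the overlay is how you would today approximate moving data between tiers by hand.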