Hi David,
Thanks for the clarification. It reminded me of some details I forgot
to mention.
In my case, the replica-3 and k=2,m=2 pools are stored on the same spinning
disks. (I'm mainly using EC for "compression": with the k=2,m=2 profile a
PG takes up only as much space as replica-2 while tolerating 2 failed
disks, like replica-3, without data loss.)
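(For reference, the arithmetic behind that, as a minimal Python sketch;
the function names are just illustrative, and the figures count raw
copy/chunk overhead only, ignoring filestore/bluestore overhead:

    # Space overhead and fault tolerance, raw copies/chunks only.
    def replicated(size):
        # stores `size` full copies; survives size - 1 failed disks
        return {"overhead": size, "failures_tolerated": size - 1}

    def erasure_coded(k, m):
        # k data chunks + m coding chunks; survives m failed disks,
        # uses (k + m) / k times the logical data size
        return {"overhead": (k + m) / k, "failures_tolerated": m}

    print(replicated(3))        # {'overhead': 3, 'failures_tolerated': 2}
    print(replicated(2))        # {'overhead': 2, 'failures_tolerated': 1}
    print(erasure_coded(2, 2))  # {'overhead': 2.0, 'failures_tolerated': 2}

So k=2,m=2 costs the same 2x as replica-2 but survives 2 failed disks
like replica-3.)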
I'm using this setup for RBDs and CephFS to store things like local
mirrors of Linux packages and drive images to be broadcast over the
network. It seems to be about as fast as a normal hard drive. :)
So is this the situation where the "cache tier [is] on the same root
of osds as the EC pool"?
Thanks for the advice!
Chad.
On 09/30/2017 12:32 PM, David Turner wrote:
I can only think of 1 type of cache tier usage that is faster if you are
using the cache tier on the same root of osds as the EC pool. That is
cold storage where the file is written initially, modified and read during
the first X hours, and then remains in cold storage for the remainder of
its life with rare reads.
Other than that there are a few use cases using a faster root of osds
that might make sense, but generally it's still better to utilize that
faster storage elsewhere in the OSD stack, either as journals for
filestore or WAL/DB partitions for bluestore.
On Sat, Sep 30, 2017, 12:56 PM Chad William Seys
<cwseys@xxxxxxxxxxxxxxxx> wrote:
Hi all,
Now that Luminous supports direct writes to EC pools, I was wondering
whether one can get more performance out of an erasure-coded pool with
overwrites or an erasure-coded pool with a cache tier?
I currently have a 3-replica pool in front of a k=2,m=2 erasure-coded
pool. The Luminous documentation on cache tiering
http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/#a-word-of-caution
makes it sound like cache tiering is usually not recommended.
Thanks!
Chad.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com