Re: CephFS with erasure coding, do I need a cache-pool?

On Tue, Jul 17, 2018 at 3:40 AM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:

I'd be interested to hear more from Greg about why cache pools are best
avoided...

While performance has improved over many releases, cache pools still don't perform well on most of the workloads people use them for. As a result we've moved away from their current implementation: we continue to run their tests and don't merge code which fails them, but bugs that pop up in the community or are intermittent get much less attention than other areas of RADOS do.
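(For the question in the subject line: since Luminous, erasure-coded pools support partial overwrites, so CephFS can write to an EC data pool directly and a cache tier isn't required. A minimal sketch of that setup, assuming a filesystem named "cephfs" and an EC pool named "cephfs_ec_data" (both names made up here), on BlueStore OSDs:

    # allow partial overwrites on the EC pool (BlueStore OSDs required)
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true
    # attach the EC pool as an additional data pool for the filesystem
    ceph fs add_data_pool cephfs cephfs_ec_data
    # point a directory's new files at the EC pool via a layout attribute
    setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/bulk_data

The usual advice is to keep the default CephFS data pool replicated, since it holds backtrace information for every file, and only direct selected directory layouts at the EC pool.)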

On Tue, Jul 17, 2018 at 6:32 AM Oliver Schulz <oliver.schulz@xxxxxxxxxxxxxx> wrote:
> But you could also do a workaround like letting it choose (K+M)/2 racks
> and putting two shards in each rack.

I probably have this wrong - wouldn't it reduce durability
to put two shards in one failure domain?

Oh yes, you are more susceptible to top-of-rack switch failures in this case or whatever. It's just one option — many people are less concerned about their switches than their hard drives, especially since two lost switches are an accessibility but not a durability issue.
-Greg
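(If anyone wants to try the two-shards-per-rack layout Greg mentions, it needs a custom CRUSH rule rather than a plain erasure-code profile. A rough sketch, assuming k=6, m=2, i.e. 4 racks with 2 shards each; the rule name and id are made up:

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # add a rule along these lines to crush.txt:
    #   rule ec_two_per_rack {
    #       id 99
    #       type erasure
    #       step set_chooseleaf_tries 5
    #       step set_choose_tries 100
    #       step take default
    #       step choose indep 4 type rack        # (k+m)/2 racks
    #       step chooseleaf indep 2 type host    # two shards per rack, on separate hosts
    #       step emit
    #   }
    # recompile and inject the edited map
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

With k=6, m=2: one rack or its switch down leaves 6 of 8 shards, still enough to read; two racks down leaves 4 shards, below k, so the data is unreadable until a switch comes back, but nothing is lost as long as the OSDs themselves survive. That is the accessibility-versus-durability distinction above.)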
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
