Re: Redirect snapshot COW to alternative pool

On Sat, Mar 26, 2016 at 3:13 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
>
> Evening All,
>
> I’ve been testing the RBD snapshot functionality, and one thing I have seen is that once you take a snapshot of an RBD and perform small random IO on the original RBD, performance is really bad due to the amount of write amplification caused by the COWs, i.e. every IO to the parent, no matter what size, turns into 12MB of writes.
>
> I was wondering if there was any way to redirect these writes to a different pool. Since only a small capacity would be required, an SSD/NVMe pool could be provisioned very cheaply and would hopefully provide enough performance to leave the IO operations to the parent unaffected.
>
> I’ve looked at RBD layering, which looks like it can do something like this and also lets you change the order. But it seems you have to base it on an existing snapshot, so I believe I would still have the same problem. Or is there a “hidden feature” to make normal snapshots use this layering functionality?
>
> Nick

This isn't quite making sense to me. When you take a snapshot, then, as
you say, it's copy-on-write and every operation copies the data to new
blocks (whole-object copies with XFS; merely the touched local blocks
with btrfs) inside the OSD. With RBD layering, you do whole-object
copy-on-write from the client.
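
As a rough sanity check on the 12MB figure (assuming the default 4MB
object size and 3x replication, which is just my guess at your setup):
a 4KB random write hits a snapshotted object, the OSD clones the whole
4MB object before applying it, and that clone happens on each of the 3
replicas, so ~3 x 4MB = 12MB of backend writes for one small client IO.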

Doing it from the client does let you put "child" images in a faster
pool, yes. But creating new objects doesn't make the *old* ones slow,
so why do you think you'd still have the same problem? (Other than
"the pool is faster" perhaps being too optimistic about the
improvement you'd get under this workload.)

There's definitely nothing in the Ceph codebase for internal layering,
or for redirecting snapshot COW outside of the OSD, though you could
always experiment with flashcache et al.
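
If you want to try the client-side layering route, a minimal sketch
would be something like this, assuming you've already created an
SSD-backed replicated pool (I'll call it "ssdpool" here) with its own
CRUSH rule, and the parent is a format 2 image with the layering
feature enabled (the image names are just placeholders):

  rbd snap create rbd/parentimg@base
  rbd snap protect rbd/parentimg@base     # clones need a protected snap
  rbd clone rbd/parentimg@base ssdpool/fastchild
  # or, to shrink the COW granularity, clone with a smaller object size,
  # e.g. 1MB objects instead of the default 4MB:
  # rbd clone --order 20 rbd/parentimg@base ssdpool/fastchild

"rbd info ssdpool/fastchild" should then list rbd/parentimg@base as the
parent; new writes land in ssdpool while reads of untouched objects go
to the parent. But again, that only changes where the child's data
lives; it does nothing about snapshot COW inside the OSD.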
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



