On Fri, Sep 27, 2013 at 4:00 PM, Ric Wheeler <rwheeler@xxxxxxxxxx> wrote:
> I think that you are an order of magnitude off here in thinking about the
> scale of the operations.
>
> An enabled, synchronous copy offload to an array (or one that turns into a
> reflink locally) is effectively the cost of the call itself. Let's say no
> slower than one IO to a S-ATA disk (10ms?) as a pessimistic guess.
> Realistically, that call is much faster than that worst case number.
>
> Copying any substantial amount of data - like the target workload of VM
> images or media files - would be hundreds of MB's per copy and that would
> take seconds or minutes.

Will a single splice-copy operation be interruptible/restartable? If not,
how should apps size one request so that it doesn't take too much time,
even on slow devices (e.g. a USB stick)? If it will be restartable, how?
Can a remote copy be done with this, over a high-latency network?

Those are the questions I'm worried about.

> We should really work on getting the basic mechanism working and robust
> without any complications, then we can look at real, measured performance
> and see if there is any justification for adding complexity.

Go for that. But don't forget that at the end of the day actual apps will
need to be converted, like file managers and "dd" and "cp", and we
definitely don't want a userspace library to have to figure out how the
copy is done most efficiently; it's something for the kernel to figure out.

Thanks,
Miklos
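As an illustration of the chunking/restart question raised above, here is a minimal sketch of how an application might bound the size of each copy request and resume after a signal, written against the `copy_file_range(2)` interface that eventually landed in Linux 4.5 (glibc >= 2.27). The helper name `copy_in_chunks` and the 8 MiB chunk size are arbitrary choices for the example, not anything proposed in the thread:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Copy `len` bytes from fd_in to fd_out in bounded chunks, so that no
 * single syscall runs too long and an interrupted request can simply be
 * retried. Uses the files' current offsets. Returns 0 on success. */
static int copy_in_chunks(int fd_in, int fd_out, size_t len)
{
    const size_t chunk = 8u << 20;      /* 8 MiB per request (arbitrary cap) */

    while (len > 0) {
        size_t n = len < chunk ? len : chunk;
        ssize_t done = copy_file_range(fd_in, NULL, fd_out, NULL, n, 0);
        if (done < 0) {
            if (errno == EINTR)
                continue;               /* interrupted: restart this chunk */
            return -1;                  /* real error (e.g. EXDEV, EIO) */
        }
        if (done == 0)
            break;                      /* source ended before `len` bytes */
        len -= (size_t)done;            /* short copies are fine: loop on */
    }
    return len == 0 ? 0 : -1;
}
```

Because `copy_file_range` may copy fewer bytes than requested, the loop already handles partial progress; bounding `n` just keeps each kernel round-trip short, which matters on slow media such as a USB stick.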