On 05/03/2017 11:13 PM, John Ferlan wrote:
>
>
> On 04/30/2017 02:43 AM, Michal Privoznik wrote:
>> On 04/29/2017 06:46 PM, John Ferlan wrote:
>>>
>>>
>>> On 04/20/2017 06:01 AM, Michal Privoznik wrote:
>>>> This function takes a FD and determines whether the current
>>>> position is in data section or in a hole. In addition to that,
>>>> it also determines how much bytes are there remaining till the
>>>> current section ends.
>>>>
>>>> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
>>>> ---
>>>>  src/libvirt_private.syms |   1 +
>>>>  src/util/virfile.c       |  81 +++++++++++++++++++
>>>>  src/util/virfile.h       |   3 +
>>>>  tests/virfiletest.c      | 203 +++++++++++++++++++++++++++++++++++++++++++++++
>>>>  4 files changed, 288 insertions(+)
>>>>
>>>
>>> FWIW: This feels like an "inflection point" from which the remainder of the algorithms are built around. I've forgotten anything from the discussion for the original set of patches, but since I had a few cycles to think... Apologies in advance for the length.
>>>
>>> I guess I was under the impression that the purpose for sparse streams is to find hole's in a file|stream and ensure those don't get copied, instead generating the hole on the target. If so, a bunch of different small API's come to mind after reading this patch and trying to internalize what's taking place:
>>>
>>> vir{File|Stream}SeekHoleSupported
>>>   -> If SEEK_{DATA|HOLE} returns -1 and EINVAL, then either don't use sparse streams and force usage of the non sparse option or just fail.
>>
>> We shouldn't fail. All the tools that I've seen and do support sparse files do accept fully allocated files gracefully, e.g. cp --sparse always or rsync -S. I mean, even though this function you're suggesting here would fail or something, the whole stream transmission should just carry on.
>>
>
> My "default" thought was, if sparse wasn't supported, then fallback to full copy; however, someone else may say no I don't want that. If I had a 100G file I knew was sparse, I'm not sure I would want that fallback to full copy!

Sure. And that is what happens. Well, sort of. With these patches the following can happen:

                    Sparse Stream Req
                     Yes     |    No
  FS        -----------+------------
  supports  Yes   Success    |  Fallback
  holes     No    Error      |  Fallback

A problem might be if the FS doesn't support holes (= lseek(SEEK_HOLE) returns an error) as we error out in that case. On the other hand, that's libvirt philosophy - if a user requested a feature we cannot meet, we should error out. They can always retry with the feature turned off. Or, we can introduce another flag: VIR_STORAGE_VOL_UPLOAD_SPARSE_STREAM_OPT which silently falls back to non-sparse streams if the underlying FS doesn't support holes.

BTW: I don't think there's a filesystem now that wouldn't support holes. Right, ext2 doesn't, but is there a machine that still uses ext2 and runs the latest libvirt? I don't think so. Therefore, the following table is more important:

                    Sparse Stream Req
                     Yes      |    No
            -----------+------------
  sparse  Yes    Success      |  Fallback
  file    No     Success*     |  Fallback

* - This is where we effectively fall back to the old method; simply because VIR_NET_STREAM_SKIP is never sent and thus the new code paths are never triggered. Basically, at the beginning of the transfer we detect that there's an X bytes long data section. So we send bare data (no skip/hole). After we've sent all the data we ask again and find out that we're at EOF, so the stream is finished.
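To illustrate the "does the FS support holes at all" check we keep talking about - this is not code from the series and seekHoleSupported() is a made-up name, just a minimal sketch assuming Linux/glibc semantics where lseek() with SEEK_HOLE fails with EINVAL on filesystems that cannot report holes (note the probe moves the file offset, so a real caller would have to seek back afterwards):

    #define _GNU_SOURCE          /* for SEEK_HOLE on glibc */
    #include <errno.h>
    #include <stdbool.h>
    #include <unistd.h>

    /* Hypothetical helper: does the filesystem backing @fd support
     * SEEK_HOLE/SEEK_DATA?  EINVAL means it does not, which is where
     * the fallback (or error) decision from the tables above kicks in.
     * On a fully allocated file the call still succeeds, because there
     * is always the implicit hole at EOF. */
    static bool
    seekHoleSupported(int fd)
    {
        off_t off = lseek(fd, 0, SEEK_HOLE);

        if (off == (off_t) -1 && errno == EINVAL)
            return false;        /* FS cannot report holes */

        return off != (off_t) -1;
    }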
>
>>>
>>> vir{File|Stream}HasHole
>>>   -> From beginning of file, find first HOLE < EOF
>>>   -> If none, then sparse streams won't matter, right?
>>
>> No. It will not. But it doesn't matter if the first hole is at EOF or before it, if we know how to transfer holes effectively.
>>
>>>
>>> vir{File|Stream}IsInHole
>>>   -> From @cur if SEEK_DATA > SEEK_HOLE
>>>
>>> vir{File|Stream}IsInData
>>>   -> From @cur if SEEK_HOLE > SEEK_DATA
>>>
>>> vir{File|Stream}GetDataEnd
>>>   -> From @cur of DATA return @length
>>>
>>> vir{File|Stream}GetHoleEnd
>>>   -> From @cur of HOLE return @length
>>>
>>> We should know where we are and be able to keep track, right? Having to determine @cur each time just seems to be overhead.
>>
>> Firstly, this is merely what I'm implementing later in the series (see patch 33/38 [and I just realized that due to a mistake patch 34/38 has wrong commit message]).
>
> Yes, I need to get further in the series, but I'm also easily distracted by external forces.
>
>> Secondly, getting current position in a file should be as quick as this (imagine me snapping my fingers :-)). The VFS has to keep track of the current position in a file for sake of all file operations. /me goes and looks into kernel sources... Yep, it looks like so: generic_file_llseek() from fs/read_write.c. But the main point is ..
>>> For the purpose of what's being done - I would think we start at 0 and know we have a findable @end - then it would be "just" (hah!) a matter of ensuring we have holes and then a sequence of finding hole/data from whence we start (since theoretically we could start in a hole, right?).
>>>
>>> Maybe the following is too simplistic...
>>>
>>> if (supportsSparse)
>>>     if ((cur = lseek(fd, 0, SEEK_SET)) != 0)
>>>         goto error;
>>>
>>> if (!hasHoles)
>>>     return fullCopy? (although I think the rest would still work)
>>>
>>> if ((end = lseek(fd, 0, SEEK_END)) < 0)
>>>     goto error;
>>>
>>> inHole = virFile|StreamIsInHole(fd, cur)
>>> while (cur < end)
>>>     if (inHole)
>>>         get size of hole
>>>             if ret < 0, then size would be end-cur, right?
>>>         update remote to write hole
>>>         update cur to include size of hole
>>>     else /* assume? in data */
>>>         get size of data
>>>             if ret < 0, then size would be end-cur, right?
>>>         update remote to include data
>>>         update cur to include size of data
>>
>> ... that here, in this branch, we should send the data from the data section to client, right? But how can we do that if we lseek()-ed away (to the end of the section)? I mean virFileStreamIsInHole() would need to seek back to the original current position in the file anyway. Just like virFileInData() does. Otherwise, if a file is in data section we would not read it.
>>
>
> OK fair enough. To some degree the concern was over too many possible failure paths/reasons in one function that may not allow flexibility to allow some consumer to "handle" them...
>
> Still the premise in this code is that the 'fd' keeps track of where it is/was. Hopefully nothing can cause that to change...

Well, it's up to the caller to ensure that. But since this function is called from basically one place when transferring files, we are safe here - the FD that is passed to this function is dealt with from just one thread.

> The first open would start at 0 (theoretically), then the reads and "hole mgmt" would update for that fd. The model being I know where I am, you take what I give you.
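To make the seek-back point above concrete - again just a sketch under the same assumptions, not code from the series, and isInData() is a hypothetical name - this is roughly what any "where am I" helper has to do, and it is exactly what virFileInData() does internally before returning:

    #define _GNU_SOURCE          /* for SEEK_DATA on glibc */
    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical helper: is @fd currently positioned in a data
     * section?  Returns 1 for data, 0 for a hole (including the
     * trailing hole), -1 on error.  The original offset is restored
     * so a subsequent read() still starts where the caller expects. */
    static int
    isInData(int fd)
    {
        off_t cur, data;
        int ret;

        if ((cur = lseek(fd, 0, SEEK_CUR)) == (off_t) -1)
            return -1;

        data = lseek(fd, cur, SEEK_DATA);

        if (data == cur)
            ret = 1;                                 /* in data */
        else if (data > cur ||
                 (data == (off_t) -1 && errno == ENXIO))
            ret = 0;                                 /* in a hole */
        else
            return -1;                               /* lseek() failed */

        if (lseek(fd, cur, SEEK_SET) == (off_t) -1)  /* seek back */
            return -1;

        return ret;
    }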
>
> Still does this play well if you ever wanted to support some sort of check-pointing feature for a transfer that got interrupted? Then you'd need to pass a current spot and support the option of - I'll tell you where to go/start and you then send from that spot. A 100G sparse file xfr could benefit from that. Of course that leaves data integrity questions...

We don't support interrupting streams.

>
>>>
>>> I guess I just became concerned about the various decision/exit points in virFileInData below and started wondering if they were really needed.
>>>
>>>
>>>> diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
>>>> index 181e178..7132b3a 100644
>>>> --- a/src/libvirt_private.syms
>>>> +++ b/src/libvirt_private.syms
>>>> @@ -1620,6 +1620,7 @@ virFileGetHugepageSize;
>>>>  virFileGetMountReverseSubtree;
>>>>  virFileGetMountSubtree;
>>>>  virFileHasSuffix;
>>>> +virFileInData;
>>>>  virFileIsAbsPath;
>>>>  virFileIsDir;
>>>>  virFileIsExecutable;
>>>> diff --git a/src/util/virfile.c b/src/util/virfile.c
>>>> index cbfa384..093125c 100644
>>>> --- a/src/util/virfile.c
>>>> +++ b/src/util/virfile.c
>>>> @@ -3793,6 +3793,87 @@ virFileComparePaths(const char *p1, const char *p2)
>>>>  cleanup:
>>>>      VIR_FREE(res1);
>>>>      VIR_FREE(res2);
>>>> +
>>>> +    return ret;
>>>> +}
>>>> +
>>>> +
>>>
>>> s/b
>>> int
>>> virFileIndata(...)
>>>
>>> Lacking any sort of description of input/returns. Lot's of variables here too - whether fd valid/invalid could cause error... Setting *inData based on where we are... Setting *length based on no returning a -1... and of course returning -1 based on some lseek call failing for various reasons.
>>
>> Okay, I'll add some description.
>>
>>>
>>>> +int virFileInData(int fd,
>>>> +                  int *inData,
>>>
>>> Usage seems to be only boolean...
>>
>> Yeah, it can be a bool.
>>
>>>
>>>> +                  unsigned long long *length)
>>>> +{
>>>> +    int ret = -1;
>>>> +    off_t cur, data, hole, end;
>>>> +
>>>> +    /* Get current position */
>>>> +    cur = lseek(fd, 0, SEEK_CUR);
>>>> +    if (cur == (off_t) -1) {
>>>> +        virReportSystemError(errno, "%s",
>>>> +                             _("Unable to get current position in file"));
>>>> +        goto cleanup;
>>>> +    }
>>>
>>> One would hope this doesn't fail, still could @cur be input? I guess I wonder why we're looking for current position if the purpose is to copy from start to end ensuring that holes aren't copied, but rather just created on the target. I am assuming this function is for the "source" side.
>>
>> It's for both. So the whole point of sparse streams is to preserve sparseness as volume images are copied from daemon to client (virsh vol-download) or from client to daemon (virsh vol-upload). Currently what we have is something among the following lines:
>>
>> while (true) {
>>     got = read(file, buf, sizeof(buf));
>>     if (got < 0 || got == 0)
>>         break;
>>     if (virStreamSend(stream, buf, got) < 0)
>>         break;
>> }
>>
>> (Simplified very much, look at virStreamSendAll + virStreamRecvAll).
>>
>> As you can see, we don't care about the holes. We don't know about holes at all.
>> So the point of this whole feature is to still copy the data AND preserve sparseness at the same time:
>>
>> while (true) {
>>     virFileInData(fd, &inData, &offset);
>>
>>     if (inData) {
>>         while (offset) {
>>             got = read(fd, buf, sizeof(buf));
>>             virStreamSend(stream, buf, got);
>>             offset -= got;
>>         }
>>     } else {
>>         virStreamSendHole(stream, offset);
>>         lseek(fd, offset, SEEK_CUR);
>>     }
>> }
>>
>> Here we can clearly see that virFileInData() must leave the position in @fd unchanged upon its return.
>>
>
> I know I missed the lseek in virFileInData on my initial pass through...
>
> Regardless of up/down-load it'd still be a copy from 0 to EOF.

While this is what 99.9% of users do, we still offer the possibility to transfer just [X:Y] bytes of a file.

> I was trying to avoid the terms client/server and think source/target. Getting @cur each time and needing to check status each time just seemed unnecessary. Given this logic getting an @cur that differs from what you expect would be far more tragic I would think!

Again, we are not getting @cur each time we want to read from the file. In the end, this function is called at section boundaries. So if you have:

  0 -> data -> hole -> data -> hole -> data -> hole -> data -> EOF

this function is going to be called 8 times in total, regardless of the size of each section. That is not bad. I still don't think lseek() is a problem. I've run a test on my laptop and was able to do as much as 20M lseek()-s per second. IOW, one lseek() call was about 50ns. So I don't see any performance issue there. There are places in libvirt where we sleep for seconds. Compare that to 50ns spent here.

> Still in both cases 'offset' could be used as an expected spot calculation even though (as you put it) it's only time wise a snap of the fingers - it's still a snap and a trip to get data that could be saved. Having a failure possibility "just because" we decided not to save @cur is what I was remarking on. So as written things work. I guess I just want to be sure that we won't be down in this code again when some other adjustment is requested.

I don't think so. Consider the following: the kernel already has to keep a map of data/hole sections. I don't see a reason for us to duplicate that in userspace. We would have to keep track of the position in the file. Well, duplicate - the kernel already does that. Then keep track of data/hole sections. Again, duplicate. And for what? Saving a couple of nanoseconds? Well, we would lose them while maintaining our own copies of information that the kernel already has.

> One such adjustment that comes to mind is check-pointing which I would think would utilize an input location...
>
>>>
>>>> +
>>>> +    /* Now try to get data and hole offsets */
>>>> +    data = lseek(fd, cur, SEEK_DATA);
>>>> +
>>>> +    /* There are four options:
>>>> +     * 1) data == cur; @cur is in data
>>>> +     * 2) data > cur; @cur is in a hole, next data at @data
>>>> +     * 3) data < 0, errno = ENXIO; either @cur is in trailing hole, or @cur is beyond EOF.
>>>> +     * 4) data < 0, errno != ENXIO; we learned nothing
>>>
>>> if data < 0 and errno = EINVAL, seeking data/holes isn't supported, true? If so then it might be a possible function to check if data/hole seeking is supported.
>>>
>>>> +     */
>>>> +
>>>> +    if (data == (off_t) -1) {
>>>> +        /* cases 3 and 4 */
>>>> +        if (errno != ENXIO) {
>>>> +            virReportSystemError(errno, "%s",
>>>> +                                 _("Unable to seek to data"));
>>>> +            goto cleanup;
>>>> +        }
>>>> +
>>>> +        *inData = 0;
>>>> +        /* There are two situations now.
>>>> +         * There is always an implicit hole at EOF. However, there might be a
>>>> +         * trailing hole just before EOF too. If that's the case
>>>> +         * report it. */
>>>> +        if ((end = lseek(fd, 0, SEEK_END)) == (off_t) -1) {
>>>> +            virReportSystemError(errno, "%s",
>>>> +                                 _("Unable to seek to EOF"));
>>>> +            goto cleanup;
>>>
>>> So for the purpose of what you're trying to do should we care? I suppose if you're in a hole and cannot find the end that might be odd. Still if we know upfront what @cur and what @end are, then does some crazy trailing hole/end/eof need to cause an error?
>>
>> Because of file size. Imagine a file with the following structure
>>
>> |- data 1MB -||- hole 1MB -||- EOF -|
>>
>> The file is clearly 2MBs in size, right? Now, my function would report 2 holes: the first one of 1MB size and the second implicit one at EOF. If we were to merge those two holes into one, the file would shrink in size and basically become a different file. The problem is we don't transfer any kind of metadata throughout the copying process. The other side doesn't know how big is the file we're copying; what is its data/hole structure, nothing.
>>
>
> Similar to the starting point or @cur feedback... Can EOF "change" whilst you're copying? If you know at the start you have a 2MB file to xfr, then you xfr 1MB of data, but then don't find another data section then you know there's a 1MB hole at the end. Having an error here causes me to wonder why and what if...
>
> Perhaps the wording of the comment has thrown me off a bit. AIUI, the only way to get this far is if ENXIO has been provided. So what would cause a return of -1 on a SEEK_END at this point in time? Is it:
>
> 3a: start ... data ... hole ... cur ... eof
> 3b: start ... data ... hole ... eof ... cur
>
> How would @cur > @end?

Well, as I've said above, it's not always a 0->EOF copy of a file. Users can specify "I want a copy of this volume from offset O, X bytes long". At the first call of virFileInData, the @fd is positioned at offset O. What if users chose O beyond EOF?

> If we keep track of @cur and @end when we start, then one would hope that only by our actions do they change. Trying to account for a case where something else causes change to the @cur whilst we're reading/copy data perhaps would imply that we cannot control what we copy.

Sure, we cannot. It's a file. Nothing stops you from overwriting it behind libvirt's back. At that point, any tracking (copy of metadata) we have is stale.

>
>
>>>
>>>> +        }
>>>> +        *length = end - cur;
>>>> +    } else if (data > cur) {
>>>> +        /* case 2 */
>>>> +        *inData = 0;
>>>> +        *length = data - cur;
>>>> +    } else {
>>>> +        /* case 1 */
>>>> +        *inData = 1;
>>>> +
>>>> +        /* We don't know where does the next hole start. Let's
>>>> +         * find out. Here we get the same 4 possibilities as
>>>> +         * described above.*/
>>>> +        hole = lseek(fd, data, SEEK_HOLE);
>>>> +        if (hole == (off_t) -1 || hole == data) {
>>>
>>> Now I'm wondering about the implicit hole @ EOF described above? and the error checks from that that aren't done here.
>>
>> What do you mean? What error checks? This code can be simpler because when entering this branch we know that we are in a data section. And we just want to know its length. So we call SEEK_HOLE.
>>
>
> Well cases 3&4 had an error scenario that was called out - checks vs. ENXIO which aren't done here.
> My question was more towards the subsequent comment [1] and checks done above for them related to errno value and seeking to end.
>
>>>
>>> Still does this matter in the "case" you want to use this for? That is if there's no HOLE from @data to @end, then for the purpose of stream copying you'd just be copying everything from data -> end (e.g. @length = (end - data)? IOW, does it really matter?
>>
>> Well, that's the point. The phrasing of the question is very important. How can we know there's no hole from @cur to EOF if we don't go through this branch? And after we've gone through it (and after 33/38) we will no longer call this function at all.
>>
>>>
>>>> +            /* cases 1, 3 and 4 */
>
> ^^^^ [1]
>
>>>> +            /* Wait a second. The reason why we are here is
>>>> +             * because we are in data. But at the same time we
>>>> +             * are in a trailing hole? Wut!? Do the best what we
>>>> +             * can do here. */
>>>> +            virReportSystemError(errno, "%s",
>>>> +                                 _("unable to seek to hole"));
>>>
>>> Would the "best" we can do is just copy everything from this point to the @end regardless of whether that's copying a hole? Again it's just another error that perhaps we could avoid.
>>
>> Not sure. If the kernel is giving me spurious data how can I believe any subsequent read()? Probably the other condition hole == data is never going to happen, but I thought better be sure than sorry.
>>
>
> So if we don't find a hole while we're in data, then something is seriously wrong, I see - gotcha. Especially since the definition of EOF is an implicit hole. Would "hole == data" even matter/occur?
>

Perhaps not. But I wanted to be better safe than sorry.

> For cases 2, 3, 4 => @*length is the size of the hole?
> For case 1 => @*length is the size of the data

Yes.

>
>>>
>>> I'll be OOO on Mon/Tue - just to "set" expectations of any chance at me reading a response!
>>>
>>
>> Fair enough. This is not a trivial feature. It touches parts of libvirt that perhaps we all are scared to touch. So I expect some misconceptions need to be cleared.
>>
>> Michal
>>
>
> Still it's also easy enough to "not" use it by not requesting sparse stream too, so while it does touch some common areas we can mitigate risk to existing algorithms.
>
> I'm all for this - just don't want to see us have an oh snap moment too. I don't want to derail the process, but also be sure we don't have to revisit it because of something forgotten. Maybe something here triggers an oh snap moment.
>
> Perhaps some more things to think about or what jiggled around the back of my head the last couple of days.
>
> ... The primary consumer I assume would be 10G+ xfrs of sparse files. After all 1G xfrs are fairly "snappy" in todays technology...
>
> ... Is there a desire to eventually know how far along in the copy you might be?

Not now. There might be one in the future. But users can already assemble this information. I mean, they can fetch volume info and learn how big the volume is. Then when they are doing the copy they know how much data has been transferred in the stream (the sum of all virStreamRecv() values).

> For a 100G file I might like to know whether it's still copying or somehow stuck. For a 100G file that has a break in communication it sure would be nice to "restart" from the last 100M rather than restart from point 0.

You can do that with the current APIs.

>
> ... Is there a desire to know stats such as # of data segments, # of sparse segments?
> Or even more details regarding each segment? Not that we want to necessarily become that kind of provider, but we have the data and could pass it back-n-forth. I just copied 10M of data, I just skipped 100M of sparse space (details) or out of 10G there was 1G of data and 9G of sparse space.

I don't think we will ever need to expose that kind of information. But never say never. But that has nothing to do with this feature anyway.

>
> Would it be better to have an API that has an extendable data structure that allows for future flexibility? Or do we just create a new API to handle that that is a superset of those being added via this series.

The latter.

Michal

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list