On 2/28/24 19:03, Richard Fontana wrote:
On Tue, Feb 27, 2024 at 5:58 PM Tim Flink <tflink@xxxxxxxxxxxxxxxxx> wrote:
On 2/26/24 19:06, Richard Fontana wrote:
<snip>
4. Is it acceptable to package code which downloads pre-trained weights from a non-Fedora source upon first use post-installation by a user if that model and its associated weights are
a. For a specific model?
What do you mean by "upon first use post-installation"? Does that mean
I install the package, and the first time I launch it or whatever, it
automatically downloads some set of pre-trained weights, or is this
something that would be controlled by the user? The example you gave
suggests the latter but I wasn't sure if I was misunderstanding.
Once the package is installed, pre-trained weights would be downloaded if and only if code written to use a specific model with pre-trained weights is run. In the cases I'm aware of, the code that would trigger the download is not directly part of the packaged libraries; anything that could trigger the downloading of pre-trained weights would have to be written by a user or contained in a separate package. If a specific model with pre-trained weights is never used and never executed by another library/application, the weights will not be downloaded. With the ViT example, the vitb16 weights would be downloaded when that code (not included in the package) is run, but the vitb32 weights would not be downloaded unless the example was changed or something else specified a pre-trained ViT model with the vitb32 weights. Similarly, the weights for other models (googlenet, as an example) would not be downloaded unless code that uses that specific model in its pre-trained form is executed post-installation.
The implementations that I'm familiar with check for already-downloaded weights when the model is initialized. When done this way, the download is transparent to the user, and unless code using these models/weights is written in such a way that it gives the user a choice, there is not much a user could do to change the download URL or prevent the weights from being downloaded. The only ways I can think of offhand would be to modify the underlying libraries to override the hard-coded URLs, or maybe to put identically named files in the cache location, but that would end up being dependent on the model implementation. For the specific libraries I used as examples, I don't know what the local download folder is off the top of my head, nor do I know whether they do any verification of downloads, so putting files into the cache location may not work if they don't match the intended file contents.
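To make that concrete, here's a rough sketch of the kind of check-cache-then-download logic I'm describing. The URL, file name, and cache path here are made up; they stand in for the hard-coded values that real libraries embed:
```
import os
import urllib.request

# Hypothetical hard-coded values, standing in for what a real library embeds.
WEIGHTS_URL = "https://example.org/models/some_model-abc123.pth"
CACHE_DIR = os.path.expanduser("~/.cache/example-hub/checkpoints")

def weights_path():
    """Return the local path to the weights, downloading them on first use."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local_path = os.path.join(CACHE_DIR, os.path.basename(WEIGHTS_URL))
    if not os.path.exists(local_path):
        # Transparent to the caller: first initialization triggers the download.
        urllib.request.urlretrieve(WEIGHTS_URL, local_path)
    return local_path
```
For PyTorch specifically, I believe the cache location can be redirected with the TORCH_HOME environment variable or torch.hub.set_dir(), but that only moves the cache; it doesn't change the hard-coded URLs.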
This is just my opinion, but I doubt that many people writing code that uses pre-trained models are going to go out of their way to help users avoid downloading pre-trained weights. I know that code I've written using pre-trained models might be able to execute without the pre-trained weights, but the output would just be noise in that situation. I would have a hard time justifying the work needed to make those downloads optional, since it would make the code useless for what it was intended to do.
It may also be worth noting that some models with pre-trained weights are almost useless without those weights. For some (mostly older) models, it's feasible to train a model from scratch, but for many of the recent models it simply isn't. As an example, the weights for Meta's Llama 2 took 3.3 million hours of GPU time to train [1], at a cost well into the millions of USD, ignoring what it would take to obtain enough data to train a model that large.
Apologies for my verbosity but I hope that I answered your question and the extra bits weren't entirely useless.
Tim
Richard
b. For a user-defined model which may or may not exist at the time of packaging?
I can provide examples of any of these situations if that would be helpful.
Can you elaborate on 4a/4b with examples?
There are two simple examples, one for each of the cases I mentioned (4a and 4b), at the bottom of this email.
Tim
-----------------------------------------------------------------
4a - code that downloads pre-trained weights for a specific model
-----------------------------------------------------------------
torchvision [1] is a PyTorch-adjacent library which contains "Datasets, Transforms and Models specific to Computer Vision". torchvision contains code implementing several pre-defined model structures which can be used with or without pre-trained weights [2]. torchvision is distributed under a BSD 3-clause license [3] and is currently packaged in Fedora as python-torchvision, but all of the model-specific code is removed at package build time and is not distributed as part of the Fedora package.
As an example, to instantiate a vision transformer (ViT) base model variant with 16x16 input patch size and download pre-trained weights, the following Python code could be used:
```
import torchvision

# Requesting weights (the default is None) is what triggers the
# download of the pre-trained checkpoint.
vitb16 = torchvision.models.vit_b_16(weights=torchvision.models.ViT_B_16_Weights.DEFAULT)
```
The code describing the vit_b_16 model is included in torchvision, but the weights are downloaded from an external site when the model is first used. At the time of this writing, the weights are downloaded from https://download.pytorch.org/models/vit_b_16-c867db91.pth
In this case, and for all the other models in torchvision, the exact links to the pre-trained weights are hard-coded within the torchvision code.
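As an aside, if I remember correctly, torchvision exposes those hard-coded URLs through its weights enums, so you can see where a checkpoint would come from without downloading anything:
```
import torchvision

# Each weights enum member carries its download URL as metadata;
# printing it does not trigger a download.
weights = torchvision.models.ViT_B_16_Weights.DEFAULT
print(weights.url)
```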
It's worth noting that the weights for vit_b_16 are from Facebook's SWAG project [4], which is distributed under CC-BY-NC-4.0 [5] and would not be acceptable for use in a Fedora package. For the other models in torchvision, some of the pre-trained weights have an explicit license (like ViT), but many are not distributed under any explicit license (ResNet [6], as an example).
[1] https://github.com/pytorch/vision
[2] https://github.com/pytorch/vision/tree/main/torchvision/models
[3] https://github.com/pytorch/vision/blob/main/LICENSE
[4] https://github.com/facebookresearch/SWAG
[5] https://github.com/facebookresearch/SWAG/blob/main/LICENSE
[6] https://pytorch.org/hub/pytorch_vision_resnet/
----------------------------------------------------
4b - code that downloads a somewhat arbitrary model
----------------------------------------------------
One of the newer features of PyTorch (which is still considered to be in beta) is the ability to interface with "PyTorch Hub" [7] to use pre-defined and pre-trained models which have been uploaded by other users. At the time of this writing, PyTorch Hub appears to be moderated by the PyTorch team, but the underlying code supports loading semi-arbitrary models from user-defined locations at runtime.
As an example, this code loads a MiDaS v3 large model with pre-trained weights directly from Intel's GitHub repo [8]:
```
import torch

# Both the model definition and its pre-trained weights are fetched at
# runtime from the GitHub repository named in the first argument.
model_type = "DPT_Large"
midas = torch.hub.load("intel-isl/MiDaS", model_type)
```
Similar to the ViT example above, this model will download weights from a URL (https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large_384.pt at the time of this writing), but unlike the ViT example, the model definition and the location of the weights are determined by code contained in the GitHub repository specified by the user [9], which is itself downloaded at runtime to determine the exact links to any code and pre-trained weights. The MiDaS repository is distributed under an MIT license [10].
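For anyone who wants to poke at this, torch.hub also provides introspection helpers; as I understand it, they still fetch the repository (including its hubconf.py) in order to enumerate what it exposes:
```
import torch

# Downloads/caches the repo and lists the entrypoints defined in its hubconf.py.
print(torch.hub.list("intel-isl/MiDaS"))

# Prints the docstring for one specific entrypoint.
print(torch.hub.help("intel-isl/MiDaS", "DPT_Large"))
```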
[7] https://pytorch.org/hub/
[8] https://github.com/isl-org/MiDaS
[9] https://github.com/isl-org/MiDaS/blob/master/hubconf.py#L218
[10] https://github.com/isl-org/MiDaS/blob/master/LICENSE
--
_______________________________________________
legal mailing list -- legal@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to legal-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/legal@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue