Re: Looking for help with packaging RamaLama for Fedora.


 



On 9/30/24 12:33, Adam Williamson wrote:
On Mon, 2024-09-30 at 09:24 -0700, Adam Williamson wrote:
On Mon, 2024-09-30 at 11:18 -0400, Daniel Walsh wrote:
RamaLama is an open source competitor to Ollama. The goal is to make the
use of AI Models as simple as Podman or Docker, but able to support any
AI Model registry: HuggingFace, Ollama, as well as OCI registries
(quay.io, Docker Hub, Artifactory ...).

It uses either Podman or Docker under the hood to run your AI Models in
containers, but it can also run the models natively on the host.

We are looking for contributors in any form, but we could really use
some help getting it packaged for Fedora, PyPI, and Homebrew for macOS.

We have set up a Discord room for discussions on RamaLama:
https://t.co/wdJ2KWJ9de

The code is all written in Python.

Join the initiative to make running Open Source AI Models simple and boring.
Having a quick look at it...I assume for packaging purposes we should
avoid that yoiks-inducing `install.py` like the plague? Is the setup.py
file sufficient to install it properly in a normal way? On the face of
it, it doesn't look like it would be, but maybe I'm missing something.
Given that we're in the 2020s, why doesn't it have a pyproject.toml ?

Thanks!
Erf...and then ramalama.py goes to the trouble of adding the non-
standard directory the Python lib was installed in to the path before
importing it:

https://github.com/containers/ramalama/blob/main/ramalama.py#L10-L15
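
For reference, what those lines do amounts to roughly this pattern (a
simplified sketch; the directory shown is an assumption for
illustration, not necessarily the one the project actually uses):

    # Simplified sketch of the pattern in question: push a hard-coded
    # install directory to the front of sys.path before importing the
    # library.  The path below is assumed, for illustration only.
    import sys

    sys.path.insert(0, "/usr/share/ramalama")  # assumed install location

    import ramalama  # now resolved from the directory inserted above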

why all this? Why not just have it set up as a perfectly normal Python
lib+script project and let all the infrastructure Python-world has been
building for decades handle installing it on various platforms? Is
there something I'm missing here, or should I send a PR?
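
To make that concrete, a perfectly ordinary setuptools-based setup.py
for a layout like this would look roughly as follows (a sketch only; the
package layout, version, and entry-point module are assumptions, not
taken from the repo):

    # Minimal conventional packaging sketch.  The names and version here
    # are assumptions for illustration, not the real project's metadata.
    from setuptools import setup, find_packages

    setup(
        name="ramalama",
        version="0.0.1",  # placeholder version
        packages=find_packages(),
        entry_points={
            "console_scripts": [
                # installs a `ramalama` command pointing at an assumed
                # ramalama.cli:main entry point
                "ramalama = ramalama.cli:main",
            ],
        },
    )

With something like that in place, pip and the usual downstream
packaging tooling can install it the normal way, without a custom
install.py.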

Is it because this was written for macOS originally? But surely there's
a standard way to install a normal Python project on macOS that doesn't
require a custom install script?!

We are trying to run ramalama inside of a container, which means that we are volume mounting the directory from the host into the container along with the ramalama executable.

We want to make sure that the ramalama executable finds the Python libraries, which is why we are sticking the library in first place on the path. (It is a little hacky, and we could probably work around it using environment variables rather than inserting it.)
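
The environment-variable version of that workaround could look something
like the following (RAMALAMA_LIBDIR is a made-up name for illustration):

    # Sketch of the environment-variable alternative mentioned above:
    # only extend sys.path when the caller (for example the container
    # entrypoint) asks for it, instead of always hard-coding the path.
    # RAMALAMA_LIBDIR is a hypothetical variable name.
    import os
    import sys

    libdir = os.environ.get("RAMALAMA_LIBDIR")
    if libdir:
        sys.path.insert(0, libdir)

    import ramalama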

Bottom line: we want the executable from the host running inside of the container, to try to avoid drift between the container images and the executable.

--
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue



