1. Yes, use_lvmetad is 0, and the systemd units for it are stopped/disabled.
2. Yes, everything on the host machine (i.e. /proc, /sys, etc.) is mounted into the pod:
ubuntu@ip-172-31-89-47:~$ kubectl exec -it openebs-lvm-node-v6jrb -c openebs-lvm-plugin -n kube-system -- sh
# ls
bin boot dev etc home host lib lib32 lib64 libx32 media mnt opt plugin proc root run sbin srv sys tmp usr var
# cd /host
# ls
bin boot dev etc home lib lib32 lib64 libx32 lost+found media mnt opt proc root run sbin snap srv sys tmp usr var
#
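As a rough sanity check that the chroot really sees the host's pseudo-filesystems, the mounts can be listed from inside the plugin container. A minimal sketch, assuming as above that the host root is mounted at /host:

# Each of these should show up as a mount under /host if the host's
# /proc, /sys, /dev and /run are visible to the chroot'ed lvm commands.
for d in proc sys dev run; do
  if grep -q " /host/$d " /proc/mounts; then
    echo "/host/$d: mounted"
  else
    echo "/host/$d: not mounted"
  fi
done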
On Fri, 3 Jun 2022 at 12:48, Roger Heflin <rogerheflin@xxxxxxxxx> wrote:
Random thoughts:

Make sure use_lvmetad is 0, and the systemd units for it are stopped/disabled.

Are you mounting /proc and /sys and /dev into the /host chroot? /run may also be needed.

You might add "-ttt" to the strace command to give timing data.

_______________________________________________

On Thu, Jun 2, 2022 at 1:41 AM Abhishek Agarwal <mragarwal.developer@xxxxxxxxx> wrote:

These are not different LVM processes. The container process uses the LVM binary that the node itself has. We have achieved this with wrapper scripts that point to the same lvm binary the node uses. The configmap (~shell script) used for this has the following contents, where `/host` refers to the root directory of the node:

get_bin_path: |
#!/bin/sh
bin_name=$1
if [ -x /host/bin/which ]; then
  echo $(chroot /host /bin/which $bin_name | cut -d ' ' -f 1)
elif [ -x /host/usr/bin/which ]; then
  echo $(chroot /host /usr/bin/which $bin_name | cut -d ' ' -f 1)
else
  echo $(chroot /host which $bin_name | cut -d ' ' -f 1)
fi

lvcreate: |
#!/bin/sh
path=$(/sbin/lvm-eg/get_bin_path "lvcreate")
chroot /host $path "$@"

Also, the logs in the pastebin link above show errors because the vg lock has not been acquired, so the creation commands fail. Once the lock is acquired, the `strace -f` command produces the following output and then gets stuck. Check out this link for full details -> https://pastebin.com/raw/DwQfdmr8

P.S.: We at OpenEBS are trying to provide LVM storage to cloud-native workloads with the help of Kubernetes CSI drivers. Since all these drivers run as pods and handle dynamic provisioning of Kubernetes volumes (storage) for applications, the LVM commands need to be run from inside the pod. Reference -> https://github.com/openebs/lvm-localpv

Regards

_______________________________________________

On Wed, 1 Jun 2022 at 13:06, Demi Marie Obenour <demi@xxxxxxxxxxxxxxxxxxxxxx> wrote:

On Wed, Jun 01, 2022 at 12:20:32AM +0530, Abhishek Agarwal wrote:
> Hi Roger. Thanks for your reply. I have rerun the command with `strace -f`
> as you suggested. Here is the pastebin link containing the detailed output
> of the command: https://pastebin.com/raw/VRuBbHBc
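A timestamped variant along the lines of the "-ttt" suggestion above makes the hang point easier to spot; the VG/LV names below are only placeholders:

# -f follows forked children, -ttt prints absolute microsecond
# timestamps, -T appends the time spent in each syscall:
strace -f -ttt -T -o /tmp/lvcreate.trace lvcreate -L 1G -n testlv testvg

The last lines of /tmp/lvcreate.trace then show exactly which call is blocking.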
Even if you can get LVM “working”, it is still likely to cause data
corruption at some point, as there is no guarantee that different LVM
processes in different namespaces will see each other's locks.
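One way to see the hazard concretely: with LVM's default file-based locking, the lock files live under /run/lock/lvm (lvm.conf: global/locking_dir), so the host and the chroot only serialize against each other if that path resolves to the same directory on both sides. A rough check, assuming the host root is mounted at /host as in this thread:

# On the host:
stat -c '%d:%i' /run/lock/lvm

# Inside the pod; the same device:inode pair means both sides
# contend on the same lock files, different pairs mean they do not:
stat -c '%d:%i' /host/run/lock/lvm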
Why do you need to run LVM in a container? What are you trying to
accomplish?
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/