I mapped /var/lib/postgresql, /var/lib/redis and /home/git/data to their respective PersistentVolumeClaims (PVCs), which I created in Kubernetes. After this was done, the Redis db permission issue went away. It seems that having all three of the above paths mapped to a single GlusterFS volume (which wasn't the right thing to do in the first place, but I wasn't aware of how things work in GlusterFS) was somehow messing things up.

Getting this answer back on the list in case anyone else is trying to share storage. Thanks for the docs pointer, Tanner.

-John
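For anyone who finds this thread later, here is a minimal sketch of the "one PVC per service" layout described above. The claim name, StorageClass name and size below are assumptions for illustration, not taken from the actual setup:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-pvc           # assumed name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gluster-dyn  # assumed StorageClass name
  resources:
    requests:
      storage: 5Gi               # illustrative size

Similar claims (e.g. redis-pvc and gitlab-data-pvc) would cover the other two paths, and each deployment then references its own claimName instead of the shared gluster-dyn-pvc.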
On Thu, Sep 7, 2017 at 6:50 PM, Tanner Bruce <tanner.bruce@xxxxxxxxxxxxxx> wrote:

You can set a security context on your pod to set the gid as needed: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
This should do what you need
Tanner
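To make Tanner's pointer concrete, here is a minimal sketch of a pod-level security context that sets a shared fsGroup. The pod name, image tag and gid value 2000 are assumptions for illustration; pick a gid that all three services can share:

apiVersion: v1
kind: Pod
metadata:
  name: redis-example            # hypothetical pod name, for illustration
spec:
  securityContext:
    fsGroup: 2000                # assumed shared gid; supported volumes become group-owned by it
  containers:
  - name: redis
    image: redis:3.2             # illustrative image tag
    volumeMounts:
    - name: gluster-vol1
      mountPath: /var/lib/redis
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-dyn-pvc

With fsGroup set, Kubernetes makes the mounted volume group-owned by that gid, so containers running as different uids can still read and write it as long as they share the group.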
From: gluster-users-bounces@gluster.org <gluster-users-bounces@gluster.org> on behalf of John Strunk <jstrunk@xxxxxxxxxx>
Sent: September 7, 2017 2:28:50 PM
To: Gaurav Chhabra
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Redis db permission issue while running GitLab in Kubernetes with Gluster

I don't think this is a Gluster problem... Each container is going to have its own notion of user ids, hence the mystery uid 1000 in the redis container. I suspect that if you exec into the gitlab container, it may be the one running as 1000 (guessing based on the file names). If you want to share volumes across containers, you're going to have to do something explicitly to make sure each of them (with their own uid/gid) can read/write the volume, for example by sharing the same gid across all containers.
I'm going to suggest not sharing the same volume across all 3 containers unless they need shared access to the data.
-John
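To check which uid/gid each container actually runs as, kubectl exec is a quick way; the pod names below are copied from the prompts further down in this thread, so adjust them for your cluster:

# Check the effective uid/gid inside each container
kubectl exec -it gitlab-2797053212-ph4j8 -- id
kubectl exec -it redis-2138096053-0mlx4 -- id

# Compare with the numeric ownership on the shared volume
kubectl exec -it redis-2138096053-0mlx4 -- ls -ln /var/lib/redis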
On Thu, Sep 7, 2017 at 12:13 PM, Gaurav Chhabra <varuag.chhabra@xxxxxxxxx> wrote:
Hello,
I am trying to set up GitLab, Redis and PostgreSQL containers in Kubernetes, using Gluster for persistence. The GlusterFS nodes are set up on machines (CentOS) external to the Kubernetes cluster (which runs on RancherOS hosts). The issue is that when GitLab tries to start up, the login page doesn't load. It's a fresh setup, not an existing installation that suddenly stopped working.
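For context, dynamic Gluster provisioning of the kind used here (see gluster-dyn-pvc below) is usually driven by a StorageClass along these lines; the name and the heketi endpoint are placeholders, not taken from this setup:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-dyn                            # assumed to match the PVC's storage class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"    # placeholder heketi endpoint
  restauthenabled: "false"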
Checking sidekiq.log inside the GitLab pod:

root@gitlab-2797053212-ph4j8:/var/log/gitlab/gitlab# tail -50 sidekiq.log
...
2017-09-07T11:53:03.099Z 547 TID-1fdf1k ERROR: Error fetching job: ERR Error running script (call to f_7b91ed9f4cba40689cea7172d1fd3e08b2efd8c9): @user_script:7: @user_script:7: -MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis/client.rb:121:in `call'
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/peek-redis-1.2.0/lib/peek/views/redis.rb:9:in `call'
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:2399:in `block in _eval'
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:58:in `block in synchronize'
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
2017-09-07T11:53:03.100Z 547 TID-1fdf1k ERROR: /home/git/gitlab/vendor/bundle/ruby/2.3.0/gems/redis-3.3.3/lib/redis.rb:58:in `synchronize'
...

So I checked the Redis container logs:
[root@node-a ~]# docker logs -f 67d44f585705
...
[1] 07 Sep 14:43:48.140 # Background saving error
[1] 07 Sep 14:43:54.048 * 1 changes in 900 seconds. Saving...
[1] 07 Sep 14:43:54.048 * Background saving started by pid 2437
[2437] 07 Sep 14:43:54.053 # Failed opening .rdb for saving: Permission denied
...
I checked online for this issue and then noticed the following permissions and ownership details inside the Redis pod:
[root@node-a ~]# docker exec -it 67d44f585705 bash
groups: cannot find name for group ID 2000
root@redis-2138096053-0mlx4:/# ls -ld /var/lib/redis/
drwxr-sr-x 12 1000 1000 8192 Sep 7 11:51 /var/lib/redis/
root@redis-2138096053-0mlx4:/# ls -l /var/lib/redis/
total 22
drwxr-sr-x 2 1000  1000      6 Sep 6 10:37 backups
drwxr-sr-x 2 1000  1000      6 Sep 6 10:37 builds
drwxr-sr-x 2 redis redis     6 Sep 6 10:14 data
-rw-r--r-- 1 redis redis 13050 Sep 7 11:51 dump.rdb
-rwxr-xr-x 1 redis redis    21 Sep 5 11:00 index.html
drwxrws--- 2 1000  1000      6 Sep 6 10:37 repositories
drwxr-sr-x 5 1000  1000     55 Sep 6 10:37 shared
drwxr-sr-x 2 root  root   8192 Sep 6 10:37 ssh
drwxr-sr-x 3 redis redis    70 Sep 7 10:20 tmp
drwx--S--- 2 1000  1000      6 Sep 6 10:37 uploads
root@redis-2138096053-0mlx4:/# grep 1000 /etc/passwd
root@redis-2138096053-0mlx4:/#
I ran the following, and everything looked fine afterwards:
root@redis-2138096053-0mlx4:/# chown redis:redis -R /var/lib/redis/
However, when I deleted and re-applied the GitLab deployment YAML, the permissions inside the Redis container got skewed again. I am not sure whether Gluster is messing up the Redis file/folder permissions, but I can't think of any other cause besides the mount.

One thing I would like to highlight is that all three containers use the same PVC:
- name: gluster-vol1
  persistentVolumeClaim:
    claimName: gluster-dyn-pvc
The above is common to all three. What differs is shown below:
a) postgresql-deployment.yaml

volumeMounts:
- name: gluster-vol1
  mountPath: /var/lib/postgresql

b) redisio-deployment.yaml

volumeMounts:
- name: gluster-vol1
  mountPath: /var/lib/redis

c) gitlab-deployment.yaml

volumeMounts:
- name: gluster-vol1
  mountPath: /home/git/data
Any suggestions? Also, I guess using the same PVC/StorageClass for all three containers is not the right approach, because I just noticed that all their contents end up in the same directory on the Gluster nodes.
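As an aside, if one really did want to keep a single shared PVC, mounting it with subPath would at least give each service its own directory inside the volume. A sketch for the Redis deployment; the subdirectory name is an assumption:

volumeMounts:
- name: gluster-vol1
  mountPath: /var/lib/redis
  subPath: redis           # assumed subdirectory inside the shared Gluster volume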
I know there are many things involved besides Gluster, so this may not be _the_ right forum, but my gut feeling is that Gluster might be the reason for the permission issue.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users