Re: Read-only CephFs on a k8s cluster

Hi,

 

Thanks for the response.

 

Is there a way to see auth failures in some logs?

 

I am looking at the "ceph auth ls" output and nothing jumps out at me as suspicious.
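
In case it is useful, a more targeted way I'm looking at the same thing (the client name below is only an example -- rook generates its own name, so substitute whatever "ceph auth ls" actually shows):

  # show just the suspect key instead of the full listing
  ceph auth get client.workspace-storage

  # a key that can write to CephFS normally carries caps along these lines:
  #   caps mds = "allow rw"
  #   caps mon = "allow r"
  #   caps osd = "allow rw pool=<cephfs data pool>"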

 

I've tried to write a file not from a container but from the actual node that forwards the directory to the container, and it does indeed give a permission-denied error.
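
Roughly what that test looked like on the node (the mount path below is a placeholder; the real one is wherever rook mounted the volume on that host):

  touch /mnt/cephfs-test/empty-file                               # metadata-only operation, handled by the MDS
  dd if=/dev/zero of=/mnt/cephfs-test/write-test bs=4K count=1    # actual data write to the OSDs -- this is what returns "Permission denied"
  dmesg | tail -n 20                                              # check for ceph/libceph messages from the kernel client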

 

I think we are running Ceph 13.2.4-20190109, and the kernel version is

4.4.0-140-generic #166-Ubuntu SMP Wed Nov 14 20:09:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

 

 

Sent from Mail for Windows 10

 

From: Gregory Farnum
Sent: Tuesday, May 7, 2019 7:17 PM
To: Ignat Zapolsky
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Read-only CephFs on a k8s cluster

 

On Tue, May 7, 2019 at 6:54 AM Ignat Zapolsky <ignat.zapolsky@xxxxxxxxxx> wrote:
>
> Hi,
>
> We are looking at how to troubleshoot an issue with CephFS on a k8s cluster.
>
> This filesystem is provisioned via rook 0.9.2 and has the following behavior:
>
> If CephFS is mounted on the k8s master, it is writable.
> If CephFS is mounted as a PV in a pod, then we can write a 0-sized file to it (or create an empty file), but bigger writes do not work.

 

This generally means your clients have CephX permission to access the
MDS but not the RADOS pools. Check what auth caps you've given the
relevant keys ("ceph auth list"). Presumably your master node has an
admin key and the clients have a different one that's not quite right.
-Greg
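
(For concreteness, a sketch of one way to mint a key with a matching set of caps on Mimic -- the client name below is made up; the filesystem name is the one shown in the ceph -s output further down:)

  # grants the client rw on the filesystem via the MDS plus rw on its data objects in RADOS
  ceph fs authorize workspace-storage-fs client.k8s-workspace / rw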

 

>
> Following is reported by ceph -s:
>
> # ceph -s
>   cluster:
>     id:     18f8d40e-1995-4de4-96dc-e905b097e643
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum a,b,d
>     mgr: a(active)
>     mds: workspace-storage-fs-1/1/1 up  {0=workspace-storage-fs-a=up:active}, 1 up:standby-replay
>     osd: 3 osds: 3 up, 3 in
>
>   data:
>     pools:   3 pools, 300 pgs
>     objects: 212  objects, 181 MiB
>     usage:   51 GiB used, 244 GiB / 295 GiB avail
>     pgs:     300 active+clean
>
>   io:
>     client:   853 B/s rd, 2.7 KiB/s wr, 1 op/s rd, 0 op/s wr
>
> I wonder what can be done for further diagnostics?

>
> With regards,
> Ignat Zapolsky
>
> Sent from Mail for Windows 10


 


This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
