Read-only CephFS on a k8s cluster

Hi,

 

We are trying to troubleshoot an issue with CephFS on a Kubernetes (k8s) cluster.

 

The filesystem is provisioned via Rook 0.9.2 and shows the following behavior:

  • If the CephFS is mounted on the k8s master, it is writable.
  • If the CephFS is mounted as a PV in a pod, we can create an empty (0-sized) file, but any larger write fails.
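A way to confirm the split described above from inside the pod (a hedged sketch; the pod name and mount path are placeholders, not from the original report): creating an empty file only talks to the MDS (metadata), while writing actual data goes directly to the OSDs, so the two commands below exercise different paths.

```sh
# Metadata-only operation: should succeed if the pod can reach the MDS
kubectl exec -it <pod-name> -- touch /mnt/cephfs/empty-file

# Data write: needs direct OSD connectivity from the pod's node
kubectl exec -it <pod-name> -- dd if=/dev/zero of=/mnt/cephfs/testfile bs=4k count=1
```

If the first succeeds and the second hangs or errors, that points at the data path (client-to-OSD connectivity or OSD caps) rather than the mount itself.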

ceph -s reports the following:

 

# ceph -s

  cluster:

    id:     18f8d40e-1995-4de4-96dc-e905b097e643

    health: HEALTH_OK

  services:

    mon: 3 daemons, quorum a,b,d

    mgr: a(active)

    mds: workspace-storage-fs-1/1/1 up  {0=workspace-storage-fs-a=up:active}, 1 up:standby-replay

    osd: 3 osds: 3 up, 3 in

  data:

    pools:   3 pools, 300 pgs

    objects: 212  objects, 181 MiB

    usage:   51 GiB used, 244 GiB / 295 GiB avail

    pgs:     300 active+clean

  io:

    client:   853 B/s rd, 2.7 KiB/s wr, 1 op/s rd, 0 op/s wr

 

 

What can be done for further diagnostics?
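Some diagnostic steps that might narrow this down (a sketch, not a definitive procedure; the client and MDS names below are assumptions based on the ceph -s output above). One common cause of "empty file creation works, data writes fail" is a CephX key whose OSD caps do not cover the CephFS data pool, so it is worth checking the caps of the key the pod mounts with, alongside the kernel client logs on the node.

```sh
# On the node running the pod: kernel CephFS client errors
dmesg | grep -i ceph

# Cluster-side: overall health and any blacklisted clients
ceph health detail
ceph osd blacklist ls

# Inspect the caps of the key used for the pod mount
# (client name is a placeholder; list keys with `ceph auth list`)
ceph auth get client.<name>

# On the active MDS host: list client sessions to see if the pod's client is connected
ceph daemon mds.workspace-storage-fs-a session ls
```

It may also help to verify that the pod's node can reach the OSD ports (6800-7300 by default); the OSD addresses are listed in `ceph osd dump`.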

 

With regards,

Ignat Zapolsky

 

Sent from Mail for Windows 10

 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
