"rbd create" hangs for specific pool

Hello,

I have been running a Ceph cluster with HDDs as OSDs, and I've now
created a new dedicated SSD pool within the same cluster. Everything
looks fine and the cluster is healthy, but if I try to create a new
RBD image in this new SSD pool, it just hangs. I've tried both the
"rbd" command and the Proxmox GUI: "rbd" simply hangs, and Proxmox
reports "rbd error: got lock timeout". Creating volumes in the old
pool works without problems. Is there any way to see what is going
wrong? I've grepped the Ceph logs but haven't found anything useful.
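
For reference, the call looks roughly like this (the image name and
size below are illustrative, not my exact ones); I also reran it with
client-side debug logging turned up, hoping to see where it stalls:

ceph01:/etc/ceph# rbd create ssdpool/testimg --size 1024
ceph01:/etc/ceph# rbd --debug-rbd=20 --debug-ms=1 create ssdpool/testimg --size 1024

The first command never returns against ssdpool, while the same
create in the old "rbd" pool completes immediately.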


This is Ceph 11.2; here is my crushmap: https://pastebin.com/YVUVCvqu
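
(For completeness, the crushmap in the paste was dumped the standard way:)

ceph01:/etc/ceph# ceph osd getcrushmap -o crush.bin
ceph01:/etc/ceph# crushtool -d crush.bin -o crush.txt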


ceph01:/etc/ceph# ceph -s
    cluster 4f23f683-21e6-49f3-ae2c-c95b150b9dc6
     health HEALTH_OK
     monmap e4: 3 mons at
{ceph02=10.1.8.32:6789/0,ceph03=10.1.8.33:6789/0,ceph04=10.1.8.34:6789/0}
            election epoch 38, quorum 0,1,2 ceph02,ceph03,ceph04
        mgr no daemons active
     osdmap e1100: 24 osds: 24 up, 24 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v3606818: 528 pgs, 2 pools, 1051 GB data, 271 kobjects
            3243 GB used, 116 TB / 119 TB avail
                 528 active+clean
  client io 28521 B/s rd, 1140 kB/s wr, 6 op/s rd, 334 op/s wr

ceph01:/etc/ceph# ceph osd lspools
0 rbd,1 ssdpool
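
In case it's useful for diagnosing, these should show which CRUSH
ruleset the new pool uses and how a test object in it would map to
OSDs (the object name is illustrative):

ceph01:/etc/ceph# ceph osd pool get ssdpool crush_ruleset
ceph01:/etc/ceph# ceph osd map ssdpool testobj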

Thanks,
Stan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


