Hi All,
Before going into the issue description, here is our hardware configuration:
- 3 physical machines: each has 2 quad-core CPUs, 64+ GB RAM, and 12 HDDs (500 GB ~ 1 TB per drive; 1 for the system, 11 for OSDs). The ceph OSDs run on the physical machines.
- Each physical machine runs 5 virtual machines: one VM is a ceph MON (3 MONs in total), and the other 4 VMs provide either iSCSI or FTP/NFS service.
- Physical machines and virtual machines run the same software configuration: Ubuntu 12.04 + kernel 3.6.11, ceph v0.61.7
The issues we encountered are:
I have strace logs from both a successful and a failed run on the same virtual machine (the one that provides FTP/NFS):
2. The second issue: we create two images (AAA and BBB) under one pool (xxx). If we map AAA with "rbd map -p xxx --image AAA", the command succeeds, but the device shows up as BBB under /dev/rbd/xxx/. However, "rbd showmapped" reports that image AAA of pool xxx is mapped. I am not sure which image is really mapped, because both images are empty. This issue is hard to reproduce, but once it happens the entries under /dev/rbd/ are messed up.
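Since both images are empty, one way to tell which image is really bound to the device is to ask the kernel client directly through sysfs, and then cross-check by writing a marker through the block device and reading it back via librbd. This is only a sketch; the device id 0, the pool name xxx, and the image names AAA/BBB are placeholders for whatever "rbd showmapped" reports on your machine:

```shell
# The kernel rbd client records which pool/image each device is bound to:
cat /sys/bus/rbd/devices/0/pool    # pool name behind /dev/rbd0
cat /sys/bus/rbd/devices/0/name    # image name behind /dev/rbd0

# Cross-check: write a marker through the mapped block device...
echo "marker-$(date +%s)" | dd of=/dev/rbd0 bs=512 count=1 conv=sync

# ...then read each image back through librbd (no kernel client involved);
# only one of the two should show the marker:
rbd -p xxx export AAA - | head -c 32
rbd -p xxx export BBB - | head -c 32
```

Whichever image returns the marker is the one the kernel actually mapped, regardless of what the /dev/rbd/xxx/ symlink claims.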
One more question, not about the rbd map issues: our usage is to map one rbd device and mount it in several places (within one virtual machine) for iSCSI, FTP, and NFS. Does that cause any problem for ceph operation?
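For what it's worth, within a single VM the usual safe pattern is to map and mount the device once, then expose the same mount point to each service via bind mounts, rather than mounting the block device multiple times. A minimal sketch, assuming the pool/image names from above and hypothetical service directories:

```shell
# Map the image once; the device node path follows pool/image naming:
rbd map xxx/AAA                      # creates /dev/rbd/xxx/AAA
mkfs.ext4 /dev/rbd/xxx/AAA           # one filesystem, one mount
mount /dev/rbd/xxx/AAA /mnt/rbd

# Expose the single mount to each service via bind mounts
# (/srv/ftp/share and /srv/nfs/share are placeholder paths):
mount --bind /mnt/rbd /srv/ftp/share
mount --bind /mnt/rbd /srv/nfs/share
```

The case to avoid is mapping the same image read/write from multiple clients at once with a plain local filesystem such as ext4 on it; that corrupts the filesystem unless a cluster-aware filesystem is used.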
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com