On Mon, Jul 21, 2014 at 1:58 PM, Wido den Hollander <wido at 42on.com> wrote:
> On 07/21/2014 11:32 AM, ??? wrote:
>>
>> Hi, all:
>> I'm doing tests with firefly 0.80.4. I want to test the performance
>> with tools such as FIO and iozone. When I decided to test rbd storage
>> performance with fio, I ran the following commands on a client node:
>>
>
> Which kernel on the client? Can you try the trusty 3.13 kernel on the 12.04
> client?
>
> Wido
>
>> rbd create img1 --size 1024 --pool data (this command went on well)
>>
>> rbd map img1 --pool data -id admin --keyring
>> /etc/ceph/ceph.client.admin.keyring
>>
>> Then something unexpected happened: the client node crashed, and the
>> screen showed messages including:
>> [<ffffffff81077ae0>] flush_kthread_worker
>> and many other messages like that.
>> I had to stop the node and restart it to make it work again.
>>
>> A similar thing happened when I tried to mount CephFS with the kernel
>> driver. I ran the following commands:
>> mkdir /mnt/test
>> mount -t ceph 192.168.50.191:/ /mnt/test -o
>> name=admin,secret=AQATSKdNGBcwLhAAnNDKnH65FmVKpXZJVasUeQ==
>> The node also crashed and couldn't work any more, so I had to restart
>> it.
>> I'm puzzled about this problem. I wonder whether it lies in my Linux
>> kernel or some other issue. Thanks for any help!
>>
>> My cluster is made up of one monitor, six OSDs and a client.
>> OS: Ubuntu 12.04 LTS
>> ceph version: firefly 0.80.4

It looks like the 12.04 LTS kernel can be as old as 3.2. This is most
probably a known bug in kernels older than 3.8 (I think), which manifests
as a crash on 'rbd map' or a cephfs mount if the kernel misses required
feature bits, which it of course does in this case, because you are
running the latest firefly point release against a kernel that is a couple
of years old. I'll see if the fix can be cleanly backported.

A word of advice: when you think the problem may lie in your kernel (and
even when you don't), specify the kernel version you are running, not just
the "os".
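For example, a quick check on the client before trying 'rbd map' (the HWE
package name below assumes an Ubuntu 12.04 client and is from memory, so
double-check it against the Ubuntu docs):

```shell
# Print the running kernel release.  A stock 12.04 install reports a
# 3.2.x kernel, which is older than the 3.8 cutoff mentioned above and
# will hit this crash against firefly's feature bits.
uname -r

# (Assumption: Ubuntu 12.04 with the trusty hardware-enablement stack
# available.)  Pulling in the 3.13 trusty HWE kernel would look like:
#   sudo apt-get install linux-generic-lts-trusty
#   sudo reboot
```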
Thanks, Ilya