Hello again!

I understand that it's not recommended to run an OSD and rbd-nbd on the same host, and I actually moved my rbd-nbd to a completely clean host (same kernel and OS though), but with the same result. I hope someone can resolve this. You seem to indicate it is some kind of known error, but I didn't really understand the GitHub commit that you linked. If other logs or info are needed, I'm happy to provide them.

//Stefan
________________________________________
From: Ilya Dryomov [idryomov@xxxxxxxxx]
Sent: 25 April 2016 17:31
To: Stefan Lissmats; Mykola Golub
Cc: Mika c; ceph-users
Subject: Re: RBD image mounted by command "rbd-nbd" the status is read-only.

On Mon, Apr 25, 2016 at 1:53 PM, Stefan Lissmats <stefan@xxxxxxxxxx> wrote:
> Hello!
>
> Running a completely new test cluster with status HEALTH_OK I get the
> same error. I'm running Ubuntu 14.04 with kernel 3.16.0-70-generic and
> ceph 10.2.0 on all hosts. The rbd-nbd mapping was done on the same host
> running one OSD and a mon. (This is a small cluster with 4 virtual
> hosts and one OSD per host.)
>
> Steps after creating the cluster:
>
> Create an rbd image with standard options:
> # rbd create --size 50G nbd2
>
> Map the device (became device /dev/nbd2):
> # rbd-nbd map nbd2
>
> Create an ext4 filesystem:
> # mkfs.ext4 /dev/nbd2
>
> During creation of the filesystem there were a lot of errors in dmesg,
> but mkfs reported done. The errors were:
> block nbd2: Other side returned error (5)
>
> I was able to mount the ext4 filesystem, but that produced even more
> errors in dmesg.
>
> Here is a selection of dmesg that probably contains the interesting bits.
>
> [13864.102569] block nbd2: Other side returned error (5)
> [13951.186296] block nbd2: Other side returned error (5)
> [13951.186443] blk_update_request: 2157 callbacks suppressed
> [13951.186445] end_request: I/O error, dev nbd2, sector 0
> [13951.186598] quiet_error: 271152 callbacks suppressed
> [13951.186600] Buffer I/O error on device nbd2, logical block 0
> [13951.186780] lost page write due to I/O error on nbd2
> [13951.187816] EXT4-fs (nbd2): mounted filesystem with ordered data mode. Opts: (null)
> [13952.049103] block nbd2: Other side returned error (5)
> [13952.049323] end_request: I/O error, dev nbd2, sector 8464
> [13952.070722] block nbd2: Other side returned error (5)
> [13952.071009] end_request: I/O error, dev nbd2, sector 8720
> [13952.074069] block nbd2: Other side returned error (5)
> [13952.074392] end_request: I/O error, dev nbd2, sector 8976
> [13952.075283] block nbd2: Other side returned error (5)
> [13952.075635] end_request: I/O error, dev nbd2, sector 9232
> [13952.076249] block nbd2: Other side returned error (5)
> [13952.076636] end_request: I/O error, dev nbd2, sector 9488
> [13952.077108] block nbd2: Other side returned error (5)
> [13952.077606] end_request: I/O error, dev nbd2, sector 9744
> [13952.078064] block nbd2: Other side returned error (5)
> [13952.078537] end_request: I/O error, dev nbd2, sector 10000
> [13952.079038] block nbd2: Other side returned error (5)
> [13952.079583] end_request: I/O error, dev nbd2, sector 10256
> [13952.080301] block nbd2: Other side returned error (5)
> [13952.080869] end_request: I/O error, dev nbd2, sector 10512
> [13952.081474] block nbd2: Other side returned error (5)
> [13952.082088] block nbd2: Other side returned error (5)
> [13952.082701] block nbd2: Other side returned error (5)
> [13952.083316] block nbd2: Other side returned error (5)
> [13952.083943] block nbd2: Other side returned error (5)
> [13952.084654] block nbd2: Other side returned error (5)
> [13952.085301] block nbd2: Other side returned error (5)

Looks like this has come up before:

https://github.com/ceph/ceph/pull/7215/commits/3ff60a61bf68516983c0b6ea6791ce712c98a073

Do we set rval to the length of the request for aio writes?  I thought
we did this only for reads and that it's always <= 0 on writes.

Mykola, could you look into this?

I certainly wouldn't advise running rbd-nbd on OSD hosts.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
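The failure mode Ilya is describing can be illustrated with a minimal sketch (hypothetical helper name, not the actual rbd-nbd code): if the AIO completion's return value for a successful write is the byte count rather than 0, a reply handler that only expects `rval <= 0` from writes misreads success as failure and reports EIO back to the kernel, which then logs "Other side returned error (5)" as seen in the dmesg output above.

```python
import errno

def nbd_reply_error(rval, request_len, is_read):
    """Translate an AIO completion rval into an NBD reply error code.

    Hypothetical model of the suspected bug: the handler assumes writes
    complete with rval == 0 on success and -errno on failure. If the
    completion instead carries the byte count (rval > 0) for a
    successful write, it falls into the error branch.
    """
    if rval < 0:
        return -rval  # genuine failure: -errno -> errno
    if is_read:
        # reads legitimately complete with the number of bytes read
        return 0 if rval == request_len else errno.EIO
    # buggy branch: a successful write whose completion carries its
    # length (rval > 0) is misreported as an I/O error
    return 0 if rval == 0 else errno.EIO

# A successful 4096-byte write whose completion carries rval=4096
# is reported as EIO (5) -- the symptom in the dmesg output above.
print(nbd_reply_error(4096, 4096, is_read=False))  # 5
```

The fix in the linked commit follows the other convention: accept a non-negative rval (up to the request length) as success for writes as well, so only genuinely negative return values become NBD errors.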