Re: upstream/firefly exporting the same snap 2 times results in different exports

On 07/21/2015 12:22 PM, Stefan Priebe wrote:

On 07/21/2015 7:19 PM, Jason Dillaman wrote:
Does this still occur if you export the images to the console (i.e.
"rbd export cephstor/disk-116@snap - > dump_file")?

Would it be possible for you to provide logs from the two "rbd export"
runs on your smallest VM image?  If so, please add the following to
the "[client]" section of your ceph.conf:

   log file = /valid/path/to/logs/$name.$pid.log
   debug rbd = 20
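(In context, the resulting section would look something like the following; the log path is a placeholder for any directory the client can write to, and $name/$pid are expanded by Ceph per process:)

   [client]
       log file = /valid/path/to/logs/$name.$pid.log
       debug rbd = 20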

I opened a ticket [1] where you can attach the logs (if they aren't
too large).

[1] http://tracker.ceph.com/issues/12422

I will post some more details to the tracker in a few hours. It seems to be
related to using discard inside the guest, but not to discard on the
filesystem the OSD is on.
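For context, "discard inside the guest" usually means something like running fstrim on a mounted filesystem (or mounting with -o discard); a minimal example in a Linux guest:

   fstrim -v /    # issue discards for all unused blocks on the root filesystem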

That sounds very odd. Could you verify via 'rados listwatchers' on an
in-use rbd image's header object that there's still a watch established?
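A sketch of that check, assuming a format 2 image (its header object is rbd_header.<image id>, where the id can be read off the block_name_prefix reported by rbd info; a format 1 header object is <image name>.rbd instead):

   rbd info cephstor/disk-116 | grep block_name_prefix   # e.g. rbd_data.<id>
   rados -p cephstor listwatchers rbd_header.<id>        # should list a watcher while the image is in use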

Have you increased the number of PGs in all of those clusters recently?

Josh