Re: upstream/firefly exporting the same snap 2 times results in different exports

Does this still occur if you export the images to the console (i.e. "rbd export cephstor/disk-116@snap - > dump_file")?  
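To confirm whether the two exports really diverge, it can help to compare the dumps byte-for-byte rather than eyeballing them. A minimal sketch (the pool/image name `cephstor/disk-116@snap` is taken from this thread; the `rbd` lines are shown as comments so the comparison itself runs anywhere):

```shell
# Export the same snapshot twice (adjust pool/image/snap to your setup):
# rbd export cephstor/disk-116@snap dump1
# rbd export cephstor/disk-116@snap dump2

# Simulated dumps stand in here so the comparison step is self-contained:
head -c 1048576 /dev/urandom > dump1
cp dump1 dump2

# cmp -s exits 0 only if the files are byte-identical.
if cmp -s dump1 dump2; then
    echo "exports identical"
else
    echo "exports differ"
    sha256sum dump1 dump2   # show which dump changed
fi
```

If the checksums differ across runs of an unmodified snapshot, that points at the export path rather than concurrent writes.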

Would it be possible for you to provide logs from the two "rbd export" runs on your smallest VM image?  If so, please add the following to the "[client]" section of your ceph.conf:

  log file = /valid/path/to/logs/$name.$pid.log
  debug rbd = 20

I opened a ticket [1] where you can attach the logs (if they aren't too large).

[1] http://tracker.ceph.com/issues/12422

-- 

Jason Dillaman 
Red Hat 
dillaman@xxxxxxxxxx 
http://www.redhat.com 


----- Original Message -----
> From: "Stefan Priebe" <s.priebe@xxxxxxxxxxxx>
> To: "Jason Dillaman" <dillaman@xxxxxxxxxx>
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Sent: Tuesday, July 21, 2015 12:55:43 PM
> Subject: Re: upstream/firefly exporting the same snap 2 times results in different exports
> 
> 
> Am 21.07.2015 um 16:32 schrieb Jason Dillaman:
> > Any chance that the snapshot was just created prior to the first export and
> > you have a process actively writing to the image?
> >
> 
> Sadly not. I executed those commands manually in bash, exactly as I've
> posted them.
> 
> I can reproduce this on 5 different Ceph clusters with 500 VMs each.
> 
> Stefan
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 