Fixing NTFS index in snapshot for new and existing clones

Hello!

I would like some guidance on how to proceed with a problem inside a snapshot that is used to clone images. My sincere apologies if what I am asking isn't possible.

I have a snapshot which is used to create clones for guest virtual machines. It is a raw image with an NTFS OS contained within it.

My understanding is that when you clone the snap, all children become bound to the parent snap via layering.
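For what it's worth, this is how I've been checking that relationship (the pool, image, and snapshot names below are placeholders, not my real ones):

    $ rbd info mypool/clone-vm01 | grep parent
      # shows "parent: mypool/golden-image@golden-snap" for a layered clone
    $ rbd children mypool/golden-image@golden-snap
      # lists every clone still bound to that snapshot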

We had a system problem from which I was able to recover almost fully. I could go into details, but I figure if I do, the advice will be to upgrade past dumpling (I can see you shaking your head :D). Upgrading is in the very short-term plan; I just want to be sure my cluster is as clean as I can make it before I do.

Recently, both new and old clones started having a problem with the drive inside of Windows. It seems to be an NTFS index issue, which I can fix (I've exported an image and verified the fix).
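The verification went roughly like this (device and image names are illustrative; I exposed the exported raw image with qemu-nbd and used ntfsfix from ntfs-3g):

    $ rbd export mypool/clone-vm01 /tmp/clone-vm01.raw
    $ modprobe nbd max_part=8
    $ qemu-nbd -c /dev/nbd0 /tmp/clone-vm01.raw
    $ ntfsfix /dev/nbd0p1    # repairs basic NTFS inconsistencies, schedules a chkdsk
    $ qemu-nbd -d /dev/nbd0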

So I only have 4 pretty simple questions:

1) Would it be right to assume that if I fix the NTFS problem in the snapshot, the fix would 'cascade' to all cloned VMs? If not, I assume I have to repair all clones individually (which I can script; a rough sketch follows these questions).
2) Am I off base in thinking the problem is in the snapshot? Could it have been in the source image all along?
3) If there is no relationship with the snap or the master image, am I correct to assume this is an individual problem on each of these guests? Or is there a source I should look at?
4) Would upgrading to at least firefly resolve this issue?
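For question 1, in case I do have to repair each clone, the script I had in mind is something like this (placeholder names again; it assumes the guests are shut down and a kernel rbd driver that exposes partitions):

    #!/bin/sh
    # walk every clone bound to the parent snapshot and fix its NTFS index
    for child in $(rbd children mypool/golden-image@golden-snap); do
        dev=$(rbd map "$child")    # rbd map prints the block device, e.g. /dev/rbd0
        ntfsfix "${dev}p1"         # first partition holds the NTFS volume
        rbd unmap "$dev"
    done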

I've run many checks on the cluster and the data seems fully accessible and correct. No inconsistent PGs; everything exports, snapshots, and can be moved. I also have gdb attached to watch for anything that may arise in this version of Ceph. I'll be upgrading once I find the answer to this.
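Concretely, the checks were along these lines (names illustrative, output trimmed):

    $ ceph health detail                         # HEALTH_OK, no inconsistent PGs reported
    $ ceph pg dump | grep -c inconsistent        # count of PGs flagged inconsistent (0 here)
    $ rbd export mypool/golden-image - | md5sum  # full read of the image completes cleanly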

I have also attempted to ensure the parent/child relationship is intact at HEAD by rolling back to the snap, as mentioned on this mailing list in January.
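That was just the standard rollback (placeholder names):

    $ rbd snap rollback mypool/golden-image@golden-snap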

Many thanks for your time!

--
John Holder
Trapp Technology
Developer, Linux, & Mail Operations
Complacency kills innovation, but ambition kills complacency.
Office: 602-443-9145 x2017
On Call Cell: 480-548-3902
Skype: z_jholder
Alt-Email: jholder@xxxxxxxxxxxxx



