Re: off topic -- vm issues

On 02/12/2018 06:34 PM, bruce wrote:
> On Mon, Feb 12, 2018 at 9:21 PM, Cameron Simpson <cs@xxxxxxxxxx> wrote:
>> On 12Feb2018 21:08, bruce <badouglas@xxxxxxxxx> wrote:
>>>
>>> I'm testing Digital Ocean, creating droplets/snapshots/etc..
>>>
>>> I've created droplets, and from them - snapshots, which have allowed me
>>> to regenerate additional droplets. Droplets are VMs/copies of servers;
>>> snapshots are compressed images.
>>>
>>> Recently, I decided to try to "replicate" a droplet of size X to a
>>> smaller size. For the most part, the process works. I have a smaller
>>> working droplet that I can ping/ssh into.
>>>
>>> The issue I'm now facing is that generating a snapshot, followed by
>>> regenerating a new droplet from it, seems to fail.
>>>
>>> The resulting new droplet has an IP, but pinging/ssh'ing into it
>>> fails/hangs.
>>>
>>> I'm posting to see if anyone has any clue as to what might be
>>> happening. I suspect that in my rsync process transferring files from
>>> the initial droplet to the new droplet (used for the snapshot), I may
>>> have screwed up some files that are used for the snapshot-to-droplet
>>> process.
>>
>>
>> Just wondering: if you're making droplets from snapshots, why do you need to
>> rsync any files? You're effectively cloning the previous droplet, so all the
>> files should already be there.
>>
> 
> Hey Cameron.
> 
> DO permits creating droplets with different configurations (CPU/mem/drive).
> 
> In my case the initial droplet (droplet1) has a larger hard drive (and
> a higher cost). I've got a bunch of stuff on it, but only about 20G of
> the drive is actually used.

Keep in mind that the vast majority of VM implementations use sparse
disk images (e.g. qcow2). This can massively confuse the filesystem if
you simply take a sparse 60G filesystem (where only 20G is used) and
stick it on a 40G drive. Yes, only 20G is actually used, but the FS
stack thinks it has all 60G, and that can lead to big problems. I'm
not saying that's necessarily your issue, but it's, uhm, "bad practice"
to do that sort of thing.
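
You can see the mismatch on any qcow2 image with qemu-img (I have no
idea what DO actually uses under the hood, and the image name here is
just an example):

    # "virtual size" is what the guest filesystem believes it has;
    # "disk size" is the real on-disk footprint of the sparse image.
    qemu-img info droplet1.qcow2

Inside the guest, "df -h" reports against the virtual size, which is
why a 60G image can look fine right up until it lands on a 40G drive.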

I don't know if you can do it, but you might try to resize the FS in
question down to the size your target droplets are going to have, then
snapshot and restore to the new droplets. I haven't really used Digital
Ocean's environment as we run our own kubernetes, docker and
proxmox/qemu/libvirt clusters. We do have some stuff on AWS and Azure
(which are essentially libvirt-ish environments).
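
If the droplet's root filesystem is ext4, the shrink would look
roughly like this from a rescue/recovery boot (the device name is a
guess, check with lsblk first, and the FS must be unmounted to shrink
it):

    # resize2fs refuses to shrink until a forced fsck has run
    e2fsck -f /dev/vda1
    # shrink the filesystem to fit the smaller target droplet
    resize2fs /dev/vda1 35G
    # sanity-check the result before snapshotting
    e2fsck -f /dev/vda1

Note that XFS can't be shrunk at all, so if that's what the droplet
uses, the rsync-to-a-smaller-box route may be your only option.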

> I've tested creating a smaller droplet (call it droplet2) by doing all
> the rsync/copying to create essentially the same server as droplet1.
> 
> This works.. I can access it via ssh/ping with no issues.
> 
> My overall process requires me to run multiple copies of droplet2 in
> parallel, which can be accomplished by creating a snapshot of droplet2
> and then spinning up multiple instances. I've done this 1000s of times
> with basic droplets.. but never with one where I've rsync'd/copied data
> from one server to another..
> 
> I'm thinking that I've screwed up some "dir/files" that are unique to
> a droplet and used for the snapshot process.. I can't find any data on
> how the "snapshot/vm" process is really implemented within
> DigitalOcean. My last hope is to iteratively rsync, testing out the
> snapshot/droplet process until I figure out the real issues..
> 
> thanks!
> 
> 
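
On the "files unique to a droplet" theory: that's plausible. When you
clone a box by rsync'ing / wholesale, you drag along per-machine state
(/etc/machine-id, the SSH host keys, any cloud-init state under
/var/lib/cloud), and stale network identity in particular can produce
exactly your symptom: the droplet gets an IP but never answers. I
don't know DO's exact layout, so treat this exclude list as a starting
point rather than gospel:

    # Copy the system but leave per-machine identity and volatile
    # state behind so the provider can regenerate them on first boot.
    # "newdroplet" is a placeholder for your target host.
    rsync -aHAX --numeric-ids \
          --exclude=/etc/machine-id \
          --exclude='/etc/ssh/ssh_host_*' \
          --exclude=/var/lib/cloud \
          --exclude=/dev --exclude=/proc --exclude=/sys \
          --exclude=/run --exclude=/tmp \
          / root@newdroplet:/
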
>> Um, how are you defining "smaller"? Less memory? Less disc?
>>
>> A snapshot with less disc might have a corrupted filesystem if you cut off a
>> portion of the disc with files in it.

----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital    ricks@xxxxxxxxxxxxxx -
- AIM/Skype: therps2        ICQ: 22643734            Yahoo: origrps2 -
-                                                                    -
-       Blessed are the peacekeepers...for they shall be shot at     -
-                 from both sides. --A.M. Greeley                    -
----------------------------------------------------------------------