[kwlug-disc] Multi-bay docks ... 1 at a time??? [Was: Re: Linux-compatible eSATA expansion cards]

unsolicited unsolicited at swiz.ca
Tue Aug 5 16:57:53 EDT 2014



On 14-08-05 01:50 PM, Khalid Baheyeldin wrote:
> On Tue, Aug 5, 2014 at 1:30 PM, unsolicited <unsolicited at swiz.ca> wrote:
>
> Related: I tried snapshots in libvirt + KVM on the lowly 4GB disks that I
> use. They were sloooow compared to just copying the whole 4GB from a saved
> pristine copy.

But that makes sense, doesn't it? Presumably snapshots are happening on 
live / simultaneously working vms. I would guess that no matter the vm 
solution, they should be pretty similar.

Would be interesting to see a comparable test result. Stop the vm, copy 
the disk locally. Start the vm, let it get quiet. Now copy that copy of 
the disk - substantial copy time difference?
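A minimal sketch of that experiment (the paths and the 64 MB stand-in size are mine, not from the thread - a real test would use the actual 4GB image):

```shell
# Stand-in for a pristine VM disk image (64 MB here instead of 4 GB).
IMG=/tmp/pristine.img
dd if=/dev/zero of="$IMG" bs=1M count=64 status=none

# Copy 1: "VM stopped" - nothing else is touching the file.
time cp "$IMG" /tmp/copy1.img

# Copy 2: copy of that copy, i.e. after the VM has started and gone quiet.
time cp /tmp/copy1.img /tmp/copy2.img

# Confirm both copies completed and are the same size.
ls -l /tmp/copy1.img /tmp/copy2.img
```

If the two wall-clock times are close, the slowness seen with snapshots is plausibly snapshot overhead rather than plain disk contention.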

And I wonder if snapshots would go faster if the snapshots were going to 
a separate spindle.

> With VirtualBox, I could not do that, because there is a UUID for the
> storage that changes. With libvirt, there is no such restriction, making it
> very easy to swap images around.

vbox - but for backup purposes, that shouldn't matter, right?

I get that it does when transiting the learning curve / cloning machines.

The UUID problem sure is irritating. I do wonder at the design choice, 
since vmware didn't seem to have that problem when I was using it, and 
you say libvirt doesn't. My read is they're protecting the unvarnished 
users (not admins) of vm farms from themselves.

Certainly I found 'vboxmanage internalcommands sethduuid {file}' handy.

But that doesn't help for cloning the machine itself. The only solution 
I found for that was to delete the drives from the .vbox file, create a 
new vm via the GUI, then cut/paste the UUID from that new machine into 
the clone's file, and delete the new vm. PITA.
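For the disk-UUID half of that, a minimal sketch (the .vdi path is a made-up placeholder; the guard just skips when VirtualBox or the image isn't present):

```shell
# Hedged sketch: give a copied disk image a fresh UUID so VirtualBox
# will register it alongside the original. Path below is hypothetical.
DISK=/path/to/cloned-disk.vdi

if command -v vboxmanage >/dev/null 2>&1 && [ -f "$DISK" ]; then
    # Assign a new UUID to the copied image in place:
    vboxmanage internalcommands sethduuid "$DISK"
else
    echo "vboxmanage or disk image not present; nothing done"
fi
```

The machine UUID in the .vbox file itself still has to be handled separately, as described above.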

Would have helped if media manager had an 'add media' option. I ended up 
firing up a new vm, adding all the media in one go to get it into the 
manager, then deleting that vm. <sigh>

> Now, this may not scale to the size of "real" images for real workload
> (compared to the "just testing" scenarios that is my use case).

Such instances presumably being 24x7, taking down vm groups to do a 
backup would be a bad thing ...

> And that is why you see that data centers use storage servers, and not
> store data inside the images (e.g. Amazon EC2 uses EBS, ...etc.).

Eh. Probably not so much. They use them because, for them, a vm is just 
a (customer's) file. And when you're schlepping files that big around, 
... everything has to scale up. I've no doubt they do continuous 
snapshots on the fly and back up the snapshots.

Course they're of a size where you do your live fs snap / journal as 
fast as possible, probably from/to SSD (previously fibre), then 
replicate off that to tertiary storage / backup, rather than the live.

Let alone those snapshots will be at the level of the filesystem 
containing the vms. e.g. btrfs - soon if not yet. {i.e. can't recall 
what was most popular pre-btrfs, and btrfs seems to be in the midst of 
becoming accepted for prime time}.

Actually, same applies to you / your testing. btrfs or lvm snapshots, 
not vm snapshots?
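A sketch of what that could look like at the LVM layer, with loudly made-up names (volume group vg0, LV vmstore, backup paths) - illustration only, not a tested recipe, and it needs root plus an actual LVM setup:

```shell
# Hedged sketch: snapshot at the volume layer instead of inside the
# hypervisor, then back up from the snapshot while vms keep running.
# /dev/vg0/vmstore, /mnt/snap, and /backup/vmstore are hypothetical.
SRC=/dev/vg0/vmstore

if [ "$(id -u)" -eq 0 ] && [ -e "$SRC" ]; then
    # Create a copy-on-write snapshot of the volume holding the images:
    lvcreate --size 1G --snapshot --name vmstore-snap "$SRC"
    mkdir -p /mnt/snap
    mount -o ro /dev/vg0/vmstore-snap /mnt/snap
    # Back up from the quiescent snapshot, not the live volume:
    rsync -a /mnt/snap/ /backup/vmstore/
    umount /mnt/snap
    lvremove -f /dev/vg0/vmstore-snap
else
    echo "needs root plus an LVM volume; commands shown for illustration"
fi
```

A btrfs subvolume snapshot would play the same role, just with `btrfs subvolume snapshot` in place of `lvcreate`.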

Begs the question, at least for vms / backups ... do snapshots even 
enter the picture for them, vis-a-vis fs journalling, snapshots, or 
other mechanisms?

Do agree snapshots / cloning make sense for moment in time capturing. 
Let alone automated testing where each test must start from a known state.
