[kwlug-disc] Virtualization technology

unsolicited unsolicited at swiz.ca
Mon Jul 28 22:53:05 EDT 2014


Interesting timing on this post. I finally got around to playing with 
VirtualBox over the weekend, on a P4 running Ubuntu 14.04 server + 
kde-desktop-plasma + kubuntu-default-settings.

I chose vbox because, for what I'm doing in this instance, I can't know 
what hardware I may move the vm to. In this case the guest is an XP 
workstation. (There are no Linux or post-XP drivers for an Adaptec 
USB2exchange SCSI <-> USB converter that feeds an old large flatbed 
scanner.)

- entirely unsuccessfully thus far. Even after a repair install, it 
BSODs with either a nonpaged-area fault or an inaccessible boot device. 
From googling, some have had more success running VMware Converter 
first and then running the result under vbox, so I'm trying that now. 
In the past I've found vmware more often just works, even under Linux. 
(Though I concluded at the time that vbox seemed to work better under 
Linux than vmware, while vmware worked better under XP than vbox. 
Perhaps that was just then / the times.) [A fresh XP install in the vm 
'just works'.]
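
In case it helps anyone following along, the vmware-converter-to-vbox 
step can be scripted. Just a sketch of what I'm trying, with made-up 
file and VM names, and assuming I have the VBoxManage syntax right; the 
inaccessible-boot-device BSOD after a P2V is commonly a 
storage-controller mismatch, so I'm attaching the converted disk to an 
IDE controller on a freshly created, still-bare VM.

    #!/usr/bin/env python
    # Sketch only: convert a VMware Converter .vmdk to VirtualBox's VDI
    # format and attach it to a bare, already-created vbox VM.
    # File and VM names below are placeholders.
    import subprocess

    VMDK = "xp-p2v.vmdk"    # output of VMware Converter (placeholder)
    VDI  = "xp-p2v.vdi"
    VM   = "xp-workstation" # bare VirtualBox VM, no controllers yet

    # Clone the VMware disk image into vbox's native VDI format.
    subprocess.check_call(["VBoxManage", "clonehd", VMDK, VDI,
                           "--format", "VDI"])

    # XP's installed driver set generally expects an IDE controller, so
    # attach the converted disk to IDE rather than SATA/SCSI.
    subprocess.check_call(["VBoxManage", "storagectl", VM,
                           "--name", "IDE", "--add", "ide"])
    subprocess.check_call(["VBoxManage", "storageattach", VM,
                           "--storagectl", "IDE", "--port", "0",
                           "--device", "0", "--type", "hdd",
                           "--medium", VDI])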


I have a bigger main box running Kubuntu 12.04 desktop, and part of the 
vbox choice was that ubuntu / KVM / JeOS 
(https://en.wikipedia.org/wiki/Ubuntu_JeOS ?) apparently requires VT 
support in the CPU, and I don't know that will be true wherever this vm 
ends up.
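
For what it's worth, checking whether a given box has the VT support 
KVM wants is easy enough to script - nothing vbox- or kvm-specific, 
just reading /proc/cpuinfo:

    # Rough check for hardware virtualization support on a Linux host:
    # 'vmx' is Intel VT-x, 'svm' is AMD-V.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("VT-x/AMD-V present:", bool(flags & {"vmx", "svm"}))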

I had a sense that KVM was a kernel thing, thus not a Red Hat thing? 
And that there are certain restrictions with it, e.g. you can't run a 
64-bit guest OS if the host CPU lacks certain features, even if it's a 
64-bit CPU. Probably a non-issue on any current 64-bit CPU, but that 
hasn't always been true. I remember Paul running into this and needing 
to turn on a BIOS setting that defaults to off - assuming the BIOS 
setting was even present on that particular box.
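
On that point, the same /proc/cpuinfo flags tell most of the story, and 
whether /dev/kvm shows up after loading the module tells the rest. (My 
understanding is the vmx/svm flag can still appear even when the BIOS 
has VT disabled; in that case the kvm module refuses to load, and dmesg 
typically says something like "kvm: disabled by bios".) A rough sketch:

    import os

    # 'lm' (long mode) = CPU can run 64-bit code; vmx/svm = hardware VT.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("64-bit capable:", "lm" in flags)
    print("hardware VT:   ", bool(flags & {"vmx", "svm"}))
    # Even with the flag present the BIOS can have VT disabled; then
    # /dev/kvm won't appear after "modprobe kvm_intel" (or kvm_amd).
    print("/dev/kvm present:", os.path.exists("/dev/kvm"))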

For the OP's use case, though, or if I had a dedicated vm server, I 
wondered if KVM wouldn't make more sense. I don't know how well vbox 
takes advantage of the underlying hardware acceleration, so I can't say 
whether it's at least as good as KVM in that respect.
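
(For whatever it's worth, vbox does at least report whether it's using 
VT-x/AMD-V and nested paging for a given guest - if memory serves, the 
machine-readable showvminfo output has hwvirtex / nestedpaging keys. 
Quick and dirty check, VM name being a placeholder:)

    import subprocess

    VM = "lamp-test"  # placeholder VM name
    # Ask VirtualBox whether hardware acceleration is on for this guest.
    out = subprocess.check_output(
        ["VBoxManage", "showvminfo", VM, "--machinereadable"])
    for line in out.decode().splitlines():
        if line.startswith(("hwvirtex=", "nestedpaging=")):
            print(line)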


I can say, having played with vmware in the past, that the 
configuration / complexity is much the same either way. Whether it's a 
single vbox .xml file or vmware's (multiple?) text configuration files, 
it all seems pretty much of a muchness - the complexity of specifying 
operational parameters for a guest is 'it is what it is.' Nature of the 
beastie.


Had thought about all of this not long ago, when considering what I 
might want / need for a test LAMP server, but couldn't quite wrap my 
head around what I might need to make it secure / private, so in the 
end I just fired up LAMP on another spare P4 kicking around. For a vm I 
figured I would need NAT or host bridging (and I wasn't sure I wanted 
to put bridging into production just then), and I hadn't quite wrapped 
my head around how to have the vm only able to talk to the internet, 
yet still be reachable from the local net to maintain it, e.g. via VNC. 
Haven't gotten back to it yet to see if I'm now comfortable. Being 
standalone means I can just add/remove a single port forward on the 
router and not worry about it in the meantime. Ended up not worrying 
about local net participation, and the port has long since been closed 
on the router.
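
Thinking about it since: plain vbox NAT would actually cover most of 
what I wanted there - the guest gets outbound internet, nothing on the 
local net can reach it by default, and a single host-side port forward 
punches through just enough for maintenance (VNC, or ssh). Something 
along these lines, names and ports being placeholders:

    import subprocess

    VM = "lamp-test"  # placeholder VM name
    # NIC 1 on NAT: guest gets outbound access, no inbound by default.
    subprocess.check_call(["VBoxManage", "modifyvm", VM, "--nic1", "nat"])
    # Forward host port 5900 to the guest's VNC server. Rule format is
    # name,proto,host-ip,host-port,guest-ip,guest-port (blank IP = any).
    subprocess.check_call(["VBoxManage", "modifyvm", VM,
                           "--natpf1", "vnc,tcp,,5900,,5900"])
    # Remove the rule again when done maintaining:
    #   VBoxManage modifyvm lamp-test --natpf1 delete vnc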


On 14-07-28 05:15 PM, Digimer wrote:
> On 28/07/14 05:05 PM, Khalid Baheyeldin wrote:
>> So, while staying within the free and open source alternatives, what
>> are the options that people use? KVM? Xen? What else? With a bit of
>> details on what you like and what you don't like where possible.
>
> I used Xen back in <= RHEL5, but switched to KVM with RHEL 6+. Xen was
> great in that it made virtualization possible on pretty much any
> computer, but it came with a non-trivial complexity, particularly with
> regard to networking and kernel compatibility.
>
> With hardware virtualization support, KVM became much more attractive.
> With the virtio network and block drivers, even Microsoft OS clients ran
> very fast. The networking got a lot simpler and you no longer had to
> worry about dom0/domU kernel issues.
>
> When you add to this that Xen is, though open, supported by a decidedly
> less open company versus KVM being backed by Red Hat, a decidedly open
> company (if not perfect), KVM is a clear choice, I would argue.
>




