DRivard's picture

Hi everyone,

I was trying to get the TurnKey Bugzilla template working on my OpenVZ server, so I copied the link to the template and executed these commands:

cd /vz/template/cache
wget http://downloads.sourceforge.net/project/turnkeylinux/openvz/turnkey-bug...

Then I executed the commands to create my VZ container by hand. When the container was initialized, I entered it with the normal vzctl enter 19:

# vzlist -a 
19         70 running   192.168.2.19    bugzilla-turnkey

When I tried to reach it through a web browser, it didn't work. I thought it was a network problem, so I started debugging and found that the networking configuration was wrong. Here is what the /etc/network/interfaces file contained:

 

# UNCONFIGURED INTERFACES
# remove the above line if you edit this file
 
auto lo
iface lo inet loopback
 
auto eth0
iface eth0 inet dhcp
 
auto eth1
iface eth1 inet dhcp
 
To really get it to work, I had to completely rewrite the configuration file as:
 
auto venet0
iface venet0 inet manual
        up ifconfig venet0 up
        up ifconfig venet0 127.0.0.2
        up route add default dev venet0
        down route del default dev venet0
        down ifconfig venet0 down
 
auto venet0:0
iface venet0:0 inet static
        address 192.168.2.19
        netmask 255.255.255.255
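A quick way to confirm whether venet0 actually came up inside the container (a sketch; it just parses /proc/net/dev, so it needs no extra tools):

```shell
#!/bin/sh
# Sketch: list the network interfaces the kernel knows about by parsing
# /proc/net/dev (works inside any Linux container). Run this inside the
# container, e.g. after "vzctl enter 19".
list_ifaces() {
    # Skip the two header lines, then take everything before the ':'
    tail -n +3 /proc/net/dev | awk -F: '{gsub(/ /, "", $1); print $1}'
}

if list_ifaces | grep -qx venet0; then
    echo "venet0 present"
else
    echo "venet0 missing - container is still using the eth0/eth1 config"
fi
```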
 
Why isn't the venet adapter configured by default? How do we get it to work under plain OpenVZ by default or automatically? Is it a Proxmox requirement?
 
thank you!
Jeremy Davis's picture

I have only ever used OVZ via Proxmox, and there it defaults to venet, so I'm not sure why you're experiencing this behaviour.

AFAIK these templates haven't been extensively tested under vanilla OVZ (the OVZ users I've come across use Proxmox), so perhaps it is a quirk that is only noticeable in your usage scenario (i.e. manual OVZ). I'm sure the devs would be open to modifying it if it means a better end-user experience.

When I get a chance I'll test launching from the command line. And perhaps I could modify a template like you have explained above and see how it goes, both manually and via the WebUI. If we can make it work better for you, whilst maintaining the OOTB experience for PVE users, then we all win! :)

BTW I've lodged a bug report.

DRivard's picture

Later, when I get home, I'll paste the exact commands to create the VM from TurnKey templates.

thank you for your support.
Dom

Jeremy Davis's picture

Obviously you'll need to change the "192.168.1.19" to the appropriate IP address that you want for your VM.

I'm assuming that what's happening here is that under Proxmox this file is created when launching the container (hence why it works flawlessly with OpenVZ under Proxmox), but vanilla OpenVZ obviously doesn't do that step for you.

Out of interest, do you have another (untouched) OpenVZ template to look at? If so, could you post the contents of /etc/network/interfaces? Otherwise I'll download one at some point and have a look.

As the only extensive testing these OVZ templates have had is when launched by Proxmox, I think that is why this bug has snuck through. We'll work it out and ask the devs to push through a bugfix. If my suspicion is correct about Proxmox creating the interfaces file on launch, then changing this won't affect the PVE OVZ templates (i.e. they can remain the same).

DRivard's picture

Good morning guys,

Here is what /etc/network/interfaces looks like when you create a new container with these commands:

 

/usr/sbin/vzctl create 103 --ostemplate turnkey-mysql-11.3-lucid-x86-openvz --config basic
/usr/sbin/vzctl set 103 --onboot yes --save
/usr/sbin/vzctl set 103 --hostname mysql-turnkey --save
/usr/sbin/vzctl set 103 --ipadd 192.168.1.103 --save
/usr/sbin/vzctl set 103 --nameserver 4.2.2.2 --save
/usr/sbin/vzctl set 103 --vmguarpages $((256 * 512)) --save
/usr/sbin/vzctl set 103 --privvmpages $((256 * 1024)) --save
/usr/sbin/vzctl set 103 --diskspace 3G:4G --save
/usr/sbin/vzctl set 103 --capability sys_time:on --save
/usr/sbin/vzctl start 103
 
/etc/network/interfaces 
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file
 
auto lo
iface lo inet loopback
 
auto eth0
iface eth0 inet dhcp
 
auto eth1
iface eth1 inet dhcp
 
If you modify this file to look like the following, it should work as a workaround.
 
Modified /etc/network/interfaces 
# Auto generated lo interface
auto lo
iface lo inet loopback
 
# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
        up ifconfig venet0 up
        up ifconfig venet0 127.0.0.2
        up route add default dev venet0
        down route del default dev venet0
        down ifconfig venet0 down
 
auto venet0:0
iface venet0:0 inet static
        address 192.168.1.103
        netmask 255.255.255.255
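The manual edit can also be scripted from the host before starting the container. This is only a sketch of the workaround (the write_venet_config helper is hypothetical, not part of vzctl, and it assumes the container's private area lives under /vz/private):

```shell
#!/bin/sh
# Sketch: write the venet configuration shown above into a container's
# /etc/network/interfaces. Target path and IP are arguments, so it can be
# run against /vz/private/<CTID>/etc/network/interfaces from the host.
write_venet_config() {
    target="$1"   # e.g. /vz/private/103/etc/network/interfaces
    ip="$2"       # e.g. 192.168.1.103
    cat > "$target" <<EOF
# Auto generated lo interface
auto lo
iface lo inet loopback

# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
        up ifconfig venet0 up
        up ifconfig venet0 127.0.0.2
        up route add default dev venet0
        down route del default dev venet0
        down ifconfig venet0 down

auto venet0:0
iface venet0:0 inet static
        address $ip
        netmask 255.255.255.255
EOF
}
```

Usage would be something like `write_venet_config /vz/private/103/etc/network/interfaces 192.168.1.103` before `vzctl start 103`.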
 
 
Regards.
Jeremy Davis's picture

And good morning to you too! :)

Out of interest, here are the commands that the PVE WebUI runs behind the scenes when a new (working) OVZ container is created (from the WebUI):

/usr/bin/pvectl vzcreate 125 --disk 8 --ostemplate local:vztmpl/ubuntu-10.04-turnkey-core_11.3-1_i386.tar.gz
 --rootpasswd $1$vIesVMsh$i.lTdT/px.RNc0S5ZrjRM0 --hostname tkl-official-core-test.home.lan 
 --nameserver 192.168.1.1 --nameserver 8.8.8.8 --searchdomain home.lan --onboot no --ipset 192.168.1.125
 --swap 512 --mem 512 --cpus 1
vzctl set 125 --vmguarpages 262144:9223372036854775807 --oomguarpages 262144:9223372036854775807
 --privvmpages 262144:274644 --lockedpages 131072:131072 --diskspace 8388608:9227468
 --diskinodes 1600000:1760000 --hostname tkl-official-core-test.home.lan --searchdomain home.lan
 --ipadd 192.168.1.125 --onboot yes --nameserver 192.168.1.1 --nameserver 192.168.1.254 --save
Saved parameters for CT 125
VM 125 created

Note I included the line breaks to make it easier to read - the lines that start with a space followed by a switch actually belong to the line above.

And here is the contents of the (working) /etc/network/interfaces file:

# This configuration file is auto-generated.
#
# WARNING: Do not edit this file, your changes will be lost.
# Please create/edit /etc/network/interfaces.head and
# /etc/network/interfaces.tail instead, their contents will be
# inserted at the beginning and at the end of this file, respectively.
#
# NOTE: it is NOT guaranteed that the contents of /etc/network/interfaces.tail
# will be at the very end of this file.
#

# Auto generated lo interface
auto lo
iface lo inet loopback

# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
    up ifconfig venet0 up
    up ifconfig venet0 127.0.0.2
    up route add default dev venet0
    down route del default dev venet0
    down ifconfig venet0 down

iface venet0 inet6 manual
    up route -A inet6 add default dev venet0
    down route -A inet6 del default dev venet0

auto venet0:0
iface venet0:0 inet static
    address 192.168.1.125
    netmask 255.255.255.255

The only significant difference I can see (besides the interfaces file being populated properly) is that PVE first uses the vzcreate command with --ipset as a switch. I'm not sure if the vzcreate command is unique to PVE (I suspect it might be?) or whether it is just another OVZ command. What happens if you try to use that command?

I'm assuming that the commands that you use normally work for other (non TKL) OVZ templates (or I guess otherwise this wouldn't be an issue).

I should have been clearer: when I asked for the /etc/network/interfaces from "another (untouched) OpenVZ template to look at", I meant a non-TKL template that you could compare. Do you have one handy?

Jeremy Davis's picture

I'm not sure what OS you are using for your OVZ server, and this may not be relevant, but ProxmoxVE is a fantastic (Debian based) setup IMO. It has OVZ (obviously) but also KVM (so you can also host alternative OSes such as Windows), a nice WebUI up front, and powerful command-line tools. It also supports server clustering and alternative storage such as SAN and iSCSI OOTB.

Obviously if your current setup works well and fulfills your needs, then there's no reason to fix what isn't broken. But if you're looking for a little bit more from your OVZ server then PVE could be worth a look. I'm still using the v1.9 stable, but v2.0rc1 is looking pretty sweet so I think I'll start having a play with that really soon.

Although PVE comes as an ISO download it can be installed to an existing Debian install. v1.9 is Lenny based (oldstable) but v2.0 is Squeeze based (stable). Once PVE 2.0 goes stable the PVE devs will supply a 1.9->2.0 upgrade script (and v1x will be EOL).

If you just want to have a look you can download the ISO and it installs fine to VirtualBox. If you go straight for v2.0rc1 then you'll want to make sure to run "aptitude update && aptitude full-upgrade" after install to update to the latest packages. It's under heavy development - but the latest version (updated with the above command) is pretty much feature complete AFAIK.

As an extra bonus PVE devs have just announced that following on from the official TKL release of OVZ templates, the latest v2.0 PVE (when updated as above) comes complete with a 'TKL-channel' for downloading TKL OVZ templates from within the PVE WebUI! It's almost too easy now!

PS sorry to hijack your thread with my unashamed PVE advertisement. For the record I am not affiliated with PVE in any way, just a very satisfied user since v1.3.

DRivard's picture

Hi Jeremy,

I checked this morning, and on plain OpenVZ neither the "ipset" switch nor the vzcreate command exists.
The available options are:

 

vzctl set 10 --
--applyconfig     --devnodes        --ipadd           --netdev_del      --numpty          --quotatime       --tcpsndbuf
--bootorder       --dgramrcvbuf     --ipdel           --netif_add       --numsiginfo      --quotaugidlimit  --userpasswd
--capability      --disabled        --iptables        --netif_del       --numtcpsock      --root            --vmguarpages
--cpulimit        --diskinodes      --kmemsize        --noatime         --onboot          --save
--cpumask         --diskquota       --lockedpages     --numfile         --oomguarpages    --searchdomain
--cpus            --diskspace       --meminfo         --numflock        --othersockbuf    --setmode
--cpuunits        --features        --name            --numiptent       --physpages       --shmpages
--dcachesize      --hostname        --nameserver      --numothersock    --private         --swappages
--devices         --ioprio          --netdev_add      --numproc         --privvmpages     --tcprcvbuf
 
Actually, my workaround is based on a non-TKL /etc/network/interfaces:
 
# This configuration file is auto-generated.
#
# WARNING: Do not edit this file, your changes will be lost.
# Please create/edit /etc/network/interfaces.head and
# /etc/network/interfaces.tail instead, their contents will be
# inserted at the beginning and at the end of this file, respectively.
#
# NOTE: it is NOT guaranteed that the contents of /etc/network/interfaces.tail
# will be at the very end of this file.
#
 
# Auto generated lo interface
auto lo
iface lo inet loopback
 
# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
        up ifconfig venet0 up
        up ifconfig venet0 127.0.0.2
        up route add default dev venet0
        down route del default dev venet0
        down ifconfig venet0 down
 
 
iface venet0 inet6 manual
        up route -A inet6 add default dev venet0
        down route -A inet6 del default dev venet0
 
auto venet0:0
iface venet0:0 inet static
        address 172.30.36.253
        netmask 255.255.255.255
 
Yes, all the commands above work fine if I use the precreated templates from OpenVZ.

I am running Debian Squeeze 6.0.4. I am not against testing PVE at all; I just normally install everything myself so I know its underlying configuration and can debug it myself. I also recently tested OpenVZ Web Panel, which turned out to be a very complete solution with a very nice web GUI.

How does it work for clustering? That is one interesting feature of PVE. The second good argument is the ISO-to-OVZ conversion. I'll give it a try later next week.

Regards

Jeremy Davis's picture

I actually posted some more info that I discovered on the bug report. But here it is again here:

Ok, done a little more research...

It looks like the vzcreate command comes from a third-party bundle of scripts called vztools. It is available for download (I assume the PVE devs bundle this in PVE). Have a read here: http://blog.chriswalker.devnetonline.net/2010/01/09/openvz-tools-working-with-openvz-and-virtual-private-systems?page=4

On the following page of that blog post it certainly does appear that the vzctl command to set the IP should work... And again on the official OpenVZ wiki: http://wiki.openvz.org/User_Guide/Operations_on_Containers#Setting_Network_Parameters

So I assume we just need to work out why the templates work fine when launched by PVE (and, I would assume, by vzcreate) and not when using vzctl directly. I still think the first step is to compare the /etc/network/interfaces file from an Ubuntu OVZ template (whose networking does work OOTB)...
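One way to do that comparison from the host is to diff the files straight out of the containers' private areas. A minimal sketch (the container IDs and the /vz/private layout are assumptions for illustration):

```shell
#!/bin/sh
# Sketch: diff the network config of two containers from the host.
# Container root filesystems typically live under /vz/private/<CTID>.
compare_interfaces() {
    # usage: compare_interfaces <fileA> <fileB>
    diff -u "$1" "$2"
}

# Hypothetical IDs: CT 101 = stock Ubuntu template, CT 102 = TKL template.
# compare_interfaces /vz/private/101/etc/network/interfaces \
#                    /vz/private/102/etc/network/interfaces
```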

As for PVE - TBH I haven't used the clustering feature myself; I just think it sounds really cool! :) From what I've read, v2.0 includes the facility to create High Availability clusters too.

And perhaps I misled you somewhat: it doesn't support ISO-to-OVZ conversion, it just allows you to download the TKL OVZ templates from the WebUI (and they work OOTB!). You can also install from ISO (to a KVM VM though - not OVZ). TBH it sounds like you are streets ahead of me in tech knowledge, so I suggest you just test in VBox (on a desktop) to see what you think. As I say, if your server is doing what you want, then perhaps we just need to work this bug out so it works OOTB...

DRivard's picture

VBox will not give the true look and feel of testing PVE. I will install it on two small servers and test the clustering feature. We already use OpenVZ clustering with DRBD, so if one node fails, the second node takes over the disk and all the containers are restarted. That way our clients never go down.

I will investigate the VZ tools to see if this is where the IP gets correctly assigned.

Cheers.

Jeremy Davis's picture

Perhaps I should take it on myself to resolve this issue. I have updated the bug report. Please note that the above (clunky) workaround does indeed seem to work...

Jeremy Davis's picture

I downloaded the turnkey-core-11.3-lucid-x86-openvz.tar.gz template to my PVE host.

Before I started, I untarred the template and checked /etc/network/interfaces:

# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

Obviously it was the same as yours. I then ran the commands you posted:

proxmox:~# /usr/sbin/vzctl create 206 --ostemplate turnkey-core-11.3-lucid-x86-openvz --config basic
Creating container private area (turnkey-core-11.3-lucid-x86-openvz)
Warning: configuration file for distribution turnkey-core-11.3-lucid-x86-openvz not found, using defaults from /etc/vz/dists/default
Performing postcreate actions
Saved parameters for CT 206
Container private area was created
proxmox:~# /usr/sbin/vzctl set 206 --onboot yes --save
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --hostname tkl-ovz-vanilla-test --save
Warning: configuration file for distribution turnkey-core-11.3-lucid-x86-openvz not found, using defaults from /etc/vz/dists/default
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --ipadd 192.168.1.206 --save
Warning: configuration file for distribution turnkey-core-11.3-lucid-x86-openvz not found, using defaults from /etc/vz/dists/default
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --nameserver 192.168.1.1 --save
Warning: configuration file for distribution turnkey-core-11.3-lucid-x86-openvz not found, using defaults from /etc/vz/dists/default
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --vmguarpages $((256 * 512)) --save
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --privvmpages $((256 * 1024)) --save
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --diskspace 3G:4G --save
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl set 206 --capability sys_time:on --save
Saved parameters for CT 206
proxmox:~# /usr/sbin/vzctl start 206
Warning: configuration file for distribution turnkey-core-11.3-lucid-x86-openvz not found, using defaults from /etc/vz/dists/default
Starting container ...
Container is mounted
Adding IP address(es): 192.168.1.206
Setting CPU units: 1000
Container start in progress...
proxmox:~# 

It starts as expected and this is what /etc/network/interfaces looks like in the running container:

# This configuration file is auto-generated.
#
# WARNING: Do not edit this file, your changes will be lost.
# Please create/edit /etc/network/interfaces.head and
# /etc/network/interfaces.tail instead, their contents will be
# inserted at the beginning and at the end of this file, respectively.
#
# NOTE: it is NOT guaranteed that the contents of /etc/network/interfaces.tail
# will be at the very end of this file.
#

# Auto generated lo interface
auto lo
iface lo inet loopback

# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
    up ifconfig venet0 up
    up ifconfig venet0 127.0.0.2
    up route add default dev venet0
    down route del default dev venet0
    down ifconfig venet0 down


iface venet0 inet6 manual
    up route -A inet6 add default dev venet0
    down route -A inet6 del default dev venet0

auto venet0:0
iface venet0:0 inet static
    address 192.168.1.206
    netmask 255.255.255.255

And everything works, which wasn't quite what I expected...

So not sure where to go next...

DRivard's picture

As reported in the bug I finally found the solution to this issue.

If you want the TurnKey Linux OVZ templates to work, you have to rename them so they start with "ubuntu":
e.g. turnkey-postgresql-11.3-lucid-x86-openvz.tar.gz needs to be renamed to ubuntu-turnkey-postgresql-11.3-lucid-x86-openvz.tar.gz.

This is due to OpenVZ's naming convention for the distributions it supports. You can look at the script I created here: http://drivard.com/2012/02/19/download-turnkey-linux-virtual-appliances-for-openvz-at-once/ - it will help you download all the TurnKey OVZ templates in one shot and start using them with vanilla OpenVZ without the networking issue. This is a quick fix until TKL fixes its naming convention.
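The rename itself can be done in one pass. A sketch of that step (it assumes the templates sit in the default /vz/template/cache; pass another directory to override):

```shell
#!/bin/sh
# Sketch: prefix every TurnKey OVZ template with "ubuntu-" so vanilla
# OpenVZ picks the right distribution config. Assumes the default
# template cache location; the directory can be passed as an argument.
rename_tkl_templates() {
    cache="${1:-/vz/template/cache}"
    for f in "$cache"/turnkey-*-openvz.tar.gz; do
        [ -e "$f" ] || continue   # no matches: skip the literal glob
        mv -v "$f" "$(dirname "$f")/ubuntu-$(basename "$f")"
    done
}
```

For example, `rename_tkl_templates` on its own renames everything in /vz/template/cache, leaving turnkey-bugzilla-11.3-lucid-x86-openvz.tar.gz as ubuntu-turnkey-bugzilla-11.3-lucid-x86-openvz.tar.gz.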

Regards.


Jeremy Davis's picture

They are exactly the same except they use the PVE naming convention, so they are named like this:

ubuntu-10.04-turnkey-core_11.3-1_i386.tar.gz

The PVE naming convention is only required so the templates show correctly in the PVE WebUI (because, as I demonstrated, even with the 'plain' TKL OVZ naming they still work OK). Seeing as they start with 'ubuntu', perhaps that is enough for them to work with vanilla OVZ? Perhaps you could give one a try to confirm. They can be found on SourceForge in 'Files' under 'PVE' here: http://sourceforge.net/projects/turnkeylinux/files/pve/

If they work OK, then TKL won't have to maintain two versions of the OVZ templates. I think the only reason the devs have two sets of OVZ templates was so the vanilla templates follow the naming convention that all the other appliance formats follow. But if that is what causes the problem, and the PVE-OVZ templates work OK, then that defeats the purpose of having two sets...

PS: thanks for all your work on getting to the bottom of this bug.

Jeremy Davis's picture

But as OVZ containers don't have a true console, that doesn't work. So you need to run them manually. The TKL devs have made a script which will run the required firstboot scripts, so to get started:

turnkey-init

DRivard's picture

Good Monday morning,

Sorry, I missed that the PVE templates are the same as the OVZ ones. I confirmed that on a vanilla OpenVZ server the template works fine. I tested ubuntu-10.04-turnkey-wordpress_11.3-1_i386.tar.gz and it worked out of the box, so I think you are right that TKL should only support one type of OVZ template rather than two forks. But naming them PVE/OpenVZ would be great; that way we'd know which one to choose even if PVE doesn't ring a bell for us.

Regards.

Jeremy Davis's picture

Yes I think that is the solution.

I noticed that the TKL devs have recently changed the appliance page links to point straight to the downloads. As the PVE interface interacts with the TKL image repository in the background, it should be fairly trivial to just put all the images in the OpenVZ folder (and adjust the PVE script to the new location).
