Ric Moore's picture

Is there some magic incantation to restore networking back to normal? We're getting static IP addresses, so I followed some online advice and edited /etc/network/interfaces. That didn't work, and now I can't get it to reset back to DHCP.

So, I installed another container of WordPress, with a different hostname, which works just fine.

BUT! I have a ton of work sitting in the original container that I really don't want to lose. If I stop both containers and log in directly at my server, can I simply copy the files from the busted container to the new one? I tried the backup route, but restoring to the new WordPress container brought the broken stuff along with it. Crap. So I need to either repair the original WordPress container's networking or somehow copy its files to the new WordPress container.

I'll gladly accept ALL ideas to save my bacon here! Thanks to you all, Ric

Ric Moore's picture

I should have mentioned that. I'm using the standard Debian that comes with the WordPress container. Via Proxmox I can open a console to the busted WordPress container, but ifconfig only shows localhost. So, if there is some way to re-init networking, I'd love to learn how. Thanx, Ric


Ric Moore's picture

I just copied everything working from container 102 (WordPress) to the blown-up container 100, then edited the hostname. NOW I can open a terminal to it. But Apache is complaining. I'm getting closer now!! Thanks! Ric


Jeremy Davis's picture

This is a bit of a hack, but if you stop both containers and copy the contents (on your PVE host) of /var/lib/vz/private/<old_VMID>/var/www to /var/lib/vz/private/<new_VMID>/var/www (where <old_VMID> and <new_VMID> are the VM ID numbers of the respective containers), you should be good to go with your new appliance (assuming you haven't adjusted other stuff, e.g. in /etc).
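A minimal sketch of that copy, demonstrated here on scratch directories so it can run anywhere. The /var/lib/vz paths and the VMIDs 100/102 are the hypothetical values from this thread; on a real PVE host you would point OLD_ROOT/NEW_ROOT at the actual private areas and stop both containers first (vzctl stop <VMID>):

```shell
#!/bin/sh
# Stand-in for the PVE host's /var/lib/vz layout, built in a temp dir.
WORK=$(mktemp -d)
OLD_ROOT="$WORK/private/100"   # hypothetical old (broken) container
NEW_ROOT="$WORK/private/102"   # hypothetical new container

# Fake the old container's WordPress files for the demonstration.
mkdir -p "$OLD_ROOT/var/www/wordpress" "$NEW_ROOT/var/www"
echo "define('DB_NAME', 'wordpress');" > "$OLD_ROOT/var/www/wordpress/wp-config.php"

# -a preserves ownership, permissions and timestamps, which the web
# server inside the container (running as www-data) relies on.
cp -a "$OLD_ROOT/var/www/." "$NEW_ROOT/var/www/"

ls "$NEW_ROOT/var/www/wordpress"   # → wp-config.php
```

The trailing /. on the source path copies the directory's contents rather than nesting another www/ inside the destination.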

Also, if you want a static IP then I recommend you use venet and set the static IP from the PVE interface (it is also more secure if your container is available outside your LAN). If you want DHCP (or you want to be able to change from static to dynamic and back again) then you need to use veth. With veth you can swap between static and dynamic (i.e. DHCP) IPs using Confconsole, as you would if running the container on bare metal/KVM/VMware/VirtualBox/etc. The only difference is that it doesn't auto-launch in OVZ, so you need to launch it manually. But that's as simple as running 'confconsole'.
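The two setups above can be sketched from the host side with vzctl. This is a dry run (RUN=echo just prints the commands); VMID 100 and the IP address are hypothetical examples, and on PVE the same settings are normally made through the web interface rather than vzctl directly:

```shell
#!/bin/sh
# RUN=echo makes this a dry run; set RUN= (empty) on a real host to execute.
RUN=${RUN:-echo}
VMID=100

# venet: a static IP assigned from the host side, saved to the CT config
$RUN vzctl set "$VMID" --ipadd 192.168.1.50 --save

# veth: a bridged interface, so the container can run its own DHCP client
# or switch between static and dynamic from inside (e.g. via confconsole)
$RUN vzctl set "$VMID" --netif_add eth0 --save
```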

Ric Moore's picture

Thanks! It's a good thing to add the "mlocate" package to the containers. Then I could "locate wordpress" and found the directories you mentioned. But, I also copied root from the new to the old and that didn't work so well. I corrected the hostname, and got eth0 back though. But, as you mentioned copying the private/var/www did most of the trick. Funny thing that I don't get the option to "remove" the old container (which was 100) in the Proxmox dashboard?? Sure, I could manually delete all those files, but I'm afraid it'll blow up something else in Proxmox. I've run turnkey-init, to see if that would refresh things but that didn't help.


Jeremy Davis's picture

Do you mean that it's greyed out in the PVE interface? If so, check that the VM is not running or mounted (if either is the case, click the 'stop' or 'unmount' button respectively). That should then allow you to remove the container. If that still doesn't work you could try doing it via the PVE commandline. Assuming CT 100 (AKA VMID: 100) the command would be:

vzctl destroy 100

Alternatively the command 'vzctl delete <VMID>' can also be used - see here.

If that still doesn't work then try stopping the container via the commandline ('vzctl stop <VMID>' as advised in the docs I linked to above).

If you are still stuck then you could manually delete all the bits, but that would only be a last resort and ideally you should probably find out why it won't let you delete using vzctl (chances are it won't work anyway if something has locked the dirs). You may be better off just rebooting and trying again then...
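The full removal sequence from the posts above might be sketched as follows. Again a dry run (RUN=echo only prints each command) with the hypothetical VMID 100; set RUN= (empty) on a real PVE host to execute:

```shell
#!/bin/sh
RUN=${RUN:-echo}
VMID=100

$RUN vzctl stop "$VMID"      # make sure the container is not running
$RUN vzctl umount "$VMID"    # and that its filesystem is not mounted
$RUN vzctl destroy "$VMID"   # then remove its private area and config
```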

edumax64's picture

Hello, I read your instructions, but I cannot delete the container; I get this error:

root@virtualstore:~# vzctl destroy 100
stat(/var/lib/vz/private/100): No such file or directory
Container is currently mounted (umount first)
root@virtualstore:~# vzctl umount 100
stat(/var/lib/vz/private/100): No such file or directory
Can't umount /var/lib/vz/root/100: Invalid argument

Is the error perhaps due to this note in the vzctl manual?

DESCRIPTION
       Utility  vzctl  runs  on  the  host system (otherwise known as Hardware
       Node, or HN) and performs direct manipulations with containers (CTs).

       Containers can be referred to by either numeric CTID or  by  name  (see
       --name option). Note that CT ID <= 100 are reserved for OpenVZ internal
       purposes.
 
 
 
I've only been using the TurnKey appliances for a short time, and the most pleasant surprise was finding the repository within Proxmox. A brilliant idea. I think I made a mistake: I had a server with Proxmox 2.2 and added a second server to create a cluster. On the existing Proxmox I have a container with the TurnKey ownCloud appliance (OpenVZ), with NFS shared storage.
After creating the cluster, the container would not start, so I decided to delete it permanently and then recreate it, but I keep getting errors, and the virtual machine owncloud.lan still shows in the menu of the web interface.
How can I fix this?
Thank you


Jeremy Davis's picture

You could try rebooting the server (but make sure the container is set not to boot on start) and then remove it. Or you may need to resort to removing it manually. Basically it's just the private and root areas (i.e. private/<VMID> and root/<VMID>, as mentioned above) and the <VMID>.conf file (in /etc).

Ric Moore's picture

A clean re-install fixed it all nicely!! Thanx Ric


Ric Moore's picture

I get 'Error: Command "vzctrl umount 100 failed" exit code 40' in the log. Yeow. I think I'll reboot the server and try again. Nope, same result. :( Ric


Jeremy Davis's picture

According to the OVZ manual, exit code 40 means "container not mounted"! Which makes it strange that you can't delete/destroy it...

My only guess is that some process is still accessing something within that container's filesystem. I find it strange (and a little concerning) that it refuses to delete/destroy while also claiming it's not mounted...

You should be able to find out if a process has the folder locked using this:

lsof /var/lib/vz/private/<VMID>
lsof /var/lib/vz/root/<VMID>

Although I'm not sure whether that will include all subdirs and files (I assume it will). You could also try deleting the container manually. AFAIK you just need to delete the conf file (should be in /etc/pve/nodes/<NodeName>/conf/<VMID>.conf or something like that...), then delete the root and private dirs (paths as above).
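The manual removal might look like the sketch below, demonstrated here on scratch directories so it runs anywhere. On a real PVE host the paths would be /var/lib/vz/private/<VMID>, /var/lib/vz/root/<VMID> and the container's .conf file (whose exact location varies between PVE versions, so locate it first), and you would check with lsof that nothing still holds the directories open:

```shell
#!/bin/sh
# Stand-in for the host's container directories, built in a temp dir.
WORK=$(mktemp -d)
VMID=100
mkdir -p "$WORK/private/$VMID" "$WORK/root/$VMID"
touch "$WORK/${VMID}.conf"

# On a real host, first verify nothing has the dirs locked:
#   lsof /var/lib/vz/private/$VMID
#   lsof /var/lib/vz/root/$VMID

# Then remove the private area, the root area and the config file.
rm -rf "$WORK/private/$VMID" "$WORK/root/$VMID" "$WORK/${VMID}.conf"
```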

Good luck!

Ric Moore's picture

vzctl destroy 100 did the trick! That blew it right out of the ball park!

THANKS! Ric


Jeremy Davis's picture

Please disregard my previous post then... I missed this post of yours, so what I wrote is not really relevant anymore...

Ric Moore's picture

Well, yes it was. Remember I created container 102, which was another WordPress container, and copied all of the original container 100's private files over to it? That has worked perfectly, so far.

I copied the root/ from 102 over to container 100, trying to solve the DHCP problem I created by dinking with its network settings by hand. That didn't work either, even after resetting its hostname. Heh, yeah, I'm learning! So all I had to do was rm -rf container 100's root/ directory. That had to be the reason for the failure, ~which I caused~ ...not TurnKey or Proxmox.


Ric Moore's picture

I did a clean reinstall to bare metal to fix the remaining problems. Proxmox doesn't seem to like my hand-edit style of security tweaking. This time I'll leave it alone. But we DO need a good how-to for basic security improvements that doesn't blow things up. For one, I think I'd feel better not working as the 'root' user and using su when needed. :) Ric
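That last step (day-to-day logins as an unprivileged user rather than root) might be sketched as below. A dry run (RUN=echo only prints the commands); the username "ric" is a hypothetical example, and disabling root SSH login should only be done after confirming the new user can log in and escalate:

```shell
#!/bin/sh
RUN=${RUN:-echo}   # set RUN= (empty) on a real host to execute

$RUN adduser ric              # create the unprivileged admin user
$RUN usermod -aG sudo ric     # allow escalation via sudo (or just use su)

# Once the new account is confirmed working, refuse root logins over SSH.
$RUN sed -i 's/^PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
$RUN service ssh restart
```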

