Hello,
I'm unsure if this is possible or even a good idea from the developers' POV, but I wanted to ask:
Is there a mechanism, or has it already been discussed, to deploy the different TKL appliances from the TKL library into a single large TKL appliance?
My need is this: I use the various appliances (DokuWiki, Redmine, etc.) for their specific purposes, but it means having all these different VMs running, and in some cases multiple copies of the same one for different customers or users. What would be nice would be to take the different TKL appliances I use (DokuWiki, Redmine, etc.) and have them all installed in one single TKL appliance, so I only have a single appliance VM running for each particular customer or for myself.
Is this possible or just a bad idea?
IMHO, bad idea
I don't think it's possible with the way the different tkl appliances are built and delivered today.
That said, if it could be done... I would probably not do it. Many of these systems have some pretty specific dependencies that might conflict with those of others. This goes for software and system resources (ports, etc.). The systems are tested and, for the most part, work well. TKLBAM makes backup and disaster recovery simple.
That said, some combinations make sense together (like combining a revision control appliance and a Redmine appliance). If you run a consistent set of services for customers and won't introduce conflicts, you could build your own tklpatch that installs what you need. You'd just need to do it yourself.
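In case it helps, here's a rough sketch of what a tklpatch looks like. As I understand it, a patch is just a directory containing an executable conf script (run inside the appliance's chroot) plus an optional overlay directory copied over the filesystem. All the names below (mypatch, the subversion package, the ISO filename) are only examples:

```shell
# Sketch of a minimal tklpatch -- directory and package names are examples
mkdir -p mypatch/overlay/etc

# The conf script runs inside the appliance chroot during patching
cat > mypatch/conf <<'EOF'
#!/bin/bash -ex
apt-get update
apt-get -y install subversion
EOF
chmod +x mypatch/conf

# Apply the patch against a base appliance ISO
tklpatch turnkey-core.iso mypatch
```

The result is a new ISO with your extra services baked in, which you could then deploy per customer.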
Not exactly the solution you may be looking for...
But the best way to go IMO is to run some sort of hypervisor OS on bare metal. For any who frequent these forums, you can probably stop reading now! (Here comes the PVE plug! :D)
Assuming that you have access to a (spare) PC with a CPU which supports virtualisation extensions (i.e. AMD-V/VT-x - AFAIK all AMD CPUs from skt AM2 on & most Intel CPUs from Core2Duo on [check on Intel's website] - and has them enabled in BIOS), you could run ProxmoxVE (a lightweight, free, open source, headless, Debian-based hypervisor). Since v2.0 (currently v2.1) it includes the TKL library (specific post here) as OVZ templates ready for download from within the WebUI. Appliances running inside OVZ containers use far fewer resources than a 'normal' VM. OVZ only supports Linux OSs (pre-built as OVZ templates), but PVE also includes KVM virtualisation so you can install other OSs (including Windows) from ISO as usual.
Even if you don't have access to separate hardware (and/or want to keep running the whole thing as a VM on your desktop), PVE should install fine in VirtualBox (although the last time I tested was v1.9), again assuming that your PC has virtualisation extensions enabled. You could then run all your TKL appliances as OVZ containers within that (your resource overhead should be much lower than currently).
The beauty of doing it that way is that you reap the lower resource overheads while each VM remains independent of the others.
I haven't done this before so I'm guessing
But I would imagine that option C would be the best way to go. Then you could connect one NIC to your router and have a pfSense (or similar) VM running in a DMZ. Then all your other appliances could run on a separate subnet so you aren't using the same set of IPs as your local network.
Then you could use the NAT function of pfSense to forward the relevant ports as you desire.
FYI I just have all my servers running on the same subnet as the rest of my LAN. I have 100 IPs available for servers (i.e. Proxmox and its VMs - which is overkill for me), 100 IPs for printers, modem, router, wireless access points and desktops (which is also overkill for me) and 50 IPs dedicated to DHCP (for laptops, tablets, smartphones, etc. - which is overkill too).
Hardware router for virtual servers
Thanks for the tips.
I briefly tried pfSense in a VM container and got nowhere fast.
Got it working by using an old consumer Internet/wifi router. This router handles the 192.168.10.x network and works as the router, firewall, and port forwarder for PVE. The WAN port has an IP from the office network and is in the DMZ on the office's Internet router.
I hear what you're saying about there probably being more than enough IPs. But I figure this additional level of 'abstraction' helps make this setup very 'portable'. (They can do whatever they want with the office network; as long as my router's IP is in their DMZ, it's all good.)
Plus if I do something amazingly creative (read: stupid) there is a bit less of a chance they will be affected. Lol
-- David
Computer Tech / Web Dev
Ontario, Canada
Good one
Sounds like you've got it all sorted then.
For future reference: for all things PVE related, their forums/mailing list are a pretty good resource, with an active community as well as very active devs. Someone on the mailing list was recently discussing using a pfSense VM as a firewall/router, which is part of the reason why I thought that was probably your best bet.
Instead of 1 TKL with all
Instead of one TKL appliance with everything combined, you could approach it differently.
I am using a Red Hat/SUSE VMware instance in which I have multiple TurnKey Linux machines running inside LXC containers at native speed.
It's a bit difficult to explain...
advantage:
disadvantage:
But it works great:
QUOTE: ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
Personally I prefer Proxmox
Because you get the advantage of lightweight container virtualisation (OVZ rather than LXC), but you also have KVM (for non-Linux OSs and/or ISO installs). Having access to the TKL library (as OVZ templates) from the WebUI is really handy too, I find.
Let's say
Let's say I have the LAMP appliance; how can I install the Canvas appliance onto my VPS? I really can't buy another VPS. I want it with the tweaking that comes with the TKL Canvas appliance.
Omar S. Ahmed
No easy way...
Probably best to start with the Canvas appliance and then manually build in your required additional functionality...
Just run a basic linux OS on
Just run a basic linux OS on your VPS (or TKL core)
Within that OS, create a separate bridge and give it a 192.168.2.xxx address range.
Now you have a completely separate network within your box only.
Create one iptables NAT rule for outgoing traffic from the bridge to your real ethernet adapter...
Create one or more LXC containers with TKL appliances (LAMP, SVN, whatever) or other Linux distros (Debian, Fedora, Ubuntu).
Start the ones you need and they will all be available on the 192.168.2.xx address range...
Now decide which of them should really be reachable from outside, and create iptables forwarding rules between the bridge and the real ethernet adapter.
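The steps above might look roughly like this. This is only a sketch and needs root; the interface names (eth0, br0), the subnet, and the container IP (192.168.2.10) are examples you would adapt to your own setup:

```shell
#!/bin/bash
# Sketch: private bridge + NAT for containers -- names/addresses are examples

# Create an internal bridge with its own private subnet
brctl addbr br0
ip addr add 192.168.2.1/24 dev br0
ip link set br0 up

# Enable forwarding and NAT outgoing traffic from the bridge subnet
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

# Expose one container's web server to the outside:
# forward incoming port 80 on the real adapter to 192.168.2.10
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.2.10:80
```

Containers that you don't forward a port to stay reachable only from inside the box.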
You can have many TKL appliances on your disk, not all of them need to be running
Stopping and starting an appliance only takes a few seconds. This is because it does not need to load a kernel, since it uses the already-loaded kernel of the host.
Each LXC container with a TKL appliance will run its processes natively on the host and will use the host's kernel; memory is used only for the processes that are running.
So with one VPS, you can run many TKL appliances, or have them ready to stop/start within seconds when you need them.
But... it is not straight out of the box. You need to do some tinkering...
When the new TKL 12 or 13 versions are out, I will post some scripts which will take a TKL OpenVZ tar file, convert it to LXC container files, and generate the LXC config needed for starting it...
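In the meantime, here is a rough idea of what such a conversion involves: an OVZ template is essentially a tarball of a root filesystem, so you unpack it and point an LXC config at it. Everything here is illustrative (the template filename, the container name, the IP), and the config uses the old lxc.* key syntax of the LXC versions of that era:

```shell
#!/bin/bash
# Sketch: unpack a TKL OVZ template as an LXC container -- names are examples
TEMPLATE=debian-6-turnkey-lamp_11.3-1_i386.tar.gz   # example OVZ template
NAME=lamp1
ROOT=/var/lib/lxc/$NAME/rootfs

mkdir -p "$ROOT"
tar xzf "$TEMPLATE" -C "$ROOT"

# Minimal LXC config: hostname, rootfs, and a veth NIC on the bridge
cat > /var/lib/lxc/$NAME/config <<EOF
lxc.utsname = $NAME
lxc.rootfs = $ROOT
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.2.10/24
EOF

lxc-start -n $NAME -d
```

A real converter would also have to fix things like the container's inittab and getty entries, which is presumably what Hans' scripts will handle.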
Instead of using LXC you could also do this with OpenVZ.
OpenVZ requires an adapted Linux kernel to run; LXC is available by default in the newer Linux kernels of every distro. OpenVZ is more mature; LXC is newer and still in full development.
LXC is like OpenVZ without requiring to have a modified Linux kernel... Jeremy will correct me if I am wrong... :)
THANX
Thanx Hans Harder :D
Omar S. Ahmed
Good one Hans! :)
Good thinking... It didn't even occur to me to do that!
And yes, OpenVZ and LXC are very similar. My reading suggests that LXC is still lacking in security (the container root user can execute code on the host as root) and as such is generally still considered 'experimental' and not recommended for production environments. However, in this sort of usage scenario it is probably quite reasonable to use it to achieve your ends, especially if you take the same sort of precautions that you would on a standalone TKL production system (both on the host and on the guests).
If installing an alternative OS/kernel is an option, personally I would be looking at installing Proxmox (or Debian Squeeze 64-bit and installing Proxmox on that)... and then using OVZ for your containers.
Jeremy, any idea why there is
Jeremy, any idea why there is no TKL Proxmox appliance?
Looks to me like that could be a very handy TKL appliance, and not that difficult...
That would give people the base to have one larger VPS with multiple TKL appliances, or use a separate VPS for each TKL appliance.
Running a TKL instance inside a chroot
Another solution could be to run TKL instances inside a chroot. Then you can have several TKL instances running on the same server (provided that they don't conflict with each other; for example, LAMP and LEMP both use ports 80/443). I think that running several TKL chroot instances can be more efficient and easier to manage than running them on virtual machines.
I am working towards such a solution and maybe it can be made to work well: https://github.com/dashohoxha/B-Translator/tree/master/TKL
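For anyone curious what the chroot approach looks like in practice, a bare-bones sketch is below. The chroot path and service names are examples only; since no init runs inside the chroot, each service has to be started by hand:

```shell
#!/bin/bash
# Sketch: start services of a TKL instance inside a chroot -- paths are examples
CHROOT=/srv/tkl-lamp

# Bind-mount the virtual filesystems the services expect
mount -t proc proc "$CHROOT/proc"
mount --bind /dev "$CHROOT/dev"
mount --bind /sys "$CHROOT/sys"

# Start the services inside the chroot (there is no init in there)
chroot "$CHROOT" /etc/init.d/mysql start
chroot "$CHROOT" /etc/init.d/apache2 start
```

The port-conflict caveat above applies here with full force: every chroot shares the host's network stack, so only one instance can bind a given port.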
nope... It makes no
nope...
It makes no difference what Linux host you use; you can use any recent standard Linux distribution
(Debian, Ubuntu, Fedora, whatever). It also makes no difference how it's hosted: bare metal or in a VM.
Just use LXC or, if possible, OVZ; there is no overhead, just like a chrooted instance... but they also have the admin and config interfaces.
I have been running TKL containers on multiple RHEL 5 hosts for 1.5 years (LAMP, SVN, ...) without any problems.
OVZ/LXC are much better than vanilla chroot
Like Hans says, they require almost no additional overhead in comparison to chroot, but isolate the machines much better, both security-wise and resource-wise.
A crash or a memory leak in a chrooted machine will bring down and/or impact the whole system, whereas in a container the main system (and thus any other containers) is protected. Also, a compromised chrooted machine can launch code on the host, which is not possible in OVZ (or LXC, if it is configured properly).