rcd's picture

I created a privileged container with TKL fileserver 16.1-1 in Proxmox 6.4, but the Webmin system didn't start. After some poking around I found that stunnel4 didn't start, apparently due to some problem with running in privileged containers. So I created another, unprivileged container, but as I use bind mounts to export ZFS volumes into the container, I now can't write files.

I know there is a way to map user IDs, described here, but it is so nitty-gritty complicated that I just can't wrap my head around how it works.
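For reference, the ID mapping usually boils down to a handful of `lxc.idmap` lines in the container's config file. The sketch below is a hypothetical example only (the VMID, the UID/GID 1000, and the file owner are assumptions, not from this thread): it passes container user 1000 straight through to host user 1000, so files on the bind mount owned by that host user become writable, while everything else keeps the default 100000+ offset.

```
# /etc/pve/lxc/<VMID>.conf — hypothetical unprivileged container
# Map container UIDs/GIDs 0-999 to host 100000-100999 (the default offset)
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
# Pass container UID/GID 1000 through unchanged to host UID/GID 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
# Map the remaining IDs back onto the default offset range
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The host's `/etc/subuid` and `/etc/subgid` also need a `root:1000:1` entry to permit root to delegate that ID. It's fiddly, which is presumably why it feels so nitty-gritty.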

Frankly, I am fine with running a privileged container, as it's for my homelab on my private LAN, except of course then Webmin doesn't work.

Is there a solution to this?

Jeremy Davis's picture

If you enable the container to run "nested", it should run fine as a privileged container. FWIW the issue is that the additional security measures implemented in many Debian Buster systemd services aren't compatible with running within a privileged container (due to bugs and/or limitations in the interaction between the kernel's cgroups provision and the version of systemd in Debian Buster, which is the base of both Proxmox v6 and TurnKey v16). If I understand the issue correctly, it should "disappear" once both the host and guest move to (the soon-to-be-released) Debian Bullseye.
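Concretely, enabling nesting is a one-liner on the Proxmox host via `pct` (the VMID 101 below is a placeholder, substitute your own):

```shell
# Enable the "nesting" feature on container 101, then restart it
# so the change takes effect (run on the Proxmox host as root):
pct set 101 --features nesting=1
pct stop 101 && pct start 101
```

Equivalently, you can add `features: nesting=1` to `/etc/pve/lxc/101.conf` by hand, or tick "Nesting" under the container's Options tab in the web UI.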

badco's picture

Can I ask, how are you using bind mounts to export ZFS volumes to the fileserver container?

I am still trying to wrap my head around getting my fileserver setup on Proxmox.

Jeremy Davis's picture

I have an LVM volume called /dev/mapper/storage-a, which is mounted to /media/storage-a. I have it (bind) mounted within my fileserver LXC container at /srv/storage. To configure the bind mount, stop the container, then edit its config file. You should find the config file at /etc/pve/lxc/<VMID>.conf. Then add this line:

mp0: /host/mountpoint,mp=/guest/mountpoint

E.g. for me, the line is:

mp0: /media/storage-a,mp=/srv/storage

Save and exit, then boot your container. To double-check, within the running container, run:

mount | grep /guest/mountpoint

And that should return something like this:

/dev/actual/device on /guest/mountpoint type ext4 (rw,relatime,data=ordered)

For example, for me, it returns:

/dev/mapper/storage-a on /srv/storage type ext4 (rw,relatime,data=ordered)
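Putting the steps above together, the whole procedure from the Proxmox host looks something like this sketch (VMID 101 and the paths are placeholders matching my example, not necessarily your setup):

```shell
# 1. Stop the container before touching its config
pct stop 101

# 2. Append the bind-mount line to the container config
#    (or edit /etc/pve/lxc/101.conf in an editor instead)
echo 'mp0: /media/storage-a,mp=/srv/storage' >> /etc/pve/lxc/101.conf

# 3. Boot the container again
pct start 101

# 4. Verify the mount from inside the container
pct exec 101 -- sh -c 'mount | grep /srv/storage'
```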
badco's picture

The part I am getting stuck on is going from my ZFS pool (data1) to mounting it in the container.

You mention LVM, where does that come in? I'm not very familiar with LVM.

Jeremy Davis's picture

LVM doesn't necessarily come into it, and it sounds like in your scenario it may not be relevant at all. I mentioned it just so that my example made sense. (How else would I have a "device" called /dev/mapper/storage-a?!) It could just as well have been a partition on a physical drive (e.g. /dev/sdb1).

I don't have a ton of experience with ZFS, but I would assume that, to be used, it would still need to be mounted somewhere in the filesystem. To find it, I suggest trying this:

mount | grep "type zfs"
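Alternatively, ZFS tracks its own mountpoints, so you can ask it directly. Assuming your pool is named data1 (as you mentioned), something like this on the Proxmox host should show where each dataset lands in the filesystem:

```shell
# List every dataset in the pool with its mountpoint
zfs list -r -o name,mountpoint data1

# Or just the mountpoint property of the pool's root dataset
zfs get mountpoint data1
```

Whatever path that reports is what you'd use as the /host/mountpoint side of the mp0 line from my earlier post.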

Hopefully that gives you enough additional info (along with my previous post) to work out whatever is still missing to get it working. Please ask for further clarification if need be.

badco's picture

Sorry, I keep assuming in my head that everyone is running Proxmox with ZFS and TKL! I should be checking the Proxmox documentation.
