mikerocket's picture


I am using a Proxmox server with a TKL Nextcloud container, and I am having an issue with the TKL Nextcloud. I have Nextcloud configured as an unprivileged container. I added an NFS export to the Proxmox server so that the TKL Nextcloud container can use it via a bind mount.

On the Proxmox server, I executed this command:

pct set 108 --mp0 /mnt/pve/nas/cloud_lxc,mp=/srv/cloud_data

The shared folder from my NAS then shows up inside the container. However, every ~5 hours the mount point disappears within the container, while it remains intact on the Proxmox host. The only way for me to re-establish the mount point in the container is to reboot it.

This is where the crontab issue comes in. Within the Nextcloud container, I added an entry via crontab -e with these parameters:

0 0,4,8,12,16,20 * * * root /sbin/shutdown -r now

However, the container is not executing the cron job. I would rather have the bind mount keep working than reboot the Nextcloud, but it seems the cron job is not working either.
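A likely explanation for the cron job not running is a format mismatch: entries added via `crontab -e` must not contain a user field, while files under /etc/cron.d/ (and /etc/crontab) require one. The line quoted above uses the /etc/cron.d/ format, so cron would reject it inside a user crontab. A sketch of both forms:

```shell
# In a root crontab edited with `crontab -e` (NO user field):
# 0 0,4,8,12,16,20 * * * /sbin/shutdown -r now

# In a file under /etc/cron.d/ (user field REQUIRED, as in the original line):
# 0 0,4,8,12,16,20 * * * root /sbin/shutdown -r now
```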


mikerocket's picture

I got the cron working. I guess putting it in /etc/cron.d/ works; however, it didn't re-establish the mount point.

It seems like the only way to re-establish the mount point was executing the reboot manually.

Does anyone know why the mount point disconnects in the first place?

Jeremy Davis's picture

If the mount is set up in Proxmox (and your TurnKey Nextcloud is essentially unaware of it), that suggests to me some sort of issue on the Proxmox host and/or an intermittent network issue!? I can see why you might think it's something up with TurnKey, as the mount still appears on the Proxmox host. However, if I understand correctly, the guest is essentially unaware of the mount, so it seems more likely to me that the Proxmox forums would have better luck helping with the cause of this.

As I hint above, my guess is that the NAS share is momentarily disconnecting every ~5 hours and for some reason the re-connection is not propagating to the guest. If I'm right, you should be able to confirm that via the Proxmox host logs. FWIW, a quick google also turned up old reports of NFS shares disconnecting when the Proxmox host is under load - although I'm not sure that's related, as in those reports the NFS share was mounted inside the container.

To double-check my guess, you'll want to check both the host and the guest system logs. The journal (i.e. via 'journalctl') should contain the relevant info, but TurnKey also still has a /var/log/syslog (not sure if Proxmox does?).
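Something like the following could be run on both the host and the guest shortly after the mount disappears; the search terms here are just an assumption about what the relevant log entries might contain:

```shell
# Run on both the Proxmox host and inside the container around the
# ~5 hour mark when the mount vanishes:
journalctl --since "-6 hours" | grep -iE 'nfs|stale|mount'

# TurnKey also keeps a plain syslog file:
grep -iE 'nfs|stale|mount' /var/log/syslog
```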

Regardless, it might be worth asking on the Proxmox forums? Perhaps they have some insight?

Whilst the process you have noted (i.e. mount the NFS share on the host, then bind mount the host dir into the guest) appears to be the preferred option, perhaps mounting the NFS share directly inside the container might be worth a try? TBH, I'm not sure what changes would be needed, but I'm almost certain it won't "just work" OOTB - and v16.x templates don't run nicely as privileged guests on Proxmox v6.x.
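If you do want to experiment with mounting NFS directly inside the container, Proxmox has a `features` option for this. A sketch, assuming the container ID 108 from earlier in the thread (note this generally only works for privileged containers, with the caveats mentioned above, and the NAS IP/export path below are placeholders):

```shell
# On the Proxmox host: allow the container to perform NFS mounts itself.
pct set 108 --features mount=nfs

# Then, inside the container, mount the export directly
# (server address and export path are hypothetical):
# mount -t nfs 192.168.1.10:/cloud_lxc /srv/cloud_data
```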

Finally, I'm not 100% sure, but I have read that the initial NFS disconnection can be caused by excessive latency between the remote NFS server and the host. That still doesn't explain why the files don't re-propagate into the container when they're visible on the host, but I have read that they may get flagged as "stale". Remounting the share on the host should resolve the issue and clear the "stale" flag. So that might be another approach, i.e. a host cron job that checks for a "stale" NFS mount and remounts it when needed?
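The stale-mount check above could be sketched roughly like this. The mount point path is taken from earlier in the thread, and detecting staleness via a failing `stat` is an assumption about how the failure shows up, so treat this as a starting point rather than a drop-in fix:

```shell
#!/bin/sh
# Sketch of a host-side check for a stale NFS mount.

check_and_remount() {
    mp="$1"
    # stat fails on a stale NFS handle; the timeout also catches a hung server.
    if timeout 10 stat "$mp" >/dev/null 2>&1; then
        echo "ok: $mp"
    else
        echo "stale: remounting $mp"
        # The actual remount action - commented out so the sketch is safe to run:
        # umount -f "$mp" && mount "$mp"
        return 1
    fi
}

# Example host cron entry (e.g. in /etc/cron.d/, hypothetical script path):
# */15 * * * * root /usr/local/sbin/check_nfs_mount.sh
# where the script calls: check_and_remount /mnt/pve/nas/cloud_lxc
```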

If you need more input, please feel free to post back with your logs etc. If you do post on Proxmox forums, please post a link here and ideally post back if you manage to resolve it.
