johns67w's picture

Hi, I am awaiting activation; in the meantime I would appreciate some tech support please.

My current setup:

LXC (via Proxmox) 8, just updated from PVE 7, but the issue was the same before upgrading to PVE 8.

turnkey-version: 

The setup worked, but my homelab has been disconnected for about a year. I tried to turn the server on, but cannot access Webmin.

I tried to run 'service stunnel4 restart' and got this response:

Failed to restart stunnel4.service: Unit stunnel4.service is masked
root@gtcloud ~# service stunnel4 status
* stunnel4.service
     Loaded: masked (Reason: Unit stunnel4.service is masked.)
     Active: inactive (dead)

I have tried to follow the steps in this thread, with no luck, but results are below: Issue accessing webmin and webshell on Turnkey File Server | TurnKey GNU/Linux (turnkeylinux.org)

root@gtcloud ~# systemctl stop webmin shellinabox stunnel4@webmin stunnel4@shellinabox
root@gtcloud ~# mkdir -p /var/lib/stunnel4
root@gtcloud ~# rm -f /var/lib/stunnel4/*.pid
root@gtcloud ~# chown stunnel4:stunnel4 /var/lib/stunnel4
root@gtcloud ~# chmod 0755 /var/lib/stunnel4
root@gtcloud ~# systemctl start webmin shellinabox

root@gtcloud ~# 
root@gtcloud ~# systemctl status webmin shellinabox stunnel4@webmin stunnel4@shellinabox | grep '^*' -A5

* webmin.service - Webmin server daemon
     Loaded: loaded (/lib/systemd/system/webmin.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/webmin.service.d
             `-override.conf
     Active: active (running) since Fri 2024-01-19 21:38:34 GMT; 21s ago
    Process: 238149 ExecStart=/usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf (code=exited, status=0/SUCCESS)
--
* shellinabox.service - Shell In A Box Daemon (aka WebShell)
     Loaded: loaded (/etc/init.d/shellinabox; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 28s ago
    Process: 238150 ExecStart=/etc/init.d/shellinabox start (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 4532)
     Memory: 1.6M
--
* stunnel4@webmin.service - Universal SSL tunnel for network daemons (webmin)
     Loaded: loaded (/lib/systemd/system/stunnel4@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 30s ago
    Process: 238141 ExecStart=/usr/bin/stunnel4 /etc/stunnel/webmin.conf (code=exited, status=0/SUCCESS)
   Main PID: 238145 (stunnel4)
      Tasks: 2 (limit: 4532)
--
* stunnel4@shellinabox.service - Universal SSL tunnel for network daemons (shellinabox)
     Loaded: loaded (/lib/systemd/system/stunnel4@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 33s ago
    Process: 238142 ExecStart=/usr/bin/stunnel4 /etc/stunnel/shellinabox.conf (code=exited, status=0/SUCCESS)
   Main PID: 238147 (stunnel4)
      Tasks: 2 (limit: 4532)
root@gtcloud ~# 
root@gtcloud ~# df -h /
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--201--disk--0   95G   32G   60G  35% /


root@gtcloud ~# df -i /
Filesystem                        Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/pve-vm--201--disk--0 6291456 1239621 5051835   20% /
root@gtcloud ~# journalctl -u stunnel4@webmin | tail -40
-- Boot b4aabbf0a972487cb772f79177888c85 --
Jan 18 23:27:21 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 18 23:27:22 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 18 23:42:56 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Terminated
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Terminating 1 service thread(s)
Jan 18 23:42:57 gtcloud stunnel[301]: LOG5[main]: Service threads terminated
Jan 18 23:42:57 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 18 23:42:57 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
-- Boot 81003b65649b4b32bfa30be269838745 --
Jan 19 19:09:13 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: FIPS mode disabled
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Configuration successful
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 19:09:13 gtcloud systemd[1]: stunnel4@webmin.service: Can't open PID file /var/lib/stunnel4/webmin.pid (yet?) after start: Operation not permitted
Jan 19 19:09:13 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminated
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminating 1 service thread(s)
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Service threads terminated
Jan 19 21:37:07 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: FIPS mode disabled
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Configuration successful
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Binding service [webmin] to :::12321: Address already in use (98)
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 21:38:31 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
root@gtcloud ~# 
root@gtcloud ~# ls -la /var/lib/stunnel4
total 16
drwxr-xr-x  2 stunnel4 stunnel4 4096 Jan 19 21:38 .
drwxr-xr-x 29 root     root     4096 Feb 25  2023 ..
-rw-r--r--  1 stunnel4 stunnel4    7 Jan 19 21:38 shellinabox.pid
-rw-r--r--  1 stunnel4 stunnel4    7 Jan 19 21:38 webmin.pid
root@gtcloud ~# 

root@gtcloud ~# netstat -tlnp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 172.27.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.24.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:9819            0.0.0.0:*               LISTEN      275885/docker-proxy 
tcp        0      0 127.0.0.1:32401         0.0.0.0:*               LISTEN      3188/Plex Media Ser 
tcp        0      0 172.17.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      2161/docker-proxy   
tcp        0      0 127.0.0.1:32600         0.0.0.0:*               LISTEN      9046/Plex Tuner Ser 
tcp        0      0 0.0.0.0:6052            0.0.0.0:*               LISTEN      1894/python3        
tcp        0      0 0.0.0.0:8123            0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:8083            0.0.0.0:*               LISTEN      2597/docker-proxy   
tcp        0      0 192.168.16.1:40000      0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      2839/docker-proxy   
tcp        0      0 0.0.0.0:1852            0.0.0.0:*               LISTEN      2205/docker-proxy   
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      238366/perl         
tcp        0      0 172.30.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      2787/docker-proxy   
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      126/systemd-resolve 
tcp        0      0 192.168.48.1:40000      0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      126/systemd-resolve 
tcp        0      0 0.0.0.0:1114            0.0.0.0:*               LISTEN      2227/docker-proxy   
tcp        0      0 0.0.0.0:1115            0.0.0.0:*               LISTEN      2949/docker-proxy   
tcp        0      0 0.0.0.0:1116            0.0.0.0:*               LISTEN      2921/docker-proxy   
tcp        0      0 172.25.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.21.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.29.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.22.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.26.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.19.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 192.168.32.1:40000      0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.23.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.28.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 192.168.128.1:40000     0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 192.168.64.1:40000      0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 172.18.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 192.168.112.1:40000     0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      2814/docker-proxy   
tcp        0      0 0.0.0.0:8980            0.0.0.0:*               LISTEN      1705/docker-proxy   
tcp        0      0 0.0.0.0:8989            0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 127.0.0.1:12319         0.0.0.0:*               LISTEN      238196/shellinaboxd 
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      928/master          
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2692/docker-proxy   
tcp        0      0 0.0.0.0:81              0.0.0.0:*               LISTEN      2661/docker-proxy   
tcp        0      0 0.0.0.0:83              0.0.0.0:*               LISTEN      2585/docker-proxy   
tcp        0      0 0.0.0.0:89              0.0.0.0:*               LISTEN      2125/docker-proxy   
tcp        0      0 0.0.0.0:12321           0.0.0.0:*               LISTEN      238145/stunnel4     
tcp        0      0 0.0.0.0:8200            0.0.0.0:*               LISTEN      2051/docker-proxy   
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      330/sshd: /usr/sbin 
tcp        0      0 127.0.0.1:37217         0.0.0.0:*               LISTEN      7236/Plex Plug-in [ 
tcp        0      0 0.0.0.0:8640            0.0.0.0:*               LISTEN      275507/docker-proxy 
tcp        0      0 172.20.0.1:40000        0.0.0.0:*               LISTEN      2512/python3        
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      2625/docker-proxy   
tcp        0      0 0.0.0.0:444             0.0.0.0:*               LISTEN      2563/docker-proxy   
tcp6       0      0 :::32400                :::*                    LISTEN      3188/Plex Media Ser 
tcp6       0      0 :::9819                 :::*                    LISTEN      275891/docker-proxy 
tcp6       0      0 fe80::42:d5ff:fec:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::8181                 :::*                    LISTEN      2167/docker-proxy   
tcp6       0      0 :::8123                 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::8083                 :::*                    LISTEN      2612/docker-proxy   
tcp6       0      0 :::8000                 :::*                    LISTEN      2849/docker-proxy   
tcp6       0      0 fe80::42:4eff:fe3:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::1852                 :::*                    LISTEN      2213/docker-proxy   
tcp6       0      0 :::9443                 :::*                    LISTEN      2793/docker-proxy   
tcp6       0      0 :::5355                 :::*                    LISTEN      126/systemd-resolve 
tcp6       0      0 fe80::42:5cff:fe5:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 fe80::42:ddff:fe9:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 fe80::42:89ff:fe1:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::1114                 :::*                    LISTEN      2233/docker-proxy   
tcp6       0      0 :::1115                 :::*                    LISTEN      2955/docker-proxy   
tcp6       0      0 :::1116                 :::*                    LISTEN      2928/docker-proxy   
tcp6       0      0 fe80::42:ff:fe61::40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 fe80::42:79ff:fef:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 fe80::42:53ff:fea:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 fe80::42:70ff:fe0:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::8840                 :::*                    LISTEN      1876/./WatchYourLAN 
tcp6       0      0 fe80::42:7ff:fe4c:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::9000                 :::*                    LISTEN      2819/docker-proxy   
tcp6       0      0 fe80::42:89ff:fee:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::8980                 :::*                    LISTEN      1712/docker-proxy   
tcp6       0      0 :::80                   :::*                    LISTEN      2701/docker-proxy   
tcp6       0      0 :::81                   :::*                    LISTEN      2669/docker-proxy   
tcp6       0      0 :::83                   :::*                    LISTEN      2596/docker-proxy   
tcp6       0      0 :::89                   :::*                    LISTEN      2130/docker-proxy   
tcp6       0      0 fe80::42:bcff:fe8:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::12320                :::*                    LISTEN      238147/stunnel4     
tcp6       0      0 :::8200                 :::*                    LISTEN      2057/docker-proxy   
tcp6       0      0 :::22                   :::*                    LISTEN      330/sshd: /usr/sbin 
tcp6       0      0 :::8640                 :::*                    LISTEN      275514/docker-proxy 
tcp6       0      0 :::443                  :::*                    LISTEN      2631/docker-proxy   
tcp6       0      0 :::444                  :::*                    LISTEN      2568/docker-proxy   
tcp6       0      0 fe80::42:87ff:fef:40000 :::*                    LISTEN      2512/python3        
tcp6       0      0 :::6415                 :::*                    LISTEN      2044/node           
root@gtcloud ~# 

root@gtcloud ~# service stunnel4 status
* stunnel4.service
     Loaded: masked (Reason: Unit stunnel4.service is masked.)
     Active: inactive (dead)
root@gtcloud ~# 

root@gtcloud ~# journalctl -b -u webmin.service -u stunnel4@webmin.service
-- Journal begins at Mon 2023-02-27 01:26:45 GMT, ends at Fri 2024-01-19 22:12:37 GMT. --
Jan 19 19:09:13 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,S>
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: FIPS mode disabled
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Configuration successful
Jan 19 19:09:13 gtcloud stunnel[301]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 19:09:13 gtcloud systemd[1]: stunnel4@webmin.service: Can't open PID file /var/lib/stunnel4/webmin.pid (yet?) a>
Jan 19 19:09:13 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 19:09:13 gtcloud systemd[1]: Starting Webmin server daemon...
Jan 19 19:09:15 gtcloud perl[329]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rh>
Jan 19 19:09:17 gtcloud webmin[329]: Webmin starting
Jan 19 19:09:18 gtcloud systemd[1]: webmin.service: Can't open PID file /var/webmin/miniserv.pid (yet?) after start: O>
Jan 19 19:09:18 gtcloud systemd[1]: Started Webmin server daemon.
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Webmin server daemon...
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Main process exited, code=exited, status=1/FAILURE
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Failed with result 'exit-code'.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Webmin server daemon.
Jan 19 21:37:07 gtcloud systemd[1]: webmin.service: Consumed 1.698s CPU time.
Jan 19 21:37:07 gtcloud systemd[1]: Stopping Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminated
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Terminating 1 service thread(s)
Jan 19 21:37:07 gtcloud stunnel[324]: LOG5[main]: Service threads terminated
Jan 19 21:37:07 gtcloud systemd[1]: stunnel4@webmin.service: Succeeded.
Jan 19 21:37:07 gtcloud systemd[1]: Stopped Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Universal SSL tunnel for network daemons (webmin)...
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: stunnel 5.56 on x86_64-pc-linux-gnu platform
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Compiled with OpenSSL 1.1.1k  25 Mar 2021
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Running  with OpenSSL 1.1.1w  11 Sep 2023
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PS>
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Reading configuration from file /etc/stunnel/webmin.conf
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: UTF-8 byte order mark not detected
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: FIPS mode disabled
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Configuration successful
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Binding service [webmin] to :::12321: Address already in use (98)
Jan 19 21:38:31 gtcloud stunnel[238141]: LOG5[ui]: Switched to chroot directory: /var/lib/stunnel4/
Jan 19 21:38:31 gtcloud systemd[1]: Started Universal SSL tunnel for network daemons (webmin).
Jan 19 21:38:31 gtcloud systemd[1]: Starting Webmin server daemon...
Jan 19 21:38:32 gtcloud perl[238149]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser=>
Jan 19 21:38:34 gtcloud webmin[238149]: Webmin starting
Jan 19 21:38:34 gtcloud systemd[1]: Started Webmin server daemon.
lines 22-45/45 (END)

I tried to run apt-get update and got the below:

root@gtcloud ~# apt-get update
Err:1 http://archive.turnkeylinux.org/debian bullseye-security InRelease
  Temporary failure resolving 'archive.turnkeylinux.org'
Err:2 http://archive.turnkeylinux.org/debian bullseye InRelease
  Temporary failure resolving 'archive.turnkeylinux.org'
Err:3 http://security.debian.org bullseye-security InRelease
  Temporary failure resolving 'security.debian.org'
Err:4 https://pkgs.tailscale.com/stable/debian bullseye InRelease
  Temporary failure resolving 'pkgs.tailscale.com'
Err:5 https://download.docker.com/linux/debian bullseye InRelease
  Temporary failure resolving 'download.docker.com'
Err:6 http://deb.debian.org/debian bullseye InRelease
  Temporary failure resolving 'deb.debian.org'
Reading package lists... Done
W: Failed to fetch https://download.docker.com/linux/debian/dists/bullseye/InRelease  Temporary failure resolving 'download.docker.com'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye-security/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://security.debian.org/dists/bullseye-security/InRelease  Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://deb.debian.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'deb.debian.org'
W: Failed to fetch https://pkgs.tailscale.com/stable/debian/dists/bullseye/InRelease  Temporary failure resolving 'pkgs.tailscale.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@gtcloud ~# 
Jeremy Davis's picture

I'm not really sure what or why, but on the face of it, it looks like there is something wrong with your networking?! Perhaps it's just the network config of the TurnKey container, but perhaps it's something else?

The stunnel webmin service is running:

* stunnel4@webmin.service - Universal SSL tunnel for network daemons (webmin)
     Loaded: loaded (/lib/systemd/system/stunnel4@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-01-19 21:38:31 GMT; 30s ago
    Process: 238141 ExecStart=/usr/bin/stunnel4 /etc/stunnel/webmin.conf (code=exited, status=0/SUCCESS)
   Main PID: 238145 (stunnel4)
      Tasks: 2 (limit: 4532)

Note the "Active" line says "active (running)". If it wasn't running, then that would say something else, like "inactive (dead)", "active (exited)" or similar (there are a range of possible states - but "active (running)" is the one we want).

And the service is listening as it should be (as noted in your netstat output):

tcp        0      0 0.0.0.0:12321           0.0.0.0:*               LISTEN      238145/stunnel4

Note that the PID (238145) confirms that it's the correct service (matches the PID noted in the service status). The '0.0.0.0:12321' means that it is listening on all interfaces ('0.0.0.0') on port 12321.
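
FWIW, if netstat isn't available, 'ss' (from iproute2) gives the same info - something like this would show just the relevant ports:

ss -tlnp | grep -E '12321|12320|10000'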

Webmin is also running:

* webmin.service - Webmin server daemon
     Loaded: loaded (/lib/systemd/system/webmin.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/webmin.service.d
             `-override.conf
     Active: active (running) since Fri 2024-01-19 21:38:34 GMT; 21s ago
    Process: 238149 ExecStart=/usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf (code=exited, status=0/SUCCESS)

And is also listening on port 10000 - as expected:

tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      238366/perl

That one would not have been so obvious, as the PID doesn't match, nor does it explicitly say "webmin" - but that's because Webmin uses its own miniserver, which is written in Perl (hence the 'perl' program name).

As something of an aside, I note that for some reason Webmin is also listening on all interfaces. That isn't as it should be; it should only be listening on localhost (127.0.0.1) - i.e. instead of '0.0.0.0:10000', it should be '127.0.0.1:10000'. Despite that not being as it should, it would not be causing your issues (it just means that Webmin is also available via plain HTTP on port 10000, where it should be hidden and only listening on localhost). Once the main issue is resolved, I can assist you to fix that (it should be pretty easy).
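
(For when we get to it: from memory, the listening address is controlled by the 'bind' setting in /etc/webmin/miniserv.conf. Something like the below should do it, but treat it as a rough sketch and double check before relying on it:)

grep -E '^(port|bind)=' /etc/webmin/miniserv.conf
# if no bind= line exists, adding one restricts Webmin to localhost
echo 'bind=127.0.0.1' >> /etc/webmin/miniserv.conf
systemctl restart webmin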

Your apt output also suggests network issues (although that's outgoing internet access, rather than incoming access):

W: Failed to fetch https://download.docker.com/linux/debian/dists/bullseye/InRelease  Temporary failure resolving 'download.docker.com'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye-security/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://security.debian.org/dists/bullseye-security/InRelease  Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://archive.turnkeylinux.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'archive.turnkeylinux.org'
W: Failed to fetch http://deb.debian.org/debian/dists/bullseye/InRelease  Temporary failure resolving 'deb.debian.org'
W: Failed to fetch https://pkgs.tailscale.com/stable/debian/dists/bullseye/InRelease  Temporary failure resolving 'pkgs.tailscale.com'

So my guess is that it's not just Webmin that isn't working; I suspect that many (if not all) of the other services are problematic too? Have you tried connecting to any of the other services? Are any of those working? If so, which ones? And what interface (IP) are they listening on?

BTW it looks like there is a lot going on in this container! Whilst in theory that shouldn't stop it working, personally I prefer to separate different workloads into different containers. That does mean some redundancy, but I consider that a feature, not a bug. The system overhead of containers is minimal and it means that if you have a problem with one, it won't bring everything down. It also means that everything else keeps working when you're doing maintenance.

Also, if any of the services in this container are publicly available (i.e. outside your local network - although if it's secured via VPN access, that's not so bad), I wouldn't be running a privileged container (as I assume you are - AFAIK Docker requires that?) - possibly with nesting enabled too. Using Podman may (or may not) be a workaround if you're sure you want to run Docker-style containers in an LXC container, as Podman supports "rootless" containers (I think Docker is working on "rootless" too, but I don't think it's the default yet). I personally prefer to ensure that any publicly facing LXC containers are unprivileged (if something goes wrong, the chances of a malicious actor getting access to the host system are vastly reduced). So if any services running on this container are publicly available, I'd encourage you to move them to an unprivileged container. As for Docker, I'd recommend running that in a "proper" VM instead (i.e. a KVM VM - not a container). Unprivileged containers (and VMs) provide much better isolation from your host system and make it much harder for any potential "bad guys". VMs do have higher overhead, but IMO that's a price worth paying.

Regardless, I doubt any of that is a direct cause of your issues.

It looks like the network setup in this container is pretty complex, as you're using multiple 192.168.x.x IP ranges, as well as 172.x.x.x ranges?! Usually you would only be using one or the other, although AFAIK Docker uses 172.17.0.0/16 by default. I can see Docker using some 172.17.x.x addresses, but there are other 172.x.x.x addresses too? Perhaps there is some clash between the Docker networks and your LAN network? Perhaps changing the default Docker IP range would fix it? Although I'd need to know a bit more about your network before I could be more confident.
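
(If it does turn out to be a clash, Docker's default ranges can be moved out of the way via /etc/docker/daemon.json - something like the below, followed by a Docker restart. Note that 10.200.0.0/16 is just an example pool, and existing user-defined networks keep their subnets until they're recreated:)

{
  "default-address-pools": [
    {"base": "10.200.0.0/16", "size": 24}
  ]
}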

I'm not sure how much use I'm going to be, but if you'd like my 2c, could you please share the output of:

cat /etc/network/interfaces

I don't think it will help much, but it might also be worth sharing the output of:

cat /etc/resolv.conf
ls -l /etc/resolv.conf
ip address
johns67w's picture

Thank you so much for your response. 

I will action your recommendations on security, LXC, VMs and Docker. 

Could this be to do with the fact that I have moved the server from 192.168.1.x to a new subnet, 192.168.50.x?

You are correct; I cannot access any of the other web UIs either (Portainer, dashboard, NginX etc.).

ping bbc.com works, and I can see in my router that the LXC is sending traffic out, including DNS requests etc.

The networks should not be too complicated, as the only things running are Docker containers (although this also includes MACVLANs on Docker). eth0 should be the main network, and tailscale0 is my zero-config VPN setup (Tailscale).

root@gtcloud ~# cat /etc/network/interfaces
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

allow-hotplug eth1
iface eth1 inet dhcp
 hostname core

root@gtcloud ~# cat /etc/resolv.conf
ls -l /etc/resolv.conf
# --- BEGIN PVE ---
search 8.8.8.8
nameserver 1.1.1.1
# --- END PVE ---
-rw-r--r-- 1 root root 72 Feb 16  2023 /etc/resolv.conf
root@gtcloud ~# 

root@gtcloud ~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 72:f9:61:5a:7f:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::70f9:61ff:fe5a:7fbb/64 scope link 
       valid_lft forever preferred_lft forever
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet6 fe80::9b92:b0e8:cd:b94b/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever
4: br-89f47b050cf0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:01:58:d8:ad brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.1/16 brd 172.22.255.255 scope global br-89f47b050cf0
       valid_lft forever preferred_lft forever
5: br-d5debe1f3d8f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:78:93:49:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.1/16 brd 172.27.255.255 scope global br-d5debe1f3d8f
       valid_lft forever preferred_lft forever
    inet6 fe80::42:78ff:fe93:49d4/64 scope link 
       valid_lft forever preferred_lft forever
6: br-ef17d21094be: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a2:c0:b9:c9 brd ff:ff:ff:ff:ff:ff
    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-ef17d21094be
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a2ff:fec0:b9c9/64 scope link 
       valid_lft forever preferred_lft forever
7: br-10bf9558f27d: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:12:d6:0b:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-10bf9558f27d
       valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:8d:a9:b0:42 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8dff:fea9:b042/64 scope link 
       valid_lft forever preferred_lft forever
9: br-1c7c29eedd27: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:54:1e:22:96 brd ff:ff:ff:ff:ff:ff
    inet 172.24.0.1/16 brd 172.24.255.255 scope global br-1c7c29eedd27
       valid_lft forever preferred_lft forever
10: br-bcd8c80b54fa: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:bf:23:1c:1b brd ff:ff:ff:ff:ff:ff
    inet 172.28.0.1/16 brd 172.28.255.255 scope global br-bcd8c80b54fa
       valid_lft forever preferred_lft forever
11: br-cebe929b3796: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:2f:be:8f:f9 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.1/16 brd 172.23.255.255 scope global br-cebe929b3796
       valid_lft forever preferred_lft forever
12: br-16d3894b9c96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:b8:e7:28:e9 brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.1/16 brd 172.30.255.255 scope global br-16d3894b9c96
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b8ff:fee7:28e9/64 scope link 
       valid_lft forever preferred_lft forever
13: br-3325d8d37d03: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:b1:62:87:aa brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-3325d8d37d03
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b1ff:fe62:87aa/64 scope link 
       valid_lft forever preferred_lft forever
14: br-55e8a7b9207f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:fe:89:69:9a brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.1/20 brd 192.168.47.255 scope global br-55e8a7b9207f
       valid_lft forever preferred_lft forever
    inet6 fe80::42:feff:fe89:699a/64 scope link 
       valid_lft forever preferred_lft forever
15: br-879ae96b7f9c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:db:fd:ff:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.112.1/20 brd 192.168.127.255 scope global br-879ae96b7f9c
       valid_lft forever preferred_lft forever
    inet6 fe80::42:dbff:fefd:ff45/64 scope link 
       valid_lft forever preferred_lft forever
16: br-891a0d4fc08e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:de:ff:d6:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-891a0d4fc08e
       valid_lft forever preferred_lft forever
    inet6 fe80::42:deff:feff:d6a0/64 scope link 
       valid_lft forever preferred_lft forever
17: br-170928494d5b: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:59:df:f5:db brd ff:ff:ff:ff:ff:ff
    inet 172.26.0.1/16 brd 172.26.255.255 scope global br-170928494d5b
       valid_lft forever preferred_lft forever
18: br-3d3516d280f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ed:bf:ea:45 brd ff:ff:ff:ff:ff:ff
    inet 172.29.0.1/16 brd 172.29.255.255 scope global br-3d3516d280f3
       valid_lft forever preferred_lft forever
19: br-6c33de252c9e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:7f:89:0f:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.48.1/20 brd 192.168.63.255 scope global br-6c33de252c9e
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7fff:fe89:fdd/64 scope link 
       valid_lft forever preferred_lft forever
20: br-87547417f30a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f4:f7:24:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.1/20 brd 192.168.79.255 scope global br-87547417f30a
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f4ff:fef7:2487/64 scope link 
       valid_lft forever preferred_lft forever
21: br-3c3fcbd40e4c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:3a:9f:c7:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.1/20 brd 192.168.31.255 scope global br-3c3fcbd40e4c
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3aff:fe9f:c737/64 scope link 
       valid_lft forever preferred_lft forever
22: br-58d88abf81f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:2d:26:2d:da brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.1/20 brd 192.168.143.255 scope global br-58d88abf81f6
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2dff:fe26:2dda/64 scope link 
       valid_lft forever preferred_lft forever
23: br-6f0bc6edd319: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f3:aa:60:15 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-6f0bc6edd319
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f3ff:feaa:6015/64 scope link 
       valid_lft forever preferred_lft forever
27: veth4b2cfc4@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-891a0d4fc08e state UP group default 
    link/ether c2:be:10:26:bc:4c brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::c0be:10ff:fe26:bc4c/64 scope link 
       valid_lft forever preferred_lft forever
29: veth3dbcdc4@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-87547417f30a state UP group default 
    link/ether 56:b5:7d:07:1e:4d brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::54b5:7dff:fe07:1e4d/64 scope link 
       valid_lft forever preferred_lft forever
33: veth619fc73@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-3c3fcbd40e4c state UP group default 
    link/ether ba:82:21:b6:e7:bb brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::b882:21ff:feb6:e7bb/64 scope link 
       valid_lft forever preferred_lft forever
35: veth0868f83@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-879ae96b7f9c state UP group default 
    link/ether c2:95:d3:fd:fb:55 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::c095:d3ff:fefd:fb55/64 scope link 
       valid_lft forever preferred_lft forever
37: veth06d4468@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6c33de252c9e state UP group default 
    link/ether ae:fd:7c:6b:f3:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::acfd:7cff:fe6b:f3f1/64 scope link 
       valid_lft forever preferred_lft forever
39: vethba84031@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-55e8a7b9207f state UP group default 
    link/ether d6:0c:c4:8c:79:53 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::d40c:c4ff:fe8c:7953/64 scope link 
       valid_lft forever preferred_lft forever
41: veth0c31eb1@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-3325d8d37d03 state UP group default 
    link/ether 96:a7:0b:85:80:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::94a7:bff:fe85:80d7/64 scope link 
       valid_lft forever preferred_lft forever
43: veth885c39c@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d5debe1f3d8f state UP group default 
    link/ether fe:9b:a5:8f:c5:39 brd ff:ff:ff:ff:ff:ff link-netnsid 13
    inet6 fe80::fc9b:a5ff:fe8f:c539/64 scope link 
       valid_lft forever preferred_lft forever
45: vethc2adca5@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ef17d21094be state UP group default 
    link/ether 66:27:62:dc:0a:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::6427:62ff:fedc:ab0/64 scope link 
       valid_lft forever preferred_lft forever
47: vethae8b946@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-58d88abf81f6 state UP group default 
    link/ether ea:ec:36:28:4d:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::e8ec:36ff:fe28:4de2/64 scope link 
       valid_lft forever preferred_lft forever
51: veth2699e40@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether be:cd:0a:89:3c:6f brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::bccd:aff:fe89:3c6f/64 scope link 
       valid_lft forever preferred_lft forever
53: veth5752dbd@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-891a0d4fc08e state UP group default 
    link/ether a2:7c:da:53:70:21 brd ff:ff:ff:ff:ff:ff link-netnsid 19
    inet6 fe80::a07c:daff:fe53:7021/64 scope link 
       valid_lft forever preferred_lft forever
59: vethb3b6535@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-3325d8d37d03 state UP group default 
    link/ether 92:7c:4b:d7:8c:28 brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::907c:4bff:fed7:8c28/64 scope link 
       valid_lft forever preferred_lft forever
64: veth6669de7@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-87547417f30a state UP group default 
    link/ether 7e:5e:3d:10:8b:87 brd ff:ff:ff:ff:ff:ff link-netnsid 23
    inet6 fe80::7c5e:3dff:fe10:8b87/64 scope link 
       valid_lft forever preferred_lft forever
66: veth6a44b40@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-879ae96b7f9c state UP group default 
    link/ether c2:6d:28:f4:f5:05 brd ff:ff:ff:ff:ff:ff link-netnsid 22
    inet6 fe80::c06d:28ff:fef4:f505/64 scope link 
       valid_lft forever preferred_lft forever
11080: vethbd424ac@if11079: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-879ae96b7f9c state UP group default 
    link/ether 56:8e:ad:9a:83:1b brd ff:ff:ff:ff:ff:ff link-netnsid 17
    inet6 fe80::548e:adff:fe9a:831b/64 scope link 
       valid_lft forever preferred_lft forever
11186: veth87e0b29@if11185: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 36:a8:38:74:5c:7a brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::34a8:38ff:fe74:5c7a/64 scope link 
       valid_lft forever preferred_lft forever
11190: veth08a1755@if11189: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-16d3894b9c96 state UP group default 
    link/ether 7a:cf:fb:67:49:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::78cf:fbff:fe67:49f7/64 scope link 
       valid_lft forever preferred_lft forever
11192: vethdeef5b3@if11191: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ce:1e:e7:64:c3:3b brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::cc1e:e7ff:fe64:c33b/64 scope link 
       valid_lft forever preferred_lft forever
11194: veth0161cc2@if11193: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6e:80:84:42:13:df brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet6 fe80::6c80:84ff:fe42:13df/64 scope link 
       valid_lft forever preferred_lft forever

Jeremy Davis's picture

Wow, that's a lot of interfaces!

[Please note that I've edited this post since originally posted yesterday]

FWIW here's a container I have running:

~# cat /etc/network/interfaces
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address 192.168.1.120/24
	gateway 192.168.1.1
	hostname lamp

allow-hotplug eth1
iface eth1 inet dhcp
 hostname lamp

Note that whilst it doesn't explicitly note it, that eth0 config is provided by Proxmox (i.e. when launching the container, I added that config in the UI). IIRC unless you edit your container config, Proxmox will just overwrite it on reboot. Your container config can be found in Proxmox: either /etc/pve/local/lxc/VMID.conf or /etc/pve/nodes/NODE_NAME/lxc/VMID.conf - where VMID is the actual container ID number and NODE_NAME is the name of your PVE node. Here is the relevant line in mine:

net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=62:18:60:70:FA:FF,ip=192.168.1.120/24,type=veth
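
If you wanted to pin this container to your new 192.168.50.x subnet, something like the below (run on the Proxmox host) should do it. I'm assuming VMID 201 from your disk name, and guessing at the gateway and address - please adjust to suit:

pct set 201 -net0 name=eth0,bridge=vmbr0,firewall=1,gw=192.168.50.1,ip=192.168.50.120/24,type=veth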

And FWIW here's my 'ip addr' output:

~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:4d:ad:7e:b2:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.108/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:44b8:5104:2300:d44d:adff:fe7e:b249/64 scope global dynamic mngtmpaddr 
       valid_lft 595sec preferred_lft 595sec
    inet6 fe80::d44d:adff:fe7e:b249/64 scope link 
       valid_lft forever preferred_lft forever

:)

TBH networking isn't really my strong suit (I strongly suspect that you have knowledge that I do not), but I can also see that your 'eth0@if5' (which is your 'eth0') only has an IPv6 address?! Ultimately that should work ok, but obviously you'd need to connect to that (rather than an IPv4 address). My guess is that your DHCP is only handing out IPv6 addresses?
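
One way to test that theory would be to manually request a lease from inside the container and watch what the DHCP server offers (assuming dhclient is installed):

dhclient -v eth0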

Although I also see quite a few bridge interfaces (i.e. the ones that start with 'br...') and quite a few have 192.168.x.x addresses (the rest are 172.x.x.x, which I assume are Docker related?), but none of your interfaces appears to have a 192.168.50.x address. Not only that, but none that I noticed even have that address within their range (perhaps they do and I missed it - I didn't check super thoroughly, but definitely no 192.168.50.x address). So I'm guessing that's the core of the issue - it's not listening on the address that you're trying to connect to?!

All the bridges seem to have different (and massive) IP ranges though, so I can't even be sure that you'll be able to connect to any of the IPv4 ones that do exist - unless you have a switch and router set up to join all the different networks up?! Even then, that would literally mean that you have thousands of IPv4 addresses in the 192.168.x.x range - which seems like serious overkill to me - unless you're running a large enterprise!?

To be completely honest with you, I'm out of my depth here... I can glean some info from what you've shared (as per above), but beyond the basics, I'm not even sure how useful that will be to you (e.g. I recall that if you are using a bridge to connect, you don't assign the IP to an interface, you do that on the bridge). I wouldn't even know where to start to make that all work... Personally I keep things pretty simple locally; as you may have already guessed from the container network info I posted above, I just have a simple 192.168.1.0/24 network.
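
(e.g. a typical bridge stanza on a Proxmox host looks something like this - note that the IP lives on vmbr0, with the physical NIC just enslaved to it; 'eno1' and the addresses are placeholders:)

auto vmbr0
iface vmbr0 inet static
	address 192.168.50.2/24
	gateway 192.168.50.1
	bridge-ports eno1
	bridge-stp off
	bridge-fd 0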
