roller24's picture

I'm inexperienced with Python and uwsgi, and am trying to proxy_pass over to the Python-served webpage.

 location / {
        include         uwsgi_params;
        uwsgi_pass      unix:/var/lib/lxc/nas/rootfs/usr/share/mayan-edms/uwsgi.sock;
 }

I also tried without /var/lib/lxc/nas/rootfs

I'm getting a 502 error, after much trial and error.

using  as a guideline.


roller24's picture

Curious as to why the Nginx on the Mayan container isn't handling serving the Python app. I tried a regular proxy_pass and received a gateway error too.


Jeremy Davis's picture

FWIW, we have a Mayan-EDMS appliance, and as of v15.0 it doesn't use uwsgi anymore; we install as per the Mayan-EDMS "basic" direct deployment. That simplifies things a fair bit, as then you can test the state of the Mayan-EDMS service itself (without the Nginx proxy) either from localhost (using curl or similar) or by configuring it to allow a remote connection (it's bound to localhost by default). That aids troubleshooting by either confirming that the issue is with the backend service somehow (e.g. not running, not accepting connections, etc), or that something is up with the Nginx config.

FWIW, as you're possibly already aware, 502 errors are almost always related to the backend service not allowing connection, or not being available (e.g. crashed).

I'm not 100% sure, but my first guess is that the server doesn't have the appropriate access to the socket. If you were to upgrade Mayan-EDMS to use the newer config (and provide the service via port, rather than using uwsgi) then you could remove the localhost binding for testing, and once you have that working, you could then lock it down to only accept connections from the LXC host (same setup as binding to localhost, but instead bind to the LXC host).
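Under that newer port-based setup, the LXC host's Nginx side might look something like this. This is a hypothetical sketch, not the generated config; the container name ('nas') is from this thread, and port 8000 is where the current appliance binds Mayan to localhost:

```nginx
# On the LXC host - illustrative only, adjust names/ports to your setup:
location / {
    proxy_pass http://nas:8000;                                   # container name + Mayan's port
    proxy_set_header Host $host;                                  # pass through the requested name
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # preserve the client IP
}
```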

It doesn't really answer your explicit question, but hopefully might give you a workaround. FWIW, I'm not super familiar with uwsgi myself, but I suspect that if you wish to proceed with the uwsgi implementation it'd be worth just doing some general googling. So long as you keep in mind that TurnKey is based on Debian, I suspect that you would find at least some info to assist you to understand how it should be working (and some of the limitations and potential issues).

roller24's picture

I'm getting a connection refused error in my host system's Nginx log

 [error] 7383#7383: *28 connect() failed (111: Connection refused) while connecting to upstream

and in the Mayan container..

The daemon log had a timestamp current to my attempt, and when I tailed it, it was logging at a rapid rate about stunnel.

Jan 13 16:09:32 nas stunnel: LOG5[2035]: Service [webmin] accepted connection from
Jan 13 16:09:32 nas stunnel: LOG5[2035]: s_connect: connected
Jan 13 16:09:32 nas stunnel: LOG5[2035]: Service [webmin] connected remote server from
Jan 13 16:09:33 nas stunnel: LOG3[2035]: transfer: s_poll_wait: TIMEOUTclose exceeded: closing
Jan 13 16:09:33 nas stunnel: LOG5[2035]: Connection closed: 382 byte(s) sent to TLS, 943 byte(s) sent to socket

The .11 is my workstation 

Jeremy Davis's picture

As per my post above, it seems likely that the Nginx server does not have the appropriate permissions to access the socket (assuming that the socket exists).

I suggest trying something like this:

ls -la /var/lib/lxc/nas/rootfs/usr/share/mayan-edms/uwsgi.sock

As a general rule, the root account should have access to everything, but obviously Nginx doesn't run as root. The default webserver account for Debian is 'www-data'. One way to ensure that Nginx has permission would be to give everyone full access to the socket. Note that is almost always extremely poor practice, with testing being the only legitimate exception to that rule IMO.
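A sketch of that "testing only" approach - the commented-out line uses the socket path from earlier in the thread (adjust to your layout), and the temp-file dry run below it just illustrates what the mode bits look like:

```shell
# For testing ONLY: give every account access to the socket so www-data can reach it.
# chmod 666 /var/lib/lxc/nas/rootfs/usr/share/mayan-edms/uwsgi.sock

# Dry run of the same idea on a temp file, to show the mode bits:
f=$(mktemp)
chmod 660 "$f"            # owner+group read/write; everyone else locked out
stat -c '%a' "$f"         # prints: 660
chmod 666 "$f"            # wide open - acceptable only while troubleshooting
stat -c '%a' "$f"         # prints: 666
rm -f "$f"
```

Once things work, tighten it back up (e.g. make the socket group-owned by www-data with mode 660) rather than leaving it world-writable.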

As for the stunnel entries to the daemon log, that is expected behaviour if you're using webmin (perhaps have a browser session open?).

roller24's picture

The proxy is calling an IPv6 address, so I am working on the firewall to open ports 80 and 443. I've applied the configured firewall in Webmin on the container, and enabled IPv6 on both host and container.

Still no go, I'll post results


Jeremy Davis's picture

Unless you are in a hostile environment (e.g. open internet) then I would recommend disabling any (TurnKey) firewalls while troubleshooting. And personally, even if it were internet connected, I'd still probably disable the firewall whilst troubleshooting issues such as this. At least that's one less thing that you need to consider.

Once everything is working as it should, you can then move to locking it down (i.e. similar to my above note about permissions).

roller24's picture

The ports are open but now I get a no route to host error.

Also, the bottom two rules on the host firewall accept 1:65535? That seems like it's not safe.

roller24's picture

I have changed those from accept to drop. Till I hear that it serves a purpose.


Jeremy Davis's picture

FWIW it appears that the LXC appliance sets those ports open by default. TBH, I'm not 100% sure on the rationale, but it was done by Liraz (one of the TurnKey co-founders and original core devs) in early 2016 (see the actual commit here) with the commit message "LXC: open all ports (make network routing easier)".

Like you, I'm not sure that's the best approach. Although my guess is it was done like that after someone ran into issues because of firewall config.

roller24's picture

I do run a SIP server on my Core container, so I consider the environment hostile. The first night, I was compiling the source code onto Core; I was finished with Asterisk and working on the front end, and they were already in and wiped out all of my network settings. I just pulled out the patch cable for the reinstall. I am catching on. I was under the impression that the host firewall took care of the entire machine, but noticed my PBX was working without my having opened up 5060. I engaged the firewall in default mode on the PBX container and lost the phones until I opened the port up. So I will now assume each container needs its own firewall for ports not handled by Nginx. As for the Mayan container, I'll have to reread the replies you sent and go from there. I'll come back with results or failures to end this thread.


On a side note, I'm on another host at the moment and tried to log in with my username, roller24, and was rejected. My email login worked though.


Jeremy Davis's picture

Yes, that's probably not a bad way to go. I guess alternatively you could configure it all on the host, but it's probably easier to do it via each guest.

Re Mayan, no problem. I'm certainly no expert with it, but I did do the v15.0 upgrade on that appliance, so know it a bit (at least from an initial setup perspective if not from an actual user one). Obviously ask more if there is anything that needs clarification or expansion.

Re not being able to log in with username, I'm not really sure what that's about, but so long as you can still log in via email, I guess it's not too big a deal (FWIW I always log in via email so don't ever recall hitting that issue).

[Totally off topic] I'd love to add a SIP server (Asterisk?) to the TurnKey library, but have never got around to it... Judging from your message, it sounds like you've built yours on top of Core(?). I can't promise anything, but if you could share any info you've got regarding that, perhaps we can build one?!

Additionally (instead?), if you're keen, I'd be open to coaching you on how to use our build tools (aka TKLDev) to create a new appliance that we could add to the library. Tons of our appliances were originally developed by users. We encourage ongoing contribution (quality control is usually higher if a user assists with maintaining the appliance), but we're happy to guarantee maintenance once the appliance exists.

As a heads up though, if you want to have a look at TKLDev, you'll want to run it in a proper VM (not on LXC). Because it uses filesystem layering, it doesn't work within an LXC (or Docker) container.

roller24's picture

I can go to the IP address in a browser and the page opens.

I curl from the terminal of the container: both localhost and  are found, as well as the LAN IP and http://nas, which is the container name that the proxy_pass calls.

I curl from the server and succeed on the IP, but get a bad gateway on the name. When I try curl http://nas I receive a connection refused...

I'm stumped, as I have tried almost every configuration I can imagine.

I ran the ls that you posted and received a file not found error. I went to the directory, and one file exists, uwsgi.ini, which says that the sock file should exist in the same folder. So I'm off to see if I can create the sock, but it makes no sense to me since the page loads...?? Maybe it's a symbolic link...





Jeremy Davis's picture

That uwsgi.ini file is left over cruft. It shouldn't be there... Sorry if that led you astray.

The current appliance installs Mayan-EDMS to /opt, as per the recommended install technique I linked to in a previous post.

FWIW Unix sockets are file(-type) objects which sidestep the need for ports when two applications communicate. In practice, Unix sockets are mostly used between applications on a single host. I imagine that's because they're file-like objects; ports are more practical between different computers. Although just to confuse things, AFAIK technically all network connections use sockets...

roller24's picture

I did consider trying my hand at turning the Asterisk install into an APP. It won't be this month, but depending on my work load, I would like to, just for my own future plans for TurnKey utilization. It's a non-commercial install, so it won't tick off anybody at Sangoma or Digium. :)


Jeremy Davis's picture

FWIW we only allow Open Source licensed software, so that is the main requirement. But if that applies, you're keen and you have the time and energy, I'd be super grateful and more than happy to coach you.

roller24's picture

Since the prior post's curl results had success from the server with the IP address, I changed the upstream in the generated nginx-proxy file to the IP address instead of http://nas. This was met with success.

Still seems odd to me, but I will accept. 

This is the first container that did not open with the generated proxy_pass file, so I suspect something in the container needs adjustment, or it was designed this way. Anyway, I may try more later, but will leave it for now.

The directory structure of the Mayan APP baffles me; I'm not familiar with Python, and have yet to see a directory that holds the website files.

Jeremy Davis's picture

TBH, I'm no expert with Nginx config (I'm much more experienced with Apache, although I still wouldn't consider myself an Apache expert either...) but I suspect that it's either something that Nginx is forwarding that it shouldn't, or something that it should be forwarding that it isn't.

I was almost going to suggest that you may need to change the Mayan config to allow connection from the LXC host, but that's clearly not an issue if you can connect to it via curl from the LXC host.

Jeremy Davis's picture

FWIW I've removed that cruft file that was leading you astray (uwsgi.ini). I've also updated the appliance page text so it's now up to date.

Out of interest, the text for the appliance pages is generated from the Readme. I updated the readme a while ago, but hadn't pushed it to the website yet...

I'm guessing it wasn't intentional, but thanks for the nudge on that...! :)

roller24's picture

I can only curl the Mayan container from the host with the IP address; all named attempts failed.

Yet all named attempts from within Mayan succeed. My proxy configuration .conf is generated from the script that was written for the LXC appliance, and the nginx.conf is also unedited, so you can see the code at git. I've only edited hosts and network settings, which are identical to 3 other containers which have no issues. There are also no errors logged in the Mayan files, only in the host logs, and they are not too helpful: connection refused and/or no route to host.

I'm just getting started with Nginx myself, so maybe John/Dude4linux could be of some help.

I'll email him and ask him to take a gander and chime in.


Jeremy Davis's picture

My guess is that it's something to do with the Mayan config rejecting the call via name. Although I'm only guessing...
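Before chasing that, it might be worth double-checking that the name even resolves from the LXC host: "succeeds on IP, refused on name" can also mean the name maps to the wrong address. A quick sketch ('nas' being the container name from this thread - substitute your own):

```shell
# Does the name 'nas' actually resolve on the LXC host, and to what address?
getent hosts nas || echo "'nas' does not resolve on this host - check /etc/hosts or DNS"

# Sanity check - localhost should always print an entry:
getent hosts localhost
```

If the name resolves to an address other than the one that works with curl, fixing /etc/hosts (or DNS) on the LXC host may be all that's needed.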

Also, I'm assuming that you are trying to connect direct to Mayan (from the LXC host's Nginx) rather than having the intermediate Nginx reverse proxy (within Mayan appliance itself)?!

I wonder if it's something to do with the way that we bind Mayan to localhost:8000? (hopefully that's not a sidetrack, but seems possible?) Unless of course you are still using the intermediate Nginx reverse proxy...

John Carver's picture

Hi Ray, I'm currently unable to reproduce the problem with mayan-edms because I'm on the road and don't have access to a working LXC appliance.  I'm trying to reproduce the problem by installing in a LXD v3 container, but some other issues have appeared that I need to solve first.

In reading this thread, I had a hunch that maybe the problem had to do with host headers but I don't know if mayan-edms checks them or not.  I do know that several applications like Drupal, Wordpress, etc are now checking them to prevent cross-site scripting attacks.  The default for TurnKey appliances is to set the hostname to the name of the appliance e.g. drupal8 and to include that name in a list of acceptable host headers.  When using LXC, the hostname is changed to the container name e.g. dp01 and when nginx-proxy is used to assign a domain name e.g. www.<>, both the container name and the domain name need to be added to the list of acceptable host headers.  Currently we don't have a way to do this automatically and every application has a different method of configuration.  Looks like I need to add another paragraph to the LXC documentation.
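For anyone following along: Mayan-EDMS is Django-based, and Django's version of that "acceptable host headers" list is the ALLOWED_HOSTS setting. This is only a sketch - exactly where the setting lives (and whether Mayan exposes it via its own config mechanism) varies by install method; the names below are from this thread, with a placeholder domain:

```python
# Illustrative Django settings fragment - location varies by install method.
ALLOWED_HOSTS = [
    'localhost',
    'nas',              # the LXC container name
    'www.example.com',  # placeholder for whatever domain nginx-proxy assigns
]
```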

I see from reading ahead that you've found an alternative to Mayan-EDMS, so good luck with that. 

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Thanks for jumping in with your thoughts John.

I strongly suspect that you are right re headers! If that were the case, it would certainly explain this issue.

Hope your travels are going well mate! :)

roller24's picture

I would not have thought of that on my own for sure, as I am inexperienced with headers. I assume you make the declaration in the Nginx configuration, or even the proxy file generated by the nginx-proxy binary. This then has that information to offer the browser requesting the site? Or would it reside in the actual webserver of the proxied domain? Could you give me an example statement, if you have a free minute?


roller24's picture

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_buffering off;


proxy_set_header nas;   (nas being the container name)

proxy_set_header (fqdn) 

Now, the hostname and container name were both nas; isn't $host = nas? Should I have dropped the http://?

Jeremy Davis's picture

I missed this one last week, apologies on the slow response. And sorry in advance for not having a super clear answer for you.

Re your question, TBH, I don't recall and without testing, I can't be sure. I don't use Nginx day to day, so generally when I use it in an appliance, I use one of the reverse proxy configs we have already as a base (i.e. rob the code from an existing appliance), and if it doesn't work OOTB, via a combo of the Nginx proxy docs and a bit of trial and error, get it working as desired. IMO the Nginx docs in general are quite good. You may also find the reverse proxy section of the Nginx Admin Guide useful too.

Sorry that I'm not more explicit, but I'm just not sure enough OTTOMH to even guess without double checking. And I'm pretty tied up with other stuff ATM. I hope you can understand.
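One thing the Nginx docs do make clear, though: proxy_set_header always takes two arguments, a header field name and a value, so a line like `proxy_set_header nas;` won't parse. What was probably intended is something along these lines (container name from this thread; treat it as a sketch to test, not a known-good config):

```nginx
# Send the name the backend expects as the Host header:
proxy_set_header Host nas;
# Or, to pass through whatever name the client actually used:
proxy_set_header Host $host;
```

And no, there's no http:// anywhere in a header value - the scheme only belongs in proxy_pass.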

roller24's picture

Jeremy couldn't have said it better himself. I ended up destroying the Mayan appliance and have gone with the Odoo app instead; it has document storage, is much more robust in its features, and fits just right with my concept.

It also loaded up with the LXC container app without a hitch.

All my project needs now is to get Let's Encrypt to play nice with the PBX, and I'll be ready to tweak it into a product ready to test.

Jeremy Davis's picture

That was a new (to me at least) and quite cheeky spam vector! And it got them past our spam filters! As you no doubt noticed, it was more-or-less a verbatim copy of a previous post of mine, but they changed the links to point to the sites they were trying to promote!

Anyway, Odoo sounds like it's going to suit your purposes. Although unless you use some of the additional functionality and/or "apps", it may be a little overkill (it's essentially a web based application framework). OTOH, if it works for you, fantastic! :)

Re Let's Encrypt, I'm not sure if you've noticed, but Confconsole has a Let's Encrypt plugin which should work OOTB. It doesn't support wildcard certificates, but so long as you include all of the domains and sub-domains you are hosting, it should serve the purpose. Having said that, I'm not 100% clear on your PBX requirements re the TLS cert, so not 100% sure how that will work.

If you use it though, keep in mind that when it updates the certs it will "hijack" port 80 (to host the challenges). As a general rule it's fairly quick, but does stop and start the webserver, so it might be worth ensuring that the cron runs at a time when it's unlikely that your server is in use.

roller24's picture

I've had really good luck using Let's Encrypt on the main host and passing it to the subdomain containers, using this command:

certbot --authenticator standalone --installer nginx --pre-hook "service nginx stop" --post-hook "service nginx start"
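For what it's worth, renewal can reuse the same hooks. A hypothetical cron entry along these lines (the timing is arbitrary; `certbot renew` only acts on certs that are close to expiry, and the pre/post hooks briefly stop Nginx, so a quiet hour makes sense):

```
# /etc/cron.d/certbot-renew (hypothetical file)
30 3 * * * root certbot renew --quiet --pre-hook "service nginx stop" --post-hook "service nginx start"
```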


Jeremy Davis's picture

No worries if you have a working solution.

It's irrelevant now, but there is actually a Let's Encrypt integration built into all TurnKey servers OOTB. Although it's possibly worth noting that you won't be able to compare now, as installing Certbot breaks Confconsole: Certbot pulls in a library that conflicts with one that TurnKey's Confconsole relies on.

Other than the audit of ACME clients that we did years ago, I don't know much about Certbot. (ACME is the protocol that the Let's Encrypt client is the "reference" implementation for - there are many other clients; the one we leverage is called dehydrated). I do recall we had some minor concerns about information leakage, but that was a long time ago now and I suspect that they've improved it.

Unfortunately though, because of my lack of knowledge, I can't really provide much insight or assistance.
