Timmy's picture

Ok, so this one isn't strictly a "How do I do X?" question itself. I'm hoping for some thoughts, suggestions, and discussion. And yes, some how-to's. So let's cover the network configuration real quick.

Router: pfSense - Besides being the router/firewall for the network, this router serves three functions.

  1. It performs Let's Encrypt cert acquisition/renewal via ACME for my domain, getting a wildcard cert for my needs.
  2. It runs HAProxy, which allows access to the couple of services that are public.
  3. It serves as the VPN server (OpenVPN).

DNS: Piholes - Two Piholes on the network with identical configurations. These also hold the local DNS entries for my internal services (so no hairpin NAT). They double as my reference if I forget which IP address belongs to which server.

Services: A half dozen services run as VM instances. All are TurnKey builds or set up on TurnKey Core, except one. (That one service is the only one publicly accessible via HAProxy and performs its own cert work, so it can be ignored for the purposes of this discussion.)

 

Desired outcome: A sensible/repeatable, durable, easily-maintained solution for getting https for all my turnkey services from within the LAN for local users and users VPN'd into the network. Ideally, this is done leveraging the existing certs being fetched by the router.

By easily maintained, I mean the architecture is simple and readily visible. No changing config file X here and config file Y over there when a service changes IP addresses.
E.g. right now, if I wanted to spool up a second Nextcloud server (say, to test upgrading from PHP 7.3 to 7.4), I could clone my existing Nextcloud instance and change its static IP, make a new DNS entry in my Piholes (nextcloud2.domain.local) pointing to this new IP, and work on the clone until I've successfully made the upgrade.
Then I'd go to my Piholes, swap the IP addresses that nextcloud.domain.local and nextcloud2.domain.local point to, and live test it. If it's still all good, delete the nextcloud2 entry and take the old VM offline. Most of this process is GUI guided/prompted.

I like one throat to choke. Or, in the case of redundant systems like the Piholes, identical throats to choke. When something breaks in 8 months, no matter how good my documentation is, it is much easier to work through the least complicated tool chain possible.

I also like repeatable. When something new catches my eye to try out, I don't want to be puzzling over how to integrate it into this system a year later. A system where I can copy a config and change the obvious details is nice. A 12-step CLI process is not.

 

Progress so far:
I have read and explored a number of solution posts and blogs. Most address this question from the standpoint of outside traffic coming in via a proxy server (like Nginx), often to a single endpoint. I've started down this road two or three times today, but these examples are often absurdly simplistic and extremely short on the details that would help me adapt them to the needs/wants above.

I believe I have managed to set up the pfSense ACME package to run a shell command that dumps the certs onto the TKL Core Nginx server I am messing around with. And that was a pita - again, the write-ups are grossly simplistic and explain next to nothing about how to run the copy without a password using a key file, and most have it running as pfSense root with a passphrase-less key, which seems like a terrible idea even if this is my internal network. I figured out how to do it with good ole scp. I think. It works in a shell anyway. I don't like it because it relies on CLI configuration steps on pfSense that aren't backed up. If the appliance eats a brick, I'll be digging through my notes for the how-to. And what if I decide I want to use another appliance instead?

I think I'd like a solution where all the servers are set up to copy the cert from a "cert-holder" server. But that sounds like it will obscure the setup process. 8 months from now, will I be able to repeat the scp user setup easily, or will I be scratching my head?

This is, in part, why I like using the Piholes for DNS resolution vs other solutions like my router. For one, at least one of the Piholes is a VM instance. Backup is as easy as can be, and testing/drilling myself on how to restore from backup is equally easy (and equally easy to undo if I screw up). Second, the DNS setup is straightforward and simple. If I had to set it up manually again, it would be as easy as the first time I blindly did it with no guidance.
It's also why I like using TurnKey, even just Core, for test projects and explorations. The CLI menu makes some steps very easy and low hassle (like slapping together a quick server to test something and using the menu to give it a static IP).

 

So there you have it. This isn't just about how-to's but hopefully a helpful discussion on solutions and why one would pick them.

Jeremy Davis's picture

First up, I have zero experience with pfSense, Pihole or HAProxy, so am working more from my general knowledge rather than claiming any sort of expertise here. So it's quite possible I'm missing relevant details.

Assuming that you are using your own domain, I think that support for valid external and internal HTTPS access should be quite doable (for simplicity I'd probably be inclined to suggest a subdomain per service, but ultimately it shouldn't matter). But as always, the devil is in the details...

Before I move on, please note that I haven't tested any of this, or even pondered it deeply. This is very much off the top of my head and I am essentially thinking out loud. Hopefully I've covered everything of relevance to you, but please feel free to bump me on anything I haven't addressed.

TBH, my first thought is to just keep it simple and allow direct public access to your server(s) - i.e. via firewall forwarder. Obviously if you do that and you have more than one server, you'll need a reverse proxy (I've got plenty of experience with Nginx, but there are lots of other options - if you're already familiar with HAProxy, perhaps just use that?). If you allow public access to port 80, then the reverse proxy could even get its own cert, further simplifying things.

You did mention that you like the idea of a central cert location. You could do that, but IMO it doesn't actually provide any value. It just complicates your setup. If you were using your own certs (i.e. using your own CA cert to generate your own certs) then a centralized place to manage them would make sense (i.e. only one CA cert to load on all your devices, one that rarely, if ever, changes). But it could be argued that Let's Encrypt already is a centralized location for certs! :)

At the end of the day though, it's whatever works best for you. Regardless, I digress...

Traffic between the reverse proxy and the backend server could either be plain HTTP, or use a self-signed HTTPS cert. To use a self-signed cert, either load the CA cert on your reverse proxy (so it trusts the cert), or disable cert verification between the proxy and backend. Both of these options are supported by Nginx and Apache; not sure about HAProxy.
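
For example, in Nginx the backend end of a proxied site might look something like this (off the top of my head and totally untested; the backend IP is just a placeholder):

    location / {
        # the backend presents a self-signed cert, so don't try to verify it
        proxy_ssl_verify off;
        proxy_pass https://192.168.50.60;
    }

Or, going the other way, point proxy_ssl_trusted_certificate at your CA cert and set proxy_ssl_verify on, so the proxy actually checks the backend's cert.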

Additionally or perhaps alternatively, something that may be relevant here and of interest, is that a community member has been working on a Confconsole Let's Encrypt integration improvement; namely support for getting certs via DNS-01 challenge (so no public access to port 80 required). Once that's live then you could leverage that to allow each server to get its own cert. So connect directly to the server when within your home network, or via the reverse proxy when out and about. If you like that idea but don't want to wait, then there are alternate ACME implementations that support DNS-01 challenges.
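
For example, acme.sh (which IIRC is what the pfSense ACME package uses under the hood) can do a DNS-01 wildcard issue against Cloudflare with something along these lines (untested sketch; the token, account ID and domain are obviously placeholders):

# Cloudflare API credentials for the dns_cf hook
export CF_Token="your-cloudflare-api-token"
export CF_Account_ID="your-cloudflare-account-id"

# issue a wildcard cert via the DNS-01 challenge - no public port 80 needed
acme.sh --issue --dns dns_cf -d mydomain.org -d '*.mydomain.org'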

I'd also like to share my perspective on your note re passphrase-less SSH keys. Whilst it's certainly not as secure as a keypair with a passphrase (essentially 2 factor auth; something you have and something you know), IMO it could be argued that it's still more secure than just a passphrase/password (with no key). Passphrases/passwords can be socially engineered, brute forced, or attacked via other vectors. These are all much harder with a key! Realistically, the main way to compromise it (at least at this point in time) would be to steal the key file itself. Also, particularly for specific server-to-server connections, generating specific keypairs for specific connections (i.e. not reusing keys) and locking the remote end down to a limited user account also mitigates security issues (use a service, cron job, or other triggering method to act on uploaded files, and you can include validation if there are further concerns about the risk of compromised keys).

Re-reading that, it seems like a fair bit of waffle, without a lot of clarity. I hope that it's not too bad on your end. Hopefully it's of some value... If you have further questions and/or would like me to elaborate on anything please ask.

Timmy's picture

First up, I have zero experience with pfSense, Pihole or HAProxy, so am working more from my general knowledge rather than claiming any sort of expertise here.

-No biggie. pfSense itself is not so important so much as its ability to load packages and perform other duties besides just the usual router/firewall stuff, i.e. HAProxy, ACME, and DDNS functionality being the applicable features for this issue.

Pihole is just a local DNS server acting as a sinkhole for undesirable domains; as the DNS server, it also lets me resolve FQDNs internally to manually defined internal IP addresses.

 

Assuming that you are using your own domain, I think that support for valid external and internal HTTPS access should be quite doable

-I do have my own domain (apologies for not making that clear) and I already support the services which are allowed to be public with certs et al. via built-in and extended functionality in the pfSense router.

 

Before I move on, please note that I haven't tested any of this, or even pondered it deeply. This is very much off the top of my head and I am essentially thinking out loud. Hopefully I've covered everything of relevance to you, but please feel free to bump me on anything I haven't addressed.

-Rgr. Exact and perfect instructions inbound. My body is ready.

 

TBH, my first thought is to just keep it simple and allow direct public access to your server(s) - i.e. via firewall forwarder. Obviously if you do that and you have more than one server, you'll need a reverse proxy (I've got plenty of experience with Nginx, but there are lots of other options - if you're already familiar with HAProxy, perhaps just use that?). If you allow public access to port 80, then the reverse proxy could even get its own cert, further simplifying things.

-HAProxy may provide reverse proxy capacity, but it's been a while since I set it up and I never got it working in that capacity. Nginx I have some familiarity with, but only as a web host, and the reverse-proxy examples I've encountered thus far are lacking. If you know of a good walkthrough for multiple sites all using 80/443, I'd love to read it just for knowledge's sake alone. Currently, pfSense-hosted ACME and HAProxy are being used to handle the couple of services that are public and to gain Let's Encrypt certs for them (including a wildcard), but they all work on different ports. pfSense also handles Cloudflare IP address updates automatically as well.

The other services are internal only and are only available from outside the network if on VPN. Connecting them up to

 

You did mention that you like the idea of a central cert location. You could do that, but IMO it doesn't actually provide any value.

-I meant it merely as a central location I could fetch the certs from to load onto any service I spool up. Nothing more than a file server in reality. I could have worded that better. See remarks at end.

 

If you were using your own certs (i.e. using your own CA cert to generate your own certs) then a centralized place to manage them would make sense (i.e. only one CA cert to load on all your devices, one that rarely, if ever, changes).

-I considered my own CA but then every device would need it loaded in and I don't really hate myself that much. Not yet anyway. Plus it would be another situation where I forget devices have the CA cert loaded and I'd get something new or family would visit and I'd spend way too long remembering that they need the cert added to their OS, and way too long remembering/relearning how to do that for whatever device they have. And I don't even want to think about mobile devices in this situation.

 

Additionally or perhaps alternatively, something that may be relevant here and of interest, is that a community member has been working on a Confconsole Let's Encrypt integration improvement; namely support for getting certs via DNS-01 challenge (so no public access to port 80 required).

-Interesting. I've not delved into the console LE integration, as most services are purely internal. But because they are internal only, I'm not sure this is a fit.

***

In light of the above, a quick summing up - I have the certs and the process to automatically renew my wildcard established. I have those certs in hand. I can even automate the dumping of them to another device (my central cert holder/file holder/whatever I suggested) where it's easier to access and further disseminate them. Everything needed to get certs inside my network exists. The last leg of pushing them to services in a sensible manner is what remains. Could the Confconsole LE integration or some other component be put to this use?

 

The option I thought about, after I wrote my original post, is a script that scp's the certs once a day at some absurd time in the morning to the required folder and performs whatever conversion (i.e. to .pem) is needed. The script can then be dropped on any new service and a cron job written to execute it. That handles some of the issue of a process being easily repeatable over long periods of time. The script would have comments explaining how to fill in the required data for the application, and writing a cron job is as straightforward as can be. The user made to scp the files can be read-only. The downside to this brilliant plan is that I'm a relative newb at shell scripting. One of the many books I've not gotten around to reading.
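
Very rough sketch of what I'm picturing (untested; the usernames, host and paths are just placeholders):

#!/bin/bash -e
# pull the current wildcard cert files from the "cert-holder" box,
# using a dedicated read-only user and its passphrase-less key
scp -i /root/.ssh/certfetch "certusr@certholder.mydomain.local:certs/mydomain.org.*" /etc/ssl/mydomain.org/

# build whatever combined format this particular service wants, e.g. a single .pem
cat /etc/ssl/mydomain.org/mydomain.org.crt /etc/ssl/mydomain.org/mydomain.org.key > /etc/ssl/mydomain.org/mydomain.org.all.pem

# reload the web server so it picks up the new cert
service apache2 reload

Then a one-line cron entry (e.g. in /etc/cron.d/) at some absurd hour runs it daily.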

Jeremy Davis's picture

Nginx I have some familiarity with but only as a web host and the reverse-proxy examples I’ve encountered thus far are lacking

In my experience, getting a "basic" reverse proxy working is relatively straightforward, although the devil is always in the detail! The only way to ensure that the app behind the reverse proxy works as intended is to either follow whatever upstream recommends (if they're kind enough to document that), or go through a process of trial and error. Ultimately, what will work best will depend on the assumptions that the app developers made and the way that the app communicates with the client.

I'm not sure how useful they are as examples, but 2 apps that we provide that use Nginx as a reverse proxy jumped to mind:

  • Etherpad - /etc/nginx/sites-available/etherpad
  • Syncthing - /etc/nginx/sites-available/syncthing

But if you go that way, when you connect to your server, you'll need to connect to the reverse proxy - regardless of whether you are accessing it internally or externally (as I hinted in my previous post).

Another option which I didn't mention in my previous post is a tunnel proxy (as opposed to a reverse proxy). The difference is somewhat vague (a tunnel proxy might be a reverse proxy and a "normal" reverse proxy could be considered a tunnel proxy). The difference in my usage of the terms is that when I refer to a reverse proxy, I mean a proxy between the client and the server, which terminates the connection with the client and creates a new connection with the backend server. Whereas when I refer to a "tunnel proxy", I mean that it just provides a tunnel between the client and the server. If the connection is encrypted (i.e. HTTPS) the tunnel just passes the encrypted traffic to the backend server (so the backend server needs legitimate certs). I don't intend this as a recommendation, but included more for completeness (putting all the options on the table).

It sounds like you want to manage certs centrally, so the above note is possibly of limited value. Besides, I've never set up a tunnel proxy to tunnel HTTPS traffic, so I can't offer any advice even if that was appealing...

HAProxy [...] also handles Cloudflare IP address updates automatically as well.

Ah ok. Out of interest, if you're using Cloudflare are you also using that for your public certs too? If so, that might cause issues with some SSL/TLS hardening measures (such as cert stapling, perhaps HSTS too?). So if you're accessing the same HTTPS URL via 2 different methods (which use different certs) then you might get some weirdness. You might need to disable cert stapling and perhaps HSTS too?

I considered my own CA but then every device would need it loaded in and I don't really hate myself that much. Not yet anyway. Plus it would be another situation where I forget devices have the CA cert loaded and I'd get something new or family would visit and I'd spend way too long remembering that they need the cert added to their OS, and way too long remembering/relearning how to do that for whatever device they have. And I don't even want to think about mobile devices in this situation.

Sounds like we're on the same page there! I think running your own CA might make sense in a business environment. But nowadays, with Let's Encrypt, possibly not even then.

[re my note on coming confconsole Let's Encrypt DNS-01 support] Interesting. I've not delved into the console LE integration, as most services are purely internal. But because they are internal only, I'm not sure this is a fit.

Why is that? The 'DNS-01' challenge type uses DNS TXT records to prove control of the relevant domain. So it doesn't need port 80 available publicly (just a way to create the DNS record on your authoritative nameserver) - making it the ideal solution for HTTPS on local servers! IMO less moving parts equals less to do initially (just set up each new server as part of its initialization process) and less to maintain in future.

In light of the above, a quick summing up - I have the certs and the process to automatically renew my wildcard established. I have those certs in hand. I can even automate the dumping of them to another device (my central cert holder/file holder/whatever I suggested) where it's easier to access and further disseminate them. Everything needed to get certs inside my network exists. The last leg of pushing them to services in a sensible manner is what remains. Could the Confconsole LE integration or some other component be put to this use?

The option I thought about, after I wrote my original post, is a script that scp's the certs once a day at some absurd time in the morning to the required folder and performs whatever conversion (i.e. to .pem) is needed. The script can then be dropped on any new service and a cron job written to execute it. That handles some of the issue of a process being easily repeatable over long periods of time. The script would have comments explaining how to fill in the required data for the application, and writing a cron job is as straightforward as can be. The user made to scp the files can be read-only. The downside to this brilliant plan is that I'm a relative newb at shell scripting. One of the many books I've not gotten around to reading.

Yep, if you'd rather use your existing setup (rather than have each server get its own cert) then a cron job that distributes the certs (via scp as you suggest, or rsync, or whatever really) should be fine.

On the upside of doing it that way, I imagine that would ensure that it'd work for both external access (i.e. via reverse proxy) and local direct connection (because it would be exactly the same cert).

Timmy's picture

In my experience, getting a "basic" reverse proxy working is relatively straightforward, although the devil is always in the detail! The only way to ensure that the app behind the reverse proxy works as intended is to either follow whatever upstream recommends (if they're kind enough to document that), or go through a process of trial and error. Ultimately, what will work best will depend on the assumptions that the app developers made and the way that the app communicates with the client.

- I'd agree it might be trial and error. I got through setting up one reverse proxy and it was having some issues. I never saw an example of a single proxy handling multiple sites where the components/moving pieces were broken down and explained.

 

Another option which I didn't mention in my previous post is a tunnel proxy (as opposed to a reverse proxy). The difference is somewhat vague (a tunnel proxy might be a reverse proxy and a "normal" reverse proxy could be considered a tunnel proxy). The difference in my usage of the terms is that when I refer to a reverse proxy, I mean a proxy between the client and the server, which terminates the connection with the client and creates a new connection with the backend server. Whereas when I refer to a "tunnel proxy", I mean that it just provides a tunnel between the client and the server. If the connection is encrypted (i.e. HTTPS) the tunnel just passes the encrypted traffic to the backend server (so the backend server needs legitimate certs).

- Well, not only is it of limited value if I'd like to manage certs centrally, but if I was getting the certs to the backend of this tunnel, I would have achieved my objective, thus negating the need for the tunnel (presuming these certs were identical to the public certs, to avoid the issue below).

 


Ah ok. Out of interest, if you're using Cloudflare are you also using that for your public certs too? If so, that might cause issues with some SSL/TLS hardening measures (such as cert stapling, perhaps HSTS too?). So if you're accessing the same HTTPS URL via 2 different methods (which use different certs) then you might get some weirdness. You might need to disable cert stapling and perhaps HSTS too?

- I am using Cloudflare just for the public DNS/Proxy/free tier stuff and its API for DDNS.

Let's Encrypt provides the certs used publicly. It's not like I'm making a real distinction between public vs internal anyway. Only one service has both external and internal utility, and it uses the same LE cert for both ends. Whether I used my wildcard cert or made specific host certs from Cloudflare, I don't see where it really matters using what might have been a public cert for internal-only usage. When I tried out the Nginx reverse proxy with one of my services, I did it with a manually copied wildcard cert.

The access method stays the same internally - internal DNS resolution just points the inquiring client to the local IP address rather than the public one. So I'd use, as you surmise, the same cert both internally and externally if the service was available via both routes (one service does this, as it is available in both contexts).

 


Why is that? The 'DNS-01' challenge type uses DNS TXT records to prove control of the relevant domain. So it doesn't need port 80 available publicly (just a way to create the DNS record on your authoritative nameserver) - making it the ideal solution for HTTPS on local servers! IMO less moving parts equals less to do initially (just set up each new server as part of its initialization process) and less to maintain in future.

- Didn't delve much because, at the time, I was following a guide on how to setup ACME on the router to do the job. It fetches the certs and sets up an automatic recheck at intervals. Since setting it up, I've just never revisited the topic in any detail. So while I've seen the LE integration in confconsole, I was already fetching the certs so I didn't explore it.

Heck, I don't even remember the setup that well. For all I know, I'm probably using it already via ACME and I completely forgot.

 


Yep, if you'd rather use your existing setup (rather than have each server get its own cert) then a cron job that distributes the certs (via scp as you suggest, or rsync, or whatever really) should be fine.

On the upside of doing it that way, I imagine that would ensure that it'd work for both external access (i.e. via reverse proxy) and local direct connection (because it would be exactly the same cert).

- That was my thought. Now to see if I have a book on shell scripting to help me walk through this. If I come to something fruitful, I'll toss it here.

Jeremy Davis's picture

I never saw an example of a single proxy handling multiple sites where the components/moving pieces were broken down and explained.

Just think of it like any other web server. It's not uncommon to host multiple sites on a single server. Whilst the multiple sites run on the same server (even the same underlying processes), each "site" aka "virtual host" is (usually) "self-contained" in its own config file (perhaps with an additional file or two containing commonly required shared code that can be imported as needed).

I recommend looking at each site/domain you are reverse proxying in that same way. I.e. look at each one individually. Put each config in its own file, perhaps with the common options that you'll always want in a separate file (FYI that's what the snippets directory is for). You could even make a template config file that includes all the likely options (with less common ones commented out?) - that might be an easily tweakable sane starting point for any time you need a new reverse proxy.
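
Off the top of my head, a per-site reverse proxy template might look something like this (untested; the domain, backend IP, cert paths and snippet name are all just placeholders):

server {
    listen 80;
    server_name service.mydomain.org;
    # redirect plain HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name service.mydomain.org;

    # shared TLS settings (ciphers, session cache, etc.) kept in a snippet
    include snippets/tls-common.conf;
    ssl_certificate     /etc/ssl/mydomain.org/mydomain.org.crt;
    ssl_certificate_key /etc/ssl/mydomain.org/mydomain.org.key;

    location / {
        proxy_pass http://192.168.50.60;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Drop a copy into /etc/nginx/sites-available/, tweak the obvious bits, symlink it into sites-enabled/ and reload Nginx.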

Obviously if you break your general server config (e.g. Nginx won't start) that will impact all sites. But so long as Nginx itself is running ok, then one site (or reverse proxy) should not be affected by changes you make to another (obviously, until you get a bit of a feel for what you can do within a single site, that may not be completely true, but in general it should be the case).

Well, not only is it of limited value if I'd like to manage certs centrally, but if I was getting the certs to the backend of this tunnel, I would have achieved my objective, thus negating the need for the tunnel (presuming these certs were identical to the public certs, to avoid the issue below).

Yeah, sorry I ramble a bit sometimes...

Didn't delve much because, at the time, I was following a guide on how to setup ACME on the router to do the job. It fetches the certs and sets up an automatic recheck at intervals. Since setting it up, I've just never revisited the topic in any detail. So while I've seen the LE integration in confconsole, I was already fetching the certs so I didn't explore it.

So it sounds like you're using DNS-01 ACME authentication to get your certs. The TurnKey integration doesn't (yet) support DNS-01, but otherwise what we provide does the same deal. I.e. set it up once and then it auto-updates. Currently our integration only supports the HTTP-01 authentication mechanism. That means it requires port 80 to be publicly accessible on the IP that the DNS points to. It does not support wildcard certs, although you can have multi-site certs (the default) and/or multiple individual certs (with a little manual config).

The upside of HTTP-01 is that it requires minimal config for a public server. So long as the server has a domain name (i.e. DNS record) pointing to a public IP with port 80 open, then the only config required is the domain name. OTOH, because of the public IP/port 80 requirements, our current integration isn't really suitable for certs for local-only servers (unless you centralize it like you've done). Once we have the DNS-01 integration built in, configuring each server to get its own cert would be an option. IIRC the new DNS-01 support will require the desired domain name, the service name/url (e.g. cloudflare) and API key/credentials. So it should still be pretty easy.

Having said that, as you've already got most/all of this set up, there doesn't seem much value in changing direction now - especially considering that we still don't yet have the DNS-01 challenge integrated.

Now to see if I have a book on shell scripting to help me walk through this. If I come to something fruitful, I'll toss it here.

No worries. Good luck! Please feel free to share anything you think might have value. Also please don't hesitate to ask if you have any scripting questions. I'm fairly handy with bash! :)

On that note, unless you have a preferred shell (such as csh or fsh) and/or have non-Linux Unix-like machines (e.g. Mac), I would highly recommend using bash (it's the default shell in TurnKey). It is less generic than sh (usually provided by dash) but also much more powerful and IMO nicer to read and maintain - in contrast to POSIX compliant shell scripts designed to work everywhere.

Another tip: set 'e' so that if any command errors, the script will exit (instead of blindly continuing after errors). If you are ok with a particular command failing, explicitly allow it by adding '|| true' to the end of that line. IMO the best way to set 'e' is in the shebang:

#!/bin/bash -e

Actually, one more before I go: for debugging, setting 'x' can be really useful (it makes the script really verbose). You can just add an 'x' to the end of the above line (so it ends '-ex').
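
Putting those together, a trivial (totally made-up) example:

#!/bin/bash -ex
# -e exits on the first error; -x prints each command as it runs
cp /etc/ssl/mydomain.org/mydomain.org.crt /tmp/
# this one is allowed to fail without killing the script
rm /tmp/file-that-may-not-exist || true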

There I go rambling again... Apologies if those last few bits are teaching you to suck eggs. I just can't help myself sometimes... :)

Good luck with it and I'd love to hear how you go.

Timmy's picture

On that note, unless you have a preferred shell (such as csh or fsh) and/or have non-Linux Unix-like machines (e.g. Mac), I would highly recommend using bash (it's the default shell in TurnKey). It is less generic than sh (usually provided by dash) but also much more powerful and IMO nicer to read and maintain - in contrast to POSIX compliant shell scripts designed to work everywhere.

I work pretty much only in Bash. Almost all of my server architecture is Debian (and a couple of Ubuntu). Aside from my couple of Windows machines, everything is in the default Debian/Ubuntu/Mint world. So good idea to spec bash vs sh.

 

I spent a bit of time today on this. Got a bit to learn.

Timmy's picture

Ok, so I've sat down for this today and hammered out a working set of scripts that move my certs around.

But I appear to have an issue with incorporating them into the apache2 config that serves the sites. Perhaps I'm in the wrong spot, but I'll do my best to outline what I have.

I'll use the current WIP Nextcloud server as my example.

Scripts on the router and the intermediate server pass the certs to the Nextcloud server. Certs are stashed in /etc/ssl/mydomain.org/, with the mydomain.org directory's group being certgrp, and root, mycertusr, and www-data being members of that group.
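
For reference, the group/permission setup on the Nextcloud server was roughly this (my own names; adjust to taste):

# create the shared group and add the members that need access to the certs
addgroup certgrp
for u in root mycertusr www-data; do adduser "$u" certgrp; done

# give the group control of the cert folder
chgrp -R certgrp /etc/ssl/mydomain.org
chmod 774 /etc/ssl/mydomain.org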

In /etc/apache2/sites-enabled/nextcloud.conf, the 443 section has been edited to add the SSL lines (whole section included for completeness):

<VirtualHost *:443>
    SSLEngine on
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/nextcloud/
    SSLCertificateFile /etc/ssl/mydomain.org/mydomain.org.crt
    SSLCertificateKeyFile /etc/ssl/mydomain.org/mydomain.org.key


    <IfModule mod_headers.c>
        Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
    </IfModule>
</VirtualHost>

 

I ran 'apachectl -t' to check syntax. All good. Restart apache2 with 'service apache2 restart'; no reported errors.

But when I try to hit the address from a browser in incognito/private mode et al., it still throws the security warning indicating it's running with a self-signed cert.

This is my first time working with Apache, so perhaps I'm not finding the complete information I need for the setup. What am I missing?

Jeremy Davis's picture

FWIW the default certs that Apache will use (/etc/ssl/private/cert.pem) are configured system wide in /etc/apache2/mods-available/ssl.conf (we include the private key in /etc/ssl/private/cert.pem as well, so no separate specific key file is required). But your config should override that for this specific site.

I suspect that you are right, but to double check that it's not some other issue, I suggest actually checking the certificate (in your browser) to 100% confirm your suspicions.

Another thing to check is the permissions of your cert. I forget whether it's an issue for Apache or not, but some apps will refuse to use certs/keys that aren't adequately locked down. Default ownership and permissions for sensitive things such as SSL certs/keys are ownership by root and 400 permissions (owner read-only, no access for anyone else). I.e.:

root@test ~# ls -la /etc/ssl/private/cert.pem 
-r-------- 1 root root 3714 Apr 26  2022 /etc/ssl/private/cert.pem

If yours don't look like that, fix like this:

chown root:root /etc/ssl/path/to/your/{cert,key}
chmod 400 /etc/ssl/path/to/your/{cert,key}
Timmy's picture

Perhaps the browser was just holding on to the old site data somehow.

 

Someone could be trying to impersonate the site and you should not continue.
 
Websites prove their identity via certificates. Firefox does not trust nextcloud.mydomain.org because its certificate issuer is unknown, the certificate is self-signed, or the server is not sending the correct intermediate certificates.

 

Guessing that it wants a bundled cert. I'll try concat'ing the CA cert and my domain cert tonight.
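
For the record, the concat itself should just be something like this (leaf cert first, then the CA cert, using the files I already have):

cat mydomain.org.crt mydomain.org.ca > mydomain.org.fullchain.crt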

Jeremy Davis's picture

If you look at the cert in your browser, hopefully the specific issue may become clear?

IMO that error message isn't really as useful as it could be.

Timmy's picture

So two things appear to be at play.

One - the Let's Encrypt *.ca file can simply be renamed *.ca-bundle, as there is no chain of certs, just the single CA cert. Many of the instructions I found had a line like:

`SSLCertificateChainFile /etc/ssl/mydomain.org/mydomain.org.ca-bundle`

 

Two - for whatever reason, I had never moved the Let's Encrypt setup from Staging to Production. Derp.

 

For those interested, below is the main distribution script; the comments contain the pfSense-side command and a description of how it is all used.

 

#!/bin/bash -e
# run from the folder where the certs get dumped by the router
cd ~/certs
cp mydomain.org.ca mydomain.org.ca-bundle
scp -i ~/.ssh/myusr mydomain.org* myusr@nextcloud.mydomain.org:/etc/ssl/mydomain.org/

 

# On pfSense under ACME, you can add a shell script to run that scp's the files to an intermediary.
# scp -i /home/myusr/.ssh/id_rsa /conf/acme/mydomain.org.crt /conf/acme/mydomain.org.key /conf/acme/mydomain.org.fullchain /conf/acme/mydomain.org.ca myusr@192.168.50.50:/home/myusr/certs


# In order to use this on a new server, the myusr pub key needs to be copied to the authorized_keys file in
# the target myusr's .ssh folder.
# The myusr created on the target server needs to have perms on the target folder.
# certgrp needs to be created and given control of the mydomain.org folder that will house the mydomain certs.
# Run the command inside backticks to do this: `chmod 774 mydomain.org`


# need to add this to the /etc/apache2/sites-enabled/SITE-FILE.conf
# where SITE-FILE.conf is the config file in question.
# below goes in the VirtualHost 443 section, after other stuff

# SSLCertificateFile /etc/ssl/mydomain.org/mydomain.org.crt
# SSLCertificateKeyFile /etc/ssl/mydomain.org/mydomain.org.key
# SSLCertificateChainFile /etc/ssl/mydomain.org/mydomain.org.ca-bundle

# Check file syntax with `apachectl -t`
#

 

Hope this helps anyone else!

Jeremy Davis's picture

Great work! Glad to hear you're all go.

Timmy's picture

For those who might be curious how to deal with Gitea/Nginx, there is a pair of lines to replace in /etc/nginx/include/ssl:

ssl_certificate     /etc/ssl/mydomain.org/mydomain.org.all.pem;
ssl_certificate_key /etc/ssl/mydomain.org/mydomain.org.key;
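
Then check the config and reload Nginx so it picks up the change:

nginx -t && service nginx reload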

Jeremy Davis's picture

I'm not sure if I mentioned before, but if you are only hosting one domain per server, you can just replace /etc/ssl/private/cert.pem and /etc/ssl/private/cert.key and your cert will then work for Webmin and Webshell too.

The only thing to note is that the default cert.pem also includes the key and Diffie-Hellman parameters (dhparams). I.e.:

root@tkldev ~# grep '^---' /etc/ssl/private/cert.*
/etc/ssl/private/cert.key:-----BEGIN PRIVATE KEY-----
/etc/ssl/private/cert.key:-----END PRIVATE KEY-----
/etc/ssl/private/cert.pem:-----BEGIN CERTIFICATE-----
/etc/ssl/private/cert.pem:-----END CERTIFICATE-----
/etc/ssl/private/cert.pem:-----BEGIN PRIVATE KEY-----
/etc/ssl/private/cert.pem:-----END PRIVATE KEY-----
/etc/ssl/private/cert.pem:-----BEGIN DH PARAMETERS-----
/etc/ssl/private/cert.pem:-----END DH PARAMETERS-----

You can also put the cert chain in cert.pem too (without needing to add an extra config entry to Apache - Apache can work it out itself). FWIW, that's what we do with our Let's Encrypt integration and it "just works".
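
So, building a combined cert.pem from your distributed files might look something like this (untested; the file names are placeholders, and the dhparams only need generating once):

# generate DH parameters once (can take a while)
[ -f /etc/ssl/private/dhparams.pem ] || openssl dhparam -out /etc/ssl/private/dhparams.pem 2048

# cert + chain + key + dhparams, in that order
cat /etc/ssl/mydomain.org/mydomain.org.crt \
    /etc/ssl/mydomain.org/mydomain.org.ca \
    /etc/ssl/mydomain.org/mydomain.org.key \
    /etc/ssl/private/dhparams.pem > /etc/ssl/private/cert.pem
chown root:root /etc/ssl/private/cert.pem
chmod 400 /etc/ssl/private/cert.pem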
