• Use Cloudflare to get an API token.
  • Set a wildcard A record (*.domain.com, to go with the wildcard cert) pointing to your server’s local IP, e.g. 192.168.0.1, and turn off the Cloudflare proxy.
  • Go into NPM and set up the SSL cert using a DNS challenge and your API token.
  • Set up a proxy host using your subdomain.domain.com, pointing to your Docker container.
  • Key step! Make sure nothing else is bound to ports 80 and 443 on your machine. On Unraid the device management ports are set to these by default, but NPM needs them to proxy local hosts (see the compose sketch after this list).
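
  For reference, a minimal docker-compose sketch for NPM with those port bindings looks roughly like this (image tag, ports, and volume paths follow the commonly documented defaults; adjust for your setup):

      # Sketch only: Nginx Proxy Manager with the host ports it needs for local proxying.
      services:
        npm:
          image: jc21/nginx-proxy-manager:latest
          restart: unless-stopped
          ports:
            - "80:80"      # HTTP  - must be free on the host (move the management UI off it)
            - "443:443"    # HTTPS - same caveat
            - "81:81"      # NPM admin UI
          volumes:
            - ./data:/data
            - ./letsencrypt:/etc/letsencrypt
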
  • Tinkerer@lemmy.ca · 2 days ago

    I’m running Cloudflare and NPM. I did a DNS challenge to get my wildcard cert, then put in access lists so that only my private IP subnet can reach the internal hosts. I also have my OPNsense firewall redirect any requests from those internal hosts back to my NPM host. Everything internal has a valid HTTPS cert.
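
    (NPM is nginx underneath, and its access lists boil down to allow/deny rules; a hand-written equivalent that restricts one proxy host to a private subnet might look roughly like this. Subnet, hostname, cert paths, and upstream are placeholders, not taken from the thread.)

        server {
            listen 443 ssl;
            server_name app.domain.com;                   # placeholder subdomain
            ssl_certificate     /path/to/fullchain.pem;   # wildcard cert (illustrative path)
            ssl_certificate_key /path/to/privkey.pem;

            allow 192.168.1.0/24;                         # LAN subnet (assumption)
            deny  all;                                    # everyone else gets 403

            location / {
                proxy_pass http://192.168.1.50:8080;      # placeholder internal service
            }
        }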

      • Tinkerer@lemmy.ca · 2 days ago

        Ah yeah, I’m trying to move NPM to Podman and the access lists don’t work for some reason. On Docker, though, it works very well.

  • 2xsaiko@discuss.tchncs.de · 2 days ago (edited)

    This seems super overcomplicated. What I would do: put all the subdomains on public DNS, let HTTP(S) through the firewall for the respective hosts, deny everything on the HTTP server that comes from outside your local network and isn’t under the HTTP challenge path, and then run the HTTP challenge as you would for a public site.

    Then you can get certs, everyone outside trying to access will get a 403, and inside the network you can access things as normal.

    Of course you’ll have to trust your HTTP server’s ACL for that, but I’m just going to assume servers like nginx (which I use) have a reliable implementation.
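
    A rough sketch of that in nginx (domain, subnet, and webroot are placeholders, not from the thread):

        server {
            listen 80;
            server_name app.domain.com;                  # placeholder

            # ACME HTTP-01 challenge stays reachable from anywhere.
            location /.well-known/acme-challenge/ {
                root /var/www/acme;                      # webroot used by the ACME client (assumption)
            }

            # Everything else: local network only, outsiders get 403.
            location / {
                allow 192.168.1.0/24;                    # LAN subnet (assumption)
                deny  all;
                proxy_pass http://127.0.0.1:8080;        # placeholder backend
            }
        }

        # The HTTPS (443) server block gets the same allow/deny, minus the challenge location.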

  • solrize@lemmy.world · 2 days ago

    Proxy host out on the public internet? Usually I just use a local private CA for this, and install the CA root in my browser.

      • solrize@lemmy.world · 2 days ago

        I don’t bother with a proxy host or with Let’s Encrypt, though I guess you could use Let’s Encrypt perfectly well. Back when I was doing this, Let’s Encrypt didn’t exist and you had to actually pay for public certificates, so using locally generated free ones saved money. It also had a minor(?) security advantage: if the private server key somehow leaked, it wouldn’t let people impersonate our internet domain.

        For the private CA I simply used the crappy CA.pl script that comes with OpenSSL (or did at the time). There are much better ways to do it, especially at any kind of scale, but CA.pl sufficed for dealing with a few development machines.
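
        (These days a few plain openssl commands do the same job as CA.pl; a rough sketch, with hostnames and lifetimes as placeholders:)

            # One-off private CA (10-year root).
            openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
              -keyout ca.key -out ca.crt -subj "/CN=My Private CA"

            # Key + CSR for an internal host (name is a placeholder).
            openssl req -newkey rsa:2048 -nodes \
              -keyout server.key -out server.csr -subj "/CN=app.internal.lan"

            # Sign it with the CA, adding the SAN that browsers require.
            openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
              -days 825 -out server.crt \
              -extfile <(printf "subjectAltName=DNS:app.internal.lan")

        Then ca.crt is what you install as a trusted root in the browser.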

  • emax_gomax@lemmy.world · 2 days ago

    I would recommend just using Caddy. It removes the complicated part of SSL management. For a local network it’ll set up a local self-signed certificate authority, and you can just install those certificates on any devices on your LAN that you want to have access. For a public setup it’ll use Let’s Encrypt. You will still need to set up DNS if you want wildcard routing.
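
    (For the local-network case a Caddyfile can be as small as this. Hostname and upstream are placeholders; "tls internal" is what tells Caddy to use its own local CA:)

        app.local.lan {
            tls internal                    # use Caddy's built-in local CA instead of Let's Encrypt
            reverse_proxy 127.0.0.1:8080    # placeholder backend
        }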

    • vividspecter@lemm.ee · 2 days ago

      If you want to use a DNS challenge with Caddy it’s kind of annoying, though (you need to download/compile a separate build with the DNS plugin you need).

      Which is probably a good idea if you don’t plan to expose the services publicly but want a real certificate to avoid self-signed cert warnings.
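
      (For example, with the Cloudflare plugin the usual route is an xcaddy build plus a "dns" line in the Caddyfile. The module name is the real caddy-dns one; the domain and token variable are placeholders:)

          xcaddy build --with github.com/caddy-dns/cloudflare

          # Caddyfile
          *.domain.com {
              tls {
                  dns cloudflare {env.CLOUDFLARE_API_TOKEN}
              }
              reverse_proxy 127.0.0.1:8080    # placeholder backend
          }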

      • emax_gomax@lemmy.world · 2 days ago

        I’ve never had this issue, but I run basically everything through Docker and presumably it bundles this by default.
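
        (If a plugin does turn out to be needed there, the Caddy image documents a two-stage build for exactly this; the plugin here is just an example:)

            FROM caddy:builder AS builder
            RUN xcaddy build --with github.com/caddy-dns/cloudflare

            FROM caddy:latest
            COPY --from=builder /usr/bin/caddy /usr/bin/caddy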