The introductory sigh
Habits are hard to break. When I need a quick proxy I reach for nginx, and it tends to just work. Unless you abuse the poor thing like I do...
Nginx in docker and the port forwarding menace
My setup spans multiple servers, each with its own docker compose build, securely connected over SSH. It is very simple at its core, but the edges...
If you start port forwarding, there are many footguns to avoid. I am not going to get into all of them in this article; instead, here is an example of what the forwarding itself looks like:
```shell
autossh -g -i /home/infra/.ssh/id_ed25519 \
  -M 0 -N -o "ServerAliveInterval 60" \
  -o "ServerAliveCountMax 3" -p 22 \
  -L 172.17.0.1:10012:localhost:10012 someuser@someserver
```
Something like this. The interesting part is `172.17.0.1:10012:localhost:10012`, which forwards port 10012 from localhost on the remote server to port 10012 on our 172.17.0.1 interface, which is docker. It means a service on another server that runs in docker can expose its port 10012 ONLY to localhost, and we are the only ones who can access it, on our docker network on this side. Unless I've made a horrible mistake of course and something completely different is happening.
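On the remote server's side, keeping the service loopback-only comes down to how the port is published in compose. A minimal sketch, with a hypothetical service and image name of my own:

```yaml
services:
  someservice:
    image: someimage:latest   # hypothetical image
    ports:
      # Publish on the remote host's loopback interface ONLY,
      # so the service is reachable just where the SSH tunnel terminates.
      - "127.0.0.1:10012:10012"
```

Without the `127.0.0.1:` prefix, compose publishes on all interfaces, and the whole point of the tunnel is lost.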
BEWARE! If you do this `10012:localhost:10012` and omit the 172.17.0.1 interface, the whole world that can see your host will see the service exposed on port 10012. Yeah!
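The difference between binding to a specific interface and binding to all of them can be sketched in a few lines of Python (the function name here is mine, nothing to do with the tooling above):

```python
import socket

def actual_bind_address(bind_ip: str) -> str:
    """Bind a TCP socket to the given IP on an ephemeral port and
    report the local address the kernel actually assigned."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_ip, 0))  # port 0 = pick any free port
    addr = s.getsockname()[0]
    s.close()
    return addr

# Loopback only: just processes on this machine can connect.
print(actual_bind_address("127.0.0.1"))
# All interfaces: anyone who can see the host can connect,
# which is what omitting the interface gets you when -g is set.
print(actual_bind_address("0.0.0.0"))
```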
That works most of the time perfectly fine. Unless...
Resolvers...
Nginx has a very particular approach to whether it bothers to start up at all when it is acting as a proxy. If you do this:
```nginx
location / {
    proxy_pass http://host.docker.internal:10012;
}
```
it will start up fine, BUT ONLY if nginx can resolve host.docker.internal:10012 at startup. If the upstream has gone away for any reason, nginx will also fail to start with a "host not found in upstream" error or similar.
To bypass this issue, it's possible to ask nginx to pretend to be a load balancer instead: moving the upstream into a variable defers name resolution to request time.
```nginx
# Here's the secret, pretending to load-balance
set $TARGETS "http://host.docker.internal:10012";

location / {
    proxy_pass $TARGETS;
}
```
This will let it start up and it will work, unless the upstream happens to be host.docker.internal. In that case nginx will be sad and say:
```
[error] 35#35: *27 no resolver defined to resolve host.docker.internal,
client: 192.168.1.224,
server: someserver.somewhere,
request: "GET / HTTP/1.1",
host: "someserver.somewhere"
```
Why? Because when proxy_pass contains a variable, nginx resolves the name at request time, and for that it requires an explicit `resolver` directive; it does not fall back to /etc/resolv.conf. I think that is fair enough, it just needs to be told to use docker's internal resolver explicitly.
```nginx
set $TARGETS "http://host.docker.internal:10012";

location / {
    # Internal resolver added!!!
    resolver 127.0.0.11 valid=30s;
    proxy_pass $TARGETS;
}
```
All good? Nope... It turns out that docker's internal resolver does not know what host.docker.internal is. At first that seems odd, since it is registered as an extra host, but as far as I can tell extra hosts are written to the container's /etc/hosts file, and nginx's `resolver` directive performs plain DNS queries that bypass /etc/hosts entirely. Before anybody asks: yes, it is a custom defined network that I'm using, not the default one.
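To convince myself of the split between the hosts file and DNS, a tiny Python check (nothing nginx-specific here, just my own sketch):

```python
import socket

# The libc resolver path (gethostbyname) consults /etc/hosts first,
# which is where docker writes extra_hosts entries like host.docker.internal.
print(socket.gethostbyname("localhost"))

# A DNS server such as docker's 127.0.0.11 never reads this file, and
# nginx's resolver directive talks DNS directly, so hosts-file entries
# are invisible to it. "localhost" itself typically lives in /etc/hosts:
with open("/etc/hosts") as f:
    names = [name
             for line in f
             if line.strip() and not line.lstrip().startswith("#")
             for name in line.split()[1:]]
print("localhost" in names)
```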
So I went back to the original one:
```nginx
location / {
    proxy_pass http://host.docker.internal:10012;
}
```
It works as long as the other host is up and running when nginx starts. Nginx auto-restarts after a bit if it fails, so I can live with it, but it's not nice... If anybody knows how to handle this better, please let me know :S
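The auto-restart I'm leaning on is just the compose restart policy; a sketch, assuming the proxy service is simply called nginx:

```yaml
services:
  nginx:
    image: nginx:alpine       # assumed image, use whatever you run
    restart: unless-stopped   # keep retrying until the upstream host is back
```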
I could also consider Kubernetes, but last time I installed it and set everything up it completely broke after an automatic version update, so I am somewhat reluctant to spend time on it again.
- Heidi (Founder)