Homebox Not Working With Nginx (Docker) [Documentation]
I decided that I wanted to self-host Homebox by hay-kot: https://github.com/hay-kot/homebox
I have a VPS running Rocky Linux, and I use Nginx for serving webpages and reverse proxying. I went with the Docker setup because, as of this writing, the documentation just says “TODO” for compiling the binary. I created the Nginx configuration like so:
File: /etc/nginx/conf.d/inventory.tnology.dev.conf
server {
    server_name inventory.tnology.dev;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    client_max_body_size 1024M;

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Host $host;
        proxy_pass https://127.0.0.1:3100;
    }

    access_log /var/log/nginx/inventory.tnology.dev.access.log;
    error_log /var/log/nginx/inventory.tnology.dev.error.log;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
}

server {
    if ($host = inventory.tnology.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name inventory.tnology.dev;
    listen 80;
    return 404; # managed by Certbot
}
I cloned the repository from GitHub with the git command line and, once Docker was ready, I ran docker-compose up -d. I made sure that I had an A record in Cloudflare pointing to the IP address of my VPS.
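If you want to confirm the container actually came up, a quick sanity check (assuming the docker-compose command used above) is:
# run from the directory containing docker-compose.yml; homebox should show a state of "Up"
docker-compose ps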
Here is the docker-compose.yml file:
version: "3.4"

services:
  homebox:
    image: ghcr.io/hay-kot/homebox:latest
    # image: ghcr.io/hay-kot/homebox:latest-rootless
    container_name: homebox
    restart: always
    environment:
      - HBOX_LOG_LEVEL=info
      - HBOX_LOG_FORMAT=text
      - HBOX_WEB_MAX_UPLOAD_SIZE=10
    volumes:
      - homebox-data:/data/
    ports:
      - 3100:7745

volumes:
  homebox-data:
    driver: local
Note: Make sure you have set up the DNS record to point the subdomain at the right IP if you haven’t already! This is an easy mistake to make.
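One way to verify is to query the record directly. Assuming dig is installed, something like this works (replace the hostname with your own subdomain):
# look up the A record for your subdomain
dig +short A inventory.example.com
# note: if the record is proxied through Cloudflare (orange cloud), this returns
# Cloudflare's edge IPs rather than your VPS's IP, which is expected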
Cloudflare SSL Setup – skip this section if you don’t need it
For SSL, I’m using Cloudflare Origin Certificates. Here’s how you can create one:
Step 1: Go to your Cloudflare dashboard and select the domain you want to set up SSL for.
Step 2: On the left navigation bar, under “SSL/TLS,” click Overview. Set your SSL/TLS encryption mode to Full.
Step 3: Now, under “SSL/TLS” on the left navigation bar, click Origin Server. Click the Create Certificate button.
Step 4: Assuming you want the certificate to be valid for the normal domain and all subdomains, make sure the two hostnames selected are your domain and “*.” followed by your domain. For example, if your domain is example.com, the two hostnames should be example.com and *.example.com.
Step 5: Click the Next button. You should hopefully see the Origin Certificate and Private Key.
Step 6: Copy the content of the Origin Certificate and write it to the file and directory of your choice, ensuring the file ends in .pem. For example, you can copy it to /etc/ssl/cert.pem.
Step 7: Copy the content of the Private Key and write it to the file and directory of your choice, ensuring the file ends in .pem. For example, you can copy it to /etc/ssl/key.pem.
Step 8: Assuming you are using Nginx, add the ssl_certificate and ssl_certificate_key directives inside a server block (meaning inside the curly brackets after the word “server”). For example, if your Origin Certificate is /etc/ssl/cert.pem and your Private Key is /etc/ssl/key.pem, add these two lines to your Nginx configuration file:
ssl_certificate /etc/ssl/cert.pem;
ssl_certificate_key /etc/ssl/key.pem;
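For context, here is a minimal sketch of where those directives sit; the hostname is a placeholder, and a real server block will contain more than this (the proxy settings shown earlier, logging, and so on):
server {
    server_name myservice.example.com;
    listen 443 ssl http2;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    # ... location blocks, access_log/error_log, etc.
}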
Trying to get it to work
I noticed that, when I ran the Docker container, the port used was 3100. I saw this via the netstat -tnlp command.
[root@t-nology conf.d]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:3100 0.0.0.0:* LISTEN 45275/docker-proxy
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1090/sshd: /usr/sbi
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2928/nginx: master
tcp 0 0 0.0.0.0:444 0.0.0.0:* LISTEN 1592/node /var/www/
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 2928/nginx: master
tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN 4456/node
tcp6 0 0 :::3100 :::* LISTEN 45280/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 1090/sshd: /usr/sbi
tcp6 0 0 :::80 :::* LISTEN 2928/nginx: master
tcp6 0 0 :::3306 :::* LISTEN 844/mysqld
tcp6 0 0 :::33060 :::* LISTEN 844/mysqld
tcp6 0 0 :::443 :::* LISTEN 2928/nginx: master
More specifically, notice that port 3100 is handled by docker-proxy:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:3100 0.0.0.0:* LISTEN 45275/docker-proxy
However, here is where the problem arose: I got a Cloudflare Error 502 page whenever I attempted to visit the website I had set up. This is obviously a problem.
Here are some things I tried:
- Running systemctl disable firewalld and systemctl stop firewalld made no difference, so I re-enabled firewalld afterwards.
- Running docker-compose restart homebox and systemctl restart nginx did not make a difference.
- SELinux is disabled on my VPS. However, try disabling SELinux if it isn’t disabled already; you can check with the getenforce command.
- I tried changing the “ports” value in my docker-compose.yml file from 3100:7745 to 3100:3100. This did not help either.
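If you’d rather not disable firewalld or SELinux outright while troubleshooting, a couple of quick read-only checks (these commands should be available on Rocky Linux) will tell you where they stand:
# shows whether SELinux is Enforcing, Permissive, or Disabled
getenforce
# shows the services and ports firewalld currently allows in the active zone
firewall-cmd --list-all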
Troubleshooting
I looked at the currently running Docker containers with docker container ls, and I stopped the container with docker stop <container ID>. I then ran docker-compose up again, but this time without the -d parameter. The -d parameter makes the container run detached (in the background); by not passing it, we can see the output that gets printed.
One interesting thing I noticed is that the last line printed said the HTTP server was running on port 7745, but on the host the listening port was actually 3100. This was the case regardless of what my ports value was set to in docker-compose.yml, even when it was 3100:3100.
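That behavior makes sense once you remember that the ports value in docker-compose is host:container, so the container log reports the port inside the container while netstat on the host shows the host side of the mapping:
ports:
  - 3100:7745   # host port 3100 -> container port 7745 (the port Homebox listens on inside the container)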
Anyways, when I visited the webpage through my VPS’s public IP address and the port directly, I was able to load the page. For example, if my VPS’s public IP address were 1.1.1.1, I would visit http://1.1.1.1:3100. When I did this, a lot of lines were printed in the SSH session that was running the Docker container without the -d parameter.
As a sanity check, I tried changing the proxy_pass to an IP Google owns (found by running nslookup google.com in my local machine’s Windows Command Prompt), and my subdomain then served a Google 400 page. This suggests that Nginx was correctly using proxy_pass to reach another IP.
I checked the /var/log/nginx/inventory.tnology.dev.error.log file. This file logs errors because I pointed the error_log directive in my Nginx configuration at it. This is what I found:
2024/01/21 14:38:54 [error] 50198#50198: *109470 upstream prematurely closed connection while reading response header from upstream, client: <ip address>, server: inventory.tnology.dev, request: "GET /favicon.ico HTTP/2.0", upstream: "https://127.0.0.1:3100/favicon.ico", host: "inventory.tnology.dev", referrer: "https://inventory.tnology.dev/"
So what I was seeing was an HTTP 502 Bad Gateway error, with upstream prematurely closed connection while reading response header from upstream in the log file.
I tried changing my Nginx configuration so that the proxy_pass directive in the location block used http:// rather than https:// – which resulted in a different error. Rather than getting a Cloudflare 502 error, I was now seeing a page that simply had the text 400 Bad Request.
At a different point in time, I found this in my error log file:
File: /var/log/nginx/inventory.tnology.dev.error.log
2024/01/21 15:01:07 [error] 50567#50567: *110446 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 172.70.178.106, server: inventory.tnology.dev, request: "GET / HTTP/2.0", upstream: "https://127.0.0.1:3100/", host: "inventory.tnology.dev"
So now I was getting an SSL_do_handshake() failed error, with SSL routines::wrong version number while SSL handshaking to upstream in the message. This makes it seem like an SSL issue. However, I was using the same Cloudflare Origin Certificate and Private Key as the other services I host with Nginx on that same VPS, and those services were working just fine.
One thing I noticed is that when I ran curl https://127.0.0.1:3100, I got this error:
[root@t-nology conf.d]# curl https://127.0.0.1:3100
curl: (35) error:0A00010B:SSL routines::wrong version number
However, after getting this routines::wrong version number error with curl, I tried the same request with the HTTP protocol rather than HTTPS. The result was the HTML of the webpage, meaning that HTTP worked but HTTPS did not.
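In other words, the container speaks plain HTTP on that port, so anything attempting a TLS handshake against it fails. Side by side, the two tests look roughly like this:
# TLS handshake against a plain-HTTP port fails
curl https://127.0.0.1:3100
# -> curl: (35) error:0A00010B:SSL routines::wrong version number

# plain HTTP returns the page HTML
curl http://127.0.0.1:3100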
Solution
The solution to my problem was quite simple. Take a look at my Nginx configuration at the top of this article. One thing I did was comment out the following lines:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
Just commenting those lines out alone didn’t fix it. However, I then also commented out one of the two duplicate proxy_set_header Host $host; directives. After doing that (with proxy_pass pointing at http:// rather than https://, as described earlier), it worked for me! My configuration turned out to be like this:
server {
    server_name inventory.tnology.dev;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    client_max_body_size 1024M;

    location / {
        # proxy_set_header Upgrade $http_upgrade; # commented 1/21/2024 8:38 PM EST
        # proxy_set_header Connection 'upgrade'; # commented 1/21/2024 8:38 PM EST
        # proxy_set_header Host $host; # commented 1/21/2024 8:42 PM EST because it's a duplicate
        # proxy_cache_bypass $http_upgrade; # commented 1/21/2024 8:38 PM EST
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3100;
    }

    access_log /var/log/nginx/inventory.tnology.dev.access.log;
    error_log /var/log/nginx/inventory.tnology.dev.error.log info;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
}

server {
    if ($host = inventory.tnology.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name inventory.tnology.dev;
    listen 80;
    return 404; # managed by Certbot
}
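After editing the configuration, remember to validate it and reload Nginx so the change takes effect, along these lines:
# test the configuration for syntax errors, then reload if the test passes
nginx -t && systemctl reload nginx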
Thanks to Dirbaio from Stack Overflow for this post: https://stackoverflow.com/questions/62392574/troubleshooting-application-behind-nginx-reverse-proxy-as-post-put-requests-are
If that doesn’t solve your issue, though, here are a few things you can try:
- This one might be obvious, but make sure you have a DNS record pointing to the desired IP address. Sometimes people forget to do that (I’ve forgotten in the past myself), and it’s a simple mistake with a simple solution.
- Make sure SELinux is disabled. You can run the getenforce command to see whether it’s enabled. If it is, open the /etc/selinux/config file with an editor of your choice (such as nano /etc/selinux/config), set the SELINUX value to disabled so the line reads SELINUX=disabled, save the file, and run the command setenforce 0.
- In your Nginx configuration file, make sure that the proxy_pass directive in the location block (such as location / inside the server block) uses the HTTP protocol and not HTTPS. This means it starts with http:// and not https://.
- Try setting up an error_log directive in your Nginx configuration file. For example, if you’re troubleshooting an instance at myservice.example.com, you can add this to the server block: error_log /var/log/nginx/myservice.example.com.error.log;
- See which port the server is actually listening on. You can view listening ports with the netstat -tnlp command, as shown in the example after this list.
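For that last point, note that netstat comes from the net-tools package and isn’t always installed on Rocky Linux; ss should be available out of the box and shows the same information:
# list all listening TCP sockets along with the owning process
ss -tnlp
# or narrow it down to the host port you expect (3100 in this article)
ss -tnlp | grep 3100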
Hopefully this article has helped you, or at least given you an idea or pointed you in the right direction. If it didn’t, I’m sorry about that! I decided to document this because it’s great when people write these things up once they get them working, so that others can benefit in the future.
Thanks for reading!