I have the Nginx appliance running in a Hyper-V virtual machine, acting as my proxy to my development servers. I would like to self-host Bitwarden, which requires Docker. Rather than create another virtual machine, would it be a good idea to install Docker on the Nginx appliance and then install Bitwarden from there? Nginx would need to proxy ports 80 and 443 based on hostnames, but it already does this, so it should work.

Are there any security implications from this setup? I've never used Docker before. It would only be for a small number of users.
Thanks
Paul

 

Jeremy Davis:

Assuming that you trust what will be running in the Docker container, that should be fine. I just had a quick google and, assuming you mean this, at face value it seems pretty legit to me (obviously I haven't done a code review or anything...).

Docker is a pretty cool technology, but is really more-or-less a somewhat hardened chroot. So it's certainly no security silver bullet. But it is a pretty handy way to install applications and is pretty popular these days.
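If you do go that way, it's also worth using some of Docker's own restrictions when you start the container. This is only a rough sketch, not Bitwarden's official instructions; the image name, container name and ports below are placeholders:

# publish the web port on localhost only, so only the local Nginx proxy can reach it;
# drop Linux capabilities and block privilege escalation inside the container
# (some images need specific capabilities added back with --cap-add)
docker run -d --name example-app \
    -p 127.0.0.1:8080:80 \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --restart unless-stopped \
    example/image:latest

Your existing Nginx vhost then only needs to proxy the relevant hostname through to 127.0.0.1:8080.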

Installation of Docker is generally pretty straightforward, but I'll give you the more locked-down method (lifted from our buildtasks setup-docker script):

# install dependencies
apt-get update
apt-get install apt-transport-https gnupg2 lsb-release  # lsb-release provides the lsb_release command used below

# download the docker gpg key and export it somewhere safe
GPG_FINGERPRINT=9DC858229FC7DD38854AE2D88D81803C0EBFCD88
gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys $GPG_FINGERPRINT
mkdir -p /usr/share/keyrings/
gpg --output /usr/share/keyrings/docker.gpg --export $GPG_FINGERPRINT

# add the docker sources list
cat > /etc/apt/sources.list.d/docker.list <<EOF
deb [arch=amd64 signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable
EOF

# pin the package
REPO_ORIGIN="download.docker.com"
cat > /etc/apt/preferences.d/docker <<EOF
Package: *
Pin: origin "$REPO_ORIGIN"
Pin-Priority: 100

Package: docker-ce
Pin: origin "$REPO_ORIGIN"
Pin-Priority: 500
EOF

# install docker
apt-get update
apt-get install docker-ce

# test docker is working
docker run hello-world

As for the details of running the docker container itself, I suggest that you follow their instructions...
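Once it's running, a couple of generic Docker commands are handy for a sanity check before you point Nginx at it (the container name below is whatever you called it when you started it):

# list running containers and the host ports they're published on
docker ps

# tail a container's logs if it doesn't seem to come up properly
docker logs --tail 50 <container-name>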

I hope that gets you going.

In my Seafile Docker start script, I mapped host port 3180 to container port 8080 (3180:8080).

Now I am not sure how to configure my Nginx proxy.

I tried two options: a proxy_pass and the same fastcgi config as written in the Seafile manual.
Neither works on my server; I get a 502 Bad Gateway error.

Would anyone have an idea what I am doing wrong? I know it’s not a real Seafile issue, more an Nginx issue. But any help would be appreciated. I will keep looking as well.

location /webdav {
    proxy_pass http://localhost:3180;
    proxy_read_timeout 310s;
    proxy_set_header Host $host;
    proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Connection "";
    proxy_http_version 1.1;
}

and I tried the same as in the Seafile manual:

location /webdav {
    fastcgi_pass 127.0.0.1:3180;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_script_name;
    fastcgi_param   SERVER_PROTOCOL     $server_protocol;
    fastcgi_param   QUERY_STRING        $query_string;
    fastcgi_param   REQUEST_METHOD      $request_method;
    fastcgi_param   CONTENT_TYPE        $content_type;
    fastcgi_param   CONTENT_LENGTH      $content_length;
    fastcgi_param   SERVER_ADDR         $server_addr;
    fastcgi_param   SERVER_PORT         $server_port;
    fastcgi_param   SERVER_NAME         $server_name;
    fastcgi_param   HTTPS               on;
    fastcgi_param   HTTP_SCHEME         https;

    client_max_body_size 0;
    proxy_connect_timeout  36000s;
    proxy_read_timeout  36000s;
    proxy_send_timeout  36000s;
    send_timeout  36000s;

    # This option is only available for Nginx >= 1.8.0. See more details below.
    proxy_request_buffering off;

    access_log      /var/log/nginx/seafdav.access.log;
    error_log       /var/log/nginx/seafdav.error.log;
}

To clarify: do you have Nginx running on the same machine as Seafile? And do you also have a server block before the location block in your Nginx config, so it knows which port and domain to listen on? For example:

server {
    listen 443 ssl;
    server_name abc.com;

    # then the location block(s) go inside the server block
    location / {
        # proxy_pass / fastcgi config here
    }
}
Jeremy Davis:

A 502 Bad Gateway error is caused by Nginx not being able to connect to the backend server. Both of the configs you have posted are trying to connect to port 3180 on localhost (127.0.0.1 is the IP address of localhost).

So my guess is that your Nginx server is not running on the same machine as your Seafile server. If that's the case, you'll need to adjust the proxy_pass target to point at the server which is hosting Seafile. The fastcgi config will only work if the Seafile server is running fastcgi.
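As a first step it's worth confirming that something is actually listening on that port and that it's reachable from the Nginx box. A rough checklist (the address is just an example; substitute the machine actually running Seafile):

# on the machine running the Seafile container: check the port mapping and listener
docker ps
ss -tlnp | grep 3180

# from the Nginx box: try to reach the backend directly
# (use 127.0.0.1 if they're on the same machine, otherwise the Seafile host's IP)
curl -v http://127.0.0.1:3180/webdav

If curl can't connect either, the problem is the container/port mapping or a firewall rather than your Nginx config.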
