Basic Access Restrictions with NGinx
Last week, we went over how to quickly build a reverse proxy using NGinx. While this solved our immediate problem of hiding n+1 servers behind our n IP addresses, it could still stand a bit of work: as it stands right now, we don't have any access control capability.
The Setup
Let's assume we have two sites that need to be proxied: public.example.com and internal.example.com.[1][2]
upstream internal {
    server 10.1.2.3:8080;
}

upstream public {
    server 10.1.2.4:8090;
}

server {
    listen *:80;
    server_name public.example.com;

    location / {
        proxy_pass http://public;
    }
}

server {
    listen *:80;
    server_name internal.example.com;

    location / {
        proxy_pass http://internal;
    }
}
Now that we know our setup, here are our requirements:
- Our internal site only works from a list of trusted addresses
- We add a restricted site that allows all internal access, but requires external users to log in with a domain account that is part of the ‘Restricted Access’ group
- Our public site needs to remain fully open
IP Based Restrictions
A first, naive approach would be to simply not add a DNS entry for the internal server to the public-facing name server. While this might appear to be enough at first glance, it doesn't really do the job we thought it would. If someone knew the name of our internal server, they could defeat our attempt at security simply by adding the public IP of the reverse proxy to their local hosts file. Since the HTTP Host header would still contain the name of the server we're trying to connect to, the request would still be handled by our proxy.
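To see why this bypass works, here's a small self-contained sketch: a throwaway local HTTP server stands in for the reverse proxy, and the client connects by raw IP while claiming, via the Host header, to be asking for the "hidden" internal name. All names and the server itself are illustrative, not part of any real deployment.

```python
import http.server
import threading
import urllib.request

# Stand-in for the reverse proxy: it simply echoes back the Host header
# it received, which is exactly what nginx routes server blocks on.
class EchoHost(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = self.headers.get("Host", "").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHost)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Connect by IP address, but send the "hidden" name in the Host header,
# just as an attacker's hosts-file entry would cause their browser to do.
port = server.server_address[1]
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Host": "internal.example.com"})
with urllib.request.urlopen(req) as resp:
    body = resp.read().decode()
print(body)  # → internal.example.com
server.shutdown()
```

The server never consulted DNS; the client named the site itself. That's why hiding a record is no substitute for access control.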
In order to achieve our goal of access control, we're going to have to modify our configuration a little. Thankfully, all of the heavy lifting is performed by the provided ngx_http_access_module. In fact, we can add all of our required access control by adding two more directives to the location block of our internal server.
location / {
    allow 10.1.0.0/16;
    deny all;
    proxy_pass http://internal;
}
Above, we've added the allow and deny directives. They are evaluated in the order they are encountered. The first, allow, gives NGinx a CIDR range that requests should be accepted from; after that, deny all tells NGinx to reject any request that didn't match. Before ending our ACL chain with deny all, we can also authorize or block specific machines on our network. We'll add one more rule of each kind: one to disallow an internal machine, and one to allow a specific external one. We'll add our exceptions at the top, so they get evaluated first.
location / {
    deny 10.1.2.5;
    allow 1.2.3.4;
    allow 10.1.0.0/16;
    deny all;
    proxy_pass http://internal;
}
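The first-match behavior is easy to get wrong, so here's a small simulation of the evaluation the access module performs on the chain above. This is a sketch in Python, not nginx's actual implementation; the rule list mirrors the location block's directives.

```python
import ipaddress

# The deny/allow chain from the location block, in order.
# "deny all" is expressed as the all-matching network 0.0.0.0/0.
RULES = [
    ("deny",  "10.1.2.5/32"),
    ("allow", "1.2.3.4/32"),
    ("allow", "10.1.0.0/16"),
    ("deny",  "0.0.0.0/0"),
]

def check(client_ip: str) -> str:
    addr = ipaddress.ip_address(client_ip)
    for action, network in RULES:
        if addr in ipaddress.ip_network(network):
            return action  # the first matching rule wins
    return "allow"         # nothing matched: access is not restricted

print(check("10.1.2.5"))  # → deny  (the blocked internal machine)
print(check("1.2.3.4"))   # → allow (the trusted external machine)
print(check("10.1.7.8"))  # → allow (rest of the internal network)
print(check("8.8.8.8"))   # → deny  (everyone else)
```

Note why the exceptions sit at the top: if 10.1.2.5 were checked after allow 10.1.0.0/16, the broader allow would match first and the deny would never fire.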
User Restrictions
Our internal server now has some IP-based access controls applied to it, but our job isn't done; we still need to add our restricted site. We'll start with some boilerplate config that allows all access to the site.
upstream restricted {
    server 10.1.2.6:8085;
}

server {
    listen *:80;
    server_name restricted.example.com;

    location / {
        proxy_pass http://restricted;
    }
}
We'll add two directives to our location block to enable authentication with local users. The first, auth_pam, sets the message shown in the login prompt when users try to authenticate. The second directive, auth_pam_service_name, tells NGinx that authentication requests will be handled by the nginx PAM configuration on the local system. (Note that auth_pam is provided by the third-party ngx_http_auth_pam_module, which needs to be installed alongside NGinx.)
location / {
    auth_pam "Authentication Required";
    auth_pam_service_name "nginx";
    proxy_pass http://restricted;
}
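It's worth remembering what this prompt produces on the wire: the PAM module authenticates over HTTP Basic auth, so after the user answers the browser's prompt, the credentials travel base64-encoded, not encrypted, in a request header. A quick sketch, with "alice"/"s3cret" as made-up credentials:

```python
import base64

# HTTP Basic auth: the browser joins user and password with a colon
# and base64-encodes the result. Base64 is trivially reversible.
cred = base64.b64encode(b"alice:s3cret").decode()
print(f"Authorization: Basic {cred}")
# → Authorization: Basic YWxpY2U6czNjcmV0
```

Since anyone on the path can decode that header, it's a good idea to serve a login-protected site over TLS rather than plain port 80.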
Our configuration looks good, but it won't work until we create the PAM configuration we referenced. We'll provide PAM with two directives. One will check that the user is a member of the “restricted^access” group on our machine. The other will instruct the system to wait three seconds before returning the result in the case of an authentication failure, which throttles brute-force attempts. Our configuration will be placed at /etc/pam.d/nginx.
auth optional pam_faildelay.so delay=3000000
auth required pam_succeed_if.so user ingroup restricted^access
With this configuration, NGinx will check for a local machine user that is in the restricted^access group. To allow it to use a domain user, you'll need to follow the steps in “Adding an Ubuntu Machine to a Windows Domain.”
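The way this two-line stack behaves can be summarized in a short sketch (Python, not PAM itself): the required pam_succeed_if line decides the outcome, while the optional pam_faildelay line only slows failures down. The group name matches our PAM config; the delay value is the 3,000,000 microseconds from above.

```python
import time

# Rough model of the /etc/pam.d/nginx stack: the "required" group check
# determines success or failure; the "optional" fail-delay module never
# blocks a login, it only delays the response when authentication fails.
def pam_authenticate(user_groups, fail_delay=3.0):
    if "restricted^access" in user_groups:
        return True
    time.sleep(fail_delay)  # pam_faildelay: make brute forcing slow
    return False

print(pam_authenticate({"users", "restricted^access"}))  # → True
print(pam_authenticate({"users"}, fail_delay=0.0))       # → False
```

The delay only matters on the failure path, which is exactly what you want: legitimate users see no slowdown, while a password-guessing script is limited to one attempt every three seconds per connection.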
Combining Restrictions
We were a bit overzealous with our configuration for the restricted site: all users are now required to log in, but our internal users were supposed to bypass the login. We can fix this by adding the satisfy any directive, which tells NGinx to allow the connection if any of the access control mechanisms passes. Thus, with satisfy any in place, we simply need to add our allow directive.
location / {
    satisfy any;
    allow 10.1.0.0/16;
    auth_pam "Authentication Required";
    auth_pam_service_name "nginx";
    proxy_pass http://restricted;
}
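The semantics of satisfy any reduce to a single boolean: access is granted when either the IP check or the authentication succeeds. A minimal sketch, using our trusted 10.1.0.0/16 range and documentation addresses for the external clients:

```python
import ipaddress

# satisfy any: either passing control is enough to let the request in.
TRUSTED = ipaddress.ip_network("10.1.0.0/16")

def access_granted(client_ip: str, authenticated: bool) -> bool:
    return ipaddress.ip_address(client_ip) in TRUSTED or authenticated

print(access_granted("10.1.4.2", False))     # → True  (internal, no login)
print(access_granted("203.0.113.9", True))   # → True  (external, logged in)
print(access_granted("203.0.113.9", False))  # → False (external, no login)
```

The default, satisfy all, would instead require both checks to pass, locking out every external user regardless of credentials.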
Final Configuration
At this point, we can finally say that we are done. The public site has no access restrictions, the restricted site requires a login from external users, and the internal site is not externally accessible. External attempts to hit the internal site will now return “403 Forbidden,” while failed attempts to log in to the restricted site will return “401 Unauthorized.”[3]
upstream public {
    server 10.1.2.4:8090;
}

upstream restricted {
    server 10.1.2.6:8085;
}

upstream internal {
    server 10.1.2.3:8080;
}

server {
    listen *:80;
    server_name public.example.com;

    location / {
        proxy_pass http://public;
    }
}

server {
    listen *:80;
    server_name restricted.example.com;

    location / {
        satisfy any;
        allow 10.1.0.0/16;
        auth_pam "Authentication Required";
        auth_pam_service_name "nginx";
        proxy_pass http://restricted;
    }
}

server {
    listen *:80;
    server_name internal.example.com;

    location / {
        deny 10.1.2.5;
        allow 1.2.3.4;
        allow 10.1.0.0/16;
        deny all;
        proxy_pass http://internal;
    }
}
/etc/pam.d/nginx
auth optional pam_faildelay.so delay=3000000
auth required pam_succeed_if.so user ingroup restricted^access
Footnotes
[1] Ideally, this configuration wouldn't occur: private and public resources shouldn't overlap like this. But the world isn't always ideal. A lack of available computing resources, an inherited network topology, or being in charge of only a subset of the network: none of these situations are ideal, but they certainly happen.
[2] Even given more IP addresses than servers, as is the case on an internal network, there are still a number of valid reasons to run a proxy, such as making several services all appear to be hosted on the same machine, or providing some load balancing capability.
[3] If you've worked with many of Atlassian's products, you might recognize that the public, restricted, and internal servers in our examples use the default ports for Confluence, Bamboo, and Jira, respectively.