Ever wondered how more than one application is deployed to the same machine, and how traffic is routed to the corresponding applications? Keep reading to find out.
Introduction
Goals
By the end of the article, you’ll understand:
What a Reverse Proxy is
What NGINX is
How NGINX helps in managing multiple applications
How to leverage NGINX as a Reverse Proxy
Reverse Proxy
According to Wikipedia, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the server itself.
Refer to this article to better understand what Reverse Proxies are.
NGINX
We will be using NGINX as a Reverse Proxy. According to Wikipedia, NGINX is a web server that can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. The software was created by Igor Sysoev and was publicly released in 2004. NGINX is free and open-source software, released under the terms of the 2-clause BSD license. A large fraction of web servers use NGINX, often as a load balancer.
Some other examples of reverse proxies available are HAProxy, Traefik, and Caddy.
This is an example of an architecture, where two apps are running in the background, but the clients have no idea about them. The clients only know about NGINX which acts as a reverse proxy that sends the request to the appropriate application.
Now that you have a broader idea of what we are about to build, let’s jump right in!
Aim
Deploy two applications and have them managed by NGINX.
Setup & Pre-Requisites:
For this example, we have two sample Express Applications. One can have any kind of application running on different ports.
NOTE: Do not run your application on Port 80 or 443. We will explain later why this must not be done. Refer to the official ExpressJS documentation for help getting started.
We have installed NGINX on our local machine, but the same could be done on any Virtual Machine where the applications are expected to be deployed.
Here is the documentation on how to install NGINX on your machine.
Step 1: Start two apps running on different ports
As we’ve mentioned earlier, we’ve got two Node.js Apps running on two different ports as shown below.
Server app running on Port 3000
Client app running on Port 3001
Now that we have our apps up and running, we don’t want our users to reach these applications by typing their ports explicitly, so we need to map them to something more human-readable.
In this example, we will be using subdomains to distinguish between them. Again, you are free to use whichever distinguishing element suits your requirements.
Another example could be a particular route, like domain/client and domain/server. The only condition is that the distinguishing element forms a valid URL. To learn about regular expressions, you can click here.
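For the route-based alternative, a hedged sketch of what the NGINX side could look like (paths and ports here are illustrative, not the article’s final config):

```nginx
# Route by path instead of subdomain (illustrative sketch).
server {
    listen 80;
    server_name domain;

    location /server/ {
        proxy_pass http://localhost:3000/;  # server app
    }

    location /client/ {
        proxy_pass http://localhost:3001/;  # client app
    }
}
```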
Step 2: Add DNS records
This is the part where one would add the DNS records in their DNS management dashboard. If you are running Nginx locally, you can skip this step.
The general DNS Configurations would be something like:
Server app mapped to the server.domain
Client app mapped to the client.domain
My Localhost Config, in this case, would be:
Server mapped to server.localhost
Client mapped to client.localhost
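On many modern systems, *.localhost names resolve to 127.0.0.1 automatically. If yours does not, a sketch of the /etc/hosts entries would be:

```
127.0.0.1   server.localhost
127.0.0.1   client.localhost
```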
There are two standard protocols, HTTP and HTTPS. The default port for HTTP is 80, and for HTTPS it is 443. The reason we must not run our applications on these ports is that our NGINX server is running on them. All requests from clients arrive at port 80 or 443, from where NGINX routes them internally to the corresponding application.
Step 3: Configure NGINX at 80 for HTTP and 443 for HTTPS
Now that we have our apps running and our DNS records ready, we can start configuring our NGINX Reverse Proxy to make it all work.
By default, the configuration file is named nginx.conf and placed in the directory /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx on Linux and Debian-based systems.
On Windows, the file is placed inside the installation folder, at nginx/conf/nginx.conf.
```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name server.domain;

    location / {
        proxy_pass "http://localhost:3000/";
    }

    ssl_certificate <location of SSL certificate>;
    ssl_certificate_key <location of SSL certificate Key>;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name client.domain;

    location / {
        proxy_pass "http://localhost:3001/";
    }

    ssl_certificate <location of SSL certificate>;
    ssl_certificate_key <location of SSL certificate Key>;
}
```
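These server blocks only listen on 443. Since NGINX is also expected to answer on port 80, a common companion block (not part of the config above; a sketch assuming the same domain names) redirects plain HTTP to HTTPS:

```nginx
# Catch plain-HTTP traffic on port 80 and redirect it to HTTPS.
server {
    listen 80;
    listen [::]:80;

    server_name server.domain client.domain;

    return 301 https://$host$request_uri;
}
```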
NOTE:These are the minimum configurations required to successfully implement NGINX for reverse proxying. Feel free to explore other config parameters as well.
Make sure to change the domain names to your domain. You can obtain an SSL certificate and key from your SSL provider.
If you don’t have one, use the free service Let’s Encrypt.
Follow their documentation to get free SSL instantly!
Step 4: Save and Restart
After editing, save your changes. Use the sudo nginx -t command to test your changes before actually reloading NGINX. It is good practice to do this to make sure your server won’t crash if there are any errors in your config file.
Once you get a message that the test is successful, you can go ahead and restart NGINX.
Use the command sudo nginx -s reload to reload NGINX.
Open the browser and enter the URLs to find your applications running on the corresponding URLs configured.
For the example above, the URLs are:
client.localhost
server.localhost
Important Note
Using NGINX secures your server because it routes the traffic internally. Instead of having to open up all of your ports, in this case 3000 and 3001, to the internet, just 80 and 443 will do the trick.
This is because all traffic passes through the secure NGINX server (like a gateway) and is redirected to the correct application. Using a reverse proxy like NGINX is more secure than opening up several ports for every application you deploy, because of the increased risk that a hacker will use an open port for malicious activity.
Conclusion
Here is the end result:
Congratulations! You did it! 🎉
Large systems often depend on a microservices architecture, where each service is served by its own application. In that case, managing multiple apps behind one entry point is an essential skill to have.
The microservices architecture is discussed here in detail.
Hope this article helped you to manage those independently deployed applications as a whole with the help of NGINX as a reverse proxy.
Harish Ramesh Babu is a final year CS Undergrad at the National Institute of Technology, Rourkela, India. He gets really excited about new tech and the cool things you can build with it. Mostly you’ll find him working on web apps either for the campus or an opensource project with the
1)
A teammate told me that after switching from TCP/IP sockets to Unix sockets, a run of about 1,000 test cases sped up by roughly one minute.
2)
Another teammate explained that the reason for using Unix sockets instead of TCP/IP sockets is that a TCP/IP socket enters the TIME_WAIT state on close and is not returned immediately, so you can quickly hit the socket-count limit; Unix sockets are used as a replacement to avoid this.
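As a sketch of what that swap looks like on the NGINX side (the socket path here is hypothetical), proxying to a Unix domain socket instead of a TCP port:

```nginx
upstream app_backend {
    # A Unix domain socket instead of host:port -- loopback connections
    # then avoid TCP TIME_WAIT and the associated socket-count pressure.
    server unix:/var/run/app.sock;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
    }
}
```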
Create the conf file with the nano command (if you are not comfortable with nano, vi works as well). For example with nano:
sudo nano /etc/nginx/conf.d/load-balancer.conf
In load-balancer.conf you’ll need to define two segments: upstream, which corresponds to the load balancer, and server, whose settings follow NGINX’s usual virtual-host configuration. See the example below.
```nginx
# Define which servers to include in the load balancing scheme.
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs at your UpCloud control panel Network section.
http {
    upstream backend {
        server 10.1.0.101;
        server 10.1.0.102;
        server 10.1.0.103;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
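By default NGINX balances requests round-robin across the upstream servers. If a different scheme fits better, a sketch of the upstream block using least_conn and per-server weights (the values are illustrative, not from the original setup):

```nginx
upstream backend {
    least_conn;                 # send each request to the server with the fewest active connections
    server 10.1.0.101 weight=2; # receives roughly twice as many requests as the others
    server 10.1.0.102;
    server 10.1.0.103;
}
```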
Then save the file and exit the editor.
On CentOS, symbolic linking does not work when a file with the same name already exists, so work around it by renaming the default.conf file to a different name (e.g. default.conf.disabled).