
Ever wondered how more than one application is deployed to the same machine, and how traffic is routed to the corresponding applications? Keep reading to find out.

Introduction

Goals

By the end of the article, you’ll understand:

  • What a reverse proxy is
  • What NGINX is
  • How NGINX helps in managing multiple applications
  • How to leverage NGINX as a reverse proxy

Reverse Proxy

According to Wikipedia, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the server itself.

Refer to this article to better understand what Reverse Proxies are.

NGINX

We will be using NGINX as a Reverse Proxy. According to Wikipedia,
NGINX is a web server that can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. The software was created by Igor Sysoev and was publicly released in 2004. Nginx is a free and open-source software, released under the terms of the 2-clause BSD license. A large fraction of web servers use NGINX, often as a load balancer.

Several other reverse proxies, such as HAProxy, are available as well.

Reverse Proxy Example

This is an example of an architecture where two apps are running in the background, but the clients have no idea about them. The clients only know about NGINX, which acts as a reverse proxy and sends each request to the appropriate application.

Now that you have a broader idea of what we are about to build, let’s jump right in!

Aim

  • Deploy two applications and have them managed by NGINX.

Setup & Prerequisites:

  • For this example, we have two sample Express applications. You can have any kind of applications running on different ports.

NOTE: Do not run your applications on port 80 or 443. We will explain later why this must not be done.
Refer to the official ExpressJS documentation for help getting started.

  • We have installed NGINX on our local machine, but the same could be done on any Virtual Machine where the applications are expected to be deployed.

Here is the documentation on how to install NGINX on your machine.

Step 1: Start two apps running on different ports

As we’ve mentioned earlier, we’ve got two Node.js Apps running on two different ports as shown below.

Server app running on Port 3000

Client app running on Port 3001

Now that we have our apps up and running, we don’t want our users to have to type the ports explicitly to reach these applications, so we need to map them to something more human-readable.

In this example, we will be using subdomains to distinguish between them. Again, you are free to use whichever distinguishing element suits your requirements.

Another example could be a particular route, like domain/client and domain/server. The only condition is that the distinguishing element must form a valid URL. To learn about regular expressions, you can click here.
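As a rough illustration of the route-based alternative, a single server block could dispatch on the path instead of the subdomain. This is a hypothetical sketch: `domain` stands in for your real domain, and the ports match the two apps used later in this article.

```nginx
server {
    listen 80;
    server_name domain;

    # Requests under /server/ go to the first app.
    # The trailing slash in proxy_pass makes nginx strip the
    # /server/ prefix before forwarding the request.
    location /server/ {
        proxy_pass http://localhost:3000/;
    }

    # Requests under /client/ go to the second app
    location /client/ {
        proxy_pass http://localhost:3001/;
    }
}
```

Whether to strip the prefix depends on how each app defines its routes; drop the trailing slash if the apps expect to see the full path.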

Step 2: Add DNS records

This is the part where one would add the DNS records in their DNS management dashboard. If you are running Nginx locally, you can skip this step.

The general DNS Configurations would be something like:

  • Server app mapped to server.domain
  • Client app mapped to client.domain

My Localhost Config, in this case, would be:

  • Server mapped to server.localhost
  • Client mapped to client.localhost

There are two standard protocols: HTTP and HTTPS. The default port for HTTP is 80 and for HTTPS it is 443. The reason we must not run our applications on these ports is that our NGINX server is listening on them. Every request a client makes arrives at port 80 or 443, from where NGINX routes it internally to the corresponding application.

Step 3 - Configure NGINX at 80 for HTTP and 443 for HTTPS

Now that we have our apps running and our DNS records ready, we can start configuring our NGINX reverse proxy to make it all work.

By default, the configuration file is named nginx.conf and placed in /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx, depending on the distribution and how NGINX was installed.

On Windows, the file is placed inside the installation folder, nginx/conf/nginx.conf.

Add these configurations inside the http block.

Step 3.1 - HTTP

server {
    listen 80;
    server_name server.domain;

    location / {
        proxy_pass "http://localhost:3000";
    }
}

server {
    listen 80;
    server_name client.domain;

    location / {
        proxy_pass "http://localhost:3001";
    }
}

Step 3.2 - HTTPS

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name server.domain;

    location / {
        proxy_pass "http://localhost:3000/";
    }

    ssl_certificate <location of SSL certificate>;
    ssl_certificate_key <location of SSL certificate key>;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name client.domain;

    location / {
        proxy_pass "http://localhost:3001/";
    }

    ssl_certificate <location of SSL certificate>;
    ssl_certificate_key <location of SSL certificate key>;
}

NOTE: These are the minimum configurations required to successfully implement NGINX for reverse proxying. Feel free to explore other config parameters as well.
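One family of parameters worth exploring is proxy_set_header, which forwards information about the original client to the upstream app. Below is a sketch for the server app; the directives and variables are standard nginx, but which headers your application actually reads is an assumption:

```nginx
server {
    listen 80;
    server_name server.domain;

    location / {
        proxy_pass "http://localhost:3000";

        # Preserve the original Host header and client address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Tell the app whether the original request used http or https
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Without these headers, the app only ever sees connections coming from localhost, which matters for logging and for generating absolute URLs.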

Make sure to change the domain name to your domain. You can obtain an SSL certificate and key from your SSL provider.

If you don’t have one, use this free service LetsEncrypt.

Follow their documentation to get free SSL instantly!

Step 4 - Save and Restart

After editing, save your changes. Use the sudo nginx -t command to test your changes before actually reloading NGINX. It is good practice to do this to make sure your server won’t break if there are any errors in your config file.

Once you get a message that the test is successful, you can go ahead and reload NGINX.

Use the command sudo nginx -s reload to reload NGINX.

Open the browser and enter the URLs to find your applications running on the corresponding URLs configured.

For the example above, the URLs are:

  • client.localhost
  • server.localhost

Important Note

Using NGINX secures your server because it routes all traffic internally. Instead of having to open up all of your application ports, in this case 3000 and 3001, to the internet, just 80 and 443 will do the trick.

This is because all traffic passes through the secure NGINX server (like a gateway) and is redirected to the correct application. Using a reverse proxy like NGINX is more secure than opening up several ports for every application you deploy, because of the increased risk that a hacker will use an open port for malicious activity.

Conclusion

Here is the end result:

Congratulations! You did it! 🎉

Large systems often rely on a microservices architecture, where each service is served by its own application. In that case, managing multiple apps behind one entry point is an essential skill.

The microservices architecture is discussed here in detail.

Hope this article helped you to manage those independently deployed applications as a whole with the help of NGINX as a reverse proxy.

Thanks for reading!

Peer Review Contributions by: Louise Findlay


About the author

Harish Ramesh Babu

Harish Ramesh Babu is a final year CS Undergrad at the National Institute of Technology, Rourkela, India. He gets really excited about new tech and the cool things you can build with it. Mostly you’ll find him working on web apps either for the campus or an opensource project with the

https://www.section.io/engineering-education/nginx-reverse-proxy/


The basic web server structure looks like this:

[diagram omitted]

When nginx is used together with PHP, it becomes:

[diagram omitted]

FastCGI is a protocol for the interface through which a web server and an application program interact (exchange data).

FastCGI is an improved version of CGI (the Common Gateway Interface).

PHP communicates with nginx through php-fpm, a FastCGI implementation.

php-fpm configuration (default path: /etc/php-fpm.d/www.conf)

listen = 127.0.0.1:9000

This shows that php-fpm communicates over a TCP/IP socket on port 9000.

Now let’s configure nginx so it can communicate with php-fpm.

nginx configuration (default path: /etc/nginx/nginx.conf)

server {
    location ~ \.(php)$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}

If nginx and php-fpm are physically on the same server, you can use a unix socket to improve performance.

(Note: a unix socket is an inter-process communication mechanism, so it cannot be used if the two run on physically different servers.)

Change the configuration as follows.

php-fpm configuration

listen = /var/run/php7-fpm.sock

nginx configuration

fastcgi_pass unix:/var/run/php7-fpm.sock;

After applying the settings, a file appears at the configured path. (Linux treats sockets as files.)

Because nginx and php-fpm access this socket very frequently, placing it under /dev/shm gives a slight additional speed improvement.

/dev/shm does not reserve memory up front; it uses RAM only in proportion to what is actually stored there.

So, in the end:

php-fpm configuration

listen = /dev/shm/php7-fpm.sock

nginx configuration

fastcgi_pass unix:/dev/shm/php7-fpm.sock;

Modify the settings as above and you are done.
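Put together with the earlier nginx configuration, the PHP location block would then look roughly like this (a sketch combining the snippets above):

```nginx
server {
    location ~ \.(php)$ {
        # Unix socket in shared memory instead of 127.0.0.1:9000
        fastcgi_pass unix:/dev/shm/php7-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
    }
}
```

The nginx worker process must have permission to read and write the socket file, so make sure the listen.owner/listen.group settings in php-fpm match the user nginx runs as.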

Let’s run a very simple benchmark.

Server code

<?php
echo "test";
echo PHP_EOL;

Client code

<?php
$message = 'test';
$header = array();
$url = "server address";  // placeholder: the URL of the server set up above

$start = microtime(true);

// Send 100 POST requests in a loop and time the total
for ($i = 0; $i < 100; $i++) {
    $curlSession = curl_init();
    curl_setopt($curlSession, CURLOPT_URL, $url);
    curl_setopt($curlSession, CURLOPT_HTTPHEADER, $header);
    curl_setopt($curlSession, CURLOPT_POST, TRUE);
    curl_setopt($curlSession, CURLOPT_POSTFIELDS, $message);
    curl_setopt($curlSession, CURLOPT_RETURNTRANSFER, TRUE);
    $curlResult = curl_exec($curlSession);
    curl_close($curlSession);
}

$end = microtime(true);
print_r($end - $start);

Results from five runs (in seconds):

                Run 1    Run 2    Run 3    Run 4    Run 5
tcp/ip socket   2.2519   2.5132   2.3198   2.4347   2.4336
unix socket     2.4471   2.3182   2.1452   2.1803   1.9307



Addendum

1)

A teammate told me that after switching from a TCP/IP socket to a unix socket, a suite of about 1,000 test cases ran roughly a minute faster.

2)

Another teammate said the reason for using unix sockets instead of TCP/IP sockets is that a TCP/IP socket goes into TIME_WAIT when closed and is not released immediately, so you can quickly run into the socket limit; unix sockets avoid this.

I should look up the details on this again.


Load Balancing Setup in NGINX

  1. Check that NGINX is installed

Installation on Ubuntu

# Debian and Ubuntu
sudo apt-get update
# Then install the Nginx Open Source edition
sudo apt-get install nginx

 

Installation on CentOS

# CentOS
# Install the extra packages repository
sudo yum install epel-release
# Update the repositories and install Nginx
sudo yum update
sudo yum install nginx

yum install epel-release brings the repositories up to date. In its initial state a repo generally does not point to the latest versions, so update it this way.

[Remi and EPEL have a dependency relationship between their libraries.]

 

For CentOS users, the .conf files under the host configuration directory (/etc/nginx/conf.d/) are loaded as virtual hosts.
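This loading happens via an include directive in the main /etc/nginx/nginx.conf; the stock configuration contains a line roughly like this (exact paths can vary between packages):

```nginx
http {
    # Every .conf file in this directory is loaded as a virtual host
    include /etc/nginx/conf.d/*.conf;
}
```

Any file you drop into that directory is therefore picked up on the next reload, which is why the load-balancer.conf file created below takes effect without further wiring.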

 

2. Restart the daemon.

sudo systemctl restart nginx

 

 

3. If an error occurs on the default loading page, the connection may have been blocked by the firewall. CentOS 7’s default firewall settings do not allow HTTP traffic, so allow it:

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

 

4. Reload the browser.

 

Create the conf file with the nano command; if nano is unfamiliar, vi works just as well. For example with nano:

sudo nano /etc/nginx/conf.d/load-balancer.conf

 

In load-balancer.conf you’ll need to define two segments, upstream and server: the upstream section describes the backend pool the load balancer distributes traffic to, while the server section follows nginx’s usual virtual-host configuration. See the example below.

# Define which servers to include in the load balancing scheme.
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs at your UpCloud control panel Network section.
http {
    upstream backend {
        server 10.1.0.101;
        server 10.1.0.102;
        server 10.1.0.103;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}

Then save the file and exit the editor.

 

On CentOS, disabling a config via a symbolic link of the same name does not work, so rename the default.conf file to another name as shown below.

sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
sudo systemctl restart nginx   # restart the nginx daemon

 

Load balancing methods (advanced)

 

Least connections method

Round robin hands requests out in turn regardless of how busy each server is, so long-running requests can pile up on one server; the least-connections method instead sends each new request to the server with the fewest active connections, sharing the load more evenly.

To enable least connections balancing method, add the parameter least_conn to your upstream section as shown in the example below.

upstream backend {
    least_conn;
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

 

Round-robin and least-connections balancing both aim for an even distribution, but neither provides session persistence.

 

With the IP hashing method, the client’s IP address is used as the key, directing requests from the same client to the same server that handled its previous connections.

upstream backend {
    ip_hash;
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

 

If you want to weight the load balancing, add the weight parameter to individual servers.

upstream backend {
    server 10.1.0.101 weight=4;
    server 10.1.0.102 weight=2;
    server 10.1.0.103;
}

 

 

Health Check [ Advanced Method ]

This method uses max_fails and fail_timeout: if a server fails max_fails times within fail_timeout, nginx considers it unavailable and stops relaying requests to it for the duration of fail_timeout.

upstream backend {
    server 10.1.0.101 weight=5;
    server 10.1.0.102 max_fails=3 fail_timeout=30s;
    server 10.1.0.103;
}

 

 

 

https://upcloud.com/community/tutorials/configure-load-balancing-nginx/

 
