Operation of Nginx on CentOS 7
1. Setting up Nginx Reverse Proxy
A reverse proxy is a service that takes a client request, sends the request to one or more proxied servers, fetches the response, and delivers the server’s response to the client.
Because of its performance and scalability, Nginx is often used as a reverse proxy for HTTP and non-HTTP servers. A typical reverse proxy configuration is to put Nginx in front of Node.js, Python, or Java applications.
Using Nginx as a reverse proxy gives you several additional benefits:
- Load Balancing - Nginx can perform load balancing to distribute clients' requests across proxied servers, which improves performance, scalability, and reliability.
- Caching - With Nginx as a reverse proxy, you can cache the pre-rendered versions of pages to speed up page load times. It works by caching the content received from the proxied servers' responses and using it to respond to clients without having to contact the proxied server for the same content every time.
- SSL Termination - Nginx can act as an SSL endpoint for connections with the clients. It will handle and decrypt incoming SSL connections and encrypt the proxied server’s responses.
- Compression - If the proxied server does not send compressed responses, you can configure Nginx to compress the responses before sending them to the clients.
- Mitigating DDoS Attacks - You can limit the incoming requests and number of connections per single IP address to a value typical for regular users. Nginx also allows you to block or restrict access based on the client location, and the value of the request headers such as “User-Agent” and “Referer”.
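The load-balancing scenario above can be sketched with an upstream group. This is a minimal illustration, not a configuration from this article; the pool name and backend addresses are placeholders:

```nginx
# Hypothetical upstream group for load balancing -- adjust names and ports
upstream backend_pool {
    least_conn;                   # send each request to the least busy server
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;  # used only when the other servers are down
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_pool;
    }
}
```

Requests are distributed across the pool, and marking a server as `backup` keeps it idle until the primary servers become unavailable.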
This article outlines the steps required for configuring Nginx as a reverse proxy.
Prerequisites
We are assuming that you have Nginx installed on your Ubuntu, CentOS, or Debian server.
1) Using Nginx as a Reverse Proxy
To configure Nginx as a reverse proxy to an HTTP server, open the domain’s server block configuration file and specify a location and a proxied server inside of it:
server {
    listen 80;
    server_name www.example.com example.com;

    location /app {
        proxy_pass http://127.0.0.1:8080;
    }
}
The proxied server URL is set using the proxy_pass directive and can use HTTP or HTTPS as the protocol, a domain name or IP address, and an optional port and URI as an address.
The configuration above tells Nginx to pass all requests to the /app location to the proxied server at http://127.0.0.1:8080.
On Ubuntu and Debian based distributions, server block files are stored in the /etc/nginx/sites-available directory, while on CentOS in /etc/nginx/conf.d directory.
To better illustrate how location and proxy_pass directives work, let’s take the following example:
server {
    listen 80;
    server_name www.example.com example.com;

    location /blog {
        proxy_pass http://node1.com:8000/wordpress/;
    }
}
If a visitor accesses http://example.com/blog/my-post, Nginx will proxy the request to http://node1.com:8000/wordpress/my-post.
When the address of the proxied server contains a URI (/wordpress/ in this case), the part of the request URI matching the location is replaced by the URI specified in the directive. If the address of the proxied server is specified without a URI, the full request URI is passed to the proxied server.
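The two behaviors can be sketched side by side (the backend address is the illustrative one used above):

```nginx
# With a URI in proxy_pass:  /blog/my-post  ->  /wordpress/my-post
location /blog/ {
    proxy_pass http://node1.com:8000/wordpress/;
}

# Without a URI:  /blog/my-post  ->  /blog/my-post (passed unchanged)
location /blog/ {
    proxy_pass http://node1.com:8000;
}
```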
2) Passing Request Headers
When Nginx proxies a request, it automatically defines two header fields in the proxied request, Host and Connection, and removes empty headers. Host is set to the $proxy_host variable, and Connection is set to close.
To adjust or set headers for proxied connections, use the proxy_set_header directive, followed by the header name and value. You can find a list of all available request headers and their allowed values in the Nginx documentation. If you want to prevent a header from being passed to the proxied server, set it to an empty string "".
In the following example, we are changing the value of the Host header field to $host and removing the Accept-Encoding header field by setting its value to an empty string.
location / {
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
    proxy_pass http://localhost:3000;
}
Whenever you modify the configuration file, you have to restart or reload the Nginx service for the changes to take effect.
3) Configuring Nginx as a Reverse Proxy to a non-HTTP proxied server
To configure Nginx as a reverse proxy to a non-HTTP proxied server, you can use the following directives:
- fastcgi_pass - reverse proxy to a FastCGI server.
- uwsgi_pass - reverse proxy to a uwsgi server.
- scgi_pass - reverse proxy to an SCGI server.
- memcached_pass - reverse proxy to a Memcached server.
One of the most common examples is to use Nginx as a reverse proxy to PHP-FPM:
server {
    # ... other directives

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
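Note that the snippets/fastcgi-php.conf include and the php7.2-fpm.sock socket path above follow Debian/Ubuntu conventions. On CentOS 7, PHP-FPM typically listens on 127.0.0.1:9000 by default, so an equivalent sketch (paths and addresses may differ on your system) would be:

```nginx
server {
    # ... other directives

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;  # default PHP-FPM listen address on CentOS 7
    }
}
```

Check the `listen` directive in your PHP-FPM pool configuration to see whether it uses a TCP address or a Unix socket.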
4) Common Nginx Reverse Proxy Options
Serving content over HTTPS has become a standard nowadays. In this section, we will give you an example of HTTPS Nginx reverse proxy configuration including the recommended Nginx proxy parameters and headers.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}
- proxy_http_version 1.1 - Defines the HTTP protocol version for proxying; by default it is set to 1.0. For WebSockets and keepalive connections you need to use version 1.1.
- proxy_cache_bypass $http_upgrade - Sets conditions under which the response will not be taken from a cache.
- Upgrade $http_upgrade and Connection "upgrade" - These header fields are required if your application uses WebSockets.
- Host $host - The $host variable contains, in the following order of precedence: the hostname from the request line, the hostname from the Host request header field, or the server name matching the request.
- X-Real-IP $remote_addr - Forwards the real visitor remote IP address to the proxied server.
- X-Forwarded-For $proxy_add_x_forwarded_for - A list containing the IP addresses of every server the client has been proxied through.
- X-Forwarded-Proto $scheme - Tells the proxied server which protocol (HTTP or HTTPS) the client used, so the application can generate correct links and redirects.
- X-Forwarded-Host $host - Defines the original host requested by the client.
- X-Forwarded-Port $server_port - Defines the original port requested by the client.
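Putting the pieces together, the location above would typically live inside an HTTPS server block. A minimal sketch, assuming certificate paths from a Let's Encrypt setup (adjust them to your own):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    # Certificate paths are assumptions -- adjust to your setup
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
```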
If you don’t have an existing SSL/TLS certificate, use certbot to obtain a free Let’s Encrypt SSL certificate on your Ubuntu 18.04, CentOS 7, or Debian server.
Conclusion
You have learned how to use Nginx as a reverse proxy. We have also shown you how to pass additional parameters to the server and to modify and set different header fields in proxied requests.
2. Secure Nginx with Let's Encrypt on CentOS 7
Let’s Encrypt is a free and open certificate authority developed by the Internet Security Research Group (ISRG). Certificates issued by Let’s Encrypt are trusted by almost all browsers today.
In this tutorial, we’ll provide step-by-step instructions on how to secure your Nginx with Let’s Encrypt using the certbot tool on CentOS 7.
Prerequisites
Make sure that you have met the following prerequisites before continuing with this tutorial:
- You have a domain name pointing to your public server IP. In this tutorial we will use example.com.
- You have enabled the EPEL repository and installed Nginx by following How To Install Nginx on CentOS 7 .
1) Install Certbot
Certbot is an easy to use tool that can automate the tasks for obtaining and renewing Let’s Encrypt SSL certificates and configuring web servers.
To install the certbot package from the EPEL repository, run:
sudo yum install certbot
2) Generate a Strong DH (Diffie-Hellman) Group
Diffie–Hellman key exchange (DH) is a method of securely exchanging cryptographic keys over an unsecured communication channel.
Generate a new set of 2048 bit DH parameters by typing the following command:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
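To sanity-check the generated file, you can print its header with openssl. A quick demonstration, using a deliberately small 512-bit size so that it generates instantly (never use 512 bits on a real server; stick with 2048 bits or more there):

```shell
# Demo only: 512-bit DH parameters generate in well under a second,
# but are far too weak for production use.
openssl dhparam -out /tmp/dhparam-demo.pem 512 2>/dev/null

# Print the header line to confirm the file parses
openssl dhparam -in /tmp/dhparam-demo.pem -text -noout | head -n 1
```

The same `-text -noout` inspection works on the real /etc/ssl/certs/dhparam.pem file.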
3) Obtaining a Let’s Encrypt SSL Certificate
To obtain an SSL certificate for our domain we’re going to use the Webroot plugin that works by creating a temporary file for validating the requested domain in the ${webroot-path}/.well-known/acme-challenge directory. The Let’s Encrypt server makes HTTP requests to the temporary file to validate that the requested domain resolves to the server where certbot runs.
To keep things simple, we’re going to map all HTTP requests for .well-known/acme-challenge to a single directory, /var/lib/letsencrypt.
The following commands will create the directory and make it writable for the Nginx server.
sudo mkdir -p /var/lib/letsencrypt/.well-known
sudo chgrp nginx /var/lib/letsencrypt
sudo chmod g+s /var/lib/letsencrypt
To avoid duplicating code, create the following two snippets, which we’re going to include in all our Nginx server block files:
sudo mkdir /etc/nginx/snippets
/etc/nginx/snippets/letsencrypt.conf
location ^~ /.well-known/acme-challenge/ {
    allow all;
    root /var/lib/letsencrypt/;
    default_type "text/plain";
    try_files $uri =404;
}
/etc/nginx/snippets/ssl.conf
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 30s;
add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
The snippet above includes the ciphers recommended by Mozilla, enables OCSP stapling and HTTP Strict Transport Security (HSTS), and enforces a few security-focused HTTP headers.
Once the snippets are created, open the domain server block and include the letsencrypt.conf snippet as shown below:
/etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    server_name example.com www.example.com;

    include snippets/letsencrypt.conf;
}
Reload the Nginx configuration for changes to take effect:
sudo systemctl reload nginx
You can now run Certbot with the webroot plugin and obtain the SSL certificate files for your domain by issuing:
sudo certbot certonly --agree-tos --email admin@example.com --webroot -w /var/lib/letsencrypt/ -d example.com -d www.example.com
If the SSL certificate is successfully obtained, certbot will print the following message:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2018-06-11. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Now that you have the certificate files, you can edit your domain server block as follows:
/etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    server_name www.example.com example.com;

    include snippets/letsencrypt.conf;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    include snippets/ssl.conf;
    include snippets/letsencrypt.conf;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    include snippets/ssl.conf;
    include snippets/letsencrypt.conf;

    # . . . other code
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
include snippets/ssl.conf;
include snippets/letsencrypt.conf;
# . . . other code
}
With the configuration above, we are forcing HTTPS and redirecting the www version of the site to the non-www version.
Finally, reload the Nginx service for changes to take effect:
sudo systemctl reload nginx
4) Auto-renewing the Let’s Encrypt SSL Certificate
Let’s Encrypt’s certificates are valid for 90 days. To automatically renew the certificates before they expire, we will create a cronjob which will run twice a day and will automatically renew any certificate 30 days before its expiration.
Run the crontab command to create a new cronjob:
sudo crontab -e
Paste the following lines:
0 */12 * * * test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"
Save and close the file.
To test the renewal process, you can use the certbot command followed by the --dry-run switch:
sudo certbot renew --dry-run
If there are no errors, it means that the test renewal process was successful.
Conclusion
In this tutorial, you used the Let’s Encrypt client, certbot to download SSL certificates for your domain. You have also created Nginx snippets to avoid duplicating code and configured Nginx to use the certificates. At the end of the tutorial you have set up a cronjob for automatic certificate renewal.
3. How to Set Up Nginx Server Blocks on CentOS 7
Nginx server blocks allow you to run more than one website on a single machine. This is useful because for each site you can specify the site document root (the directory which contains the website files), create a separate security policy, use different SSL certificates, and much more.
In this tutorial, we’ll explain how to set up Nginx server blocks on CentOS 7.
Prerequisites
Ensure that you have met the following prerequisites before continuing with this tutorial:
- Domain name pointing to your public server IP. We will use example.com.
- Nginx installed on your CentOS system.
- Logged in as root or user with sudo privileges.
In some documentation, you’ll see server blocks referred to as virtual hosts. A virtual host is an Apache term.
1)Create the Directory Structure
The document root is the directory where the website files for a domain name are stored and served in response to requests. You can set the document root to any location you want.
We will use the following directory structure:
/var/www/
├── example.com
│ └── public_html
├── example2.com
│ └── public_html
├── example3.com
│ └── public_html
Basically, we are creating a separate directory inside /var/www for each domain we want to host on our server. Within each of these directories, we’ll create a public_html directory that will serve as the domain’s document root and store the domain website files.
Let’s start by creating the root directory for our domain example.com:
sudo mkdir -p /var/www/example.com/public_html
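If you plan to host several domains, the whole tree shown above can be created in one loop. A sketch, writing under /tmp/www so it can be tried without root (on the real server you would use sudo and /var/www):

```shell
# Create a public_html document root for each domain in one pass.
# example2.com and example3.com are the placeholder domains from the tree above.
for domain in example.com example2.com example3.com; do
    mkdir -p "/tmp/www/$domain/public_html"
done

# List the resulting per-domain directories
ls /tmp/www
```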
For testing purposes, we will create an index.html file inside the domain’s document root directory.
Open your text editor and create the demo index.html file:
sudo nano /var/www/example.com/public_html/index.html
Copy and paste the following code into the file:
/var/www/example.com/public_html/index.html
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8">
<title>Welcome to example.com</title>
</head>
<body>
<h1>Success! example.com home page!</h1>
</body>
</html>
In this example, we are running the commands as a sudo user and the newly created files and directories are owned by the root user.
To avoid any permission issues, change the ownership of the domain document root directory to the Nginx user (nginx):
sudo chown -R nginx: /var/www/example.com
2) Create a Server Block
Nginx server block configuration files must end with .conf and are stored in the /etc/nginx/conf.d directory.
Open your editor of choice and create a server block configuration file for example.com.
sudo nano /etc/nginx/conf.d/example.com.conf
You can name the configuration file as you want. Usually, it is best to use the domain name.
Copy and paste the following code into the file:
/etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    listen [::]:80;

    root /var/www/example.com/public_html;

    index index.html;

    server_name example.com www.example.com;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}
Save the file and test the Nginx configuration for correct syntax:
sudo nginx -t
If there are no errors, the output will look like this:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restart the Nginx service for the changes to take effect:
sudo systemctl restart nginx
Finally, to verify the server block is working as expected, open http://example.com in your browser of choice; you should see the demo page you created earlier.
Conclusion
You have learned how to create an Nginx server block configuration to host multiple domains on a single CentOS server. You can repeat the steps we outlined above and create additional server blocks for all your domains.
4. Redirect HTTP to HTTPS in Nginx
In this guide, we will explain how to redirect the HTTP traffic to HTTPS in Nginx.
Nginx, pronounced “engine x”, is a free, open-source, high-performance HTTP and reverse proxy server responsible for handling the load of some of the largest sites on the Internet.
If you are a developer or system administrator, chances are that you’re dealing with Nginx on a regular basis. One of the most common tasks you’ll likely perform is redirecting the HTTP traffic to the secured (HTTPS) version of your website.
Unlike HTTP, where requests and responses are sent and returned in plaintext, HTTPS uses TLS/SSL to encrypt the communication between the client and the server.
There are many benefits of using HTTPS over HTTP, such as:
- All the data is encrypted in both directions. As a result, sensitive information cannot be read if intercepted.
- Google Chrome and all other popular browsers will mark your website as safe.
- HTTPS allows you to use the HTTP/2 protocol, which significantly improves the site performance.
- Google favors HTTPS websites. Your site will rank better if served via HTTPS.
The preferred method to redirect HTTP to HTTPS in Nginx is to configure a separate server block for each version of the site. You should avoid redirecting the traffic using the if directive, as it may cause unpredictable behavior of the server.
1) Redirect HTTP to HTTPS per Site
Typically when an SSL certificate is installed on a domain, you will have two server blocks for that domain. The first one for the HTTP version of the site on port 80, and the other for the HTTPS version on port 443.
To redirect a single website to HTTPS open the domain configuration file and make the following changes:
server {
    listen 80;
    server_name linuxize.com www.linuxize.com;

    return 301 https://linuxize.com$request_uri;
}
Let’s break down the code line by line:
- listen 80 - The server block will listen for incoming connections on port 80 for the specified domain.
- server_name linuxize.com www.linuxize.com - Specifies the server block’s domain names. Make sure you replace it with your domain name.
- return 301 https://linuxize.com$request_uri - Redirect the traffic to the HTTPS version of the site. The $request_uri variable is the full original request URI, including the arguments.
Usually, you will also want to redirect the HTTPS www version of the site to the non-www or vice versa. The recommended way to do the redirect is to create a separate server block for both www and non-www versions.
For example, to redirect the HTTPS www requests to non-www, you would use the following configuration:
server {
    listen 80;
    server_name linuxize.com www.linuxize.com;

    return 301 https://linuxize.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.linuxize.com;

    # . . . other code

    return 301 https://linuxize.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name linuxize.com;

    # . . . other code
}
Whenever you make changes to the configuration files you need to restart or reload the Nginx service for changes to take effect:
sudo systemctl reload nginx
2) Redirect All Sites to HTTPS
If all of the websites hosted on the server are configured to use HTTPS, and you don’t want to create a separate HTTP server block for each site, you can create a single catch-all HTTP server block. This block will redirect all HTTP requests to the appropriate HTTPS blocks.
To create a single catch-all HTTP block which will redirect the visitors to the HTTPS version of the site, open the Nginx configuration file and make the following changes:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;

    return 301 https://$host$request_uri;
}
Let’s analyze the code line by line:
- listen 80 default_server - Sets this server block as the default (catch-all) block for all unmatched domains.
- server_name _ - _ is an invalid domain name that never matches any real domain name.
- return 301 https://$host$request_uri - Redirect the traffic to the corresponding HTTPS server block with status code 301 (Moved Permanently). The $host variable holds the domain name of the request.
For example, if the visitor opens http://example.com/page2 in the browser, Nginx will redirect the request to https://example.com/page2.
If possible, prefer creating a redirection on a per-domain basis instead of a global HTTP to HTTPS redirection.
Conclusion
In Nginx, the preferred way to redirect HTTP to HTTPS is to create separate server blocks and perform a 301 redirect.
5. How to Enable the EPEL Repository on CentOS
The EPEL (Extra Packages for Enterprise Linux) repository provides additional software packages that are not included in the standard Red Hat and CentOS repositories. It was created because Fedora contributors wanted to use the packages they maintain on Red Hat Enterprise Linux (RHEL) and its derivatives such as CentOS, Oracle Linux, and Scientific Linux.
Enabling this repository gives you access to popular software packages including Nginx, R, and Python Pip.
In this tutorial, we will show you how to enable the EPEL repository on CentOS.
Prerequisites
Before starting with the tutorial, make sure you are logged in as a user with sudo privileges.
1) Enabling the EPEL Repository on CentOS 7
Enabling the EPEL repository on CentOS 7 is a pretty simple task as the EPEL rpm package is included in the CentOS extras repository.
To install the EPEL release package, type the following command:
sudo yum install epel-release
To verify that the EPEL repository is enabled, run the yum repolist command, which lists all available repositories:
sudo yum repolist
The command will display the repo ID, name and the number of packages for the enabled repositories. The output should include a line for the EPEL repository.
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
...
repo id repo name status
base/7/x86_64 CentOS-7 - Base 10,019
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,912
extras/7/x86_64 CentOS-7 - Extras 371
updates/7/x86_64 CentOS-7 - Updates 1,098
repolist: 24,400
That’s it. The EPEL repository has been enabled on your CentOS system.
2) Enabling the EPEL Repository on RHEL
This method works on any RHEL-based distribution, including Red Hat, CentOS 6 and 7, Oracle Linux, Amazon Linux, and Scientific Linux.
To enable the EPEL repository, run the following command which will download and install the EPEL release package:
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$(rpm -E '%{rhel}').noarch.rpm
rpm -E '%{rhel}' will print the distribution version (6 or 7).
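The `$( ... )` command substitution simply splices that version number into the package URL. A generic sketch of the mechanism, with a stand-in function instead of rpm so it can run anywhere:

```shell
# rhel_ver is a stand-in for: rpm -E '%{rhel}', which expands to 6 or 7
rhel_ver() { echo 7; }

# The substitution inserts the version number into the URL
url="https://dl.fedoraproject.org/pub/epel/epel-release-latest-$(rhel_ver).noarch.rpm"
echo "$url"
```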
Conclusion
For more information about the EPEL repository, see the EPEL documentation .
6. Nginx Commands
In this guide, we will go over the most important and frequently used Nginx commands, including starting, stopping, and restarting Nginx.
Before You Begin
We’re assuming that you are logged in as root or a user with sudo privileges. The commands in this guide should work on any modern Linux distribution such as Ubuntu 18.04, CentOS 8, and Debian 10.
Starting Nginx
Starting Nginx is pretty simple. Just run the following command:
sudo systemctl start nginx
On success, the command doesn’t produce any output.
If you are running a Linux distribution without systemd to start Nginx type:
sudo service nginx start
Instead of manually starting the Nginx service, it is recommended to set it to start on system boot:
sudo systemctl enable nginx
Stopping Nginx
Stopping Nginx quickly shuts down all Nginx worker processes even if there are open connections.
To stop Nginx, run one of the following commands:
sudo systemctl stop nginx
sudo service nginx stop
Restarting Nginx
The restart option is a quick way of stopping and then starting the Nginx server.
Use one of the following commands to perform an Nginx restart:
sudo systemctl restart nginx
sudo service nginx restart
This is the command that you will probably use the most frequently.
Reloading Nginx
You need to reload or restart Nginx whenever you make changes to its configuration.
The reload command loads the new configuration, starts new worker processes with the new configuration, and gracefully shuts down old worker processes.
To reload Nginx, use one of the following commands:
sudo systemctl reload nginx
sudo service nginx reload
Testing Nginx Configuration
Whenever you make changes to the Nginx server’s configuration file, it is a good idea to test the configuration before restarting or reloading the service.
Use the following command to test the Nginx configuration for any syntax or system errors:
sudo nginx -t
The output will look like below:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If there are any errors, the command prints a detailed message.
Viewing Nginx Status
To check the status of the Nginx service, use the following command:
sudo systemctl status nginx
The output will look something like this:
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-05-4 13:57:01 PDT; 5min ago
Docs: man:nginx(8)
Process: 4491 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 4502 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 4492 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 4504 (nginx)
Tasks: 3 (limit: 2319)
CGroup: /system.slice/nginx.service
|-4504 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
|-4516 nginx: worker process
`-4517 nginx: worker process
Checking Nginx Version
Sometimes you may need to know the version of Nginx so you can debug an issue or determine whether a certain feature is available.
You can check your Nginx version by running:
sudo nginx -v
nginx version: nginx/1.14.0 (Ubuntu)
The -V option displays the Nginx version along with the configure options:
sudo nginx -V
Conclusion
In this guide, we have shown you some of the most essential Nginx commands. If you want to learn more about the Nginx command line options, visit the Nginx documentation.