Reverse Proxy, Forward Proxy, and Load Balancers


http://www.jscape.com/blog/bid/87783/Forward-Proxy-vs-Reverse-Proxy

Forward Proxy vs Reverse Proxy


Posted by John V. on Mon, Aug 06, 2012 @ 10:30 AM

Overview

We've talked about reverse proxy servers and how they can really be good at protecting the servers in your internal network.
Lately, however, we've realized that some people actually think we're talking about forward proxy servers or that the two are one
and the same. They're not. This post should easily spell out the differences between the two.

Just to make sure we're starting off on the same foot, the main purpose of a proxy service (which is the kind of service either of
these two provides) is very similar to what a person aims to achieve when proxying for another person: to act on behalf
of that other person. In our case, a proxy server acts on behalf of another machine - either a client or another server.

The Forward Proxy

When people talk about a proxy server (often simply known as a "proxy"), more often than not they are referring to a forward
proxy. Let me explain what this particular server does.

A forward proxy provides proxy services to a client or a group of clients. Oftentimes, these clients belong to a common internal
network like the one shown below.

When one of these clients makes a connection attempt to that file transfer server on the Internet, its requests have to pass
through the forward proxy first.

Depending on the forward proxy's settings, a request can be allowed or denied. If allowed, then the request is forwarded to the
firewall and then to the file transfer server. From the point of view of the file transfer server, it is the proxy server that issued the
request, not the client. So when the server responds, it addresses its response to the proxy.

But then when the forward proxy receives the response, it recognizes it as a response to the request that went through earlier.
And so it in turn sends that response to the client that made the request.
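The client-side half of this flow can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not part of the original article; the proxy address and the file server URL are made up.

```python
import urllib.request

# Hypothetical forward proxy on the internal network (made-up address).
PROXY_URL = "http://proxy.internal.example:3128"

# Build an opener that routes this client's HTTP and HTTPS traffic
# through the forward proxy instead of connecting directly.
proxy_handler = urllib.request.ProxyHandler({
    "http": PROXY_URL,
    "https": PROXY_URL,
})
opener = urllib.request.build_opener(proxy_handler)

# opener.open("http://files.example.com/report.csv") would now reach
# the file transfer server via the proxy; the server sees the proxy's
# address as the origin of the request, not this client's.
```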

Because proxy servers can keep track of requests, responses, their sources and their destinations, different clients can send out
various requests to different servers through the forward proxy and the proxy will intermediate for all of them. Again, some
requests will be allowed, while some will be denied.
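That allow-or-deny decision reduces to a policy lookup before the request is forwarded. A minimal sketch with a made-up blocklist; real forward proxies match on far richer rules (users, URL categories, schedules, and so on):

```python
# Hypothetical per-host policy; the host names are invented.
BLOCKED_HOSTS = {"blocked.example.com", "malware.example.net"}

def is_request_allowed(destination_host: str) -> bool:
    """Return True if the proxy's policy permits forwarding a
    request to the given destination host."""
    return destination_host not in BLOCKED_HOSTS
```

A proxy would run a check like this before forwarding; a denied request is answered with an error and never leaves the internal network.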

As you can see, the proxy can serve as a single point of access and control, making it easier for you to enforce security
policies. A forward proxy is typically used in tandem with a firewall to enhance an internal network's security by controlling traffic
originating from clients in the internal network that are directed at hosts on the Internet. Thus, from a security standpoint, a
forward proxy is primarily aimed at enforcing security on client computers in your internal network.

But then client computers aren't always the only ones you find in your internal network. Sometimes, you also have servers. And
when those servers have to provide services to external clients (e.g. field staff who need to access files from your FTP server), a
more appropriate solution would be a reverse proxy.

The Reverse Proxy

As its name implies, a reverse proxy does the exact opposite of what a forward proxy does. While a forward proxy proxies on
behalf of clients (or requesting hosts), a reverse proxy proxies on behalf of servers. A reverse proxy accepts requests from
external clients on behalf of servers stationed behind it, just as the figure below illustrates.

To the client in our example, it is the reverse proxy that is providing file transfer services. The client is oblivious to the file
transfer servers behind the proxy, which are actually providing those services. In effect, whereas a forward proxy hides the
identities of clients, a reverse proxy hides the identities of servers.
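That relaying behavior can be sketched with Python's standard library. This is an illustrative toy, not production code: the backend address is invented, and a real reverse proxy would also forward headers, handle errors, and support more than GET.

```python
import http.server
import urllib.request

# Hypothetical internal backend that external clients never see directly.
BACKEND = "http://10.0.0.5:8080"

class ReverseProxyHandler(http.server.BaseHTTPRequestHandler):
    """Accept a client's request, replay it against the hidden
    backend, and relay the backend's response to the client."""

    def do_GET(self):
        # The client addressed the proxy; the proxy re-issues the
        # request internally on the client's behalf.
        with urllib.request.urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Running http.server.HTTPServer(("", 8000), ReverseProxyHandler)
# .serve_forever() would expose only the proxy's address to clients.
```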

An Internet-based attacker would therefore find it considerably more difficult to acquire data stored on those file transfer servers
than if no reverse proxy stood in the way. No wonder reverse proxy servers like JSCAPE MFT Gateway are very
suitable for complying with data-impacting regulations like PCI-DSS.

Just like forward proxy servers, reverse proxies also provide a single point of access and control. You typically set one up to work
alongside one or two firewalls to control traffic and requests directed to your internal servers.

In most cases, reverse proxy servers also act as load balancers for the servers behind them. Load balancers play a crucial role in
providing high availability to network services that receive large volumes of requests. When a reverse proxy performs load
balancing, it distributes incoming requests to a cluster of servers, all providing the same kind of service. So, for instance, a
reverse proxy load balancing FTP services will have a cluster of FTP servers behind it.
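The simplest distribution policy is round-robin: hand each incoming request to the next server in rotation. A sketch with invented server addresses:

```python
import itertools

# Hypothetical cluster of identical servers behind the reverse proxy.
SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# itertools.cycle yields the servers in endless rotation, so each
# incoming request is handed to the next server in turn.
picker = itertools.cycle(SERVERS)

# With three servers, each backend handles every third request.
first_six = [next(picker) for _ in range(6)]
```

Real load balancers layer smarter policies on top of this (least connections, weighted distribution, health awareness), but the rotation above is the core idea.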

Both types of proxy servers relay requests and responses between source and destination machines. But in the case of reverse
proxy servers, client requests that go through them normally originate from the Internet, while, in the case of forward proxies,
client requests normally come from the internal network behind them.

Recommended post: Setting Up A HTTPS To HTTP Reverse Proxy

Summary

In this post, we talked about the main differences between forward proxy servers and reverse proxy servers. If you want to
protect clients in your internal network, put them behind a forward proxy. On the other hand, if your intention is to protect
servers, put them behind a reverse proxy.

Want to try a reverse proxy for FREE?

We recommend JSCAPE MFT Gateway, a reverse proxy and load balancer that supports SFTP, FTP/S, HTTP/S, and other
TCP/IP protocols. It comes with a fully functional evaluation edition which you can download below.
https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/

WHAT IS A REVERSE PROXY VS. LOAD BALANCER?


Reverse proxy servers and load balancers are components in a client-server computing architecture.
Both act as intermediaries in the communication between the clients and servers, performing functions
that improve efficiency. They can be implemented as dedicated, purpose-built devices, but
increasingly in modern web architectures they are software applications that run on commodity
hardware.

The basic definitions are simple:

- A reverse proxy accepts a request from a client, forwards it to a server that can fulfill it, and returns the
server's response to the client.
- A load balancer distributes incoming client requests among a group of servers, in each case returning the
response from the selected server to the appropriate client.
But they sound pretty similar, right? Both types of application sit between clients and
servers, accepting requests from the former and delivering responses from the latter. No wonder
there's confusion about what's a reverse proxy vs. load balancer. To help tease them apart, let's
explore when and why they're typically deployed at a website.

Load Balancing
Load balancers are most commonly deployed when a site needs multiple servers because the volume
of requests is too much for a single server to handle efficiently. Deploying multiple servers also
eliminates a single point of failure, making the website more reliable. Most commonly, the servers all
host the same content, and the load balancer's job is to distribute the workload in a way that makes
the best use of each server's capacity, prevents overload on any server, and results in the fastest
possible response to the client.

A load balancer can also enhance the user experience by reducing the number of error responses the
client sees. It does this by detecting when servers go down, and diverting requests away from them to
the other servers in the group. In the simplest implementation, the load balancer detects server health
by intercepting error responses to regular requests. Application health checks are a more flexible and
sophisticated method in which the load balancer sends separate health-check requests and requires a
specified type of response to consider the server healthy.
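An active health check of this kind might look as follows. This is a sketch; the /healthz path is an assumed convention rather than a standard, and the server URL would be each backend's internal address.

```python
import urllib.request

def is_healthy(server_url: str, timeout: float = 2.0) -> bool:
    """Probe a dedicated health-check endpoint and require an
    HTTP 200 before the server is considered fit for traffic."""
    try:
        with urllib.request.urlopen(server_url + "/healthz",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Refused connection, timeout, or DNS failure: treat as down.
        return False

# A load balancer would run this periodically for every backend and
# divert requests away from any server that fails the check.
```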

Another useful function provided by some load balancers is session persistence, which means sending
all requests from a particular client to the same server. Even though HTTP is stateless in theory, many
applications must store state information just to provide their core functionality: think of the shopping
basket on an e-commerce site. Such applications underperform or can even fail in a load-balanced
environment if the load balancer distributes requests in a user session to different servers instead of
directing them all to the server that responded to the initial request.
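One common way to implement session persistence is to derive the backend deterministically from a client identifier, such as its IP address. A minimal sketch with invented server names:

```python
import hashlib

# Hypothetical pool of application servers.
APP_SERVERS = ["app1.internal", "app2.internal", "app3.internal"]

def pick_server(client_ip: str) -> str:
    """Map a client IP to one backend deterministically, so every
    request in that client's session lands on the same server and
    state like a shopping basket stays in one place."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return APP_SERVERS[digest[0] % len(APP_SERVERS)]
```

Because the hash is stable, the same client always maps to the same backend; production load balancers more often persist on a session cookie instead, which survives a change of client IP.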

Reverse Proxy
Whereas deploying a load balancer makes sense only when you have multiple servers, it often makes
sense to deploy a reverse proxy even with just one web server or application server. You can think of
the reverse proxy as a website's public face. Its address is the one advertised for the website, and it
sits at the edge of the site's network to accept requests from web browsers and mobile apps for the
content hosted at the website. The benefits are two-fold:

- Increased security: No information about your backend servers is visible outside your internal network, so
malicious clients cannot access them directly to exploit any vulnerabilities. Many reverse proxy servers
include features that help protect backend servers from distributed denial-of-service (DDoS) attacks, for
example by rejecting traffic from particular client IP addresses (blacklisting), or limiting the number of
connections accepted from each client.
- Increased scalability and flexibility: Because clients see only the reverse proxy's IP address, you are free to
change the configuration of your backend infrastructure. This is particularly useful in a load-balanced
environment, where you can scale the number of servers up and down to match fluctuations in traffic volume.

Another reason to deploy a reverse proxy is for web acceleration: reducing the time it takes to generate
a response and return it to the client. Techniques for web acceleration include the following:

- Compression: Compressing server responses before returning them to the client (for instance, with gzip)
reduces the amount of bandwidth they require, which speeds their transit over the network.
- SSL termination: Encrypting the traffic between clients and servers protects it as it crosses a public network
like the Internet. But decryption and encryption can be computationally expensive. By decrypting incoming
requests and encrypting server responses, the reverse proxy frees up resources on backend servers, which they
can then devote to their main purpose, serving content.
- Caching: Before returning the backend server's response to the client, the reverse proxy stores a copy of it
locally. When the client (or any client) makes the same request, the reverse proxy can provide the response
itself from the cache instead of forwarding the request to the backend server. This both decreases response
time to the client and reduces the load on the backend server.

How Can NGINX Plus Help?


NGINX Plus and NGINX are the best-in-class reverse proxy and load balancing solutions used by high-
traffic websites such as Dropbox, Netflix, and Zynga. More than 300 million websites worldwide,
including the majority of the 100,000 busiest websites, rely on NGINX Plus and NGINX to deliver their
content quickly, reliably, and securely.
NGINX Plus performs all the load-balancing and reverse proxy functions discussed above and more,
improving website performance, reliability, security, and scale. As a software-based load balancer,
NGINX Plus is much less expensive than hardware-based solutions with similar capabilities. The
comprehensive load-balancing and reverse-proxy capabilities in NGINX Plus enable you to build a
highly optimized application delivery network.
