Network Appliances: Load Balancer

Video Activity



Time
31 hours 29 minutes
Difficulty
Beginner
CEU/CPE
30
Video Description

This lesson discusses the load balancer, the network appliance used to distribute workload across devices. Load balancers distribute requests transparently using algorithms. Load balancers are also capable of DoS protection and caching.

Video Transcription
00:04
So our first appliance that we're going to talk about is going to be our load balancers. Now, what is load balancing? When we're talking about load balancing, we're talking about distributing a workload across several different devices. So essentially we're taking certain devices that are getting a lot of traffic, with a lot of people trying to access them.
00:23
And instead of just sending the workload to one single device,
00:26
we distribute it across several different devices, several different servers. If you think of when you navigate to Facebook's website, Facebook doesn't just have one server sitting in a room somewhere that everybody in the world who accesses it connects to. They have to have a lot more servers than just one single server,
00:46
but they need it to be transparent to us. They need it to be
00:50
done in a way that we don't have to try to connect to facebook.com, find we can't get there because it's too busy, then try to connect to facebook2.com and it's too busy, then try to connect to facebook3.com and it's too busy.
01:06
We don't want to have our users doing that. We want to have devices available that take those requests, even if they're directed at one location, even if they're directed toward one IP address, and automatically distribute them across different devices. That distributes the workload and makes it more manageable.
01:26
And when we're distributing workload across different devices, there are several different reasons why we would do load balancing. I just wrote two examples up here: we have something called DNS round robin, and we can have HTTP response round robin,
01:41
where, if we're servicing a large enterprise environment, we may need more than just one DNS server. So what we do is set up multiple DNS servers and have them set up to round-robin requests. So among the computers in our environment, we have one computer that requests
02:00
information from a DNS server,
02:04
and that request gets sent to this server, and then the next request goes to the next server. When we're talking about round robin, essentially,
02:13
say we're sending out requests. The first request is going to go to server number one, the second to server number two, the third request to server number three, and then our fourth request goes back to server number one, or however many servers we have. It just puts them in a list, goes down through the list, and then restarts at the top.
02:30
And the same thing with HTTP requests. Web servers will do this and distribute their workload across several different web servers. Our load balancer will receive a request and send it to the first server in the list. Then the second request it receives, it sends to the second on the list,
02:46
and it keeps going down the list until it comes back to the first one.
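As a minimal sketch of that rotation (the server names are illustrative, not from the lesson), a round-robin balancer can be as simple as an index that wraps around the server list:

```python
from itertools import cycle

# Hypothetical pool of backend servers; the names are illustrative.
servers = ["server1", "server2", "server3"]

# cycle() walks the list and restarts at the top: the same
# "down the list, then back to the first one" behavior described above.
rotation = cycle(servers)

def route_request(request_id: int) -> str:
    """Return the server that should handle this request."""
    server = next(rotation)
    print(f"request {request_id} -> {server}")
    return server

# Four requests land on server1, server2, server3, then server1 again.
for i in range(1, 5):
    route_request(i)
```

The same rotation works whether the balancer is handing out DNS answers or HTTP connections; only what it forwards changes.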
02:51
Now
02:52
load balancers, we want them to be transparent to our end users. We want our load balancers to be as transparent as possible to the people that are using them. We want it to seem, to the people who are connecting or sending a request, like there is only one device servicing them.
03:10
We want to reduce the amount of time that it takes for our load balancers to make a decision as to where they distribute each request.
03:19
And we want to make sure that a user doesn't say, "Oh, well, sometimes I load your website and it comes up really fast, then sometimes I load it and it's about medium speed, and then sometimes I load it and it's really, really slow." That would mean that there are differences between our three different servers as far as how they're servicing those requests.
03:38
So we want to make sure that
03:42
when we're setting up our load balancing, we do it in such a way that it's as transparent as possible to the end user. It looks to them like we just have one single server.
03:52
So our load balancers have different algorithms that they use in order to distribute the load evenly among the servers. We talked about round robin, where we have the load balancer send our requests to each server in turn, and once it hits the end of the list, it starts over again.
04:08
But sometimes that's not the best way to do things. We may have some servers
04:14
that have more computing power, that have more resources than another server. We may have one server that's getting a lot of high-volume requests, requests that are trying to pull a lot of data.
04:27
So other than just round robin, we also have least response time.
04:31
So our load balancer essentially sends a request to each of the servers in our list, each of the devices in our list, and says, "Hey, which of you are available?" And the first response it gets back saying "I'm available," it's going to send the request to that server. It's pretty much just saying, "OK, well, whoever gets back to me first,
04:50
that must be the one that has the least amount of work going on,
04:55
because it took them the least amount of time to formulate a response and send it to me. So I'm going to send them the request."
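A minimal sketch of that idea, assuming a simple availability probe (the probe here is simulated with a random delay; a real balancer would time an actual health check or live responses):

```python
import random
import time

# Hypothetical backend pool; probe_server() stands in for a real
# health check such as an HTTP ping.
servers = ["server1", "server2", "server3"]

def probe_server(server: str) -> float:
    """Return how long the server took to answer an 'are you available?'
    probe. Simulated with a random sleep standing in for network latency."""
    start = time.monotonic()
    time.sleep(random.uniform(0.01, 0.05))  # pretend round trip
    return time.monotonic() - start

def least_response_time(pool: list) -> str:
    """Pick the server that answered fastest: the quickest responder
    is assumed to have the least work going on."""
    timings = {server: probe_server(server) for server in pool}
    return min(timings, key=timings.get)

print("routing request to", least_response_time(servers))
```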
05:01
We can also have weighted round robin, where we give different servers different weights. Essentially, we say, "OK, this server is about twice as powerful as this server here." So let's say I have two servers in my round-robin list for my load balancer, but one server is twice as powerful as the other.
05:20
So I'm going to send the powerful server two requests in a row before I send my second server one request.
05:27
So the powerful server gets a request, the powerful server gets another request, and then the weak server gets a request, and then the powerful server gets two more requests. That would be weighted round robin. It's a more hands-on approach to round robin where we actually
05:43
take a look at our servers, look at their capabilities, and decide who gets more weight,
05:46
who should be trying to handle more traffic.
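One simple way to sketch weighted round robin (the names and weights are illustrative assumptions, not a particular vendor's implementation) is to repeat each server in the rotation in proportion to its weight:

```python
from itertools import cycle

# Hypothetical weights: the powerful server counts double, so it
# appears twice in the rotation for every one appearance of the
# weaker server.
weights = {"powerful-server": 2, "weak-server": 1}

# Expanded rotation: powerful, powerful, weak, then repeat.
rotation = cycle([name for name, w in weights.items() for _ in range(w)])

for request_id in range(1, 7):
    print(f"request {request_id} -> {next(rotation)}")
```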
05:49
Load balancers may also do more than just balancing workloads. Load balancers may also give us DoS protection, denial-of-service protection. Denial-of-service attacks are essentially high-volume traffic sent to a certain device, to a certain service provider.
06:10
Say we have someone who's trying to bring down our website.
06:14
Rather than trying to hack into our website, or rather than trying to find a vulnerability in our web server, they may just try and send our website millions of requests at once. And then our website is so busy trying to catch up to those millions of requests that no one else can connect in.
06:31
So that would be our denial of service; they're essentially denying service
06:35
to anyone else who's trying to connect to our web server, because they're sending so many requests. Well, our load balancers may notice that increase,
06:45
that influx of traffic, and may say, "These are all just ping requests, or these are just random nonsense requests, or all of these millions of requests are coming from the exact same sources, the exact same IP addresses.
07:01
So I'm going to take all of these requests and I'm just going to drop them,
07:05
and I'm not going to pass them on to the server to handle." Well, that could give us a little bit of denial-of-service protection, because our load balancers are actually going to drop those requests and not forward them on; they're not going to be something that our server has to deal with.
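As a minimal sketch of that filtering (the threshold, the lack of a window reset, and the addresses are illustrative assumptions, not a production DoS defense), the balancer can count requests per source IP and stop forwarding traffic from any source that floods it:

```python
from collections import Counter

# Hypothetical limit: once a single source IP exceeds this many
# requests in the current window, drop its traffic instead of
# forwarding it. A real balancer would reset counts per time window.
MAX_REQUESTS_PER_WINDOW = 100

request_counts = Counter()

def handle_request(source_ip: str) -> bool:
    """Return True to forward the request to a server, False to drop it."""
    request_counts[source_ip] += 1
    if request_counts[source_ip] > MAX_REQUESTS_PER_WINDOW:
        return False  # drop: this source is flooding us
    return True       # forward to a backend server

# A flood from one address gets cut off after the limit;
# other clients are still served.
for _ in range(150):
    handle_request("203.0.113.9")
print("forwarding 203.0.113.9?", handle_request("203.0.113.9"))    # False
print("forwarding 198.51.100.7?", handle_request("198.51.100.7"))  # True
```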
07:17
Load balancers may also be able to do caching. Now, caching essentially means that our load balancers are going to take a resource that is commonly requested, take a web page that's commonly requested,
07:34
say, our home page, and they're going to take that page and keep it in their local memory.
07:40
So if it's being requested a lot, we don't need to pass the request on to our servers. Our load balancers get the request and they say, "Oh, I have a copy of the answer to this request. So rather than passing it along and giving extra workload to the servers, I'm just going to respond to this myself, because I have a recent copy of it."
07:59
So they can provide caching,
08:01
doing more than just distributing the workload; they're actually easing some of the workload by responding to some of the requests for us.
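A minimal sketch of caching at the balancer (the fetch function and page names are illustrative assumptions): check local memory first, and only forward to a backend on a miss:

```python
# Cached responses, keyed by request path.
cache = {}

def fetch_from_backend(path: str) -> str:
    """Stand-in for actually forwarding the request to a web server."""
    print(f"forwarding {path} to a backend server")
    return f"<html>contents of {path}</html>"

def handle_request(path: str) -> str:
    """Answer from local memory when we have a copy; otherwise
    forward to a backend and remember the answer."""
    if path not in cache:
        cache[path] = fetch_from_backend(path)
    return cache[path]

handle_request("/index.html")  # first hit: forwarded to a server
handle_request("/index.html")  # second hit: answered from the cache
```

A real cache would also expire entries so the stored copy stays recent, but the look-up-before-forward pattern is the core idea.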
08:09
So here we have a very basic example of what a load-balanced environment may look like.
08:16
Up here we have our Internet, and this is going to be our public network that's connecting into our private network.
08:24
And we have our router here, we have our load balancer, and we have our three web servers. Now, our three web servers are
08:31
pushing back, giving back,
08:35
responses to clients. They're streaming videos; they're letting clients download files. So we have a lot of very end-user-heavy functionality going on here: video streaming, file downloads, things like that. So we have three web servers rather than just one.
08:52
So right here, our green device
08:54
is going to be our load balancer. And our load balancer, as it gets requests from the Internet,
09:01
is going to take them and distribute them to each of our different devices.
09:07
So let's say, rather than just doing regular round robin, we're going to tell our load balancer that we want to do weighted round robin. And we say that we have our first server here,
09:20
which we give a weight; we say that it has a computing power of 10.
09:26
Server number two has a computing power of 20,
09:31
and server number three has a computing power of 30. Don't stress over these numbers too much. We're essentially just saying that server one is only a third as powerful as our third server, which can respond to, say, 30 requests at a time.
09:52
So on our load balancer we're going to enter that information in, we're going to enter our weighted round robin information in, and then our load balancer is going to receive the first request
10:01
and send it to our weight-30 server.
10:05
It receives a second request and sends it there, and a third request to the same place. And then it's going to start sending to our second one:
10:13
it'll send one request, then a second,
10:15
and then it will send just one request to the third one, the weakest, and then it will loop back and start sending to our most powerful server again,
10:24
so we can see how load balancing helps us to distribute that workload out. If we're providing Internet-facing functionality, it helps us take potentially hundreds of thousands, if not millions, of requests and distribute those across several different devices.
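To make that walkthrough concrete, here is a sketch of how the lesson's 10/20/30 example could be turned into a rotation (reducing the weights to a 1:2:3 ratio and a six-request cycle is an assumption about how a given balancer would interpret them):

```python
from functools import reduce
from itertools import cycle
from math import gcd

# The lesson's example: raw "computing power" figures per server.
raw_weights = {"server1": 10, "server2": 20, "server3": 30}

# Reduce 10/20/30 to a 1:2:3 ratio, so one full cycle is six requests.
divisor = reduce(gcd, raw_weights.values())
ratio = {name: w // divisor for name, w in raw_weights.items()}

# Serve the most powerful server first, as in the walkthrough:
# three requests to server3, two to server2, one to server1, repeat.
order = sorted(ratio, key=ratio.get, reverse=True)
rotation = cycle([name for name in order for _ in range(ratio[name])])

for request_id in range(1, 8):
    print(f"request {request_id} -> {next(rotation)}")
```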
10:43
Now, load balancers may be just a single
10:48
hardware device that we buy as a dedicated load balancer device, or
10:52
a load balancer may be a server that we've configured with specific load balancing functionality, that is able to perform load balancing and pass requests on to other devices. So, depending on our environment, depending on what type of hardware we have, what type of
11:11
budget we have, we may want to go out and buy a device that just functions as a load balancer,
11:16
or buy a server that functions as a load balancer as well as, perhaps, a gateway device. Either way, when we see "load balancer" and we're trying to recall what load balancing is and what it does, just remember that load balancing is distributing our workload across several devices,
11:35
and when we load balance, we want to make it as transparent as possible to the end users.
Up Next
CompTIA Network+

This CompTIA Network+ certification training provides you with the knowledge to begin a career in network administration. This online course teaches the skills needed to create, configure, manage, and troubleshoot wireless and wired networks.
