Our last topic for this board, for our latency-sensitive and our high-bandwidth applications, is going to be load balancing. Now, load balancing is distributing a workload across multiple devices. What this does is ease the strain that we would have if we tried to put all of the workload, all of the requests, on a single device.
This is less about us connecting to someone on the other side of the Internet, and more about end users trying to connect to applications on our side of the Internet. Say we have an application, a web server, stood up in our network, and people on the other side, on the public network, are trying to connect into our private network and reach this web server.

If we just have that one web server, then we may notice a lot of latency, a lot of delay, in people being able to connect, because our web server may not be able to handle all of the requests that are coming in. So instead, what we can do is set up additional web servers that act as additional devices that end users can connect to.
And we stand up a device that load balances between these different web servers. We stand up a device that takes the incoming connections and says, okay, this connection is going to server A, the next connection is going to server B, the next connection is going to server C. It balances the connections between all of these different servers in order to spread out the network traffic. So even if we have a lot of these high-bandwidth connections coming in, we're still able to balance that traffic between these different devices, and we don't have one single device with all of the strain on it.
So we can distribute the workload across multiple devices. Now, we want this to be as transparent as possible. If we have three different servers that we're load balancing across, then we want these three servers to not have any noticeable end-user difference between them. We want to make sure that if someone connects to server A one day and someone else connects to server B, both of them see the same end result. If server A isn't noticeably faster than server B, then you're not playing a game of luck every time you try to connect, not knowing which server you're going to get and whether one server is faster than another. So we want load balancing to be as transparent as possible.
Now, the device that's going to load balance across our different servers may be different depending on our environment, depending on our needs. We may have a single dedicated load balancing device: all it does is take incoming connections, perform load balancing, and send the outgoing connections to the different devices that we want end users to connect to. Or we may have a server that performs other functions but also has the role of a load balancing server, able to take incoming connections and provide load balancing. We just need to know that whatever load balancing device or load balancing server we have, it needs to be in a location where it is intercepting network traffic. It needs to be in a location where it is actually receiving these requests and is then able to load balance them across these different devices.
So the way that our devices perform load balancing can vary, but we have four different methods up here, four ways that our load balancing device can determine which device it's going to send a request to: round robin, least connections, fastest response time, and weighted round robin.
When we say round robin, we're essentially saying that we just go to each device in order. We have a list of all of our web servers, and our load balancing device just goes down the list as it receives connections. It sends the first connection to server A, the next connection goes to server B, the next connection goes to server C, and the fourth connection goes back up to server A. We just keep going down the list, round-robining around.
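That rotation down the list can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer; the server names are hypothetical placeholders.

```python
from itertools import cycle

# Hypothetical list of back-end servers, in the order the balancer walks them.
servers = ["server-a", "server-b", "server-c"]

# cycle() repeats the list forever, wrapping from the end back to the start,
# which is exactly the round robin behavior described above.
rotation = cycle(servers)

def next_server():
    """Return the server that should receive the next incoming connection."""
    return next(rotation)

# The first five connections land on A, B, C, then wrap back to A, B.
order = [next_server() for _ in range(5)]
print(order)
```

Note that the balancer keeps no per-server state here; it only remembers where it is in the list.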
Our next type is going to be least connections. Sometimes round robin may not be the best solution, because we may have connections that stay active. Maybe these aren't web servers answering quick requests; maybe these are FTP servers, file transfer servers, and we need users to be able to connect in. But when users do connect to these servers, they're going to be receiving files that are sent to them.

Well, if someone connects to server A and is downloading a file that's going to take them 20 minutes to download, and someone connects to server B and it's going to take them 20 seconds to download their file, then it isn't fair when we get back to server A to give it another connection, maybe another 20-minute file download, while server B is already done. Server A has a connection, server C still has a connection, server B has nothing, but we're going to give the next one back to server A. That may not be the best solution. So with servers that may be servicing requests of different intensity, requests that take different amounts of time to complete, least connections may be a better solution for us.
Essentially, our load balancing device will look at each server and say, okay, server A is currently transferring files to five connections, server B is transferring files to three connections, and server C is transferring files to six connections. So I'm going to take the next connection request and give it to server B, which will now have four connections. The device keeps track of how many connections are currently active on each server as it decides who gets the next connection. It still has its list of servers A, B, and C, but it doesn't just go down the list and restart. It actually looks at each server and asks, okay, who is servicing the fewest requests, the fewest connections, right now? And whichever server that is gets the next connection.
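The counting logic above can be sketched like this. The connection counts match the example in the lecture; the server names and numbers are hypothetical.

```python
# Active connection counts per server, matching the example above.
active = {"server-a": 5, "server-b": 3, "server-c": 6}

def assign_connection():
    """Give the next connection to the server with the fewest active ones."""
    target = min(active, key=active.get)
    active[target] += 1
    return target

def finish_connection(server):
    """Called when a transfer completes, so the counts stay accurate."""
    active[server] -= 1

# Server B has only three active connections, so it receives the request.
chosen = assign_connection()
print(chosen, active[chosen])
```

The key difference from round robin is that `finish_connection` matters: long-lived transfers keep a server's count high, so it naturally stops receiving new work until it catches up.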
Next, we have fastest response time. In the fastest response time scenario, our load balancing device is actually querying our servers and saying, okay, I have a connection, who can handle this next connection? And whichever server responds back first and says, oh, that's me, is going to get the connection request.

Now, if we have three servers whose hardware setups are pretty much exactly the same, and they're the same distance from our load balancing device on the network, then whoever gives us the fastest response back is probably the one with the least going on right now. Whichever server is able to process that query from the load balancing device and send a response back first is most likely the one with the least on its plate. It's sort of like the boss coming into the office and yelling down the hall, okay, I have a project, who can handle it? Well, the first person who says, oh, I've got it, is either the very ambitious person or the person who probably has the least amount of stuff going on right now.

The load balancer does the same thing: it yells down the office to all of our different devices, okay, who has the least going on right now, who can handle this request? And the first server to raise its hand and say, I've got it, is going to get that connection.
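One way to picture that race is to probe every server at once and take whichever answer arrives first. This is only a simulation: the per-server delays below are made-up stand-ins for how busy each server is, not real network measurements.

```python
import threading
import time
import queue

# Simulated reply delays in seconds (hypothetical): a busier server takes
# longer to answer the load balancer's "who can handle this?" query.
delays = {"server-a": 0.05, "server-b": 0.01, "server-c": 0.08}

def pick_fastest():
    """Query every server concurrently; the first to answer gets the connection."""
    answers = queue.Queue()

    def probe(name, delay):
        time.sleep(delay)        # stand-in for a real query round trip
        answers.put(name)

    for name, delay in delays.items():
        threading.Thread(target=probe, args=(name, delay), daemon=True).start()

    return answers.get()         # blocks until the first server responds

print(pick_fastest())
```

With these delays, server B "raises its hand" first because it answers in 10 ms while the others are still busy.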
And then lastly, we have weighted round robin. Weighted round robin is very similar to round robin, except that with weighted round robin we give each of our devices a different weight. Essentially, we say that this device can handle more than that device, so we want this device to get more connections as we go down the weighted round robin table.
So we still have servers A, B, and C, but maybe we started out with just server A and server B, and we used those two servers as our web back end or our FTP, file transfer, back end. Then, as our company grew, we decided we needed a server C: we were having a lot of connections, and server A and server B were having a hard time juggling all of it. Or maybe we wanted a little bit of redundancy, some backup, so we went out and bought another server. But what we could buy for, say, $1,500 five years ago, when we set up server A and server B, buys a lot more now; $1,500 buys more computing power now than it did five years ago. Or maybe we found a good deal, whatever the case may be. Server C is going to be a much more powerful server. It's able to handle more requests, it's able to send out more data, and maybe we were able to install an additional network interface card.
So whatever the case may be, server C is able to handle more requests, so we don't want to just go round robin and say, okay, one request to server A, one to server B, one to server C, and now back to server A, because server C can handle three times the requests that server A or server B can.

In weighted round robin, we're telling our load balancing device: server A and server B can each handle one request at a time, but server C can handle three requests for every one request on server A or server B. So as our load balancer goes down its list, it says, okay, one request goes to server A, one request goes to server B, then a request goes to server C, another request goes to server C, another request goes to server C, and now my next request goes back to server A. With weighted round robin, we're still just going down the list. We're not querying the servers, we're not checking how many connections they have, but we're telling our load balancer, okay, for every one request that you give to server A, you're going to give three to server C. That's our weighted round robin versus our round robin.
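A simple way to sketch this is to expand the rotation so each server appears once per unit of weight, then round-robin over the expanded list. The weights below match the 1:1:3 example from the lecture; the names are hypothetical.

```python
from itertools import cycle

# Hypothetical weights: server C is rated for three times the requests.
weights = {"server-a": 1, "server-b": 1, "server-c": 3}

# Expand to [A, B, C, C, C] and loop over it forever, so C gets three
# requests for every one that A or B receives.
rotation = cycle([name for name, w in weights.items() for _ in range(w)])

# Six requests: A, B, C, C, C, then wrap back around to A.
order = [next(rotation) for _ in range(6)]
print(order)
```

Real load balancers usually interleave the weighted picks more smoothly rather than sending three requests to C back-to-back, but the proportions per pass through the list are the same.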
Whether it's our load balancing, our quality of service, or our traffic shaping, we can see how these different methods help us mitigate high-bandwidth applications as well as our latency-sensitive applications. Quality of service and traffic shaping are more geared toward our latency-sensitive applications, and traffic shaping can also help with our high-bandwidth applications, because it can delay potentially less important high-bandwidth traffic. Maybe we have some streaming that's very high-bandwidth traffic, but we want to delay it so we use less of our available network bandwidth when people are trying to use the network. We have quality of service, which can help with our different latency-sensitive applications. Or, if we're the people on the other end, the end that users are connecting to, and someone is connecting to us requesting a latency-sensitive or high-bandwidth service, then load balancing can help on our end. Maybe someone is trying to download large amounts of files from our FTP servers, and load balancing can help speed that up. It can help distribute the network traffic, and it can help make the end user happier, because they see their data coming down a lot quicker than if we weren't using load balancing and just had them connect to one device.
Now that we've taken a look at our latency-sensitive applications and we've taken a look at high bandwidth, we're going to take a look at uptime.