Tier 1, Tier 2 & Tier 3 Internet Service Providers


Ever wondered how effortless it seems when you communicate over the internet? Everything seems to be always available, always on. Most of the time, it's a simple case of connecting to your home Wi-Fi router and off you go. Things aren't as simple behind the scenes though. There is a huge mesh of inter-connectivity between networks on a global scale that allows us to communicate. This inter-connectivity is largely possible due to the flow of traffic between tier 1, tier 2 and tier 3 internet service providers (ISPs).

 

Tier 1:

Tier 1 ISPs provide the backbone of internet connectivity on a global scale. They interconnect with one another over high-capacity links that carry enormous volumes of traffic, usually peering directly with each other, with each tier 1 provider responsible for a large chunk of global traffic. Tier 1 providers don't purchase transit links; they form the backbone of the internet that other networks use as their transit.

 

Tier 2:

Tier 2 ISPs provide connectivity between tier 1 and tier 3 ISPs. They purchase transit links from tier 1 providers and peer freely with tier 3 ISPs and any other enterprise that wishes to peer with them. Selling a middleman service, the goal of tier 2 providers is to have as many networks connected as possible. This enables them to provide lower latency to their customers.

 

Tier 3:

Tier 3 ISPs fall in the realm of the end user. These ISPs are largely responsible for last-mile connectivity. Tier 3 ISPs connect home users to a network and, via their links with tier 2 and tier 1 providers, inject meaningful traffic into the internet. This traffic then makes its way across tier 1 providers' backbone networks all the way to the destination, which usually sits inside a tier 3 ISP network or a non-ISP network connected to a tier 2 ISP. A tier 3 ISP can also have direct links with a tier 1 ISP.

To understand the end-to-end packet flow over the internet, please refer to my video below:

High CPU EEM script on Cisco Nexus switches


Recently, I was working in a Cisco Nexus-dominated environment when the main 7K core switches reported high CPU for a short period of time. By the time I would log on to the switch and parse the show processes cpu history command, the CPU would be back to normal again. There was no way to find out which process caused the CPU to spike. To make matters worse, this became a recurring event, so I thought about making the Nexus 7K run a script locally upon high CPU detection. This led me to create an EEM script that did just that.

The below EEM script can come in handy if you have a similar problem on any Nexus device. It will trigger when CPU spikes above 90%, run the commands listed and save the output to flash with the specified file names. Once finished running, it will delete itself when it runs the command in action 1.6. If you decide not to let the script delete itself, then remember that '>>' indicates an append to the existing file. You can replace all '>>' in the script below with a single '>', which would always delete the previous file and create a new one. This would keep your memory usage in check.
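A minimal sketch of what such an applet can look like is shown here. The CPU OID, poll interval and exact action syntax are assumptions and may need adjusting for your NX-OS release:

event manager applet HIGH-CPU
 ! Fire when the polled CPU value (illustrative CPU OID) rises above 90%
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.6.1 get-type exact entry-op gt entry-val 90 poll-interval 5
 ! Timestamp plus CPU and process state, appended to a file on bootflash
 action 1.0 cli show clock >> bootflash:highcpu.txt
 action 1.1 cli show processes cpu sort >> bootflash:highcpu.txt
 action 1.2 cli show processes cpu history >> bootflash:highcpu.txt
 action 1.3 cli show system resources >> bootflash:highcpu.txt
 ! Self-delete so the applet only fires once
 action 1.6 cli conf t ; no event manager applet HIGH-CPU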

In case you would like to change the CPU threshold, simply change the value in the script above from entry-val 90 to any other threshold you may prefer.

You can simply run the dir command to look at flash and see if the file highcpu.txt is listed in the output. If it is, that means the script ran successfully. You can then easily figure out which process is the culprit and proceed with root cause analysis.

Juniper: Configuration Management


Logged on to a Juniper router for the first time? Scared by the command line interface staring back at you? Don't worry, Juniper devices are actually among the easiest to manage.

Unlike many other vendors that simply copied the CLI structure authored by Cisco, Juniper took a different route altogether. To begin with, Juniper uses the same Junos operating system on every Juniper device. This means that whilst some commands may be supported or unsupported, the underlying CLI architecture stays the same. Once you get to grips with Junos, you can work on any Juniper device, be it routers, switches or firewalls. For starters, there is a defined hierarchical structure to the configuration where everything falls into place. The first thing you need to do is get used to this structure.

 

Configuration Management:

There are two ways to interpret configuration on a Junos device. One is the standard output which is presented to you when you pass the show configuration command. The other is when you pipe the output of any configuration-related command through display set. For example, the "show configuration system" command can yield the following output:
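(The host name and addresses in this example are made up, purely for illustration.)

host-name edge-router1;
name-server {
    192.0.2.53;
}
services {
    ssh;
}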

If, however, you were to append the "| display set" modifier to the show configuration command, you will get the following output:
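(The same hypothetical configuration, now rendered as set commands.)

set system host-name edge-router1
set system name-server 192.0.2.53
set system services ssh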

Notice the difference? The first output uses the default Junos hierarchical, curly-brace style. The second is closer at heart to Cisco's style of show run output.

However, there is still a fundamental difference. Wherever you navigate, Junos keeps the hierarchy in view, which helps you keep the configuration snippet you are dealing with in perspective. For example, when analysing the output above you can clearly see that each configuration attribute reflects the hierarchy by associating itself with system, and so on. This is something I have always missed in other vendors, where you find yourself hacking your way via grep commands to look for the parent attribute usually resting a couple of lines above. The best thing about Junos is that when you use a match statement (the same as include in Cisco), you can simply add display set to it and instantly see the full configuration hierarchy.

It doesn't just stop there; what you also get is a set place for each type of configuration. For example, all configuration related to the device itself, such as its management, will always be found under the system hierarchy. All routing protocol configuration will be found under the protocols hierarchy, and so on and so forth. In my experience, Cisco's lack of conformity across its different operating systems with regards to configuration management, and the lack of organised structure throughout the configuration, are very unpleasant to deal with. Cisco being the largest network vendor out there, I find myself working on a Cisco device every now and then, and I always wish they would improve their configuration management and bring it up to par with Juniper.

 

Commit Confirmed:

By far the best feature in Junos is the ability to 'confirm' a configuration change within a set amount of time or have it automatically rolled back. This feature has proved to be a lifesaver for me on many occasions. A lot of the time, you can find yourself making changes to a router on the fly and locking yourself out due to an error in the configuration. With the commit confirmed statement, Junos will automatically roll back the change if it is not confirmed with a follow-up commit within the time defined in the commit confirmed statement. If you find yourself locked out of a device due to a configuration error, you will be able to access the device again once the timer expires. You can set your own timer for the commit confirmed operation by specifying a value in minutes after the command. The default value is 10 minutes.
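A quick illustrative run, with an arbitrary five-minute timer (prompts and message wording are approximate):

[edit]
user@router# commit confirmed 5
commit confirmed will be automatically rolled back in 5 minutes unless confirmed
commit complete

[edit]
user@router# commit
commit complete

The second commit, issued within the five-minute window, makes the change permanent. If you never get that far because you locked yourself out, the router simply rolls back on its own.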

There is no better way to explain how to configure a Juniper router and how the Junos operating system provides the rollback feature than to show you a video I have prepared specifically for this purpose. Please see below:

If you would like to understand the underlying architecture of Juniper devices and how the control, data and services planes work together to power the Junos operating system, then please refer to my post: Juniper: Control, Data and Services plane

Juniper: Control, Data and Services plane


The three main planes of Juniper’s Junos operating system are the control plane, data plane and services plane.

The control plane is the brain of the Junos operating system; this is where the bulk of the thinking takes place. It is the control plane that runs the protocol daemons and builds the routing table that is sent to the data plane. The data plane is not as clever as the control plane but is a different beast altogether. Forwarding received packets at line rate, the data plane relies solely on the forwarding table created by the control plane. The forwarding table, in essence, is a copy of the routing table created specifically with the data plane in mind.

The data plane discards any packet it doesn't have a forwarding entry for in the forwarding table. The data plane is also known as the forwarding plane or packet forwarding engine (PFE).

Most of the time, as network engineers, we tend to ignore the forwarding table. This is because we assume whatever is in the routing table is replicated to the forwarding table. This is true, as it is how Junos is meant to operate. However, in my experience, I have found myself in situations where the control plane is working as intended and all protocol daemons seem to be working fine, but no traffic is passing through. In those instances, I have found the forwarding table to be empty or forwarding destinations missing from it, usually due to a bug. So it's always good practice to check the forwarding table using the show route forwarding-table command. This ensures the data plane is working as intended.
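As a hypothetical spot check (the prefix, next hop and interface below are made up), you can confirm that a specific destination actually made it into the PFE like this:

user@router> show route forwarding-table destination 203.0.113.0/24
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index    NhRef Netif
203.0.113.0/24     user     0 10.0.0.2           ucst      581     3 ge-0/0/0.0

If the prefix is present in show route but missing here, the control and data planes have fallen out of sync.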

Whenever traffic is received that intends to use a stateful service, such as a stateful firewall service or any service not native to the data plane, it is forwarded to the services plane. However, once the services plane has dealt with it, the traffic is sent back to the data plane to be forwarded to its intended destination.

Finding it all too complicated to grasp? Don’t worry, I have created a concise video that explains these concepts with graphical illustration and simple terms just for you.

What is an IP Address?


The IP in IP address stands for Internet Protocol. The internet is a collection of networks on a global scale. In any network, every device needs a unique identity to send and receive traffic. An IP address solves this problem by providing an address that allows each end device to communicate freely.

Over the internet, every address is meant to be unique and identifies the location of an end device from a network perspective. However, an IP address also reveals other information about an end device, such as the country it is in and the ISP it is connected to.

To understand IP addressing better, please refer to my short video below, which simplifies the concept even further.

Peering: Why use an IXP such as LINX or AMS-IX


Peering is where networks meet, be it at LINX, AMS-IX, DE-CIX, etc. All these peering providers have one goal: to connect as many networks as possible and make communication faster and more efficient. Peering exchange points are called Internet Exchange Points, or IXPs.

At its heart, the main purpose of an IXP is to facilitate peering via the exchange itself, eliminating total reliance on tier 1 providers. Most IXPs, however, require members to have a presence in the internet routing table (usually via tier 1 ISPs) before they can join. IXPs are usually non-profit organisations where the members have full say in which direction the organisation takes.

Enterprises that depend on the internet for revenue tend to register themselves with an internet registry such as RIPE or ARIN, which enables them to present themselves in the internet routing table. Communication is then made possible via peering using BGP. With their own AS handy, they reach out to tier 1 internet service providers and create direct peering connections with them. However, these direct connections also mean that the tier 1 providers are always in the path between the enterprise and the destination network.

To avoid this latency, the concept of peering was born with the birth of the first IXP back in the 90s. The concept was simple: allow every member to connect to a switched environment and create neighbour relationships over BGP locally. This was an instant hit, as enterprises no longer needed a dedicated link to each network they exchanged traffic with, yet they received much the same performance as a direct peering connection.

To elaborate further, let's use the example of Google. Your business might send a lot of traffic to Google due to the use of Google's search, YouTube, etc. All this traffic would normally traverse your main tier 1 provider links. If your edge router is connected to an IXP switch that Google also connects to, voila! You can now peer with Google directly and send all your traffic to Google via your IXP peering. This presents a win-win solution for businesses of all sizes that need the advantages of direct peering at a fraction of the cost.

 

IP Addressing:

All members of an IXP are part of the same prefix. This means that if the prefix is a /22 subnet, then around a thousand members can join in and share the same broadcast domain. Since all they need is connectivity to a given address on TCP port 179 to establish BGP, the scheme of keeping all peers in the same subnet easily achieves that.

It's important to note that peering is not beyond the realm of political influence. You will routinely see bigger companies refuse to peer with smaller companies unless they agree to send a specific amount of traffic their way. This problem is then resolved via the use of route servers.

 

Route Servers:

It is possible to connect to an IXP and still not peer with anyone at all and yet reap the benefits of direct peering. How? By peering with route servers.

Usually IXPs have multiple Linux hosts running BIRD, which peer with everyone willing to peer with them. Usually, every member of the internet exchange peers with the route servers. Once peered, the route server sends the prefixes advertised by every member to each member connected to it via BGP. This means member A will get the prefixes of members B, C and D, and so on, via the route server itself. This is the same result member A would achieve had they peered with members B, C and D directly. The catch here is that route servers do go down for maintenance. In case of an outage of all route servers, you can lose all your traffic to peering members at the IXP and will have to rely on your tier 1 links alone.
This brings us to the question: why peer directly anyway? The advantage of peering directly with another member is guaranteed connectivity at all times. This is why larger enterprises force you to either send more than a threshold of bandwidth their way or rely on peering with route servers to reach them.

Another important thing to note is that the route server never changes the next-hop attribute of a prefix received from one member to itself when advertising it out to another member. This way, all the route server does is share prefixes between members. Since all members are part of the same subnet, the received prefix's next hop is always the originator of that prefix and always reachable. Therefore, outgoing traffic from member A is sent directly to member B and doesn't use the route server as a transit hop.

 

Peering Process:

Once you become a member of an internet exchange, all you have to do is look up a peering registry that hosts information about other peers. In the UK, we use PeeringDB, which provides all the information regarding any member's IP addresses and which internet exchange points they are part of. Once identified, you can simply configure your end and send them an email at the address listed in their PeeringDB profile. Make sure you send them your own PeeringDB link, which will have all the details in place for them to configure their end. Then you can simply wait for their reply confirming their end is configured. You should be able to see BGP up and running and should also be able to see their prefixes received via direct peering.
It's best practice to ensure you have a single group for all peering neighbours. This way, your preferred settings are applied to each neighbour automatically when you configure a neighbour in the peering group. It's important to have a maximum prefix limit on your BGP peering group so that a peering neighbour's bad configuration can't accidentally cause the full internet routing table to be received by your router, leaving you re-routing your transit traffic via the peering neighbour. At the same time, ensure your group policy doesn't advertise any prefixes apart from your own.
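A minimal Junos sketch of such a peering group is shown here; the ASNs, neighbour addresses, prefix limit and policy names are all made up for illustration:

protocols {
    bgp {
        group IXP-PEERS {
            type external;
            /* accept only each peer's own prefixes */
            import ACCEPT-PEER-ROUTES-ONLY;
            /* never advertise anything but your own ranges */
            export ANNOUNCE-OWN-PREFIXES-ONLY;
            family inet {
                unicast {
                    /* tear the session down if a peer leaks far more than expected */
                    prefix-limit {
                        maximum 1000;
                        teardown;
                    }
                }
            }
            neighbor 192.0.2.10 {
                peer-as 64500;
            }
            neighbor 192.0.2.20 {
                peer-as 64501;
            }
        }
    }
}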

It would take two to tango, but configuration mistakes where the whole internet routing table is sent and received would end up in an asymmetric flow of traffic, whereby traffic would leave via the peering neighbour but return via the main transit links provided by the tier 1 provider. Not ideal, of course!

Peering relationships are strictly for traffic intended for the peering neighbour’s public address range and are not meant to be used as transit to reach an end point over the internet.

Firewall: Stateful vs Stateless


In my previous posts I explained the difference between routers and switches and how they are integral for networks to be formed and communication to take place between them. So where does a firewall come into play?

A firewall is anything that sits in between and can deny traffic from reaching its destination even though a valid path is available. A firewall is also a router, because it first makes routing decisions and then proceeds to make firewalling decisions. However, the key difference between the two is that a router, by default, allows all traffic that it can route, whereas a firewall, by default, denies all traffic even when it has a valid route for it.

This key difference is what separates a router from a firewall. A router can also perform some firewall functions with the use of access control lists (ACLs), but it is generally limited, as will be explained later in this post.

 

Types of Firewall functions:

  • Stateless
  • Stateful

A stateless firewall feature is one where a policy created to allow traffic between two networks doesn't do anything to allow return traffic by default. A firewall policy configured using an access list mimics this behaviour.

Applications usually have a basic requirement of two-way communication over ports. This means that when an end device tries to communicate, it usually expects return traffic to come its way. With stateless firewalling, you have to create another rule to allow the return traffic as well, since the firewall feature in use doesn't maintain a 'state' between the sending and destination end devices. Routers usually only support stateless firewalling.
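For example, with a stateless Cisco IOS-style access list (addresses and ports below are purely illustrative), the return traffic needs its own explicit entry:

ip access-list extended LAN-TO-WAN
 remark allow the inside host to reach the web server on TCP 443
 permit tcp host 10.1.1.10 host 203.0.113.10 eq 443
ip access-list extended WAN-TO-LAN
 remark return traffic is not tracked, so it must be permitted explicitly
 permit tcp host 203.0.113.10 eq 443 host 10.1.1.10

The second list exists only because nothing remembers that the inside host initiated the connection.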

A stateful firewall, on the other hand, supports the creation of a session between the sender and the destination address, which then allows return traffic to pass through freely. Purpose-built firewalls are usually stateful firewalls.
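On a Juniper SRX, for instance, a single stateful policy is enough, because return traffic is matched against the session table automatically. The zone and policy names here are assumed for illustration:

security {
    policies {
        from-zone trust to-zone untrust {
            policy allow-web {
                match {
                    source-address any;
                    destination-address any;
                    application junos-https;
                }
                then {
                    permit;
                }
            }
        }
    }
}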

 

Application Layer Gateway:

Application Layer Gateway, or ALG, simply means inspecting the packet further and allowing or denying traffic based on what application is being used. Referring to the OSI reference model, firewalls usually operate at the network layer. They normally look at the source and destination IP addresses of the sender and receiver along with the ports used. With ALG, however, this capability is extended further to allow or deny traffic based on the kind of application in use. This function elevates them from a mere network layer device to one capable of working at the higher layers of the OSI reference model. ALG has enhanced the capability of modern firewalls to unprecedented levels, with most firewalls these days providing anti-virus support and threat management in their arsenal.

The enterprise firewall market is currently dominated by Cisco, Juniper, Fortinet, Check Point and Palo Alto. Start today by learning how to configure a stateful firewall policy on the Juniper SRX firewall here.

DDoS attacks: How to secure yourself


Network security will forever be relevant so long as networks are in place. It's pretty much like crime: as long as communities are around, crime will be rampant. At best, it can only be contained. The same is the case with network security. With ever-increasing cyber crime, network security is more relevant now than ever before. From viruses to distributed denial of service (DDoS) attacks to phishing, it never stops amazing me how new, innovative techniques are used in a sophisticated fashion to compromise a target machine.

This brings me to one of my own experiences with an attack that took place whilst I was enjoying my coffee on a nice Friday morning. The last thing on my mind was what was to come. I was managing the network of a multi-billion dollar enterprise at the time, and for one reason or another that enterprise found itself a likely target for cyber criminals.

All of a sudden, every monitoring platform went red and our website, our main source of revenue, was hit by a massive distributed denial of service (DDoS) attack. Ironically, only a few days earlier I had finished setting up monitoring based on packets per second. This is important, as one of the key features of a DDoS attack is a very high packet-per-second rate while the bandwidth stays relatively low. Anyhow, we already had DDoS protection in place, so we contacted Akamai, our DDoS protection provider, and proceeded to re-route traffic via them so that they could scrub off all the suspect traffic. This simply involved re-routing all our public traffic by advertising our public IP ranges via Akamai instead of our main tier 1 provider.

Even though we had protection in place, what was surprising was how much stress I went through whilst we were under attack. I saw a single address send 2.4 million packets per second! The funniest moment was watching one of the system admins trying hard to block IPs manually, one by one, before the scrubbing service was enabled. As the attack grew in size, the range of IPs grew into the thousands and traffic started coming from all over the world, which meant it was humanly impossible to block IPs manually and expect to bring the service back online. Being a global company, our customer base was all over the world, so it's not like we could just block ranges belonging to every country apart from the UK and breathe a sigh of relief.

 

Do we need a DDoS protection service?

DDoS attacks are nothing new. Their methods are very similar: requests are sent from many compromised machines to the target until the hosting servers lose the capacity to reply to each request, ultimately discarding legitimate customer traffic. If you have an edge firewall, in theory you can implement DDoS protection yourself by applying the standard known fixes for ping floods, TCP SYN floods and so on. However, I would strongly advise against it, as DDoS platforms have purpose-built scrubbing systems that specialise in dealing with all sorts of attacks seen to date and are constantly updated. If you don't have DDoS protection in place when an attack takes place, you will find yourself in a hopeless position at the mercy of the attackers.

Types of protection services:

I have come across two types of protection services:

  • Always on
  • On demand

In always-on mode, your traffic always passes through the provider's platform, so as soon as malicious packets are received they are scrubbed straight away. This is achieved by building GRE tunnels between yourself and the protection service provider, whereby you advertise your public prefixes to the protection service provider and allow them to advertise your prefixes to the rest of the world. The rest of the world then sees the protection service provider in the path to reach you, and so all traffic is scrubbed instantly.

I, however, don’t endorse this due to the excessive latency this can induce.

An on-demand service, as the name suggests, and as in the case where my network was attacked, requires requesting the scrubbing service when under attack. The protection service, in my case Akamai, was already receiving SNMP traps from my end devices and constantly monitored activity on the edge routers, but because I was in the office and had sufficient internal monitoring in place, I was notified of the attack almost instantly.
On-demand services have come a long way. Since the attack, my company decided to move to Verisign, who offered an on-demand service which they could activate themselves. This was achieved by configuring conditional routing on the Juniper edge routers. This way, our public ranges were never advertised to Verisign. But in case of an attack, they could simply advertise a prefix over an established BGP neighbour relationship to our edge routers, which would result in the condition becoming true. Once true, the Juniper router would automatically withdraw our public prefix advertisements from the tier 1 provider and advertise them to Verisign. Verisign would then advertise our prefixes to the rest of the world and traffic would start flowing through them.
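A rough Junos sketch of that conditional advertisement follows; the activation prefix, public range, table and policy names are all made up for illustration:

policy-options {
    /* true only while the activation route received from the scrubbing provider exists */
    condition SCRUBBING-ACTIVE {
        if-route-exists {
            192.0.2.1/32;
            table inet.0;
        }
    }
    policy-statement EXPORT-TO-SCRUBBER {
        term public-range-when-active {
            from {
                route-filter 198.51.100.0/24 exact;
                condition SCRUBBING-ACTIVE;
            }
            then accept;
        }
        term nothing-else {
            then reject;
        }
    }
}

A mirror-image policy towards the tier 1 provider would reject the same range while the condition holds, which is what swings the advertisement over to the scrubbing provider.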

In my opinion, this is a better approach than 'always on' because it reduces latency whilst keeping the scrubbing service in the hands of the protection provider to activate at will, which can usually be done within minutes. I say this because even for large enterprises, DDoS attacks don't happen every day, so always receiving traffic with another AS in the path might not be worth it. But to each their own.

For more information on DDoS, please see my short informational video that further breaks down the concepts using simple examples:

What is a switch?


You must have heard of a network switch at some point! This is because the internet has become such an important part of our lives that it now occupies the same space as electricity and water!

Network switches form the basis of communication between end devices, bundling them together in a single broadcast domain, i.e. all devices can send one-to-many messages to each other at will. This is well before communication gets more complicated and high-end routers, along with complicated routing protocols, come into play.

At its heart, communication always starts at the switch level. Devices communicate with each other using each other's physical address, also called the MAC address, instead of the IP address. An IP address is a protocol address, whereas a MAC address is the physical address given to the device upon its inception. Devices connected to the same broadcast domain don't need to have similarly structured MAC addresses, but the same can't be said about IP addresses, where a specific subnet-based division defined by a subnet mask is required. If you would like to know more about subnetting, then please refer to my post: What is subnetting of an IP address?

Back to where we were: switches perform hardware-level data forwarding and achieve this by building a MAC address table from the source addresses of incoming traffic on each interface. This tells them which device sits on which interface. If the switch doesn't know which port the destination MAC address sits behind, it simply floods the frame out of all ports. This is where a switch is different from a router. A router will discard a packet if it doesn't know which interface to send it out of based on its routing table. More on routers here: What is a Router?
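To make this concrete, a hypothetical MAC address table on a Cisco-style switch (addresses and ports made up) might look like this:

Switch# show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
  10    0050.7966.6800    DYNAMIC     Gi0/1
  10    0050.7966.6801    DYNAMIC     Gi0/2

A frame destined for 0050.7966.6801 is sent straight out of Gi0/2; a frame for an address not in the table is flooded out of every port in VLAN 10 except the one it arrived on.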

To understand switching better, please check out my video below: