27,000 Intel cores. 190 terabytes of RAM.
Tier 1 = Non-redundant capacity components; things such as power have single feeds.
99.671% uptime
No redundancy
28.8 hours of downtime per year
Tier 2 = Tier 1 + redundant capacity components.
99.749% uptime
Partial redundancy in power and cooling
Around 22 hours of downtime per year
Tier 3 = Dual-powered equipment and feeds for power, cooling, and essential services.
99.982% uptime
No more than 1.6 hours of downtime per year
N+1 fault tolerance providing at least 72 hours of power-outage protection
Tier 4 = All components are fully fault-tolerant, including uplinks, storage, chillers, HVAC systems, servers, etc. Everything is dual-powered.
99.995% uptime
2N+1 fully redundant infrastructure (the main difference between Tier 3 and Tier 4 data centers)
96 hours of power-outage protection
26.3 minutes of downtime per year
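The downtime numbers above follow directly from the uptime percentages. A quick Python sketch of the arithmetic, using 8,760 hours in a year:

```python
# Convert each tier's uptime percentage into downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.749,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_hours = (1 - uptime_pct / 100) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_hours:.1f} hours (~{downtime_hours * 60:.0f} minutes) per year")
```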
If you want to read up on the N redundancies: https://en.wikipedia.org/wiki/N%2B1_redundancy
https://www.fs.com/products/75877.html
With an $11,000 price tag and Cumulus Linux, this could be a switch to look at for folks needing 25 and 100 gig ports.
| Spec | Value | Spec | Value |
| --- | --- | --- | --- |
| Ports | 32 x 100Gb | Operating System | Cumulus® Linux® OS |
| Max. 100Gb Ports | 32 | CPU | Intel Rangeley C2538 2.4GHz 4-core |
| Max. 25Gb Ports | 128 | Switching Chip | Broadcom Tomahawk BCM56960 |
| Switching Capacity | 6.4Tbps full duplex | Packet Buffer Memory | 16MB |
| Latency | 500ns | Max. Power Draw | 550W |
| Airflow Direction | Back-to-front | Hardware Warranty | 5 years |
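For what it's worth, the headline numbers in the table are consistent with each other. A small sketch of the arithmetic (the 4 x 25G breakout per 100G port is my assumption, not something stated on the FS page):

```python
# Sanity-check the switch's headline figures.
ports_100g = 32
port_speed_gbps = 100

# 32 x 100G counted in both directions (full duplex) = switching capacity
capacity_tbps = ports_100g * port_speed_gbps * 2 / 1000
print(capacity_tbps)  # 6.4 Tbps

# Assuming each 100G port can break out into 4 x 25G lanes
print(ports_100g * 4)  # 128 possible 25G ports
```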
Fluke Networks has fired another volley on the “zip ties vs. Velcro” cable-management front. While this article does not address Velcro vs. zip ties directly, it does bring up some points about using zip ties.
https://www.flukenetworks.com/blog/cabling-chronicles/beauty-isn-t-skin-deep
For those of you not so familiar with routers
The new ANSI/TIA-568.2-D cabling standard now allows for the use of 28 AWG patch cords. What does this mean, and how does it affect you? Read this article from Fluke Networks.
Number one takeaway:
- Recommended length is no more than 15 meters, which makes these cords great for dense racks and patch panels.
http://www.flukenetworks.com/blog/cabling-chronicles/skinny-28-awg-patch-cords
Many ISPs run into this problem as part of their growing pains. This scenario usually starts happening with their third or fourth peer.
Scenario: an ISP grows beyond the single connection it has. This can be 10 meg, 100 meg, a gig, or whatever. They start out looking for redundancy. The ISP brings in a second provider, usually at around the same bandwidth level. This way the network has two roughly equal paths out.
A unique problem usually develops as the network grows to the point of maxing out the capacity of both of these connections. The ISP has to make a decision: do they increase the capacity to just one provider? Most don’t have the budget to increase capacity to both providers. If you increase one, you are favoring one provider over the other until the budget allows you to increase capacity on both. You are essentially in a state where you have to favor one provider in order to keep up with capacity. If you fail over to the smaller pipe, things could be just as bad as being down.
This is where many ISPs learn the hard way that BGP is not load balancing. But what about padding, communities, local-pref, and all that jazz? We will get to that. In the meantime, our ISP may have the opportunity to get to an Internet Exchange (IX) and offload things like streaming traffic. Traffic returns to a little more balance because you essentially have a third provider with the IX connection. But the growing pains don’t stop there.
As ISPs, especially WISPs, have more and more resources to put toward cutting down latency, they start seeking out better-peered networks. The next growing pain that becomes apparent is that networks with lots of high-end peers tend to charge more money. In order for the ISP to buy bandwidth from these types of providers, they usually have to do it in smaller quantities. This reintroduces the problem of a mismatched pipe size, with a twist. The twist is that the more (and better) peers a network has, the more of your traffic is going to want to travel to that peer. So the more expensive peer, which you are probably buying less capacity from, now wants to handle more of your traffic.
So, the network geeks will bring up things like padding, communities, local-pref, and all the tricks BGP has. But, at the end of the day, BGP is not load balancing. You can *influence* traffic, but BGP does not let you say “I want 100 megs of traffic here, and 500 megs here.” Keep in mind BGP deals with routes to and from IP blocks, not the traffic itself.
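To make the “influence, not control” point concrete, here is a heavily simplified Python sketch of part of the best-path decision a neighboring network makes; the routes and attribute values are made up for illustration. Because local-pref is evaluated before AS-path length, your prepending only matters if the other side has not already preferred a path:

```python
# Simplified BGP best-path selection: highest local-pref wins,
# then shortest AS path. (Real BGP has many more tie-breakers.)
routes = [
    # Path through provider A; we prepended our ASN several times
    {"via": "Provider A", "local_pref": 200, "as_path_len": 6},
    # Path through provider B with a short AS path
    {"via": "Provider B", "local_pref": 100, "as_path_len": 2},
]

best = max(routes, key=lambda r: (r["local_pref"], -r["as_path_len"]))
print(best["via"])  # Provider A: their local-pref overrides our prepending

# Note: nothing in this decision knows or cares about megabits;
# it picks one best path per prefix, not a traffic split.
```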
So, how does the ISP solve this? Knowing about your upstream peers is the first thing. BGP looking glasses, peering reports such as those from Hurricane Electric, and general news help keep you on top of things. Things such as new peering points, acquisitions, and new data centers can influence an ISP’s traffic. If your equipment supports tools such as NetFlow or sFlow, you can begin to build a picture of your traffic and what ASNs it is going to. This is your first major step: get tools to know what ASNs the traffic is going to. You can then take this data and look at how your own peers are connected with these ASNs. You will start to see things like provider A is poorly peered with ASN 2906.
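As a rough idea of what that first step looks like, here is a minimal Python sketch that totals flow bytes per destination ASN. It assumes you have already exported and parsed flow records and resolved destinations to ASNs; the record format is made up for illustration:

```python
from collections import Counter

# Hypothetical parsed flow records: destination ASN and bytes moved.
flows = [
    {"dst_asn": 2906, "bytes": 120_000_000},   # AS2906 = Netflix
    {"dst_asn": 15169, "bytes": 80_000_000},   # AS15169 = Google
    {"dst_asn": 2906, "bytes": 45_000_000},
]

bytes_per_asn = Counter()
for flow in flows:
    bytes_per_asn[flow["dst_asn"]] += flow["bytes"]

# Top destination ASNs by volume; compare against how each of your
# upstream providers is peered with those networks.
for asn, total in bytes_per_asn.most_common(10):
    print(f"AS{asn}: {total / 1e6:.0f} MB")
```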
Once you know who your peers are and have a good feel for their peering, you can influence your traffic. If you know you don’t want to send traffic destined for ASN 2906 in or out of provider A, you can then start to implement AS-path padding and all the tricks we mentioned before. But you need the greater picture before you can do that.
One last note. Peering is dynamic. You have to keep on top of the ecosystem as a whole.