Network resilience is the ability to provide and maintain an acceptable level of service in the face of faults that challenge normal operations. More than just improving uptime or building in redundancy, network resilience adds a layer of intelligence to the backbone of an enterprise – the IT infrastructure.
In an independent survey of IT leaders, 49% said that network resilience was their #1 priority.
To ensure maximum uptime, enterprises need a comprehensive solution designed to provide the highest levels of business continuity through proactive monitoring and remediation.
To learn more about ensuring network resilience for your enterprise, download our white paper now.
Redundancy is an important part of the puzzle, and the reliability of a data center relies heavily on back-up generators, on-hand replacement hardware and secondary network connections. But while redundancy is a part of the resilience solution, it is not the only consideration. In smaller locations and satellite offices, it’s not cost effective to build in redundancy. And with no technical staff on site, the ability to monitor, manage and remediate the IT infrastructure remotely is a significant addition to a resilient solution.
Smart Out-of-Band management allows the network engineering team to securely access critical devices from a central location, to anticipate and remediate issues without sending a tech to site.
Discover the networking issues that are keeping senior IT decision-makers awake at night. Read the independent research commissioned by Opengear.
The Opengear Network Resilience Platform is based on the presence and proximity of a NetOps or Smart OOB Console Server at every IT location, centrally orchestrated through the Lighthouse management software.
The Network Resilience Platform provides secure remote access to your critical network devices through a separate management plane, with the ability to automate NetOps processes, such as securely deploying and provisioning equipment, and to access Remote IP devices at any edge location.
This platform is the Network Engineer’s Network: secure remote access, via physical hardware at each location, providing an always-available independent network open only to the core network team. It is there to be used on Day One, for everyday management, and during production network events.
Opengear’s Smart Out-of-Band technology, plus the use of open architecture NetOps automation tools (Docker support, Python runtime environment), provides secure access to console ports while also creating a robust management plane.
Schedule a Demo to learn more about our Network Resilience Platform
Nestled on the floor of the Pacific Ocean, three miles below the surface, is the ALOHA Cabled Observatory (ACO). Providing real-time oceanographic data, it is used by scientists all over the world, enabling them to conduct experiments under the water – where maintaining a highly reliable network connection is critical.
The solution had to meet several criteria:
ACO chose Opengear to provide secure access and resilience. Opengear devices were placed at the AT&T Cable Landing Station on Oahu. The ACO is connected to land by a 147-mile submarine fiber optic cable that enters through the station, where the network equipment manages ACO’s power supply and communications. It also forwards data streams through the University of Hawaii to its Manoa campus.
To provide secure access, Opengear safeguards the connection between the landing station and the data center through an IPsec VPN tunnel. This enables Opengear devices in the data center to contact the station and control resources at the ACO, with secure remote access to ACO equipment established from anywhere using an integrated SSL VPN. Scientists can easily conduct experiments while complying with Federal Information Processing Standards. Using Opengear devices, the ACO also surpasses the Department of Defense requirements for securing unclassified data.
To ensure the resilience of ACO sensors, Opengear provides both in-band and Out-of-Band (OOB) control for streaming hydrophone and video data through a high-availability failover pair of encrypted connections from the AT&T Cable Landing Station to one of the School of Ocean and Earth Science and Technology’s (SOEST) data centers.
Opengear’s highly extensible and reliable solution provides resilient connectivity. Combined with secure access, versatile data management and redundant data storage, it enables the ACO to plan for future upgrades. Opengear devices safeguard the availability of the underwater connection and the uptime of its land-based network and data.
“We’ve all heard of the ‘deep web’, but maintaining the deepest reach of the internet – beneath three miles of water – represents an exciting new frontier and offers a uniquely insightful perspective as we work to better understand our oceans and our planet. We thank Opengear for providing dependable networking solutions for our forays into this underwater frontier.”
– Brian Chee, IT Specialist, University of Hawaii at Manoa
Opengear’s Smart Out-of-Band and redundant failover to cellular technology enables the coordination of computers and devices to offer real-time data management. It also ensures that redundant copies of all datasets transmitted by the ACO are stored at different physical locations, protecting them from disaster.
The ALOHA Cabled Observatory’s needs have been fulfilled, and now:
Scientific Research Customer
The ALOHA Cabled Observatory (ACO) sits on the ocean’s floor, three miles below the water’s surface and sixty miles north of the Hawaiian island of Oahu. The observatory provides real-time oceanographic data using equipment including a hydrophone and pressure sensor, along with instruments for measuring and communicating temperature, salinity, currents, acoustics, and video images. The ACO holds several records associated with its depth: it is the deepest functioning observatory of its kind, the deepest power node on earth, and the deepest extent of the Internet. The data captured and transmitted by these instruments is available to scientists and the public online in real-time.
Ensuring network resilience is an important business priority for i3D.net. To maintain its global operation, i3D.net runs a complex low-latency network, with thousands of servers spread over more than 35 points of presence on 6 continents.
It is key that this network is kept up and running at all times. As Rick Sloot, chief operations officer, i3D.net explains: “Ten years ago, this was very difficult to achieve. There were very few ways of handling and managing your servers when the internet went down and those that were available were expensive and did not work well. As technology has advanced over the past decade, customer expectations have grown in line, and a high level of network resilience is now demanded by our users.”
“As we expanded as a business, opening up new locations around the world, it was becoming increasingly difficult to find a vendor capable of delivering a reliable way of keeping us up and running at all times, across all our locations.” continued Sloot.
“We struggled to find local vendors who could support us. Often, we weren’t able to pinpoint the issue or access the faulty equipment. We needed a way of managing equipment remotely, identifying and resolving the problem, while keeping the network online.
“We started to look for a vendor who could work with cellular technologies such as 4G and who could deliver out-of-band management. Opengear fitted the bill perfectly. Opengear’s tools can support 4G cellular out-of-band connections in all our global locations, which has enabled us to save time and money. We were looking for a partner that could deliver triple-A equipment on time all over the globe and knew we could rely on Opengear for this.”
i3D.net decided to work in partnership with Opengear and implement its ACM 7008-2-L Resilience Gateway to provide smart out-of-band management to its entire IT infrastructure, and also to provide a built-in backup LAN and/or backup Internet connectivity option for all its remote sites.
Today, i3D.net uses the Opengear Resilience Gateway as a fall-back, providing out-of-band management, as and when required, to its routers all over the world. It has achieved a raft of benefits as a result. For example, it has helped to significantly streamline the process of bringing new sites online.
In the past, when i3D.net had to carry out a deployment outside the Netherlands, a technical operations engineer flew to the location and installed the new switches but would typically struggle to configure them. Today, i3D.net simply connects the Opengear Resilience Gateways to these network switches and its network operations engineers back in the Netherlands are then able to configure the whole site remotely via the Opengear device. “This is very powerful and a huge benefit for us in terms of the time and costs saved not only in provisioning but also ongoing maintenance” commented Sloot.
“We were looking for a partner that could deliver triple-A equipment on time all over the globe and knew we could rely on Opengear for this.”
Using Opengear Resilience Gateways also allows i3D.net to stay one step ahead of any pending network failures. According to Sloot: “We don’t like downtime and neither do our customers. Today, if there is an issue, we can directly connect to our routers anywhere in the world without having to wait for someone at the locations to access and fix it. The agility of the Opengear solutions means we can quickly help our customers and maintain our high standards as a performance hosting company.”
Today, i3D.net continues to expand and open global locations dynamically. It is also in the process of switching to a new Juniper Networks router platform. As Sloot explains: “This will give us the opportunity to launch our Global Low-latency Anti-DDoS solution (GLAD). Every region where we are changing the router and do not have an Opengear Resilience Gateway will now receive one.
“Finally, we have an enterprise customer for which we are remotely managing servers, running routers and firewalls in their own environment. We are planning to place the Opengear solution there. It marks the first time we have carried out a deployment of Opengear’s Resilience Gateway outside our own data centres.”
The rapidly expanding use of Opengear at i3D.net bears witness to the broad benefits it brings the hosting provider and also highlights a partnership that goes from strength to strength today and will look to evolve further in the future. i3D.net is on a fast growth path and Opengear continues to support its dynamic expansion worldwide.
i3D.net is a leading provider of high-performance hosting and global infrastructure services. The company notably excels in game hosting and infrastructure, serving 100 million gamers daily for game publishers and independent developers on consoles, PC and mobile. The i3D.net network is one of the world’s top-10 most connected hosting provider networks. i3D.net is now a Ubisoft company, having been bought by the French publisher in 2018.
Today, we’re talking with Roy Chua, founder and principal at AvidThink, an independent research firm focused on infrastructure technologies including IoT, SD-WAN, SDN, and of course the edge.
Roy, first of all, thanks for joining the podcast and sharing your views from The Edge.
You are very welcome, and thank you for having me Steve.
I know you started out in various engineering and product roles within networking. Could you just describe briefly how you went from being someone who was in front of the rack to now looking out across the industry trends?
Yeah, I definitely can. Very long ago, once upon a time, I started my networking career at Cisco, and I was actually inside the rack because I was writing firmware. I was writing firmware to bring up a new ATM interface (ATM in networking stood for Asynchronous Transfer Mode in those ancient times). I thought it was going to be one of the hottest technologies, but it didn’t quite pan out that way. I was writing firmware on Cisco’s ATM Interface Processor, the AIP, on the Cisco 7000 router for a new interface.
Anyway, that’s how I got my start in networking. And obviously I started the same way many network engineers did. Doing a lot of testing and lab work and stuff with different protocols. And IP wasn’t the only protocol back then. We still had to learn DECnet and AppleTalk and IPX/SPX, and other protocols like that. So it was truly internet working.
But anyway, what happened after that is I stayed in networking and the network storage space for some time, moving from engineering into the field and then into product management. I also did a stint at a dotcom, like many others, and along the way worked for different companies and helped found a few. One was in network testing, and that evolved into a product that was eventually acquired by Spirent Communications, Layer 4-7 testing, so I got to know a lot of network equipment in that job. The other was in network identity management and authentication. That company I founded in 2004, and unfortunately in 2008, after we had raised $26 million, we couldn’t get our Series C to close because Lehman Brothers had collapsed, the market had collapsed, and it was unfortunate. But I stayed around after the assets were sold off to attempt to buy it back, and I failed to do that. I was trying to get back my company. In that process, I took on consulting roles – VP of marketing, VP of products positions – and eventually I teamed up with a friend and we started SDxCentral back in 2012. It was originally called SDN Central, for software defined networking. It was very, very early. This was back in February 2012 when we launched the site.
We had two main lines of business: one in media news, which is very well known, and the other in research and analysis. We started publishing reports at the end of 2013 on SDN and network virtualization, so it was quite early in the space. Then many years later, and many, many reports later, at the end of 2018, we spun the research group out of SDxCentral into an independent company, which is AvidThink, while the media news side stayed with SDxCentral.
And so, here I am, running a small boutique research firm that covers the same infrastructure technologies that I started my career with.
Perfect. Thanks for running through the progression. I think everybody has a 2008 story that they may or may not want to tell. Nice to hear that yours got you moving in the right direction.
So you mentioned SDxCentral. And you were certainly ahead of the curve looking at SDN and SD-WAN. Just give me your thoughts on the state of the SD-WAN industry now, particularly with everything that’s happened with the pandemic.
Yes, that SD-WAN term. When we started covering SD-WAN, there was still a debate as to what SD-WAN meant, and there was actually a competing definition: essentially the SDN-controlled wide area network. The use of SDN technologies for interconnects, bandwidth calendaring and all the cool features you could get by applying SDN to the WAN was what SD-WAN could have meant. But instead it’s come to mean what it is today, which is the series of technologies that control the enterprise edge connections: the WAN, the enterprise WAN, the lot.
So that was pretty early on, and it’s evolved since. There are still somewhere between 60 and 100 players, with probably 10 to 15 that are interesting, and I would say what we’re seeing right now with SD-WAN comes down to a couple of elements.
SD-WAN has evolved itself as well. One of the things that is pretty popular right now, because Gartner coined the term, is SASE, right? What Gartner did with SASE (the term is just as well known now) was to combine essentially SD-WAN with a whole bunch of other edge technologies within the enterprise, a lot of them security capabilities. By doing that, it created an umbrella term that includes SD-WAN but takes a much more cloud-first approach.
So what we’re seeing is a couple of things with SD-WAN. First is, there are some branch focused SD-WAN companies that are struggling slightly. When COVID hit with the pandemic, everyone ran away from the offices, right? And so you had a bunch of branches and headquarters that are very lightly staffed today. And so the need and the urgency to turn on SD-WAN in some of those places slowed down. But I think that will pick up.
And instead, what happened is that the SD-WAN players that were cloud-first and more focused on SASE are doing very well. I have talked to some of those and they’ve seen amazing growth in their sign-ons during the last couple of months. I mean, it’s quite compelling, right? If you have a software-centric solution that you can enable for your employees working from anywhere, working from home, that gives them secure access to the internet in compliance with corporate policies, that’s definitely quite useful. Plus, some of those offerings provide you with a quick on-ramp to the internet and a private internet backbone that gives you fast access to SaaS services, right? Your Salesforce, your Dropbox, your Box.
And so it was quite a compelling capability, and those vendors in fact have done very well. My expectation is that as a vaccine hopefully becomes available or viable in the next six to 18 months, we will see a shift to a hybrid model: a mix of some work from home, some work from anywhere, and some heading back to the office. I think many of us are realizing two things: how some things remarkably can be done remotely, and how many other things still need to be done in person to be effective. So I think we’ll see that hybrid model.
So in that situation for SD-WAN vendors, I think the cloud-centric approach is here to stay. I don’t see a change in that, and SD-WAN will evolve along those lines. Along with that, the trends we see beyond cloud-first are, I would say, security-centric: the ability of SD-WAN to reach into the branch, so SD-Branch; adding mobile support, IoT support, multi-cloud support. I think those will not change. Those are the key evolutionary areas for SD-WAN and SASE, and that will continue.
You know, it’s funny, we’ve been talking about hybrid for years. Never thought it would be applied to two days in the office (laughs) two days working from home, but that seems to be the way things are going and-
Yes, it’s a redefinition of hybrid, thanks to COVID-19, exactly.
Spoken like a true analyst.
The other thing it tells me is I’m glad I didn’t go into commercial real estate as a career. I think IT is the place to be.
For a while, people were looking at networking as pretty much done and not exciting. And then SDN came along and now it’s pretty exciting and all that. And I think now, with the pandemic, there’s a huge realization that telecommunications is critical. It’s really important. So it’s good news for network engineers. It’s a good thing for us.
Absolutely. Yep. I think the importance of the network is definitely coming to the fore. So in your role at AvidThink, obviously you pay attention to what’s happening now, but I know a large part of what you do is look out into the future. So from an infrastructure point of view, what are the changes you see coming over the next few years?
I’ll start with the easy part, which is the physical portion of it, and then we’ll go from there up the stack. I think as we get more sophisticated with application and data distribution and orchestration, and as network bandwidth to the edge increases, right, across wireline and wireless, with all the efforts around fiber densification, you’ve got 5G, you’ve got all these other elements of connectivity, maybe low earth orbit satellites coming. My expectation is that compute will be more distributed, right? Compute and storage will be more distributed across multiple locations, whether you want to call it the edge or the distributed cloud. No matter what you call it, connectivity is getting pretty much everywhere, and the speed of that connectivity is increasing, so the capabilities of what you can do with that sort of matrix, I think, will increase.
And in many ways I think it’s the same thing that has happened on enterprise campuses, which for the most part have moved to wireless. You very seldom plug into an RJ45 (laughs) in a desktop-workstation-type approach. You take your laptop and you move around. So enterprise campuses have moved to wireless, and I think we’ll see something similar in many cases on the last mile side of things as well.
And similarly, I think for a lot of industrial applications we’ll also see a transition from wired to wireless. So what was Industrial Ethernet could potentially be served by wireless technologies, particularly mobile technologies themselves, in private LTE, private 5G, and especially in the US with CBRS and the new spectrum models. I think we’ll see that coming at the connectivity, the physical layer. Slightly above that, as a result of that infrastructure topology change, we’ll see this need for distributed compute and storage infrastructure to go everywhere, and you have to connect all of it.
So from a networking perspective, you have to have some kind of secure fabric throughout that provides you with the visibility, obviously the connectivity, but also the security across all these things, right? The ability to manage QoS, the ability to troubleshoot. So I see that secure enterprise access layer across all these locations as being one of the key things we’ll be focusing on. And at AvidThink we made up a term, just to have fun with it: we called it FUN, the Fabric for Universal Networking. We could have called it the universal networking fabric, but UNF didn’t quite have the same impact! I think that layer becomes very important, and I think SASE and SD-WAN and ZTNA (zero trust network access) and all these things fall into that category. So I think that part is critical.
And then beyond that, you have obviously compute and storage. And so the infrastructure technologies underlying that will be a lot of orchestration, right? So how do you orchestrate workloads? Where do you place your workloads? How do you break your workloads up, in terms of application and distribute them the right way? What kind of services should you have in those locations themselves? You know, obviously compute and storage, but beyond that upper layer stuff, in terms of telemetry and logging, in terms of database technologies, in terms of AI, ML, I see that evolution in terms of distributed compute.
So if you take Amazon and Azure or Google, those capabilities, I think will end up getting distributed everywhere. And there’ll be a lot more focus on what runs where, so placing workloads, placing portion of workloads, orchestrating them and enabling the overall system to be resilient. I think that’s what I do see coming.
So the main thing I got out of that is that you’re trying to bring the fun back into networking (laughs), is that right? So that was Foundation Underlying the Network?
I think that’s a foundation. I think that the fun is what sits on top of the network. It’s where, if you go up one layer of abstraction, you get to have fun.
I like it. So, right at the end you mentioned the word resilience, and this is the network resilience podcast. Maybe you can just expand on that a little bit in terms of how you see that definition of network resilience developing.
I’ll probably take a layman’s definition of network resilience, and I’ll define it simply as the ability to ensure that your network infrastructure stays up no matter what. It just stays up. That’s the ability to function in the face of adversity, right? Whether your network is facing physical challenges, like natural disasters taking down your network or fires burning up in-the-ground fiber, things like that. Or it could be hardware failure of your devices; hardware does fail. Or perhaps it’s not hardware related, but just an overlooked condition: too much traffic. Can the network stay up in light of too much traffic? Or software failures; the software that we put on those network devices does fail.
And sometimes, no matter how hard you try, you get into a weird situation with your software and it’s got to be restarted. Or even configuration failures, right? The human failures, whether it’s an immediate failure upon a configuration change, an obvious mistake or an inadvertent one. You put some configuration in place that doesn’t trigger until some weird set of conditions happens, and then suddenly traffic doesn’t flow.
So, I think the goal of the network is to be able to be as reliable as possible, but sometimes failure happens. And so the ability to recover quickly to a good known state as fast as possible, I think that’s important and I view it as resilience.
You know that failure will happen and you obviously want your network to be reliable. So you do want network reliability, but at the end of the day, you know that bad things happen. And when it happens, you need to recover quickly and that is network resilience.
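The idea of recovering quickly to a known good state can be boiled down to a toy sketch: compare the running state against a trusted snapshot and roll back on mismatch. This is a hypothetical illustration, not any vendor's mechanism; the device name and configs are invented.

```python
import hashlib

# Hypothetical sketch of "recover to a known good state": the device name
# and configs below are invented for illustration.

known_good = {
    "edge-router-1": "hostname edge-router-1\ninterface eth0\n ip address 10.0.0.1/24\n",
}

def fingerprint(config: str) -> str:
    """Stable fingerprint so drift from the trusted snapshot is detectable."""
    return hashlib.sha256(config.encode()).hexdigest()

def recover(device: str, running_config: str) -> str:
    """Return the config the device should run: the trusted snapshot if the
    running config has drifted, otherwise the running config unchanged."""
    good = known_good[device]
    if fingerprint(running_config) != fingerprint(good):
        return good            # a bad thing happened: roll back fast
    return running_config      # healthy: nothing to do

# A bad change was pushed; recovery returns the last trusted snapshot:
drifted = "hostname edge-router-1\ninterface eth0\n shutdown\n"
restored = recover("edge-router-1", drifted)
```

The point is the shape of the workflow, not the fingerprinting: reliability is keeping the config good, resilience is detecting when it is not and getting back to good quickly.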
That makes sense. As we’re obviously seeing a lot more being pushed out to the edge, are you seeing that drive a change in how network resilience is considered by enterprises?
Yes. I think as people are pushing more and more things out to the edge, there is already a concept of compute or server resilience, right? The ability to keep systems up. The way we architect software today is actually to plan for failure, so the applications stay up no matter what. Look at some of the architectural changes on the application side: the use of microservices, the use of containers, trying to make things less stateful, to contain state in a constrained environment that you can control, that you can replicate, that you can make sure stays up.
So I’m seeing those changes, obviously on the compute side. And I think on the network side, we’ll see the same. We’ll see a realization that as you push things out to the edge that you have to account for the fact that things will fail and you need to be able to recover. Obviously you need visibility in those failures first and foremost. But after that happens, you need a fast way to recover as well. So I do see enterprises starting to pay attention to that more and more. And with COVID-19, there’s a realization that sometimes you don’t have hands locally and remote hands have to do it. And if you can automate that, if you can detect that, if you keep it up, it’s even better.
So, there were a couple of phrases you used in there that really screamed NetOps. You mentioned containers and remote hands and automation. How do you see NetOps changing the way networks are managed?
For me, I used to get down on the console, on a CLI. On a Cisco box, the first thing you did was go in and enter the enable password, and then write terminal, “wr space t”, right? The shortened command, so the running config scrolls across, and then you go in and change it. That was the way we managed networks, and it’s a one-to-one situation. That doesn’t work at the scale we’re talking about, across the locations we’re talking about.
I think what’s happening on the network operations side is automation: the ability to scale operations and gain efficiencies, right? The first step is to automate the things you are doing by hand, but beyond that, you want to manage your network infrastructure as code. It’s the same thing we saw on the cloud side, with Heat templates and the like bringing up an entire application across compute and storage. We need the same thing on the network side: manage the network infrastructure as code and templatize it.
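The "templatize it" idea can be sketched in a few lines: one reviewable template rendered from per-site data, instead of hand-typed console sessions. This is a minimal stdlib illustration (real deployments typically use richer engines such as Jinja2); the hostnames and addresses are invented.

```python
from string import Template

# Hypothetical sketch of "network infrastructure as code": one template,
# many sites, rendered from data rather than typed by hand.

INTERFACE_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface $iface\n"
    " ip address $address\n"
    " no shutdown\n"
)

sites = [
    {"hostname": "ams-edge-1", "iface": "eth0", "address": "10.1.0.1/24"},
    {"hostname": "nyc-edge-1", "iface": "eth0", "address": "10.2.0.1/24"},
]

# Rendering the same template from data gives repeatable, diffable configs:
configs = {s["hostname"]: INTERFACE_TEMPLATE.substitute(s) for s in sites}
```

Because the per-site data lives in version control alongside the template, every config change becomes a reviewable diff rather than a one-off console session.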
And then beyond that, understanding that networks are complex and there are a lot of variables, especially going forward, it may have to be simplified even further to take an intent-based approach and say: look, fundamentally I want to connect these workloads in the following ways and be compliant, and here are your constraints; go do it for me. There should be sufficient intelligence in the system to be able to do that. I classify that whole group of capabilities as NetOps. And I think eventually part of NetOps will be the use of telemetry feedback loops, using them to start building AI-assist capabilities within that realm as well.
So the way you talk about that suggests that the skillset for a typical network engineer is going to need to change to adapt to that. Any thoughts on, or are you seeing some of that happening already?
I would say we’re slightly behind on the networking side, and I can say that because I started my life as a network engineer. Very often we look to the compute side to see what’s happening, then we adapt best practices. Sometimes they’re relevant, sometimes they’re not; in many cases, they are. So if you look at what we did in networking, we looked at compute. Compute went virtual, storage went virtual, and we’re like, oh yeah, maybe the network can go virtual as well. So we copied that. We looked at DevOps and we said, hey, maybe NetOps, modeled on DevOps, may be a thing. And we are on the journey to make that a reality as well.
And on NetOps and network resilience, I think we can take a look at something that Google popularized called Site Reliability Engineering. This was back in the early 2000s, even before DevOps, and what Google was trying to do was make its large-scale infrastructure more reliable. It started out as sort of an operations role, and then they realized that if you put engineers in it, they could automate it for themselves. The goal, obviously, was to code yourself out of the equation, essentially, because good engineers tend to want to be efficient.
I won’t use the word lazy; some people use that. They want to be efficient with their time (laughs). So the goal is to automate as much as you can, using code. This concept of site reliability engineering, SRE, became a full-fledged domain in and of itself within IT. It’s an important role at all the major hyperscalers, and companies everywhere now do it. It’s basically the ability to use a platform, to use automation, to use an intent-based, config-based or template-based framework to drive reliability.
And obviously we could use the same thing on the resiliency side as well. SRE teams are responsible in general for availability, reducing latency, overall performance efficiency, change management, monitoring, all those things with regard to a site. I think there could be the same role for network resilience engineers: the evolution of NetOps, if you will, teams that perhaps started in network operations but use coding, automation and software to improve first the reliability and second the resiliency of the network infrastructure.
I see the same thing happening: going beyond automation to network infrastructure as code, template-based deployments, then intent-based network configuration, and eventually AI-assisted operations. So that’s the evolution I see.
That’s an interesting trend. So you’re seeing it move from a DevOps team to then adding a Site Reliability team. And then you think maybe beyond that, it’ll be network resilience, perhaps to focus more on the broader distributed network and then still a layer of network engineers to do the day-to-day work?
I would say it probably wouldn’t be the simple day-to-day, but obviously you’ll still have ongoing troubleshooting. I think the ratio of network engineers to network elements will change dramatically, by orders of magnitude, and so it’d be assisted. So yes, you probably will still need the network engineers to come in and help troubleshoot. But I think the vast majority of the capabilities and the work eventually could be automated. The human eye does a lot of things that are still hard to train AIs to do. I would view it as sort of human assisted: the hard problems are human assisted, but resiliency, some of those components, I think could be automated, to say, look, if this fails and that fails, then take this action or take that action.
Or the system could learn. We’re seeing that with machine learning systems. As the network goes down, they see the network engineer, the NetOps engineer, take the following steps, and over time the system learns to figure it out: oh, if that goes down, then I do this; or if that goes down, then I do that. So I think there’s going to be that ongoing evolution over the next couple of years.
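The rule-based remediation described in this exchange, “if this fails and that fails, then take this action,” can be sketched in a few lines of Python. The health checks, device names and actions below are purely illustrative stand-ins, not any vendor’s API:

```python
# Illustrative sketch of rule-based remediation. In a real system the
# health checks would probe actual devices; here they return canned values.

def wan_link_up() -> bool:
    return False  # pretend the primary WAN link has failed

def lan_switch_up() -> bool:
    return True   # pretend the branch LAN switch is healthy

# Each rule pairs a failure condition with a remediation action.
# Rules are checked in order, most specific first.
RULES = [
    (lambda: not wan_link_up() and not lan_switch_up(),
     "power-cycle the branch router via out-of-band console"),
    (lambda: not wan_link_up(),
     "fail over to the cellular backup link"),
]

def remediate() -> str:
    """Return the action for the first matching failure rule."""
    for condition, action in RULES:
        if condition():
            return action
    return "no action needed"

print(remediate())  # → "fail over to the cellular backup link"
```

A learning system, as described above, would effectively infer rules like these from watching which steps engineers take after each failure, rather than having them hand-written.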
And that lines up with the idea that networks are becoming more complex. You obviously need more complex tools and organizations to manage them. So that seems to be heading in the right direction.
Just changing tack here a little bit. Part of the goal of the Network Resilience Podcast is to highlight the value of the network engineering community. So just curious, is there anyone in your past – a mentor or an influencer – that you’d like to give a hat tip to and raise up a little?
On the network engineering side there isn’t one person in particular. I’ve worked with a lot of great people over the years. But I would say that on the analyst side, interestingly enough, there is someone I do respect, and we try to run on principles similar to his. There’s a company called Infonetics Research that’s now part of IHS Markit, a very famous, very large company today.
The founder of Infonetics was a guy called Michael Howard, a very well-known analyst in the telecom and networking industry. He was great because he always tried to dig through all the fluff to get to the heart of things. And he was able to do so in a very nice way: his approach was always very pleasant, very polite, very thoughtful. I think he was one of the greatest analysts we’ve had. I would see him at all the different trade shows and talk to him, and he was always very kind. That’s the same way we run our practice here. He was well-known in networking, and I think we aspire to carry on that same culture, that same tradition, that he had.
Very good. Thanks for sharing that. So I know that AvidThink is producing a lot of good material. It’s always top of my reading list. You mentioned in our call prior to this, that you are working on a couple of new resource sites for SD-WAN and the edge. Can you just give us a little background on that?
Yes, we are. AvidThink is primarily research, and we’ve partnered with Jim Carroll, a well-known publisher at Converge! Network Digest. He’s been running a newsletter for the last 20 years that’s quite well-known in the networking industry as well. Together we’ve put up a resource site called NextGenInfra.io. That stands for next-generation infrastructure, but obviously we shortened it. On that site we try to cover key technology areas across infrastructure technologies, and we have interviews with well-known luminaries and thought leaders in the space.
And we’re launching the 2020 edition of our SD-WAN and SASE site in Q4 this year. So if you’d like to get a flavor of the type of research and content that we have, go to NextGenInfra.io, check out some of the cool resource sites already up there and download the reports. We’ll be sure to let you know when the SD-WAN and SASE resource site is up; we’re definitely looking forward to it. There’s been a lot of change in the last year, and it’s a pretty exciting space for sure.
Perfect. And we’ll put those links in the show notes for anyone that wants to go there and take a look. Any other resources, Roy, outside of AvidThink that you find useful and that listeners to the podcast might find useful as well?
I go to the mainstays for my news. So the usual suspects, right? Fierce Wireless, Fierce Telecom, Light Reading, sometimes RCR Wireless. On the cloud side I go to The New Stack, and of course I visit SDxCentral. And then for podcasts, I actually like Greg Ferro and Ethan Banks. We’ve had lots of chats over the years; I see them less now because there are no conventions and conferences. But Greg and Ethan always do a good job, and the rest of the folks over there at Packet Pushers do a good job too.
And finally, we’ve collaborated with Stephen Foskett in the past. And he runs a good operation with Tech Field Day. There’s always good content there that I like to check out. So those are the places I tend to go to on a regular basis.
Perfect. Thank you. I’m sure that’ll help some people. I agree, I’m a big fan of Tech Field Day. I’ve been involved in some of those-
Yes. Stephen and team do a good job of covering very pertinent content.
So before we wrap up here, if people want to know more about AvidThink, where would they go for that? I’m assuming it’s AvidThink.com. Is that right?
It is AvidThink.com. It’s A-V-I-D-T-H-I-N-K, one word, dot com. Or you can find me on Twitter @Wireroy: W-I-R-E, wire, like the wire, then Roy, just my first name. So check us out, download a report, and if you have any comments, we are always open to hearing from our readers. You can always reach us at firstname.lastname@example.org, and we’ll get back to you within a day usually.
Well, perfect. Roy, I’d like to thank you again for taking part in Living on The Edge, the Network Resilience Podcast. All of the links that Roy mentioned will be in the show notes, and I would encourage you to reach out to Roy and have a conversation with him. He’s always a good guy to talk to. So thanks very much, Roy, and we’ll talk to you soon.
You’re very welcome, Steve. I enjoyed the conversation, as always.
You’ve been listening to Living on The Edge, the Network Resilience Podcast from Opengear. To add resilience to your network in data centers and out to the edge, visit opengear.com.
Network RQ is a way of looking at all the elements of network resilience, and explaining their impact to others in the organization. It’s a set of resources and language to help you succeed in building a reliable infrastructure.
To learn more visit: https://opengear.com/resources/network-rq/
Network Resilience: Keeping the network running at the core and out to the edge of the infrastructure, with no disruption to the customer experience in the event of human error, external issues or hardware failures.
Network outages have a direct impact on an organization’s revenue, customer retention and brand. Network resilience plays an important role in ensuring business continuity. When the network is down, money can be lost, productivity can be stunted and data security can be at risk.
True network resilience is achieved when an organization is able to maintain services, remove points of failure and understands how to bring the network back up during a disruption.
One of the largest data breaches ever occurred at Capital One, one of the largest financial corporations in the country. Over the span of two days, more than 100 million Capital One credit card applications were accessed.
The hack occurred in March; however, it took months before Capital One became aware of the event. The suspect, Paige Thompson, is a 33-year-old former software engineer from Amazon Web Services. By exploiting a misconfigured web application firewall, she was able to access the Amazon server hosting Capital One’s customer information. It wasn’t until July 17 that a tipster informed Capital One of the existence of the stolen information.1 Thompson gained access to credit scores, balances and addresses, along with:
Finance is one of the top targeted industries when it comes to cyber-attacks. These institutions have a steady stream of sensitive customer information entering their databases, making them an ideal target for hackers. Letitia James, Attorney General of New York, was asked about the Capital One breach and stated, “It has become far too commonplace that financial institutions are susceptible to hacks, which raises questions like, why are these breaches occurring? Are companies doing enough to prevent future data breaches?”
Just last year, more than 2 billion people were affected by data breaches, and over the past decade some of the largest attacks have hit well-known financial organizations such as JP Morgan Chase, Equifax and Citi Financial.2
Not only do these cyber attacks pose a danger to customers, they also threaten the stability of the network. Downtime causes:
Banks depend on the network for a multitude of reasons such as making transactions, transferring funds or processing applications. Not being able to complete such actions can lead to loss of customers and revenue.
Highly regulated industries such as finance have a large number of regulations to comply with in regard to global privacy, data and security. Any type of network downtime can cause non-compliance and result in million-dollar fines.
Many times, these disruptions occur in a remote branch where, without the proper solutions, network engineers don’t have the ability to quickly diagnose and remediate the issue. During a cyber attack, visibility is a necessity.
Opengear devices extend your reach, providing full visibility to remote sites. Smart Out-of-Band and Failover to Cellular™ enable engineers to securely access the network and continue to run the applications needed to ensure business continuity while the issue is diagnosed. With capabilities such as disabling access to the impacted network equipment, disconnecting the WAN connection to isolate an affected branch and even resetting devices to factory default, Opengear devices ensure that normal business operations aren’t affected by a cyber attack.
As these attacks continue to evolve, the likelihood that your bank may be affected increases.
To learn more about ensuring network resilience for your organization, read our white paper.
Learn how one of the top performing banks in the United States utilized Opengear smart solutions to decrease their downtime, reduce the need for site visits, and ensure compliance requirements are met.
It’s now officially 2019. The year of network resilience? That’s where we come in. To ensure that your data center is up and running for the new year, we have the top 5 things that you need.
Anshul Sadana of Arista Networks said it perfectly: the number of engineers that run the network doesn’t need to grow with the number of network devices they manage, because of automation.1 Inside data centers, branch locations and even during SD-WAN deployments, there can be thousands of moving parts. In order for solutions to run seamlessly, automation is needed. It ensures that these parts continue to move with as little manual intervention as possible, meaning less error.
As more data centers and remote sites pop up, having eyes monitoring every device becomes impossible. Network outages are a very real part of branch offices and can be caused by many variables like manual error, temperature issues or the old backhoe cutting through a fiber. All of these things, which may seem small, can have a large impact: the average cost of downtime is $5,600 per minute, according to Gartner. Smart Out-of-Band (Smart OOB™) allows organizations to have eyes at every site, setting automated alerts through email or SMS to notify of an issue, and linking to environmental sensors for temperature, humidity and door openings. This provides the ability to manage the infrastructure remotely and enables troubleshooting and remediation at the network edge, ensuring that infrastructure can be accessed even during a system or network outage.
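The threshold-alerting pattern described above can be sketched in a few lines of Python. The sensor names, threshold values and the idea of printing instead of sending email/SMS are illustrative assumptions, not Smart OOB’s actual configuration:

```python
# Hypothetical sketch of environmental threshold alerting.
# Threshold values are made up for illustration.
THRESHOLDS = {
    "temperature_c": 35.0,   # alert above 35 °C
    "humidity_pct": 80.0,    # alert above 80% relative humidity
}

def check_sensors(readings: dict) -> list:
    """Return an alert message for every reading over its threshold."""
    alerts = []
    for sensor, limit in THRESHOLDS.items():
        value = readings.get(sensor)
        if value is not None and value > limit:
            alerts.append(f"{sensor} at {value} exceeds limit {limit}")
    return alerts

readings = {"temperature_c": 41.2, "humidity_pct": 55.0}
for alert in check_sensors(readings):
    print(alert)  # in practice this would go out via email or SMS
```

The point of the pattern is that each remote site evaluates its own sensor readings locally, so an alert can reach an engineer even when nobody is watching that site.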
It’s a guarantee that in 2019 cloud providers will deploy more, and larger-scale, data centers. How will these be built and deployed quickly enough to meet market demand? Zero Touch Provisioning (ZTP). In this process, managed devices in their unconfigured state request and are delivered initial setup resources through the local management network. What does this mean for network engineers? It means the provisioning process becomes automated, so there is less need for manual intervention and skilled technicians.
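The ZTP flow described above can be sketched as follows. This is a simplified device-side model, assuming the provisioning URL arrives via a DHCP option (commonly option 67, the bootfile name); the URL, hostname and config contents are hypothetical:

```python
# Sketch of the ZTP flow: a device boots unconfigured, learns a
# provisioning URL from the local management network, then fetches
# and applies its initial configuration. All values are illustrative.

def dhcp_bootfile_option() -> str:
    # In a real deployment this would be parsed from the DHCP offer
    # (commonly option 67); hard-coded here for illustration.
    return "http://ztp.example.net/configs/switch-01.cfg"

def fetch_config(url: str) -> str:
    # Stand-in for an HTTP or TFTP download of the startup config.
    return f"! startup-config fetched from {url}\nhostname switch-01\n"

def zero_touch_provision() -> str:
    url = dhcp_bootfile_option()
    config = fetch_config(url)
    # A real device would now validate and apply the config, then reload.
    return config

print(zero_touch_provision())
```

Because every step is driven by data served from the management network, the same process scales from one switch to a whole data center rollout without a technician touching each box.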
The increase in IoT technologies is creating a need for information to be processed closer to the edge.2 The edge lightens the load on central servers and keeps response times fast. Network engineers understand that this is the new computing architecture that will distribute resources across millions or even billions of devices and locations. Edge computing distributes the support necessary to meet the increasing need for services and applications because the data is processed closer to the end user.
SD-WAN deployments are increasing in popularity. Eliminating the need for expensive MPLS circuits while maintaining secure network connections, these deployments allow network engineers to utilize and aggregate high-bandwidth internet connections. When paired with Smart Out-of-Band, organizations are able to ensure the network can be accessed remotely during an outage.
Ensure that your data center is secure and resilient in 2019. Opengear is constantly evolving our solutions to meet your needs. Read our white paper to learn more.