Blog Archives

Always-On Connectivity, Even In The Air

Download our white paper today!

Transportation organizations are key components of the supply chain, facilitating the global economy by ensuring goods and people get to their destinations on time. The digital age has brought increased demand for services beyond a person's or business's immediate vicinity, driving companies within the transportation industry to continuously deploy new technologies to keep up.

The airline industry has specifically begun investing in digital initiatives to open new revenue streams, increase productivity and improve customer satisfaction. According to a recent study, the airline digitalization market is expected to reach approximately $35 billion by 2030.[1] Many airlines have already committed to migrating to the cloud within the next decade. However, most are grappling with infrastructure that fails to meet modern requirements, let alone the requirements of transformational efforts. These complex ecosystems require a resilient network to support all of their applications.

The air cargo industry has lacked effective digital integration. This has prevented stakeholders from collaborating efficiently and created a reliance on manual processes, leaving customers with little visibility.

Benefits Of Digitalization

As passenger revenue decreased due to travel restrictions, cargo volumes greatly increased. Air cargo is now making considerable contributions to airlines' bottom lines, with some trade routes between individual regions seeing freight rates increase by 100%. As demand continues to grow, the industry is optimizing operations by deploying new technologies.

For example, most cargo airlines have relied on long-term contracts and bulk allocation models for sales. This is a high-friction process, but spot bookings are now on the rise, making up almost 90% of individual reservations for airlines. This has made it crucial that airlines can price dynamically using digital platforms. Enabling e-sales alone is expected to deliver estimated returns of 12% in the first year[2].

To achieve an air cargo supply chain that's fully digitally connected, airlines and international trade organizations are collaborating to deploy new technologies. These innovations are helping to address challenges by:

Air freight is a preferred method of transportation because of its security during cargo moves and its fast transit times, and the industry is already deploying new initiatives for supply chain visibility. Some of these include:

All these initiatives require a connected network. Secure, always-on access to infrastructure is a necessity to ensure effective communications and operations across all of these platforms.

Smart Solutions To Network Resilience

Dealing with distributed networks, harsh environmental conditions and a continual flow of data, airlines need always-on access to simplify network management. With secure, remote management, these organizations can ensure business continuity, even during a disruption.

Airlines demand complete confidence in the networks that power their projects, particularly regarding security, compliance and performance. Engineers rely on a multitude of applications, including Transportation Management Systems (TMS), and are constantly adding new ones to the network. These systems create a standardized process, allowing operators to manage all transportation activities across the entire organization and supply chain. However, without an independent management plane, any type of network outage will deny access to a TMS and completely disrupt overall operations.

A Smart Out-of-Band network allows airlines to separate and containerize functions outside of the management plane. Operating free from the primary in-band network, and combined with Failover to Cellular™, it gives these businesses always-on availability to maintain visibility of their entire network, even during an outage. Engineers can ensure business continuity at all times with the capability to access and manage remote infrastructure, remotely identifying and remediating network issues without interrupting operations.




A Key Consideration In Digital Transformation Strategies

Download White Paper

Rapidly evolving customer expectations, growing market pressures and overall organizational goals have fueled digital transformation in every industry. By 2025, global spending on transformational efforts is projected to reach $2.8 trillion.[1] Digital transformation is the process of leveraging new technologies, across all areas of an organization, to optimize operations and drive growth by providing new value to customers.

Digital transformation evolves the way an organization conducts business, enabling it to transition from outdated processes and legacy equipment with the use of new technology. Best thought of as a continual adaptation to a constantly changing environment, it's a journey that strives towards optimization across processes, divisions and the business ecosystem as a whole. There are many components and intermediate goals along the way. Rarely is there an end to digital transformation; it's a way to meet strategic objectives and embrace a more agile way of operating. It's also critical in turning marginal growth into exponential growth.

It allows businesses to improve in three key areas:

Technologies of Digital Transformation

There's a variety of technologies powering the digital transformation journey. Advancements in networking, hardware and software have all given organizations the ability to implement a digital strategy.

Cloud Computing: Organizations are moving from legacy systems to cloud-based applications, which provide the flexibility and scalability to meet business demands and make data accessible from any device.

Artificial Intelligence: AI can support many business needs, including automating processes, engaging with customers and enabling smart decision-making based on the large amounts of data it continually collects.

Internet Of Things (IoT): These sensors and software are being used in almost every industry. By continuously connecting, exchanging and analyzing data, businesses are able to leverage their insights. Real-time data is invaluable, helping to improve productivity, increase customer engagement and optimize efficiency.

Strategizing For Digital Transformation

A digital transformation has many considerations, stakeholders and components. Before executing a plan, a strategy must be built.

Assess Current Business Situation
An organization must assess its current organizational structure, everything from its culture to its processes. Conducting a business audit will help key stakeholders understand one of the most critical aspects of digital transformation efforts: what the company's values are. These values shape what it wants to provide its customers and how it wants to do that. From there, it's important to outline current operations, roles and processes. Another significant takeaway is to establish an organization's weaknesses and anticipate future challenges, like network access.

Determine Goals
To successfully execute a digital transformation strategy, an organization must make sure that the plan they’ve created is aligned with their business goals. It’s the how and why.

Conduct Pilot Testing
For most companies, digital transformation completely overhauls their way of doing business. Processes that may have seemed to work well before may be changed, new teams may be interacting with each other and the overall customer experience will be altered. Run pilot tests and collect feedback throughout the process. This will help gauge customer sentiment before a full rollout, identify bottlenecks and help determine if an organization is ready for full-scale implementation.

Create A Roadmap
Executing each previous step will allow a digital transformation team to develop a strategy for overcoming challenges and meeting goals. Many times, a strategy will begin with reframing corporate infrastructure. The team must also understand employees' current skillsets and evaluate whether any training is needed to meet the business's goals. This will help determine the key roles and players at each stage of the journey.

Preparing For Change
Although digital transformation revolves around technology, people play a large and integral role in its success. To successfully execute a digital transformation strategy, employees must feel empowered to embrace change and to work differently. Constant communication, and explaining the goal of the process, is extremely significant. It's important to understand their concerns, take feedback and encourage them.

The Future Is Now

Digital transformation builds bridges between organizations, their technology and their customers. These new systems will allow businesses to build a bridge to the future by establishing new networks and ecosystems that will result in new business models to achieve future growth. This new digital world will revolve around data, actionable intelligence and, most importantly, connectivity.

A connected ecosystem is integral throughout digital transformation. Any type of network disruption will render these applications useless, affecting the organization and its customers. During a digital transformation rollout, it's critical to have always-on network access. The continuous addition of new applications places a strain on the network, which increases the likelihood of an outage. How to access critical applications when a disruption occurs should be one of the main considerations during the strategy phase.

When it comes to network outages, it's never a matter of if one will occur, just a matter of when and how long it'll take to recover. Opengear's Network Resilience Platform ensures that organizations have the connectivity needed throughout their digital transformation journey. Because Smart Out-of-Band and Failover to Cellular provide secure access to network devices through an independent management plane, the IT team can always access infrastructure, at every site. They'll have a full, remote view of the network at all times, which gives them the ability to identify and remediate issues quickly.

Opengear solutions also provide the NetOps capabilities needed for many digital initiatives of forward-thinking organizations, enabling secure provisioning of new remote sites, something many businesses will constantly be undertaking in industries like retail, finance and hospitality.

Opengear is here to help facilitate your digital transformation, on Day One, during an outage and every day.










Network Predictions For 2022

Click the image to download.

Another year is in the books and, looking back, 2021 didn't have many dull moments. Bennifer reunited, we learned most of us would never want to become a contestant on Squid Game and TikTok became an authority on pasta. We're sure that 2022 will be another exciting year, one filled with new technological advancements and innovations to address the evolving needs of enterprises.

Here are predictions from the Opengear team on what 2022 has in store for networks.

Working Hybrid

Many organizations have moved to a hybrid working environment, which can create specific network challenges. Companies have had to become more agile on the fly and are reevaluating their infrastructure to support this ongoing trend. Businesses need a network that facilitates collaboration and productivity no matter where an employee is located, and IT teams have to be prepared to adapt easily. Successfully doing so will entail bringing together traditionally disparate networking domains such as access, IoT and physical security. Networks need to be secure, connected and scalable, delivering a consistent experience to every employee, wherever they may be working. A Smart Out-of-Band solution ensures that critical applications are always accessible to employees, because when a disruption does occur, network teams are able to identify and remediate the issue remotely, without impacting operations.

Continued Threats To Uptime

Network outages are on the rise, with downtime disrupting some of the largest organizations in the world in 2021. Some of the biggest threats to uptime for enterprises will be software upgrades gone wrong, cyber attacks and misconfiguration errors, like mistakenly overwriting a configuration file. However, organizations that deploy Opengear solutions have the ability to manage their infrastructure remotely and securely, using an independent management plane. This allows engineering teams to quickly roll back or undo breaking changes, minimizing downtime.


There has been a dramatic shift toward implementing a NetOps approach for growing, virtualized environments. Enterprises need to be able to automate processes, whether securely provisioning new sites or testing network processes to check latency. Companies will need tools that provide automated insight into how their remote sites are functioning at any given time. Two advantages of NetOps are upskilling and business efficiency: employees gain access to more advanced technologies, streamlining operations, and organizations can ensure critical business continuity.


The adoption of IoT is being driven by overall business goals. This type of technology opens up a range of possibilities for organizations to digitize processes and activities. However, as more organizations leverage IoT, a large strain is placed on the network. Because these devices require constant connectivity, any kind of network outage will affect their usage. Failover to Cellular provides always-on network connectivity using 4G LTE when the primary link is unavailable, enabling continued internet access so IoT devices can keep working.


The Edge

We've already seen a massive migration to the edge; it's expected that by 2025, 75% of data will be processed there. The edge is the next generation of infrastructure for the internet and the cloud, and there are many drivers of this move. As devices continue to get smarter, enterprises need the ability to capture data securely and reliably. More critical applications are being placed at the edge, and Smart Out-of-Band allows businesses to manage infrastructure at remote sites with the same resilience, scalability and connectivity found in the data center.

Enterprises need a forward-looking mindset. They must address current organizational challenges and goals while anticipating what will be needed in the future. Continuously evolving demands and markets have made business agility critical. As new technologies become available, enterprises need to be able to confidently deploy them. Opengear provides the network resilience needed for all of your 2022 goals and beyond.

Reduce Risk From Ransomware

Ransomware is malicious software, also known as malware, that can encrypt an organization's critical data, denying access to entire systems, including databases, files and applications. Quite literally holding this information hostage, as the name implies, ransomware forces enterprises to pay a ransom in order to regain access.

Designed to paralyze an organization and quickly spreading across private networks, this growing threat is predicted to cost organizations $20 billion this year.[1] These types of attacks have grown by 64% this year. It only takes one employee opening an email to compromise an entire organization, and once hit, many enterprises aren't able to recover.

Ransomware usually gains access to a computer through a phishing campaign, where a victim opens an email and mistakenly downloads malware through an attachment or a link. An attack vector then establishes itself on the device. If the exploit is successful, it can attach a malicious application to the infected system. The application then searches for valuable files; Word documents, database files and images are some of the most sought after, and once they've been found, it encrypts them. Some of the most dangerous attacks exploit a system's and its network's vulnerabilities, spreading across entire organizations. Using asymmetric encryption, a public-private key pair is created to encrypt and decrypt the files. Generated by the attacker for each victim, the private decryption key is the only way to access the files and is only handed over once the ransom is paid. Victims are usually given 24-48 hours to do so, or the files will be lost permanently. If a data backup isn't available, or it was encrypted as well, organizations must pay the ransom to get the files back.

Ransomware variants are constantly evolving and, because they're straightforward to execute, have become more prevalent. There are always new techniques being used to try to get access to data; hackers just have to look around for organizations without strong network security protocols.

There are many different types of ransomware. These include:

Preventing Ransomware Attacks

To help prevent ransomware attacks some best practices to employ are:

Full Visibility In Case Of An Attack

Opengear devices aren't cybersecurity solutions, but when a breach does occur, they provide enterprises with full visibility. Lighthouse Enterprise gives organizations the ability to control every part of a network through a central hub.

During a breach, it allows organizations to:

When a ransomware attack occurs, many organizations aren't able to reestablish themselves. Paying a ransom doesn't prevent attackers from hitting an organization again, nor does it necessarily ensure a successful recovery of data. The vast majority of enterprises targeted by these attacks have experienced significant impacts, including loss of revenue, damage to brand reputation and workforce layoffs.

A recent research report found[2]:

Opengear provides enterprise-grade security for core and edge sites. To decrease disruptions and safeguard customer information, our solutions have the most stringent encryption features built-in to ensure the latest compliance standards are met. Be ready for any threat with a resilient network.


The Downtime Dilemma: What Shook Social Media?

Read our white paper today!

Something went wrong. That’s the message that billions of users saw last week when the world’s largest social media giant went dark for hours.

What could've seemed like a minor inconvenience for everyone who likes to scroll through social media for a few minutes during the day actually had much larger repercussions. Enterprises use these platforms to stay connected, and advertising is one of the biggest draws. The outage affected the more than 10 million brands and businesses who use the platform to promote their products. Organizations using the tech titan's advertising services reported their sales dropping by 30% to 70%, compared to the same period a week earlier.[1]

Network outages aren’t uncommon, but what caused this disruption and how could it have been prevented?

Understanding The Outage

The outage was triggered by the system that manages the company's global backbone: the routers that coordinate network traffic between its data centers and connect all its computing facilities together. As you can imagine, this consists of thousands of miles of fiber optic cables all over the world. During a routine maintenance job, a faulty configuration change occurred: a command was issued that caused a complete disconnection between their servers, data centers and the internet. That sounds bad, but it gets worse.

This faulty configuration change also blocked the ability of devices and employees to communicate, creating a cascade of network failures. Their BGP routes for DNS nameservers were withdrawn, making it seem like their domains didn't exist even though those servers were still operational. This resulted in internet traffic being unable to resolve their domains or make routing decisions. This still sounds bad... and it still gets worse.

The tech titan's data centers couldn't be accessed because their networks were down. The loss of all DNS broke the internal systems and many of the tools they'd use to try to remediate the outage. With their primary and Out-of-Band networks down, engineers were sent onsite to debug the issue, but like many employees of the social media giant, the disruption also locked them out of all buildings. And even once they were in, many security layers make it difficult to modify hardware, even when it can be accessed physically.

After 6 hours and about $100 million in lost revenue, the social media giant was back online, but this outage could've been resolved a lot quicker with Opengear.[2]

Reducing Downtime With Opengear

When a disruption occurs, engineers need remote visibility of their entire network. Not being able to log on, or even badge into their buildings, was a major challenge. This could've been overcome by using an Opengear device with Smart Out-of-Band and Failover to Cellular.

Failover to Cellular automatically activates a secondary connection, providing continued internet connectivity for remote LANs and equipment over high-speed 4G LTE once the primary link is unavailable. This re-establishes inbound and outbound network access without manual intervention. Once failover is enabled, Opengear devices detect failures by sending ICMP ping requests from the primary network interface to a primary and a secondary remote address. If these requests fail, the primary connection is deemed to have failed. When the primary connection has been restored, the devices automatically fail forward and resume normal operations. In this case, that would have meant restoring access to devices and BGP routes.
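The probe-and-failover behavior described above can be sketched roughly as follows. This is a minimal illustration, not Opengear's actual implementation; the probe targets and the `FailoverMonitor` name are made up, and the probe is an injectable function so the logic can be shown without real ICMP traffic.

```python
# Sketch of primary-link monitoring with automatic failover and fail forward.
# A real device would probe with ICMP pings; here the probe is injectable.

from typing import Callable

class FailoverMonitor:
    """Switches to cellular when every probe target is unreachable,
    and fails forward to the primary link once a target responds."""

    def __init__(self, probe: Callable[[str], bool],
                 targets=("192.0.2.1", "198.51.100.1")):  # hypothetical addresses
        self.probe = probe          # e.g. a wrapper around an ICMP ping
        self.targets = targets      # primary and secondary probe addresses
        self.active_link = "primary"

    def check(self) -> str:
        # The primary link is considered down only if *all* targets fail.
        primary_ok = any(self.probe(t) for t in self.targets)
        if not primary_ok and self.active_link == "primary":
            self.active_link = "cellular"   # automatic failover
        elif primary_ok and self.active_link == "cellular":
            self.active_link = "primary"    # automatic fail forward
        return self.active_link
```

In use, a monitor constructed with a probe that always fails would report `"cellular"`, then return to `"primary"` once probes succeed again.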

The Opengear Network Resilience Platform could've been leveraged to back up device configuration files prior to making network changes. This would've enabled the social media giant to restore the known-good configuration files immediately upon discovering that the change had caused the outage. Pushing the saved configuration files from the Opengear device back to the affected equipment would have restored the network quickly.
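The backup-before-change workflow can be illustrated with a small sketch. This is not Opengear's API; the `ConfigStore` class, device name and config strings are invented purely to show the snapshot-then-rollback pattern.

```python
# Illustrative backup-and-rollback workflow for device configurations.
# A real deployment would pull the running config over the console port;
# here configs are plain strings keyed by device name.

class ConfigStore:
    def __init__(self):
        self.running = {}    # device -> current configuration
        self.snapshots = {}  # device -> last known-good configuration

    def snapshot(self, device: str):
        """Save the known-good config before a risky change."""
        self.snapshots[device] = self.running[device]

    def apply(self, device: str, new_config: str):
        self.running[device] = new_config

    def rollback(self, device: str):
        """Push the saved config back after a breaking change."""
        self.running[device] = self.snapshots[device]

store = ConfigStore()
store.running["backbone-rtr1"] = "router bgp 65000 ..."
store.snapshot("backbone-rtr1")          # back up before maintenance
store.apply("backbone-rtr1", "faulty")   # the breaking change
store.rollback("backbone-rtr1")          # restore the known-good config
```

The point of the pattern is that the rollback path lives outside the device being changed, so it remains usable even when the change itself breaks connectivity.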

They'd have had another set of tools, on a separate network, to remediate the issue, and this immediate access would've significantly shortened the duration of the outage. The Network Resilience Platform is based on the presence and proximity of a NetOps or Smart Out-of-Band console server at every location, centrally orchestrated through Lighthouse software. Providing an independent management plane, it gives organizations secure, remote access to all their devices, even during an outage, so engineers can remotely identify and remediate issues.

It can be good to make the headlines, but not for something like this. A resilient network means your customers are always connected. Learn how we can help keep your network up and running, because Opengear means business.



The New Age of NetOps

Download our Research Report today!

Next in our Road to Resilience webinar series is "The New Age of NetOps", hosted by Roy Chua, Principal at AvidThink, joined by Jason Gooley, Technical Evangelist at Cisco, AJ Murray, Senior Network Engineer at Red River, and Dan Baxter, Senior Sales Engineer at Opengear. The panel discussed the changing responsibilities of the network engineer and the role that NetOps has played in this evolution. Below are some of the key takeaways:

The Role of NetOps in Modern Businesses

The road to resiliency is a difficult path to tread, with networks becoming more complex and harder to manage. A key approach to simplifying network management is adopting NetOps, which Jason succinctly described as the streamlining of operations via value-adding technologies such as automation, with AJ concurring and stating that it’s the marriage of the technology with the business.

Roy gave an overview of the research study by Opengear of network engineers, architects and managers, which found that NetOps is rising in popularity, with 87% of organisations acknowledging increased investment in the past two years. NetOps however has implications for both network and business culture, and delving into these trends is crucial for organisations to truly understand how the approach differs.

The Changing Responsibilities of the Network Engineer

Network engineers have historically been viewed as the saviours when something goes wrong in an organisation. The culture around saving the day means that one professional is viewed as the keeper of the keys and steps in when a fault occurs. Roy asked the panel how NetOps is changing this culture. AJ said that NetOps is enabling more engineers to be daily heroes by saving time for the organisation with automation, and Dan agreed, stating the benefits of removing the dependency on a single hero and covering why this allows others to apply their skills. One such example of automation is management of devices so that the manual method of logging in to each device can be avoided. The panel agreed that there are now more opportunities to save the day in more ways for more people.

Much of the fear around increased use of automation under a NetOps approach concerns the reduction of responsibility for network engineers in this new culture. If everything is automated, Roy asked the panel, are they a redundant entity? They collectively agreed that this isn't the case. Dan said that automation actually frees up time for network engineers to reskill or upskill in other areas of the business and focus on critical-thinking areas where they are better suited. Virtualisation and the move to the cloud previously opened up new job opportunities for skilled professionals, and automation will do the same in the networking world.

Starting Small in the Automation Journey

Jason also explained how many engineers are fearful of NetOps and automation due to a lack of understanding of this new approach. He said that starting small is the key, even if it's something as simple as using a Raspberry Pi at home to gain experience in that environment; this will help them make the shift from network engineers to network developers. It also plays into the development of new skills for those in the industry.

Relying on external support such as the technical expertise of Opengear and the helpful engineer community will be key for professionals when adopting NetOps. Dan explained that for young network engineers entering the business, collaboration will be crucial, and the onus will be on leaders to ensure they learn from every aspect of the organisation. The panel agreed that a NetOps culture is more than just integration of new solutions; it’s a mindset among the professionals that work within it. Taking these considerations into account will ensure effective integration of NetOps and enable a smoother journey on the road to resilience.

Missed the live stream of the webinar? Be sure to check out the video here for the full rundown.



The 3G Network Sunset

The transition to 4G is underway, bringing with it faster download speeds, increased reliability – and among many other things, the sunset of 3G devices.

So, what does sunsetting 3G actually mean? It means that carriers, like AT&T, Verizon and T-Mobile, will be shutting off the cellular infrastructure needed to operate devices based on that network's technology. Once it's turned off, devices will no longer be able to connect to that network. Since there is a limited amount of spectrum available to carriers, sunsetting old infrastructure makes room for new technologies and frees up bandwidth for faster 4G and 5G signals.

The 3G Network Sunset

As carriers migrate to newer, faster 4G LTE networks, the gradual end of 3G support has begun. As the slower, outdated networks are phased out and new towers are built, all 3G-powered equipment, no matter who the manufacturer is, will become less reliable, causing connectivity issues until it eventually no longer works at all.

Since the turndown has begun, and with carriers already committing to no new 3G activations, the time to migrate is now. To avoid any service interruptions, enterprises should begin to transition at least 6-12 months before the sunset date, which varies from carrier to carrier.

Source: Image Courtesy of Digi International®: How to Stay Ahead of the 3G Network Sunset

Strengthening The Connection

Don't get left behind; it's time to trade up to Opengear 4G devices. Replacing your legacy units with our smart solutions lets you leverage the latest cellular technologies and ensure a resilient network with carrier-certified units.

Opengear appliances are specifically engineered to provide cellular Out-of-Band capabilities. They are purpose-built from individual components, uniquely providing Smart Out-of-Band with an integrated 4G LTE cellular capability that has been certified by the carrier as a complete end device.

Most competitors use a pre-packaged cellular board, like a SocketModem™, that was never designed to support critical infrastructure. Better suited to ad-hoc installations and small projects, these boards aren't truly integrated into the end device, which decreases speeds and makes service unreliable. The carrier may certify the module from the original manufacturer but won't test the performance of the overall Out-of-Band device.

Limitations of certified, pre-packaged cellular solutions:

Opengear carrier-certified capabilities:

All this means that our investment in certifications from each wireless carrier lets you know that you're deploying reliable and secure solutions on your network. Choosing our carrier-certified console servers reduces uncertainty during deployment, with a single point of contact for any cellular-related issues.

Devices from the Network Resilience Platform also allow you to:

Learn more about upgrading your Opengear legacy devices before the full 3G sunset and how you can earn with our trade-in program.

Learn More

The Definitive Guide To Out-of-Band Management

Most enterprises understand that they need to ensure maximum uptime, but they don't exactly know how to go about doing it. The answer is deploying a Smart Out-of-Band network.

There are two primary ways for an organization to manage its network, In-Band or Out-of-Band, but first, let's talk about planes. Networks have three planes, data, control and management, which describe how packets travel to, from and through a device.

In-Band and Out-of-Band

In-Band management is when devices are managed through protocols such as Telnet or an SSH connection to a router, or by using SNMP-based tools. Data, production and management traffic use the same path, and having the management and data planes combined poses a large issue. When an outage occurs, without an alternative path to the primary network, engineering teams are locked out of the management plane. This means the reliability of the network is reduced and devices can't be accessed until the issue is remediated.

Out-of-Band management provides an alternative method to securely connect to remote equipment during an outage. A secondary, secure access path, it creates an always-on independent management plane that provides reliable access to monitor and manage infrastructure. Network engineers can then lock down the most critical functions on the production network to prevent access from other users.

Choosing the Right Console Server

Placing a console server in each rack provides the necessary physical presence at the site with direct proximity to the IT infrastructure, creating a secure network. This type of capability has become critical for engineers as remote sites are deployed – but how do you choose a console server?

When choosing a console server, it’s critical to evaluate your enterprise’s needs. However, there are some features that are needed across the board.

Looking for a console server? View our product selector to find the perfect match for your network environment.


Best Practices

Secure Cloud Hosting Case Study

Enterprise needs are constantly changing, and organizations require a scalable solution that will meet their growing demands. Engineers need a console server that has all the capabilities we discussed.

Don’t manage your network with your network: Make sure you have a device that provides Out-of-Band management. With an independent management plane, you don’t have to rely on your network to get access to your network. If you do, you’re doing it wrong; there’s a much easier way.

Out-of-Band Interfaces: You need an Out-of-Band interface that gives you options around cellular, including the ability to keep your public interface from being up and available at all times. When console servers have the cellular option, they have a public IP address, so you can reach them at any time; however, this also means anyone can see them. You need a console server that can automate the interface based on internal connectivity tests and that also provides the ability to control it manually, whether access is SMS-based or from a trusted phone list.
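The automation described above can be sketched in a few lines: enable the cellular interface only after repeated internal connectivity tests fail, and drop it again once the primary path recovers. The failure threshold and the probe callable are illustrative assumptions, not Opengear’s actual implementation.

```python
# Minimal sketch: raise the cellular OOB interface only when the primary
# path fails several consecutive connectivity tests. All parameters here
# are hypothetical placeholders.

FAILURE_THRESHOLD = 3  # consecutive failed probes before enabling cellular

class CellularFailover:
    def __init__(self, probe, threshold=FAILURE_THRESHOLD):
        self.probe = probe        # callable: True if the primary path is healthy
        self.threshold = threshold
        self.failures = 0
        self.cellular_up = False

    def tick(self):
        """Run one connectivity test and update the cellular interface state."""
        if self.probe():
            self.failures = 0
            self.cellular_up = False   # primary healthy: keep cellular dark
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.cellular_up = True  # primary down: bring up the OOB path
        return self.cellular_up
```

Keeping the interface down until it is actually needed means there is no permanently exposed public IP to probe, while SMS or a trusted phone list can still force it up manually.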

Power Management: Console servers with a managed-devices configuration allow you to correlate a serial port to the specific power outlet you need. This enables you to power cycle the device right from the serial session, making the process more seamless. The console server facilitates all of the connectivity, so you don’t have to interact with the device at all to make it happen. Engineers don’t have to go to a console server first to connect to the product, or go to the GUI to turn off the power; managed devices pull this all into one feature so engineers can stay inside the device and do what they need to do.
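The managed-devices idea can be illustrated with a small sketch: each serial port is associated with the PDU outlet feeding the same device, so a power cycle can be issued from the serial session without a separate lookup. All names and identifiers here are hypothetical.

```python
# Sketch of a managed-devices mapping: serial port -> PDU outlet, so a
# power cycle can be driven from the serial session itself.

class ManagedDevice:
    def __init__(self, name, serial_port, pdu_outlet):
        self.name = name
        self.serial_port = serial_port
        self.pdu_outlet = pdu_outlet

class ConsoleServer:
    def __init__(self):
        self.by_port = {}   # serial port -> ManagedDevice
        self.log = []       # record of outlet operations

    def add(self, device):
        self.by_port[device.serial_port] = device

    def power_cycle(self, serial_port):
        """Cycle the outlet mapped to this serial port; the engineer never
        has to know which outlet feeds the device."""
        device = self.by_port[serial_port]
        self.log.append(f"outlet {device.pdu_outlet}: off")
        self.log.append(f"outlet {device.pdu_outlet}: on")
        return device.name
```

The point of the design is that the serial session is the single surface: the engineer asks for a power cycle by port, and the mapping resolves the outlet.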

Control Access: You need a console server that has a firewall, enabling you to restrict access. If someone hacks into the network, you need the ability to fix it remotely and securely. Engineers want a device that can be used as a jump host: they connect into the console server and log into a Linux device on the remote network. From there, they can ping to test connectivity and SSH to the box located at the remote site. They can also get access to web interfaces; if they have centralized management software with an IP access feature, they can get direct GUI access on the remote LAN or port forward on the console server. Connecting to the console server first and then running a TCP dump lets you grab sniffer traces to gather information and troubleshoot at the remote location, eliminating the need for a truck roll. You don’t have to go on-site, or call a non-technical person and walk them through what to do.
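As a rough illustration of that jump-host workflow, the helpers below build the command lines an engineer might run: an SSH session through the console server to a device on the remote LAN, and a tcpdump capture run on the console server itself. Hostnames, the admin user, and the capture path are made-up examples, not defaults of any product.

```python
# Sketch of the jump-host workflow: build the argv lists for reaching a
# remote device through the console server, or capturing a sniffer trace.

def ssh_via_console_server(console_server, target, user="admin"):
    """Build an ssh argv that uses the console server as a jump host (-J)."""
    return ["ssh", "-J", f"{user}@{console_server}", f"{user}@{target}"]

def remote_capture(console_server, interface="eth0", user="admin"):
    """Build an argv that runs tcpdump on the console server, gathering a
    sniffer trace at the remote site without a truck roll."""
    return ["ssh", f"{user}@{console_server}",
            "tcpdump", "-i", interface, "-w", "/tmp/capture.pcap"]
```

In practice these would be handed to a process runner; the structure just shows why one hardened, firewalled box at the site can stand in for a visit.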

The Purpose-Built Console Server

The Opengear Network Resilience Platform has a full range of console servers and is based on Lighthouse Management Software. All devices are configured with Smart Out-of-Band, standard in each appliance.

Smart Out-of-Band by Opengear raises secure remote access to a new level. It goes beyond traditional Out-of-Band management by enabling secure monitoring, access and management from anywhere. When paired with Failover to Cellular, enterprises have enough bandwidth to run critical business processes while remediating the issue. It is built into every device in the Opengear Network Resilience Platform and provides enterprises with uninterrupted availability, even during network disruptions.

Architected to meet network resilience needs, Smart Out-of-Band:

Smart Out-of-Band by Opengear

Smart Out-of-Band allows engineers to access equipment remotely with automated management and support capabilities. Customized rules and policies allow the Opengear device to automatically detect and remediate issues as they occur while providing enterprise-grade security during an outage. It’s beneficial for Day One deployments, SD-WAN and everyday device management. Learn more about the range of Smart Out-of-Band console servers.



The State of Reliability Engineering

A Q&A with the Director of Site Operations at PayPal

TJ Gibson, Director of Site Operations at PayPal

As Director of Site Operations at PayPal, TJ Gibson runs the company’s network command center based in Scottsdale, Arizona. The 600-person organization is responsible for site reliability and cloud engineering. They handle functions including incident response, network operations, monitoring, and alerting. I recently interviewed TJ for an episode of the Living on the Edge podcast where we talked about site reliability engineering (SRE), among other things. Here are some of the highlights.

What does Site Reliability mean at PayPal and how has that changed over the last few years?

At PayPal, SRE really started before it was even a term. It was seen as a bug fixing team, there to clean up messes. In the last decade, the industry has come a long way in terms of technology to bolster resiliency and defining SRE as a best practice.

Our mission is to ensure that all of the products that we deliver and all of the capabilities exposed to customers have reliability, resiliency, fault tolerance, and usability baked into them from the beginning.

What does the phrase network resilience mean to you?

Network resilience means that a company is capable of surviving faults, which requires the ability to meet business needs and customer expectations in a way that is efficient and effective, and allows us to respond and react and absorb and grow to provide network resilience. From an industry perspective, I see an opportunity to make SRE more prominent, perhaps similar to how information security practitioners have been able to up-level and make a tighter connection to a company’s policy or regulatory obligations. I think SRE professionals will increasingly be able to step up and link what they do in the network stack more directly to business objectives.

Is security interwoven with SRE or do you still see that as a separate entity?

It is interwoven with everything we do and something that we hold ourselves accountable to our customers for. I think we’re in a little bit of a transition period here where SRE has kind of come into its own. It’s become more of a mature discipline within the industry. And I think that will stay true as we go forward. But I think some of those things that SRE is bringing today will become part of core workflows for products, network architecture, and data centers ― as opposed to always being a centralized IT function.

Do you see anything over the next few years that will change the way that SRE is implemented or do you think it will evolve more slowly?

More and more large enterprises are looking to the public cloud and moving workloads there. I think I saw recently that Capital One essentially was declaring victory in their cloud journey. That brings an entirely new perspective on large-scale applications and SRE. Going forward there are things that I think we haven’t quite yet accounted for.

Machine learning and artificial intelligence will bring aspects to our technology stacks that we just don’t fully understand from a resiliency and operations perspective. I think the frameworks, structure, and accountability with SRE will start to be baked into our cloud applications.

I know you’ve championed the importance of understanding the different phases in a technical career. How would you describe them? 

I think early on in your career, the value that the business perceives from you as an individual really boils down to how much you know. And the deeper you know a particular technology or domain area, the higher your value is to the organization.

I think you hit a point usually after five to ten years when your value becomes more about who you know. It’s more about your ability to bring people together; to find the right answers and the right resources within the organization. That’s where a lot of people tend to struggle. It’s about being able to bring your experiences and skillset to bear on specific problems and understand the context quickly, get to relevance quickly, and be able to help find technology solutions.

Some IT veterans tend to fall back on the things that have worked for them in the past without learning the latest and greatest. They need to dive deeper into whatever technology stack is in front of them. But mostly, more senior roles are about being able to layer on their relationship skill set, their understanding of the business and how to translate requirements into business outcomes and technology solutions.

What studies or forums would you recommend for people interested in SRE?

There are several certifications. The Cisco Certified Internetwork Expert (CCIE) is the most well-known one. And recently Cisco added a DevOps section. I’m not aware of anything similar that focuses on SRE or reliability or even site operations.

USENIX, the Advanced Computing Systems Organization, has a yearly conference they call SREcon. They have some very deep tracks on how to use machine learning to bring better insights from your observability platform. Other tracks cover how to build networks for resilience on a global scale and how to provide SRE with hybrid cloud. There are also disciplines around problem management, root cause analysis and how to determine business logic failures differently from systemic technology failures.

I don’t think it’s enough to have only an application development background or only a networking background and be able to step cleanly into an SRE career field and be successful from day one. You really have to understand all of the constituent parts and how they play together to contribute to reliability, operability, and performance.

The pendulum the last couple of years seems to have swung back from everybody wanting to be a specialist in something to an understanding of the value of being a generalist where you can bring in parts of different skillsets or different backgrounds to combine them and get something done. And I think the world of SRE is probably a perfect situation when that makes sense.

It’s not enough to understand how to build the most reliable, most resilient most scalable network, if the application on top of it doesn’t know how to consume and use those benefits.

I’ve got to believe, in your role, you have a hundred “oh crap” stories. Do you have one that you’d like to share with us?

I had some advice early on in my career when I was doing some consulting and I was very nervous about standing up in front of a boardroom full of executives and trying to tell them where their vulnerabilities were. I just had no idea where the questions were going to come from and what the agendas were of the people in the room. One of my colleagues sat me down and he said, “Look, consulting is 10% technical and 90% people.” It’s true. My uh-oh moments really come down mostly to people things mixed with technology.

In the Air Force, my boss pushed a configuration file to every machine on our network of about 180 nodes spread across 40 countries. That file set the same IP address on every device on the network. It was the mid-1990s, so a lot of our automation that we take for granted today did not exist.

I spent three days talking to pilots and people loading airplanes all over the world about how to change the IP address to something that matched. That to me was one of the hairiest experiences, not just because it was so complex and so people-focused but because it highlighted how fragile some of these things are that we take for granted. One simple human mistake essentially shut the network down for three days. The irony was that my boss got a medal for fixing that problem.

When you recover from a situation like that, you can look for opportunities to build automation or gates or controls that would prevent that mistake from ever happening again. So it’s a win. The fact that we found it when we weren’t getting shot at was also fortunate.

Bonus question: Would you ever hire a former hacker?

I truly believe that a person’s background really has nothing to do with who they are today. We all change. If we were held to account for all the things we did when we were 17; if we had social media when we were in high school, for example, we would probably be second guessing a lot of decisions we made. It’s good to be open to evaluating every person’s talent and capabilities regardless of their past.

Watch the full webinar now.


The Future of Out-of-Band, Part 3: Minimizing Downtime

Out-of-Band management gives you both presence and proximity to your critical devices. That’s the secure foundation you need to add NetOps into the fold.

Download our white paper today!

Today, there is a growing awareness of NetOps across the business and IT worlds. Larger enterprises may have set up NetOps teams in the same way they have for DevOps teams. However, there’s little understanding of what NetOps actually means across many organizations.

NetOps Skillsets

Organizations won’t be shifting their existing network engineering teams wholesale into that role. More likely, they’ll add another layer, charging the network engineering team to focus on the day-to-day work and commissioning a NetOps team to look at its orchestration and automation. This represents a major investment, which is why larger companies are the ones most likely to be considering it at the moment. It also represents a big change for network engineers in terms of the skillsets they need. For years, the badge of honor has been being a certified engineer: a CCIE (Cisco Certified Internetwork Expert) or a CCNA (Cisco Certified Network Associate).

Although those certifications are still valuable for NetOps, there is a whole additional skillset that engineers need to train themselves on. For example, they’ll need to be able to program in Python and be able to understand how Docker containers work. They’ll also have to be able to deploy commonly-used toolsets like Chef, Puppet and Ansible.

Minimizing Downtime

Read our overview sheet.

This evolution of the role of the network engineer is all part of an ongoing process, through which network management is becoming increasingly automated. Linked to that is the rise of the independent management plane, which as we’ve discussed, helps network engineers provide organizations with secure remote access to their network today. This approach also delivers network set-up and configuration, ongoing day-to-day management and network remediation when something goes wrong. At the same time, it’s helping to fuel the move to NetOps automation, as a smart Out-of-Band network can now also be used to run these standard automation tools.

Most important of all, it helps keep the network up and running and keeps costly network downtime to a minimum. Outages cause organizations to lose money and also damage their reputation. The independent management plane, together with NetOps automation, helps reduce this threat and ensure business continuity. At a time when most businesses are focused on doing more with less, that’s absolutely key.

How Out-of-Band Adds Resilience To SD-WAN

As part of our “Road to Resilience” webinar series, Opengear recently hosted a panel discussion on out-of-band (OOB) device management and software defined wide area networks (SD-WANs). SD-WAN upgrades enterprise WANs in ways never possible before, aggregating multiple types of traditional enterprise network connections with internet connections for greater efficiencies and lower network costs.

Beyond offices and branches, SD-WAN deployments have extended to edge networks with a lot of diverse hardware and software. Resiliency is critical and OOB network device management, with failover to a cellular connection, has become a key solution to ensure network devices on the SD-WAN never go down.

Evolution of SD-WAN

Click here for more SD-WAN resources

With SD-WAN, applications considered essential for customer interaction, commerce, and employee productivity can be sent over a traditional WAN connection ― like IP/Multiprotocol Label Switching (IP/MPLS), a Layer 2 or Layer 3 VPN, or a virtual private LAN service (VPLS). Non-critical traffic, such as guest Wi-Fi and video, can be routed over a much less expensive Internet connection with requisite levels of security and guaranteed performance. Application-aware routing lets you create policies to determine which applications will travel over which transport options, with flexibility to automatically shift traffic as load and conditions change. SD-WAN allows companies to increase bandwidth while lowering network costs and introducing flexibility and choice.
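A minimal sketch of such an application-aware policy follows, with illustrative application classes and transports (not any vendor’s actual API): each class maps to a preferred transport plus a fallback used automatically when the preferred link degrades.

```python
# Sketch of application-aware routing policy: application class -> transport,
# with automatic shift to a fallback when the preferred link is unhealthy.
# Class names and transport labels are illustrative assumptions.

POLICY = {
    "commerce":   {"preferred": "mpls",     "fallback": "internet_vpn"},
    "voip":       {"preferred": "mpls",     "fallback": "lte"},
    "guest_wifi": {"preferred": "internet", "fallback": "internet"},
    "video":      {"preferred": "internet", "fallback": "internet"},
}

def select_transport(app_class, link_ok):
    """Pick a transport for an application, shifting traffic automatically
    as link conditions change. `link_ok` reports per-transport health."""
    rule = POLICY.get(app_class,
                      {"preferred": "internet", "fallback": "internet"})
    preferred = rule["preferred"]
    return preferred if link_ok(preferred) else rule["fallback"]
```

Real SD-WAN controllers also weigh load, latency and SLA targets per application, but the shape is the same: a policy table consulted per flow rather than static routes.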

“SD-WAN has its roots in SDN. SDN was popularized by the arrival of OpenFlow, a programmable network protocol designed to program the behavior of switches from various vendors,” said Roy Chua of AvidThink, a research and advisory service, who led the webinar. “The key to SDN and OpenFlow was the segregation of the management plane from the control plane and data plane. Historically, they’ve been bundled together. With SD-WAN, the same separation principle applies, with the cloud used to provide the management and control plane.”

Todd Rychecky, Vice President of Americas for Opengear, remembers that before SD-WAN, it required truck rolls and manual, box-by-box configurations to reprogram CPE in branch offices or at the edge. “SD-WAN changes the paradigm by using cloud-based principles to provide policy-based automation and application awareness.”

With cloud-based management, “You can create service chaining, the ability to link multiple network functions that can be either locally hosted or in the cloud,” said Saurabh Sandhir, Vice President of Product Management at Nuage Networks from Nokia. “And SD-WAN lets you utilize multiple WAN links and build virtual tunnels to create overlays on top of any transport.”

Sandhir went on to describe the three elements behind “SD-WAN 2.0”. First is the idea of a multi-cloud strategy, meaning an enterprise has a single pane of glass for its VPN branches and public cloud connectivity. “It’s almost becoming table stakes as the world moves to public cloud, and it plays into how an enterprise can move workloads between public cloud providers, having network connectivity that follows along, with fully redundant and resilient connectivity using MPLS, Internet and LTE.” Next, there’s a long tail of value-add services that can be made available beyond standard network connectivity. And perhaps most importantly, there’s end-to-end security across the branch WAN and the cloud, with the ability to quickly detect enterprise-wide threats and take proactive action.

Adding Resilience to SD-WAN

Network resilience, the ability to withstand a network disruption, is a critical component of business continuity and a highly important consideration when deploying SD-WANs because staying online is vital for every enterprise business today. Once, POTS lines were used to dial into modems that were used to access and recover a router at a branch location. “Now, OOB management is foundational to network resilience and running a sound business,” said Opengear’s Rychecky. “Using the 4G LTE cellular network as the independent management plane, a network team using OOB management solutions can remediate issues in remote offices and network edge locations quickly. Failover to cellular allows companies to continue online while a problem is being resolved.”

Rychecky sees more automation coming through Out-of-Band, with the ability to bring up a remote site or a data center without being physically present. Devices will call home to the cloud, grab configuration downloads, and configure hardware ― all directed from a centralized management console. “Sort of like NASA’s approach with the Mars Lander.”

“Resilience is a philosophy and approach that transcends network planes and functions,” believes Nuage Network’s Sandhir. “OOB management over cellular is used by one of our large customers in Canada. They use a smart OOB management solution when they need access to an on-premise device if cloud connectivity is ever lost.”

Nuage Networks also has a large European service provider customer that rolled out a global SD-WAN footprint. One of their end customers, a large consumer electronics and manufacturing company, had a large number of sites around the world. Their biggest concern moving from a traditional MPLS network to SD-WAN was resilience. Some sites were factories, and for these, resilience meant getting the highest performance and SLA for access to applications running in the central data center. For consumer-facing sites, resilience meant having always-available guest Wi-Fi. Other sites were used for software development, and they needed resilient access to cloud-based software-as-a-service (SaaS) applications hosted in the public cloud via the public Internet.

Using remote OOB access, all of these different types of environments can be made highly resilient with proactive and real-time automated monitoring and management features.

Proactive and Realtime OOB Management

With SD-WAN and the distributed nature of today’s enterprise, there’s a lot of infrastructure responsible for the management, control, and data planes. Smart Out-of-Band management provides secure access to all of that infrastructure ― routers, switches, firewalls, the power distribution unit (PDU), and the uninterruptible power supply (UPS).

“Proactive monitoring and alerting take care of the problems before they disrupt a business,” says Rychecky. “Using cellular, you can get SMS text and email alerts over the cellular line, which isn’t possible with a POTS line. One of the simplest fixes for a locked-up router is a basic power cycle, which you can do easily and quickly with OOB tools. Proactive monitoring of IoT sensors could include video cameras, door contacts, vibration, water, and smoke ― anything running on five-volt power can be connected to an OOB management interface.”

SD-WAN has provided new flexibility and value to the enterprise. Now OOB management with failover to cellular helps enterprise organizations keep those hybrid network connections up and running if data center or cloud services fail. Adding greater automation and software intelligence has made remote network management ever more precise and proactive.

Watch the full webinar now.


The Future Of Out-of-Band, Part 2: Adding NetOps Into The Fold

We’ve already discussed that Out-of-Band plays a critical role in NetOps automation, but today we’ll really get into how the two work together. (Read Part 1 Here)

Download our white paper today!

NetOps is, in part, an evolution of DevOps. It’s a mindset that fosters communication, collaboration, integration and automation among software developers and operational IT teams. Today, we are seeing DevOps tools increasingly applied to networking. This has created a NetOps approach to building and maintaining a reliable infrastructure to support evolving demands.

NetOps Put Into Practice

One approach to introducing NetOps into an enterprise is to layer it into an existing system. This adds functionality and flexibility to a familiar component of the infrastructure. An independent management plane, like Out-of-Band, already provides presence and proximity to critical network devices, so it’s well-placed to bring NetOps automation to your network. It’s there on Day One, enabling the deployment process to be managed via centralized management software and ensuring network equipment can effectively self-configure.

It’s also there for the standard day-to-day process of keeping the network running. Out-of-Band provides an alternative route to remediate the network when it’s down. The separate management plane is the facilitator, giving engineers the infrastructure to make use of NetOps today and in the future, whatever the approach evolves into over time.
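The Day One self-configuration flow can be sketched as a simple call-home: a newly racked device identifies itself to a central store over the management plane, pulls its intended configuration, and applies it without an engineer on site. The config store, keys and device identity here are hypothetical.

```python
# Sketch of Day One self-configuration: a device calls home over the
# independent management plane and applies the config staged for it.
# Store contents and field names are illustrative assumptions.

CONFIG_STORE = {
    "branch-042": {"hostname": "branch-042-rtr", "vlan": 42},
}

class Device:
    def __init__(self, serial):
        self.serial = serial
        self.config = None

    def call_home(self, store):
        """Fetch and apply this device's staged config from the central store."""
        desired = store.get(self.serial)
        if desired is None:
            return False          # unknown device: leave it unconfigured
        self.config = dict(desired)
        return True
```

In a real deployment the store would sit behind centralized management software and the fetch would be authenticated; the sketch only shows the pull-based shape of the workflow.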

NetOps for Network Engineers

Read our overview sheet.

These tools and capabilities are already changing the nature of the network engineer’s role. Engineers have traditionally followed a manual process, spending most of their time on the command line interface (CLI) typing in a range of esoteric commands. Over time, that shifted to engineers wanting to use a graphical user interface (GUI) – a more intuitive approach which doesn’t require learning specific commands. To fully automate their approach, they’re increasingly using NetOps.

NetOps has become so prevalent because of the growing complexity of modern IT networks. Most enterprises have more distributed networks than before. This has caused the network engineer role to evolve.

Organizations need a more automated approach to networking, which NetOps provides, reducing many of the repetitive routines that engineers would typically have had to run through previously. When an outage occurs, most companies would expect an engineer to log in, run through five or six routines to work out what is happening, and then remediate the problem; with NetOps, remediation is a lot easier.

NetOps can automate that entire procedure so that when that event happens, the system automatically runs through those five or six steps. If that resolves the problem, all well and good. If not, the issue is escalated to the network engineer to handle the next level of troubleshooting. This simplifies the process while also removing human error, because so many downtime incidents are caused by someone pushing a wrong configuration or typing the wrong letters when sending commands. By using a NetOps approach to correctly program an automation routine, an enterprise can effectively reduce these challenges.
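That escalation flow can be sketched as a small runbook runner, assuming each automated step reports whether it resolved the event; the step names and the escalation hook are illustrative, not any particular NetOps tool.

```python
# Sketch of automated remediation with escalation: run the standard steps
# in order; if none resolves the event, hand the log to an engineer.

def run_runbook(steps, escalate):
    """Execute remediation steps until one succeeds; escalate otherwise.

    `steps` is a list of (name, action) pairs where action() returns True
    when the problem is resolved. `escalate` receives the step log if
    every automated step fails."""
    log = []
    for name, action in steps:
        log.append(name)
        if action():
            return {"resolved": True, "log": log}
    escalate(log)
    return {"resolved": False, "log": log}
```

Because the steps are programmed once and replayed identically on every event, the fat-finger class of outage (wrong config pushed, mistyped command) is taken out of the loop.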

NetOps is concerned with automation and reducing human error. With so many new data centers, and more network locations moving out to the edge, there are not enough engineers available to comprehensively staff and support all these sites. Businesses can use NetOps tools to concentrate their resources at a Network Operations Center with ‘follow the sun’ support regardless of time zone, rather than having to find resources to staff each and every site. This saves time and money.

In Part 3 of our blog series we’ll discuss how an independent management plane and NetOps reduces downtime.