
Blog Archives

Don’t Rely On Your Network To Manage Your Network

Enterprise IT organizations, following the lead of hyperscale service providers, are adopting virtualization and automation technologies in order to scale infrastructure effectively. But no matter how much you automate and virtualize your systems, you rely on the physical network infrastructure that underpins it all to remain resilient. While the DevOps philosophy drives the efficient management of the logical layer, the hardware required to run it remains as important as ever. And so the network engineer’s role in keeping those physical devices operational at all times is critical to the health of the organization.

Networking, at its most basic level, is about connecting Point A to Point B, and keeping that connection secure and reliable regardless of external conditions. One challenge is that many architectures rely on the production network to manage that same network, which becomes problematic when congestion or an outage occurs. It’s no different from having to call the phone company, on your phone, to report that your phone isn’t working. To avoid this, companies have traditionally used a separate out-of-band management network to ensure secure access to their critical devices, often using a cellular modem to provide a reliable alternative path. A console server is installed at each location, providing a physical connection to routers and switches, which can be accessed remotely from a NOC or other central location.
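As an illustration, here is a minimal sketch of what scripted out-of-band access can look like, assuming a console server that exposes each attached serial console on its own SSH TCP port. The hostname, credentials, and port mapping below are placeholders; the exact convention varies by vendor.

```python
# Minimal sketch: reach a device console over the out-of-band path, assuming
# the console server maps each serial port to its own SSH TCP port
# (e.g. TCP 3001 -> serial port 1). Hostname, credentials and the port
# mapping are placeholders; conventions vary by vendor.
import paramiko

CONSOLE_SERVER = "oob-console.example.net"   # hypothetical OOB hostname
SERIAL_PORT_TCP = 3001                       # assumed mapping for serial port 1

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CONSOLE_SERVER, port=SERIAL_PORT_TCP,
               username="netadmin", password="example-password")

# Open an interactive channel to the device console attached to that port.
channel = client.invoke_shell()
channel.send(b"\r\n")                        # wake the attached console
print(channel.recv(4096).decode(errors="replace"))
client.close()
```

Because the session rides the out-of-band network and terminates on a physical serial connection, it keeps working even when the production network path to the device is down.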

In many cases, this is seen as being “for emergency use only” when in fact it should be used as an independent management plane at all times, not only during an outage. It allows network admins to lock down many features on the production network, and to perform configuration management only through the more secure out-of-band network, with limited login access, effectively becoming the “Network Admin’s Network.”

To manage the ever more complex expectations of enterprise organizations, a NetOps approach is becoming increasingly popular with network engineers, using many of the tools and processes familiar to the DevOps community. To keep things simple, many are looking to their existing network architecture to support this, rather than relying on server or application teams to grant them access to production servers. Advanced console servers are now available with x86 CPUs, capable of running both the out-of-band management network and common NetOps tools such as Docker containers and Python scripts. By upgrading or expanding their existing console server deployments, network engineers can now extend their reach to the edge of the network, with advanced automation routines at each satellite location.
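As a rough sketch of the kind of job that could run in a container on such an appliance, the script below captures the running configuration of a locally attached device over its serial console using pyserial. The device path, baud rate, CLI commands, and backup location are all assumptions, not a specific product’s API.

```python
# Minimal sketch of a NetOps job running in a container on an x86 console
# server: capture the running configuration of a device attached to a local
# serial port. The device path, baud rate, CLI commands and backup location
# are assumptions, not a specific product's API.
import time
import serial  # pyserial

def capture_running_config(dev="/dev/ttyS1", baud=9600):
    with serial.Serial(dev, baudrate=baud, timeout=2) as console:
        console.write(b"\r\n")                   # wake the console
        time.sleep(1)
        console.write(b"terminal length 0\r\n")  # assumes an IOS-like CLI
        console.write(b"show running-config\r\n")
        time.sleep(5)                            # crude wait for the output
        return console.read(console.in_waiting or 1).decode(errors="replace")

if __name__ == "__main__":
    with open("/backups/running-config.txt", "w") as backup:
        backup.write(capture_running_config())
```

Running a job like this at each satellite location keeps the automation close to the devices it manages, independent of the production network.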

With secure out-of-band management access and the ability to run NetOps tools in a single appliance, network engineers can now create their own independent management plane: for day one, for everyday configuration management, and for secure access when the network goes down. With this resilient physical network in place, your virtualized environments continue to operate reliably, with no disruption to the end user.

To learn more about this, take a look at a recording of our webinar from ONUG. Watch now.

 

*Originally published by ONUG.

In the News: Hundreds of thousands of Cisco switches attacked

In the last couple of weeks, there have been multiple reports of hackers exploiting a security vulnerability in the Cisco Smart Install client tool to attack hundreds of thousands of network switches. Cisco had previously identified the vulnerability, published an advisory, and provided patches. As is often the case, it appears that only a very small portion of network devices were updated immediately, leaving many of them vulnerable. In most of the attacks in this event, the affected routers and switches had their configurations wiped and were left inaccessible and inoperable, causing the network to become unavailable.
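For illustration only (this is not Cisco’s official remediation tooling), a short script along these lines could audit whether the Smart Install client is enabled on a list of IOS switches and disable it with the documented “no vstack” command. The addresses and credentials are placeholders, and the exact “show vstack config” output varies by release, so the check is deliberately loose.

```python
# Illustrative sketch only (not Cisco's remediation tooling): audit and
# disable the Smart Install client on a list of IOS switches using netmiko.
# Addresses and credentials are placeholders; 'show vstack config' output
# differs between IOS releases, so the check here is deliberately loose.
from netmiko import ConnectHandler

SWITCHES = ["10.0.0.11", "10.0.0.12"]  # hypothetical management addresses

for host in SWITCHES:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="example-password")
    status = conn.send_command("show vstack config")
    if "enabled" in status.lower():
        conn.send_config_set(["no vstack"])   # disables the Smart Install client
        conn.save_config()
    conn.disconnect()
```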

Notable with this Cisco hack is the unprecedented severity and scale of an attack affecting networking devices. While most network devices today leverage a mainstream operating system (such as Linux), network vendors normally remove components and disable services that are not needed in a purpose-built network appliance, reducing the possible vulnerability surface. Another factor is that the fragmentation caused by each vendor’s proprietary customizations makes it more difficult for a hacker to affect a very large number of devices by exploiting a single vulnerability.

That is why most broad attacks usually target mainstream versions of hardware, operating systems and application frameworks used in consumer or data center computing devices.

A growing threat: With SDN, NFV, and white box networking, quick patching becomes essential

The current trend is the increasing use of commodity hardware with a mainline operating system and software in network nodes. This convergence enables the use of common tools to manage all systems in a large infrastructure, including networking, compute and storage systems. But it also gives hackers a large vulnerability surface to attack systems that were not attractive targets before.

With the adoption of these new technologies, a network can no longer count on the “security by obscurity” of vendor-proprietary architectures. As network devices use more mainstream hardware and software components, network engineers will need not only to pay more attention to security advisories and react quickly, but also to make sure they have the tools and processes to prevent attacks and recover from a disruption.

Preventing attacks: The need for better configuration management and provisioning systems in networking

We have all become used to automatic updates on our cell phones and laptops fixing vulnerabilities quickly, but the same is not true for networking devices, and for good reason. Without vendor-neutral, automated configuration management and provisioning systems, network operations groups in enterprise networks are conservative about introducing changes. Human error and configuration change are the two primary causes of network downtime, and so network engineers minimize firmware updates because they are afraid of creating disruptions.

Automated configuration management and provisioning systems not only eliminate human error (by far the biggest cause of downtime), but also allow changes to be rolled back if they cause disruption. Monitoring of security threats is important, but without enabling patches and fixes to be deployed quickly, we will continue to see major attack events in networking from time to time.
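A minimal sketch of that idea, assuming the netmiko library and placeholder device details, is a job that stores a timestamped copy of each running configuration and can replay a saved copy if a change causes disruption. Note that replaying a saved config merges lines rather than fully replacing the configuration; true replacement needs a vendor-specific mechanism.

```python
# Minimal sketch of versioned configuration backup with a rollback path,
# using the netmiko library. Device details and the backup directory are
# placeholders. Replaying a saved config merges lines rather than fully
# replacing the configuration; true replacement is vendor-specific.
import datetime
from netmiko import ConnectHandler

DEVICE = {"device_type": "cisco_ios", "host": "10.0.0.11",
          "username": "admin", "password": "example-password"}

def backup_config(device=DEVICE, backup_dir="/backups"):
    """Save a timestamped copy of the running configuration."""
    conn = ConnectHandler(**device)
    config = conn.send_command("show running-config")
    conn.disconnect()
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = f"{backup_dir}/{device['host']}-{stamp}.cfg"
    with open(path, "w") as f:
        f.write(config)
    return path

def rollback(config_file, device=DEVICE):
    """Replay a previously saved configuration after a bad change."""
    conn = ConnectHandler(**device)
    conn.send_config_from_file(config_file)
    conn.save_config()
    conn.disconnect()
```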

Recovering from a Hack: Don’t rely on the network to manage the network

A reliable network infrastructure can leave network engineers with the feeling that they can always manage the network in-band (i.e. relying on network connectivity to manage network devices). Events such as this attack, which left the devices inoperable and brought the network down, are a strong reminder: when there is a network disruption, engineers need an alternative path to access and manage these devices without reliance on the production network. With an Out-of-Band Management infrastructure in place to maintain uptime, including 4G-LTE cellular access, network engineers can reduce the time-to-repair and ensure maximum network availability. So, when a vulnerability is exploited, the network can be up and running again in as short a time as possible.
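The underlying logic can be as simple as “try the in-band path first, then fall back to the out-of-band path.” A sketch, with placeholder addresses and an assumed console-server port mapping:

```python
# Minimal sketch: prefer the in-band management address, fall back to the
# out-of-band console server when it is unreachable. Addresses and the
# console-server port mapping are placeholders.
import socket

def reachable(host, port=22, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

INBAND = "10.0.0.11"                        # device management IP (in-band)
OOB = ("oob-console.example.net", 3001)     # console server + assumed port map

target = (INBAND, 22) if reachable(INBAND) else OOB
path = "in-band" if target[0] == INBAND else "out-of-band"
print(f"Connecting via {path} path: {target[0]}:{target[1]}")
```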

Why do you call my Terminal Server a Console Server?

Before the 1980s, most computing happened on centralized mainframes or mid-sized computers. Users accessed shared computing resources through “dumb terminals” (those green-screen terminals displaying text-based interfaces we used to see in commercial applications). Those terminals often connected directly to the main computer using serial (RS-232) interfaces.

Not so Dumb After All

As computers were connected to networks, and with many dumb terminals that were not network-enabled still in use, there was a need for an appliance that converted RS-232 connections to network connections so that users at dumb terminals could continue to access the main computers. That appliance was called a terminal server, and some of the early devices were manufactured by Digital Equipment Corporation (DEC).

In the 1990s, most business applications started moving to PCs, but serial terminals could still be seen in gas stations, retail stores, and other places running legacy commercial applications. By then, several companies were selling terminal server appliances.

When the commercial Internet arrived in the mid-1990s, some of those terminal server vendors saw unexpectedly high demand for their previously declining products. Internet pioneers like Yahoo! were building the first Internet data centers and found that terminal servers could be used “upside-down” to manage large numbers of Internet servers. Rather than connecting terminals on the RS-232 ports to a computer on the network, this use case allowed users coming from the network to gain remote access to the RS-232 ports, which were in turn connected to the management (console) ports of servers and network equipment.

Terminal Server on Steroids

Some terminal server vendors, noticing the new trend, developed software features targeted at making the old appliances more effective as a tool to manage servers in a data center and remote sites. Those features included increased security and facilities to make it easier for the user to select ports without having to type in specific addresses or port numbers.

To differentiate the new products from old terminal servers, and to more accurately describe the new use case, those vendors started referring to the new products as “Console Access Servers” or “Console Servers”. A new product category was created, and “Out-of-Band Management” became the standard term for the new market space.

Not everyone adopted the new terminology, so we still see console servers being called by the name of the old appliance that connected terminals to a central computer. We can even find terminal servers designed more than 20 years ago still being used to manage console ports.

The current generation of console servers now supports high-density applications (up to 96 ports in a single unit), enables zero touch provisioning, and often features embedded cellular modules to allow access even when the primary network is down. With all of these new features, it seems a shame to still dismiss them as “terminal”!