Blog Archives

Any (Console) Port in a Storm

Storms. Floods. Heatwaves. System outages can be caused by a natural disaster, general hardware failure or any number of environmental conditions. When the system fails, how do you access your network equipment? Maybe it’s not just via the serial console port anymore.

Our reliance on network infrastructure continues to grow, especially with the acceleration of Big Data and IoT. This demands even denser infrastructure as more data traverses networks, with much of the processing occurring at the distributed edge. Without an Out-of-band (OOB) management system, even small problems can cause havoc, leading to significant downtime, lost revenue and brand damage.

Any number of problems can start the countdown to an outage storm. Cable interconnects (copper or fiber), power supplies, switches, dense compute chassis, storage arrays and air-conditioning are just a few potential sources of pain. Most of these network elements are increasing in complexity, running sophisticated software stacks, and are susceptible to bugs, exploits, cyber-attacks and environmental issues.

Out-of-band management appliances such as terminal and console servers have helped network designers and administrators avert disasters for more than 25 years, giving access to a node’s serial (RS232) console port to get it operational again or perform a firmware upgrade.

But access to system console ports has changed in the last decade, and the humble serial console port is no longer the only option. More systems and devices are shipping with USB and Ethernet management ports. This shift was fueled by the disappearance of serial ports from the laptops that were often used as onsite crash carts. With USB ports now reigning supreme, many switches and routers offer a USB console, a serial console, or both.

Sophisticated, software-defined network devices may now employ separate control-plane elements (an x86 server blade with an Ethernet MGT port, and possibly a motherboard or lights-out controller) and data-plane elements (ASICs and ARM SoCs with a serial or USB console). Each element often exposes a different management port type, yet all sit within the same chassis.

It’s now more challenging for network designers, installers and maintainers to anticipate the mix of console port types, and to choose a management appliance, hubs and switches to suit every environment. Systems are now sourced from a range of traditional full-service hardware vendors and white-box vendors (branded or ODM), so relying on every device having a serial console port is risky. But adding extra USB hubs and Ethernet switches to the management infrastructure is not ideal either, since it introduces more points of failure.

To protect installations from outage storms and eliminate the chance of new devices having a console port type that you didn’t anticipate, Opengear has released a range of mixed console port infrastructure managers and resilience gateways. They provide both serial and USB console support (IM7216-24U) or serial and Ethernet console support (IM7216-24E), or in some cases, all three of them (ACM7004-5).

These models eliminate the need for extra management switches and USB hubs in medium- and high-density applications, saving space and power; and because the support is integrated, they add an extra level of reliability.

With the widening variety of console and device management ports on new network, storage, compute and power devices, whenever the storm hits, Opengear has the port you need.

Hop on the High-Speed Bus

The first general-purpose computer, known as ENIAC (Electronic Numerical Integrator And Computer, circa 1946), was heralded as the “Giant Brain”. It was larger than a dozen passenger buses and weighed as much. It was made of tens of thousands of vacuum tubes and relays, hundreds of thousands of resistors and capacitors, and millions of hand-soldered joints. It operated at what was then lightning speed: a whopping 0.1 MHz. Skip forward 67 years and the average computer is more than ten thousand times faster and one hundred thousand times smaller.

Every computer since then has employed a system known as a “bus” for transferring signals and data both internally and to peripherals. For the past 60 years these computer buses were mainly parallel wires or circuit-board traces that could carry hundreds of signals. In the heyday of mainframes and early minicomputers these buses were proprietary, highly guarded designs, specific to particular models or families of computers. It was only in the 1970s–1990s that the proprietary nature of buses was turned on its head, spurred on by the advent of the microprocessor (from Intel, Texas Instruments, Motorola, Zilog and others) and the availability of a wide range of general-purpose integrated circuits (led by Fairchild).

A number of minicomputer vendors (including Digital Equipment Corporation, now part of HP) started documenting their computer bus architectures (Unibus, Q-Bus, LSI-11 bus). A whole circuit board, or several, could be dedicated to the CPU function, other boards to memory, still others to disk controllers, and so on. These large boards connected to an expansion bus via gold-plated fingers that slotted into a multi-connector backplane, and many third parties quickly developed massive add-in cards to supplement the vendors’ selection of peripherals.

As general-purpose microprocessors (8080, Z80, 8086, 68000) effectively replaced proprietary minicomputer CPUs during the 1980s, the vendors building those systems released their expansion bus specifications (S100, Multibus I & II, VMEbus; 1–10 MHz) and the open-architecture add-on card industry began. However, it was only after the 1981 release of the IBM PC, IBM’s first open-architecture computer with its ISA bus (4.77 MHz), that the add-on card industry (memory, video, network, disk, comms) grew to billions of dollars in that decade.

During the 1990s the popularity and power of the PC architecture (80286/386/486/Pentium, etc.) and the successors to the ISA bus (EISA and MCA) paved the way for more sophisticated buses allowing 32-bit operation, higher speeds, multi-processor support, CPU independence and so on. In 1993 Intel released the PCI (Peripheral Component Interconnect) bus, which supported 32/64-bit transfers at 33 and 66 MHz and dominated for a decade.

Post 2000 we’ve seen a dramatic shift in the performance, miniaturization and transformation of computing devices. Most modern CPUs have absorbed discrete functions into a two- or three-chip set or a single System on Chip (SoC), so those old buses make little sense. However, the need to add high-speed peripherals, storage, displays and communication devices still exists, and here the PCI bus was found wanting: its physical attributes made it impractical. In 2004 the PCI bus evolved into PCI Express (PCIe), an ultra high-speed serial bus that carries the function of the roughly 100-signal parallel PCI bus on a handful of wires, whether board traces or a cable to an external device. It also introduced the concept of lanes: up to 16 can be aggregated into one link. The original 2.5 GT/s signaling gave a 16-lane link about 32 Gbps in each direction, and successive generations have roughly doubled that each time, so a modern 16-lane link reaches on the order of 256 Gbps. Most modern systems support at least a one-lane PCIe interface. Add-on cards are palm-sized or smaller, and some, such as wireless modules, come in a mini-card format.
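The lane arithmetic above can be sketched in a few lines of Python. The per-generation signaling rates and line codes are the published PCIe values; the table and function names are just for illustration:

```python
# Per-lane signaling rate (GT/s) and line-code efficiency for PCIe generations.
PCIE_GENS = {
    1: (2.5, 8 / 10),     # 8b/10b encoding: 20% overhead
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding: ~1.5% overhead
    4: (16.0, 128 / 130),
}

def link_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Usable bandwidth of a PCIe link in Gbps, per direction."""
    rate_gt_s, efficiency = PCIE_GENS[gen]
    return rate_gt_s * efficiency * lanes
```

A Gen 1 x16 link works out to 32 Gbps (4 GB/s) per direction, and a Gen 4 x16 link to roughly 252 Gbps, which is where the "on the order of 256 Gbps" figure comes from.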

At Opengear our engineers have significant design and business experience that covers the major “open architecture” buses spanning the last 30 years. Many of our products and future products employ these popular buses. That’s the high-speed side covered. An article summarizing some key medium and low-speed serial buses, which are also industry stalwarts, will follow, so “don’t miss the bus”.

In a cloud-centric world is your “Out-Of-Band” solution up to the task?

Out-of-band (OOB) access to critical infrastructure for reconfiguration or repair was pioneered more than 30 years ago. It began as a DIY solution, with engineers using terminal servers, repurposed server computers or routers with serial ports to access their infrastructure. Reverse telnet (and later reverse SSH) functionality allowed serial-over-Ethernet redirection, giving command line/terminal access to the device console.
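At its core, that reverse telnet style redirection is just copying bytes both ways between a console file descriptor and a TCP client. Here is a minimal sketch (the function names are our own, and a `socket.socketpair()` stands in for a real `/dev/ttyS0` so it runs without hardware; a production redirector would add authentication and SSH/TLS):

```python
import os
import selectors
import socket
import threading

def bridge(console_fd: int, conn: socket.socket) -> None:
    """Copy bytes both ways between a console file descriptor and a TCP
    client -- the heart of reverse telnet serial-over-Ethernet access."""
    sel = selectors.DefaultSelector()
    sel.register(console_fd, selectors.EVENT_READ, "console")
    sel.register(conn, selectors.EVENT_READ, "network")
    while True:
        for key, _ in sel.select():
            if key.data == "console":
                data = os.read(console_fd, 4096)   # console -> network
                if not data:
                    return
                conn.sendall(data)
            else:
                data = conn.recv(4096)             # network -> console
                if not data:
                    return
                os.write(console_fd, data)

def serve_one(console_fd: int) -> int:
    """Listen on an ephemeral local port, bridge a single client to the
    console, and return the chosen port number."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def run() -> None:
        conn, _ = srv.accept()
        with conn:
            bridge(console_fd, conn)
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port
```

With real hardware you would pass the fd of an opened serial device (after the usual termios setup) instead of one end of a socketpair.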

Fifteen years ago, OOB experienced a massive transformation driven by the growth of crammed data closets, machine rooms and sophisticated data centers. Given the density and wide array of critical IT, networking and power infrastructure, tens, hundreds, even thousands of serial consoles needed to be accessed and monitored to keep the corporate IT engine running. To cope with this, (more…)

Channel surfing with 4G mobile broadband – a certification nightmare

LTE (Long Term Evolution of GSM/UMTS), marketed as 4G, emerged as the winner of the broadband cellular wars following the initial release of 3G (3GPP UMTS and 3GPP2 EV-DO). During the past five years a plethora of technologies filled the void between 3G and the promised 4G, with titles such as 3.5G (HSPA, EV-DO Rev. A), 3.75G (HSPA+, EV-DO Rev. B) and 3.9G (DC-HSPA+, EV-DO Rev. C).

Although claimed as an evolution of GSM/UMTS, LTE (3GPP Rel 8 or later) is incompatible with existing 2G and 3G wireless interfaces and requires separate spectrum. So why go with a new standard and new frequency bands that can’t slot into the existing North American 800–850/1900 MHz bands? Well, LTE is completely different from GSM (TDMA), CDMA, or even WCDMA, and it boasts (more…)
