
Blog Archives

SD-WAN and the Enterprise, to boldly go?

With all the fanfare of a JJ Abrams reboot, cloud-managed SD-WAN is being heralded as the next generation of branch connectivity.  With human-friendly management tools and the promise of reliable, warp-speed uplinks at commodity internet prices, tried and tested technologies like MPLS and CLI-managed branch routers are starting to look as retro as 1960s sci-fi.

But beyond the hype, industry commentators and network engineers are questioning whether SD-WAN really is the universal, set-and-forget solution for branch networks.  While an SD-WAN overlay can deliver big savings and simple, flexible network policy control, enterprise networks also need guaranteed performance and availability.  So how bold should you be in adopting SD-WAN in your enterprise?

Preparing for launch

Starting on day one, cloud-managed SD-WAN differentiates itself with ease of install and initial setup.  No longer do you need specialised certifications or on-site expertise to get your WAN up and running.  Anyone who can plug in network and power cables is qualified.

This setup process, however, presupposes that your branch site has at least one operational underlying WAN link, e.g. an Internet connection from your ISP, and that the WAN router or modem is set up to auto-configure the SD-WAN appliance with DHCP and DNS settings.

It may seem like a minor point, but it’s compounded by another headline feature of SD-WAN – the freedom to choose any Internet provider and service on a per-site basis, rather than being contractually committed to a single carrier.  This heterogeneity has the side-effect of introducing complexity into your network, albeit complexity that’s nicely hidden beneath the SD-WAN overlay in normal operation.

Practically speaking, it means that although many of your per-site WAN uplinks may be appropriately configured, there’s a risk some won’t be – and this must be audited site by site, or an alternative path put in place.  Without a known good connection to the cloud on day one, pain-free cloud provisioning is a non-starter.

It’s worse than that – it’s dead, Jim

A third feature of SD-WAN is the separation and centralisation of its control and management planes – this is what enables the data centre-grade, Software Defined aspect of the technology.  However, unlike data centre networks, where the control, management and data planes may run over separate or redundant networks, the traffic used to manage and control the SD-WAN itself runs over the in-band data path – the “production network” in data centre lingo.

Going back to our failure case, when the SD-WAN overlay is down, chances are you’ll have no way to reach the SD-WAN appliance to figure out what’s gone wrong, much less fix it.  You may find SD-WAN’s layered flexibility comes at the expense of accountability.  Is it an ISP, overlay, appliance or user error?  With no out-of-band remote access or trained staff on site to troubleshoot, you’re flying blind.

SD-WAN’s ease of bonding extra WAN circuits into the overlay can help mitigate this risk, provided the circuits are truly diverse.  Take care when choosing multiple ISPs, which may, for example, share backhaul paths or other single points of failure.  In this context, consider utilising LTE as a secondary or tertiary WAN circuit.  It provides resilience against severed cables and perhaps enough bandwidth to sustain overlay comms during primary WAN circuit failure.  Plus, using a dedicated cellular appliance gives you a separate on-prem jumping-off point for reachability, visibility and out-of-band management.

Is resistance futile?

Demand is booming, and analysts predict that the SD-WAN market will grow at around 70% year on year between now and 2021.  That said, enterprises are right to approach SD-WAN with caution.  As a technology it’s still “growing the beard”, and its relative immaturity and complexity may give you cause to hesitate.  However, by carefully considering and mitigating the risks in the context of your business requirements, you may find yourself ready to engage now, or in the not-so-distant future.

 

NFV: The Virtual Reality

Infrastructure and application virtualization has been the modus operandi in the server world for over a decade. With Network Function Virtualization (NFV), it’s poised to become mainstream in the networking world as well.

Traditionally, computer networking functions such as routing, switching and firewalling have been performed by vertically-integrated appliances combining hardware, operating system and the network function “apps” or services themselves. So say you want to protect your branch office network with a firewall: you’d buy a firewall appliance from your firewall appliance vendor and cable it in between the branch LAN and the Internet. Job done.

While simple and robust, this model is starting to show its shortcomings in the context of a rapidly scaling Internet. So your new firewall works great, and now you’re noticing a lot of suspicious blocked connections in the firewall logs – bad guys are trying to breach your network.

From truck rolls to remote roll-out

Turns out the firewall you installed doesn’t support advanced intrusion detection and prevention.  Perhaps it’s an entry-level model where the vendor has omitted the feature, or perhaps it doesn’t have enough onboard storage for the threat signature database. So you buy a new firewall that supports deep packet inspection or perhaps a secondary appliance – either way it’s back out to the branch office to reinstall.

Now what if your job was managing hundreds of branch office LANs all over the country?  At scale, your “simple reinstall” will be slow, expensive and prone to human error.  It’s no wonder much of the push for NFV is by telcos and network service providers with large deployments of CPE on their hands.

Using NFV, the firewalling and IDS/IPS functions are decoupled from the appliance operating system. Instead, the OS is basically a hypervisor under which you can install virtual machines, each VM running a Virtual Network Function (VNF), like firewalling. Rather than being routed through physical cables, VNFs are chained together via virtual interfaces in software.

This means you can now deploy your new IDS/IPS function much in the same way you’d install a software app.  What’s more, the software roll-out can be orchestrated centrally, no need for site visits.
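To make this concrete, here’s a minimal sketch of booting a transient firewall VNF as a VM using the libvirt Python bindings. The domain XML, VM name, disk image path and network are invented placeholders, and real NFV platforms wrap this step in higher-level orchestration (e.g. OpenStack) – treat it as a flavour of the mechanism, not a production recipe.

```python
import libvirt  # libvirt-python bindings

# Minimal domain definition for an illustrative firewall VNF.
# The VM name, disk image path and network are placeholders.
DOMAIN_XML = """
<domain type='kvm'>
  <name>fw-vnf-branch01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/images/fw-vnf.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.createXML(DOMAIN_XML, 0)    # define and boot a transient VM
print(dom.name(), "active:", bool(dom.isActive()))
conn.close()
```

Run a script like this against a hundred branch CPE hypervisors from a central orchestrator, and you’ve replaced a hundred truck rolls with a software roll-out.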

Because network interfaces can also be virtualized, this design lends itself to using commodity hardware, i.e. x86 servers. Relative to specialized network appliances, off-the-shelf servers are cost-effective to overprovision with storage and CPU grunt. General-purpose, future-proof hardware means longer refresh cycles and therefore fewer truck rolls.

Is NFV for you?

If rapid deployment of new network functions is key to your business, NFV is a no-brainer. Your engineering teams can now focus on developing and delivering those services, free from vendor-imposed constraints. However, most organizations following traditional network engineering practices have cause to think twice.

Until NFV matures and its inner workings are nicely wrapped up into platforms, you’re going to struggle without a team of developers fluent in technologies like OpenStack, Linux and Python. Besides, there’s a lot to be said for turnkey functionality and having a “single throat to choke” when things go wrong.

The widespread adoption of network virtualization is inevitable. However, it can only go so far. Recall that layer 1 of the OSI model is the physical layer – fundamentally networking is about physically connecting point A to point B. This means some low-level networking technologies, like cellular radio and out-of-band management, may prove immune to virtualization.

The Cost of Network Resilience

You may have noticed that we tend to bang on about the ROI of our solutions.  But it’s not without good reason – when your alternative is a rude wake-up call followed by a 3am drive to the data center, all while the downtime bill racks up and your customers scream, deploying Opengear remote management makes for a very attractive proposition.

Perhaps not surprisingly, we tend not to focus on the other side of the equation.  The cost of not having network resilience is clear, but what’s the cost to have it?

Running the numbers

The initial capital outlay is easy to calculate, once you’ve worked out how many sites and devices you’re managing.  With our intuitive product interfaces (no certifications needed!) and industry leading, free technical support, it’s fair to say the cost of adopting Opengear is minimal.

Ongoing operational costs take a bit more figuring out.  For starters, there’s your cellular data plan, which can be as little as $5/month/site – a bargain vs. traditional phone lines.

But we’re skirting around the real issue here – when you deploy out-of-band, you’re deploying a secondary infrastructure, and in many cases an entire secondary network.  This means more network endpoints to maintain and more network complexity to manage.

The hidden costs of resilience

When you’re up to your eyeballs managing a production network, the last thing you want on your plate is more work managing your out-of-band network.  (Actually the last thing you want is a remote outage where you haven’t deployed Opengear yet!  Sorry couldn’t resist.)

Manually configuring and installing console servers one-by-one is time consuming and not particularly fun.  Plus it’s the kind of task that tends to get delegated to a relatively inexperienced junior network admin or contracted remote hands, making it all the more prone to human error.

And when network operators need access to a remote router that’s gone dark, are they fully trained to determine the network state, connect to either the Opengear unit’s primary or backup cellular IP address (assuming there’s no carrier NAT), then start a session to the correct serial port?  You’d hope so.  But in reality, ops don’t necessarily know or even care what a console server is – they just need that router console.

Self-managing remote management

We built Lighthouse 5 Central Management to address these very challenges.

Using Lighthouse Console Gateway, you don’t need to know how the router or switch console is cabled up.  You don’t even need to know how the Opengear unit managing it is connected to Lighthouse – be it in-band, behind a firewall, or tunneling over an insecure public network like the cellular WWAN.   Just type the name of the target device using your preferred SSH client or web browser, and hey presto – you’re connected.  Lighthouse manages the underlying complexity so you don’t have to.

Zero Touch Enrollment extends Opengear’s class-leading Zero Touch Provisioning capabilities, enabling Opengear units to be automatically and completely configured, and securely connected to Lighthouse within minutes.  This redesigned enrollment process even integrates with your AAA servers to automate user permissions management, matching metadata tags to automatically grant access to authorized users.

Finally, with Lighthouse 5’s open, RESTful API you can integrate out-of-band access directly into your monitoring, trouble ticketing and collaboration tools – “there’s a problem with this switch, click here to fix it” – so ops are more efficient than ever.

So while the cost of network resilience can be non-trivial, in Lighthouse 5 we’ve designed a solution that allows you to equip your network for maximum resilience with maximum ROI.  And after all, what price do you put on a good night’s sleep?

REST easy with our new API

A Representational State Transfer (REST) or RESTful API uses standard web methods (e.g. HTTP GET/POST for CRUD operations), conventions (e.g. URIs to identify resources, HTTP response codes as return codes, cookies for session authentication tokens) and data structures (JSON, XML) to allow applications to operate or interoperate programmatically over the network.
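For instance, reading and creating resources boils down to a couple of HTTP calls. Here’s a minimal sketch using Python’s popular requests library against the public httpbin.org test service – the endpoints are purely illustrative:

```python
import requests

# Read (GET) a resource; the response body is JSON
resp = requests.get("https://httpbin.org/get", params={"q": "demo"})
print(resp.status_code)      # the HTTP response code doubles as the return code
print(resp.json()["args"])   # {'q': 'demo'}

# Create (POST) a resource by sending a JSON document
resp = requests.post("https://httpbin.org/post", json={"name": "new-item"})
print(resp.status_code)
```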

During the previous decade, RESTful APIs became the de facto standard for providing web services, e.g. social media integration, weather and map data for mobile apps. The advantage of these kinds of web APIs is that applications remain loosely coupled, with distributed components communicating via the well understood, firewall friendly lingua franca of HTTP(S).

This decade, cloud computing has leveraged RESTful APIs to manage virtual infrastructure, e.g. create, reboot or destroy virtual machines in Amazon EC2, allowing VMs to be controlled programmatically through common scripting languages such as Python, Ruby and Perl – ubiquitous in web programming and well equipped with supporting libraries. This has helped solve the problem of managing a dynamic estate of infrastructure at scale, and given rise to the DevOps movement.

Today, developer-friendly hardware vendors are starting to integrate RESTful APIs into their physical infrastructure. In that spirit, Opengear’s firmware version 4.0 and the upcoming, all-new Lighthouse version 5 incorporate the initial release of our very own RESTful API!

Opengear’s RESTful API serves two main purposes:

First, the API forms the primary interface by which Lighthouse 5 accomplishes central management, such as node enrollment and config synchronization. API engines run on both Lighthouse and Opengear’s manager, gateway and console server appliances, enabling north- and southbound communications and operations.

Second, our users can leverage these open, documented APIs to easily extend and integrate Opengear and out-of-band management features into their own applications and systems, including network device provisioning and management, orchestration and automation, monitoring and telemetry, inventory management and auditing, validation and burn-in testing – and most exciting of all – those new applications no-one’s thought of yet!
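To give a flavour of that second purpose, here’s a sketch of polling a central management server for enrolled nodes from a script. Be warned: the endpoint path, token scheme and response fields below are invented for illustration – consult the published API documentation for the real paths and schemas.

```python
import requests

LIGHTHOUSE = "https://lighthouse.example.com"  # illustrative address
TOKEN = "my-session-token"                     # illustrative credential

# NOTE: /api/v1/nodes is a hypothetical endpoint used for illustration only.
resp = requests.get(
    f"{LIGHTHOUSE}/api/v1/nodes",
    headers={"Authorization": f"Token {TOKEN}"},
)
resp.raise_for_status()
for node in resp.json().get("nodes", []):
    print(node.get("name"), node.get("status"))
```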

Over the coming months, we’ll be working hard to broaden the functionality of both our Lighthouse and appliance APIs to build a true first-order interface. Our goal is simple – if you can do it using the CLI or browser GUI, you can do it using the API.

In the meantime, we invite you to upgrade to firmware 4.0 now and get in touch for a free trial of Lighthouse 5.  How would you like to take advantage of our new RESTful API?  Let us know!

What’s in a port number?

What’s in a number? A port by any other number would secure shell as sweet.

Oh, the glorious, halcyon days of the Internet circa 1995!  Pre-browser wars, a nascent WWW under construction by an army of animated digging men, Usenet buzzing with spirited but netiquette-proper debate, and… Telnet.  Okay, perhaps the passage of time has tinted my glasses a bit rosy.

Nevertheless, Tatu Ylonen’s post on how his brand new Secure Shell protocol came to be assigned port 22 does make for nostalgic reading.  The tale is short and sweet (spoiler alert): having written SSH as a secure replacement for Telnet (port 23) and FTP (port 21), Tatu saw that the port between those two was unclaimed. So he asked IANA, and had it allocated – the very next day no less.

SSH in reverse

Opengear products make extensive use of SSH to serve secure remote serial and USB port access over the network.  This common console server feature is sometimes called “reverse SSH”, after the venerable “reverse Telnet” feature of the early serial terminal servers.

Why “reverse”?  As Opengear CTO Marcio Saito explains, terminal servers were originally designed to connect serial-only dumb terminals out to network servers via Telnet.  As dumb terminals went the way of the dodo, users reversed the application – allowing clients to connect in from the network to the serial console ports of servers and other equipment.

Getting straight to the endpoint

While you can use TCP port 22 to access the Opengear CLI and the consoles of its serially-connected devices via the portmanager chooser menu, it’s often convenient to SSH directly to a console with minimal interaction with the intermediary console server.

One way to accomplish this is by connecting via an SSH high port.  To calculate the high port, take the base TCP port for the SSH console service (3000 by default) and add the Opengear serial port number that the console is connected to (say, port 6).  Then use your SSH client to connect to TCP port 3006, rather than the usual TCP port 22.  Voilà, direct reverse SSH to port 6.
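In code terms, the arithmetic is trivial (a throwaway sketch, with the console server hostname and username invented for the example):

```python
import subprocess

BASE_PORT = 3000  # default base TCP port for the SSH console service

def high_port(serial_port: int, base: int = BASE_PORT) -> int:
    """Return the direct-access TCP port for a given serial port."""
    return base + serial_port

# Direct reverse SSH to the console on serial port 6 of "cs01.example.com"
subprocess.run(["ssh", "-p", str(high_port(6)), "admin@cs01.example.com"])
```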

Scale it out with DNS

But what if you want direct access to large numbers of consoles, like 96 consoles via a single CM7196A console server? Or hundreds or thousands of consoles via a fleet of console servers in a lab or data center?  Memorizing hundreds of magic IP and high port number combinations is hardly convenient.

Thankfully, DNS can help.  Opengear products have a useful feature that lets you assign a unique IP address to an individual console port.  Say we have a managed switch, switch01, with its serial console connected to Opengear port 6.  Assign 192.168.1.206 to serial port 6, SSH to 192.168.1.206 on the usual TCP port 22 and you’re connected to switch01’s console.

Now to make this setup really shine, create an easy-to-remember name for 192.168.1.206 in your corporate DNS server.  A useful convention is to base the out-of-band DNS name on the device’s regular, in-band DNS name.  For example, if switch01’s main address is switch01.mgmt.dc.corp.com, resolve switch01-con.mgmt.dc.corp.com to 192.168.1.206.
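A throwaway helper makes the convention concrete (a sketch – the -con suffix is just the convention suggested above, and the output is a BIND-style A record):

```python
def oob_name(fqdn: str, suffix: str = "-con") -> str:
    """Derive an out-of-band console name from an in-band FQDN."""
    host, _, domain = fqdn.partition(".")
    return f"{host}{suffix}.{domain}"

# Emit an A record mapping the console name to the serial port's address
print(f"{oob_name('switch01.mgmt.dc.corp.com')}. IN A 192.168.1.206")
# -> switch01-con.mgmt.dc.corp.com. IN A 192.168.1.206
```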

Now when you need an out-of-band console to that troublesome switch, just add -con to its hostname and SSH in.  Sweet!


VPN: All Eyes on Privacy

ISP mandatory data retention laws rolling out worldwide are a reminder that your Internet communications are not just being watched, in many cases they’re being logged and stored.

Affected citizens are uneasy not just with government surveillance but with the magnitude of the attack surface – the prospect of a data breach that sees intimate details of their private lives and personal habits, as revealed by their browsing history, bought and sold on the Dark Web.

It’s no wonder the personal VPN (Virtual Private Network) provider industry is booming.

While VPN may have caught the gaze of the public eye relatively recently, it’s been a staple of the corporate world for over 20 years. In the 1990s, VPN allowed businesses to do away with expensive leased lines, and securely interconnect geographically distributed office LANs over the public Internet. As broadband became the norm, VPN displaced dial-in RAS servers for providing remote workers and travelling “road warriors” with secure remote access to the corporate network from their homes or hotel rooms.

Similarly, Opengear products are purpose built to provide secure remote access, often over public networks such as the cellular WWAN, so VPN is a key part of our solution. In fact, a good number of Opengear’s engineering and management team have backgrounds building enterprise-grade VPN appliances, so it’s a natural fit in more ways than one.

So what should you look for in a VPN?

Strong cryptography: As computing power increases and costs decrease, older ciphers like RC4 and shorter encryption keys like 1024-bit RSA become susceptible to brute-force cracking. Protocols that support perfect forward secrecy, like IPsec and OpenVPN, help protect sessions logged today against a future compromise of encryption keys. Full details of our products’ services, protocols, ciphers & hashes can be found in this knowledge base article (and a quick DIY check appears after this list).

Open algorithms and implementations: It may seem counterintuitive, but public code is the safest bet for securing private communications. A security algorithm must stand on the strength of its mathematics, not the ability of its creators to obscure its inner workings. Open, peer-reviewed source code means bugs and backdoors can be identified, disclosed and fixed swiftly and responsibly. These days, proprietary security libraries have become relatively uncommon.

A responsive vendor: Since the PRISM disclosure, the many eyes of the security community have refocused on the open source building blocks that underpin online banking, messaging, remote access and information security generally. Significant vulnerabilities are published regularly in the form of CVE (Common Vulnerabilities and Exposures) reports. The onus is on vendors to disclose vulnerabilities and expedite mitigation instructions and patches, but also on users to keep device firmware updated – so sign up for Opengear security notifications via email or RSS.
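As promised above, here’s a quick way to see what a given TLS endpoint actually negotiates, using nothing but Python’s standard ssl module (a sketch – substitute your own host):

```python
import socket
import ssl

HOST = "www.example.com"  # substitute the endpoint you want to inspect

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # e.g. TLSv1.3 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
        print(tls.version(), tls.cipher())
```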

If you’d like to know more, our knowledge base has further reading on using VPN to help secure your Opengear deployment.

NetDevOps, why not?

In 2016, DevOps is no longer a mysterious, edgy new concept – it’s a well-established set of principles and practices driving distributed IT applications and systems development and delivery, from startups to Fortune 500 enterprise.

DevOps extends the tools and processes used in Agile development, QA automation and continuous integration, applying them down the stack to the IT systems responsible for application delivery.  Contrast this with traditional IT development, where programmers, testers and sysadmins operate in siloed teams – a model that has proven inefficient and error-prone at Internet scale and Internet pace.

DevOps: Before & After


Infrastructure as code

This shift has been made possible by two key technology trends that have transformed IT over the last ten years: virtualization (e.g. VMware, libvirt) and Linux-based open source development tools (e.g. git, Python & Ruby, Jenkins).

Rather than being thought of as individually maintained physical boxes, virtualized servers are disposable “instances” that can be automatically created, provisioned, updated and destroyed in response to load and changing business needs.  System configuration is abstracted into high-level descriptions and templates that can be updated, tested, deployed and audited using the same revision control systems and workflow used to develop the applications they’re serving.

Barriers to adoption

At the next layer down the stack lies the network.

Historically, network engineering has been a highly specialized world, with vendor-specific tools, appliances, OS environments and certifications.  While the same challenges of scale addressed by DevOps do exist, the preconditions haven’t been as conducive to rapid adoption of DevOps-like practices for the network (aka NetDevOps).

By contrast, network management practices may seem archaic: router and switch boxes are individually configured by typing console commands or pasting in configuration files (which may have been automatically generated, but by in-house or specialized systems rather than DevOps-centric configuration orchestration systems such as Puppet, Chef, Ansible and SaltStack).  And while RANCID has proven a valuable tool for post-hoc configuration revision control, it’s a far cry from a git workflow where configuration change requests can be automatically tested, merged and pushed to production.
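As a taste of the workflow NetDevOps is reaching for, here’s a minimal sketch that pushes a configuration change programmatically using the open source netmiko library – our illustrative choice, with the hostname and credentials invented for the example:

```python
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",      # netmiko's driver name for IOS devices
    "host": "switch01.example.com",  # invented hostname for illustration
    "username": "admin",
    "password": "secret",            # in practice, pull this from a vault
}

conn = ConnectHandler(**switch)      # SSH to the device
output = conn.send_config_set([      # enter config mode and apply commands
    "interface GigabitEthernet0/1",
    "description uplink-to-core",
])
print(output)
conn.disconnect()
```

Put scripts like this under revision control with automated testing, and configuration changes start to look like code reviews rather than console sessions.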

The culture of specialization also means that most network and security teams continue to reside behind ticket walls, with responsibilities that begin and end with their network SLAs.  Network engineers are less likely than their IT and sysadmin counterparts to be familiar with the “well known” programming tools that make up the DevOps stack.  And network engineers are right to be wary of seismic shifts in controlling a system where a one-character typo in a netmask can bring down their entire data center.

Therefore the adoption of NetDevOps, while in our view inevitable, is coming at a more considered pace – as the DevOps movement continues to mature, it will evolve to embrace network engineers as much as vice versa.  At Opengear, we recognize the enormous benefits of NetDevOps adoption, as outlined by Ken Wilson in his presentation at AusNOG 2016, but equally recognize the considerable challenges, and continue to support our users to ease the transition.

The USP of the USB Console

Previously, I’ve examined why RS-232 is still king of the out-of-band management console.  As Opengear launches its latest USB console server solution, let’s revisit some of the pros and cons of using USB vs. RS-232 to manage your network device consoles.

Plug and work

RJ45 or DB9? DCE or DTE? Yost or custom? Straight or rollover? – said no USB console user ever.  RS-232 wiring, especially over the ubiquitous RJ45/8P8C console connector, comes in a staggering (and often frustrating) variety of pinouts.  And if you find yourself without the correct adapter or cable, it’s time to pull out the crimping tool.

USB connectors are limited to just a few standardized Types, Mini B being common for consoles.  Opengear’s USB Type A ports make cabling a console just like plugging it into a laptop.

What’s more, USB consoles typically auto-baud, so go ahead and configure that console for 9600, 115200 or 230k – chances are it’ll Just Work.

Driven to tears

As is often the case, the price of hardware simplicity is software complexity.  Not all USB consoles are created equal, and communication requires a compatible USB TTY driver on the device you’re connecting from.

While the vast majority of network vendors’ consoles utilize UARTs from just a few chipset vendors (e.g. FTDI, Silicon Labs), if you’ve ever had to hand-hack a custom .kext to get your MacBook talking to that USB-to-serial adapter, you’ll know the frustration when drivers Don’t Just Work.

Our USB console server solution handles this complexity automatically, so Opengear users can connect and manage USB consoles from the widest range of vendors completely oblivious to driver requirements – the way it should be.

Run cable run

USB is limited to 5m/16’5″ for self-powered devices, whereas RS-232 stretches as far as 150m/500′ at 9600 baud – so USB is better suited to in-cabinet connections than to extended runs around a row, room or building.

The best is the one you have

Your decision about whether to use RS-232 or USB may be dictated by whatever the device has.  USB is more common on consumer grade and branch office equipment, while enterprise grade and rack mount infrastructure will be more likely to offer RS-232 – and many devices have both.

And of course your Opengear doesn’t have infinite ports either (we’ll let you know when we’ve invented that!), so you’ve got to consider both port supply and demand.

Now may be a good time to mention: if you have an IM7200 or ACM7000-series unit, upgrading to the new 3.16.5 firmware release unlocks the full suite of new USB console server features – including the ability to use all four of its USB ports for console connections, potentially doubling the port count!

Is your branch office network out on a limb?

Network infrastructure management in the data centre context is well established and well understood. High-density console servers cable up to the serial consoles of routers, switches and other network equipment across the rack or row, to provide admins with out-of-band access via a local management network.

However, the majority of the world’s networks are outside data centres, and offsite network administrators are discovering the limitations and inadequacies of traditional console servers in managing branch office, retail store and other small, remote networks.

Here are the top six:

(more…)

Responsive Resilience

It’s been over a year since security guru Bruce Schneier declared that this is the decade of incident response.

The 90s saw the mass internetworking of previously sheltered IT systems and local networks. Firewalls and IP masquerading (SNAT) were installed to “keep the bad guys out”, ushering in the decade of incident protection. From around the turn of the century, in response to increasingly pervasive and sophisticated attacks, firewalls were beefed up with deep packet inspection and intrusion detection capabilities – this was the decade of incident prevention.

Flash forward to the present day.  It’s been an article of faith in the open source community that “many eyes” examining freely available source code leads to more secure software. While this has been effective, particularly in mitigating nefarious backdoors (whether malicious or well-meaning – one can only imagine the impact of PRISM in a closed source parallel universe), high-profile and widespread security bugs such as Heartbleed and more recently DROWN demonstrate that it’s by no means a silver bullet for securing software.

Software, including device firmware, is exceedingly complex; complex software has bugs; bugs create security holes. The good guys have to find and patch every hole, while the bad guys only have to find and exploit one – they have the upper hand and will always be a step ahead.

The conclusion? Hope for the best, but expect the worst. In the decade of incident response, your network will be compromised – whether by hackers, worms or infrastructure faults and failure. When the clock starts ticking, seconds may mean thousands or hundreds of thousands of dollars in damage, stolen property and lost revenues.

How will you respond?

Online Demo Tour: Vendor-Neutral Power Control

Welcome to the second instalment in this series of articles about our online virtual demo, where we’ll start to dig deeper into Opengear’s advanced features.  In part 1, we explored the various ways of reaching its virtual serial ports to try out out-of-band console management.

Since then, we’ve updated the online demo to our latest firmware release (3.12), which brings with it a host of new features.  One of these is our all-new Manage Devices UI, which allows you to take advantage of the advanced capabilities of Opengear’s vendor-neutral power management subsystem via a simple-to-configure, easy-to-use browser interface.

What is vendor-neutral power management?

Vendor-neutral power management simply means your Opengear appliance has built-in capabilities to monitor and control power devices (such as switched PDUs, UPSes and IPMI-enabled servers) from over 100 vendors, including Eaton, Raritan, APC and Server Technology, via serial, network (directly or routed) or USB.

(more…)

Locked Out of the Back Door

Last week, HP announced that running a Heartbleed vulnerability scan against the iLO and iLO2 lights-out management interfaces found in older blade and rack-mount servers can cause them to completely lock up. Since the iLO itself controls power to the server, the only way to restore management access is to (more…)