Is your branch office network out on a limb?

Network infrastructure management in the data centre context is well established and well understood. High-density console servers cable up to the serial consoles of routers, switches and other network equipment across the rack or row, to provide admins with out-of-band access via a local management network.

However, the majority of the world’s networks are outside data centres, and offsite network administrators are discovering the limitations and inadequacies of traditional console servers in managing branch office, retail store and other small, remote networks.

Here are the top six:

Continue reading

Responsive Resilience

It’s been over a year since security guru Bruce Schneier declared that this is the decade of incident response.

The 90s saw the mass internetworking of previously sheltered IT systems and local networks. Firewalls and IP masquerading (SNAT) were installed to “keep the bad guys out”, ushering in the decade of incident protection. From around the turn of the century, in response to increasingly pervasive and sophisticated attacks, firewalls were beefed up with deep packet inspection and intrusion detection capabilities – this was the decade of incident prevention.

Flash forward to the present day. It’s been an article of faith in the open source community that “many eyes” examining freely available source code leads to more secure software. While it has been particularly effective in mitigating nefarious backdoors (whether malicious or well-meaning, one can only imagine the impact of PRISM in a closed source parallel universe), high profile and widespread security bugs such as Heartbleed and more recently DROWN demonstrate that it’s by no means a silver bullet for securing software.

Software, including device firmware, is exceedingly complex; complex software has bugs; and bugs create security holes. The good guys have to find and patch every hole, while the bad guys only have to find and exploit one – they have the upper hand and will always be a step ahead.

The conclusion? Hope for the best, but expect the worst. In the decade of incident response, your network will be compromised – whether by hackers, worms or infrastructure faults and failure. When the clock starts ticking, seconds may mean thousands or hundreds of thousands of dollars in damage, stolen property and lost revenues.

How will you respond?

SmartOOB™ unlocks the IoT potential

The Internet of Things (IoT) is a hot topic that promises much but is still in its infancy. The concept can unlock innovative new services, improved efficiency and the potential for cost reduction in areas like maintenance and support. However, a large portion of the devices that could benefit from this evolution, everything from vending machines to traffic lights, were never deployed with the ability to connect seamlessly through the internet. Worse still, many of the devices that are IoT enabled have little connection resiliency if the primary network becomes unavailable.

Continue reading

Achieving Network Resilience in Retail Operations


On the eve of Retail’s BIG Show 2016, Opengear’s Todd Rychecky sat down for an interview discussing Opengear’s experience working with retailers of all sizes – and why achieving network resilience has become increasingly critical in the industry. Opengear’s Resilience Gateway product line continues to expand (including a new release to be announced at the BIG Show), and the company will be available to offer demos and discuss solutions with new and existing customers at Booth #831.

Continue reading

Hop on the High-Speed Bus

The first general purpose computer, ENIAC (Electronic Numerical Integrator And Computer, circa 1946), was heralded as the “Giant Brain”. It was literally larger than a dozen passenger buses and weighed as much. It was made of tens of thousands of vacuum tubes and relays, hundreds of thousands of resistors and capacitors, and millions of hand-soldered joints. It operated at lightning speed for its day: a whopping 0.1 MHz. Skip forward 67 years, and the average computer is more than ten thousand times faster and one hundred thousand times smaller.

Every computer since then has employed a system known as a “bus” for transferring signals and data, both internally and to peripherals. For most of the past 60 years these computer buses were parallel wires or circuit board traces that could carry hundreds of signals. In the heyday of mainframes and early minicomputers these buses were proprietary, highly guarded designs, specific to particular models or families of computers. It was only in the 1970s-1990s that the proprietary nature of buses was turned on its head, spurred on by the advent of the microprocessor (Intel, Texas Instruments, Motorola, Zilog and others) and the availability of a wide range of general purpose integrated circuits (led by Fairchild).

A number of minicomputer vendors (including Digital Equipment Corporation – now part of HP) started documenting their computer bus architectures (Unibus, Q-Bus, LSI-11 bus). A whole circuit board, or several, could be dedicated to the CPU function, other boards to memory, still others to disk controllers and so on. These boards were often 19”x19” in size and connected by an expansion bus with gold plated fingers that slotted into a multi-connector backplane. Many third parties quickly developed massive add-in cards to supplement the vendors’ selection of peripherals.

As the general purpose microprocessor (8080, Z80, 8086, 68000) effectively replaced proprietary minicomputer CPUs during the 1980s, vendors building those systems immediately released their expansion bus specifications (S100, Multibus I & II, VMEbus 1-10 MHz) and the open-architecture add-on card industry began. However, it was only after the release of the IBM PC in 1981 – IBM’s first open architecture computer, with its ISA bus (5 MHz) – that the add-on card industry (memory, video, network, disk, comms) grew to billions of dollars in that decade.

During the 1990s the popularity and power of the PC architecture (80286/386/486/Pentium, etc.) and follow-on improvements to the ISA bus (EISA and MCA) paved the way for more sophisticated buses that allowed 32-bit operation, higher speeds, multi-processor support, CPU independence and so on. In 1993 Intel released the PCI (Peripheral Component Interconnect) bus, which supported 32/64-bit transfers at 33 and 66 MHz and dominated for a decade.

Post 2000 we’ve seen a dramatic shift in the performance, miniaturization and transformation of computing devices. Most modern CPUs have absorbed discrete functions into a two or three chip chipset or a single System on Chip (SoC), so those old buses make little sense. However, the need to add high-speed peripherals, storage, displays and communication devices still exists. The parallel PCI bus was found wanting, and its physical attributes made it impractical. In 2004 PCI evolved into PCI Express (PCIe), an ultra high-speed serial bus that implemented the nearly 100-pin PCI bus on a handful of wires on a board, or on a cable to an external device. It also introduced the concept of lanes, so that up to 16 channels can be aggregated into one link for 256 Gbps transfers. Most modern systems support at least a one-lane (x1) PCIe interface. Add-on cards are palm-sized or smaller, and some, such as wireless modules, are also available in a mini-card format.
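The lane arithmetic above is easy to sketch. This is our own back-of-the-envelope illustration (not a product specification): per-lane raw rates and line-encoding overheads are taken from the published PCIe generations – Gen1/Gen2 use 8b/10b encoding, Gen3 uses the leaner 128b/130b – and effective per-direction bandwidth simply scales with lane count.

```python
# Effective per-direction PCIe link bandwidth, per generation and lane count.
# Raw rate is in transfers/sec per lane; efficiency is the line-encoding ratio.
PCIE_RATES = {
    1: (2.5e9, 8 / 10),     # Gen1: 2.5 GT/s per lane, 8b/10b encoding
    2: (5.0e9, 8 / 10),     # Gen2: 5.0 GT/s per lane, 8b/10b encoding
    3: (8.0e9, 128 / 130),  # Gen3: 8.0 GT/s per lane, 128b/130b encoding
}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Effective per-direction bandwidth of a PCIe link, in Gbps."""
    raw, efficiency = PCIE_RATES[gen]
    return raw * efficiency * lanes / 1e9
```

So a single Gen1 lane carries about 2 Gbps of payload in each direction, a Gen1 x16 link about 32 Gbps, and a Gen3 x16 link about 126 Gbps each way – which is how aggregate figures in the hundreds of Gbps arise once you count both directions.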

At Opengear our engineers have significant design and business experience that covers the major “open architecture” buses spanning the last 30 years. Many of our products and future products employ these popular buses. That’s the high-speed side covered. An article summarizing some key medium and low-speed serial buses, which are also industry stalwarts, will follow, so “don’t miss the bus”.

Opengear Plunges into the Deep Pacific to Help Researchers Bring Internet to the Ocean Floor

Sixty miles north of the Hawaiian island of Oahu and three miles down to the ocean floor sits the ALOHA Cabled Observatory (ACO). Providing real-time oceanographic data through a retired and donated AT&T HAW-4 submarine fiber-optic cable, ALOHA station is the deepest working observatory of its kind, as well as the deepest power node on earth and the deepest location that’s connected to the Internet (so bring your laptop if you’re SCUBA diving around there). Utilizing Opengear technology to safeguard the continued availability of this unique underwater connection, the station includes a hydrophone and pressure sensor, along with instrumentation for measurement and communication of temperature, salinity, currents, acoustics, and video.

Continue reading

Physical + cyber security in a converged IT + OT world

Enterprises and governments are struggling to maintain their complex IT infrastructure in the face of ramping security pressures and rampant attacks. The Internet of Things (IoT) is set to magnify this complexity, introducing billions of connected devices that sense and control the physical world. The resultant convergence of IT and operational technology (OT) infrastructures will significantly expand the threat landscape.

Continue reading

Out-Of-Band Management Delivers Business Resilience

Network downtime is frustrating and very costly to millions of businesses all over the world. Recent network outages at the NYSE, United Airlines and the Wall Street Journal highlight opportunities where out-of-band systems might have helped mitigate the costs and frustrations of network downtime.

Opengear developed this infographic to help illustrate the issues involved and the potential risks that can be mitigated with a solid out-of-band management strategy:

Continue reading

Smartly connected products are transforming competition

You may have seen the recent Harvard Business Review article in which Michael Porter and James Heppelmann describe How Smart Connected Products Are Transforming Competition. These smart connected products (a.k.a. the Internet of Things) are seen to be unleashing the third wave of IT-driven transformation and a new era of competition. Porter and Heppelmann say that the first two waves (the IT automation of the 1960s/70s followed by the Internet wave of 1980s/90s) radically reshaped competition and strategy, and delivered huge productivity gains and economic growth.

Continue reading

In a cloud-centric world is your “Out-Of-Band” solution up to the task?

Out-of-band (OOB) access to critical infrastructure for reconfiguration or repair was pioneered more than 30 years ago. It began as a DIY solution where engineers used terminal servers, repurposed server computers or routers with serial ports to access their infrastructure. Reverse telnet (later reverse SSH) functionality allowed serial over Ethernet redirection and command line/terminal access to the device console.
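The serial-over-Ethernet redirection described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual interface: the port-mapping convention (serial line N exposed on TCP port base + N) and the helper names are assumptions made for the example, though the pattern is common to many console servers.

```python
import socket

def console_tcp_port(serial_line: int, base: int = 2000) -> int:
    """Map a serial line number to the TCP port a console server exposes it on.

    Assumes the common "base + line number" convention; real console servers
    document their own port-mapping scheme.
    """
    return base + serial_line

def open_console(host: str, serial_line: int, base: int = 2000,
                 timeout: float = 5.0) -> socket.socket:
    """Open a raw TCP connection to the device console behind a serial line."""
    return socket.create_connection(
        (host, console_tcp_port(serial_line, base)), timeout=timeout)
```

In practice the raw TCP stream is wrapped in telnet or SSH for session negotiation, authentication and encryption – exactly the evolution from reverse telnet to reverse SSH noted above.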

Fifteen years ago, OOB experienced a massive transformation resulting from the growth of crammed data closets, machine rooms and sophisticated data centers. Due to the density and wide array of critical IT, networking and power infrastructure, tens, hundreds and thousands of serial consoles needed to be accessed and monitored to keep the corporate IT engine running. To cope with this,

Continue reading