Get off my lawn
RS-232 serial is a bit like the old vet Walt Kowalski, Clint Eastwood’s character in Gran Torino. On one hand he’s a little old-fashioned and set in his ways, but he stands his ground and you know he’s got your back when there’s trouble.
Up until the mid-90s, RS-232 was commonplace in everyday computing: it was how mice connected to PCs and how PCs connected to the Internet via dial-up modem. Looking back even further to the 60s and 70s, RS-232 was how green/amber-screen terminals around offices and computer labs connected to the central mainframe or minicomputer (the original cloud computing, in a sense) – in fact the origins of ASCII text-based data comms over RS-232 go back to teletype machines from the start of the last century. Over time, RS-232 has been displaced by USB and other interfaces for these common applications.
However, it’s still going strong in the networking world in the form of the out-of-band management or console port. It’s interesting to think that the state-of-the-art infrastructure that makes up the fabric of the Internet still uses (and during outages may utterly rely on) an old codger like RS-232.
This is because the console provides low-level CLI access independent of network connectivity. This gives ops a convenient interface for initial provisioning (getting infrastructure connected to the network in the first instance), routine maintenance (making changes that may affect connectivity) and fault finding and repair (when it’s dropped off the network). RS-232 has got your back.
While USB can and does provide similar connectivity, the plug’n’play simplicity of the protocol that made it so popular actually puts it at a bit of a disadvantage by increasing the hidden complexity. Using RS-232, both ends must be manually wired and configured before the hardware UARTs will talk, and once it’s set up there’s not a lot to go wrong.
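The handful of settings both ends must agree on (baud rate, data bits, parity, stop bits) also determine the link's effective throughput. As an illustrative sketch (the helper function is ours, not from any library), here is how the framing overhead works out for the classic console settings:

```python
def chars_per_second(baud, data_bits=8, parity=False, stop_bits=1):
    """Effective character rate of an async serial link.

    Each character is framed as: 1 start bit + data bits +
    an optional parity bit + stop bit(s), so 8N1 costs 10 bits/char.
    """
    frame_bits = 1 + data_bits + (1 if parity else 0) + stop_bits
    return baud / frame_bits

# The classic console default, 9600 baud 8N1, moves 960 characters/second.
print(chars_per_second(9600))              # 960.0
# 7E1 (7 data bits, even parity, 1 stop bit) also frames 10 bits per char.
print(chars_per_second(9600, 7, True, 1))  # 960.0
```

If the two ends disagree on any of these parameters, the UARTs frame the bit stream differently and you get the familiar console garbage – which is also why, once they do match, there is so little left to go wrong.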
Like Mr Kowalski, RS-232 has its idiosyncrasies (or was that idio-async-rasies?).
Each year for the past six years I have felt compelled to return to the Large Installation System Administration (LISA) conference put on by USENIX to meet with some of the smartest Sys Admins in the world and get reacquainted with my inner geek. One must be careful, though, as LISA can be a humbling experience. I think I am an Opengear expert until I talk to these guys – they know more than mere mortals could with only half their brain, and can type faster with both hands tied behind their back.
Without question, this show has the most innovative giveaways and best tee-shirts of any tradeshow I have been to (and you don’t need a scan to get one). I got a Dropbox shirt at this show a few years ago before people even knew what it was. Sys Admins love great tee-shirts and so do I! There was incredible energy and excitement again this year. Hot topics at the show were how to build a cloud, big data, data storage and the convergence of network, compute and data. The Twitter IPO contributed to that buzz on the last day of the show as it opened at $45/share.
The LISA crowd expects Opengear to show new innovations every year and this year we did not disappoint. 3G/4G cellular out-of-band management was a huge hit, I heard “man, that is so cool” at least two dozen times. Environmental sensors were also of interest to many of the attendees this year – they asked about IP cameras in particular.
There is nothing better for an exhibitor when an attendee walks up to your booth and says “tell me why I should buy your product” and another attendee will overhear it and actually step in and tell him. Awesome!
Best question at the booth? How do you transmit data from the bottom of the ocean? Heck if I know (you have to be honest as you can’t make something up and fool these guys) but I do know someone who would know that lives in Hawaii and is a great Sys Admin in his own right. See this Infoworld Article.
Next year the show is in Seattle, WA and we plan on being there. I love having the opportunity to speak with Netflix about how the IM4200 is the best investment the company could ever make, with Harvard University about the IM7200 with fiber for their lab, with Thomson Reuters about the need for console management of their servers in the data center, and with DOD/NIST, who need a FIPS 140-2 certified cellular OOBM solution for their remote offices.
Now it’s back to work and the final stretch run for 2013.
This was another great Cisco event, and the first time in South America for Opengear. The Moon Palace Resort in Cancun was very welcoming, with everyone happily enjoying the local Caribbean hospitality. There was lots to see and do for all audiences. Spanish was for the most part optional, but definitely useful. The locals at the show and in the resort areas were more than happy to speak English; it was only when going into town that it became more difficult.
Most who stopped by had never really heard of Opengear, which was the main reason for being at the show. South America is an area that hasn’t yet had much Opengear exposure. We do some business there, but events like this only add to our success. It was evident that everyone throughout the world can benefit from Opengear’s out-of-band management solutions. It was exciting to see the reactions on many of the faces at the Opengear booth when they finally found the solution to their problem. Some were literally in awe as we described our products, such as the new IM7200, the ACM5000 and the Swiss Army knife of OOBM, the ACM5504-5. One potential customer said, “Really – cellular, flash storage, VPN, tftp-server, etc…” – all for under $700 USD.
If you have never been to Cancun, Mexico, it is one place you have to visit. There is no better time than during Cisco Live Cancun #clmex – please join us in 2014. You will not be disappointed… Mexicans definitely know how to have a good time.
As Arnold says, “I’ll be back…”
Comparing and calculating pricing to justify the purchase of a product is typically a matter of looking at the purchase price and offsetting this against the costs you are likely to incur if you do not invest in the solution under consideration. For any business with earnings reliant on networks for inter-office and cloud connectivity, there are considerable potential costs associated with not installing remote management.
IT systems are now just as likely to be in the cloud as the back office, which makes business increasingly reliant on 24/7 network connectivity. With IT teams now centralised offsite, the time and cost of network repair has also increased. These factors combined mean the cost of an outage has skyrocketed. When there is an outage and the network fails, the business has both call-out costs and downtime costs to consider. Analyst firm Gartner estimates that businesses suffer 87 hours of downtime each year, while the IT Process Institute estimates the average MTTR (mean time to repair) of an unplanned outage is 200 minutes.
You can work out the cost of an outage to your business by multiplying together the MTTR, your business’s revenue rate, and a “severity factor” – the percentage impact on revenue generation, e.g. 100% for a total network outage during business hours, affecting all systems and staff. Out-of-band network access enables remote repair of the network within seconds of the incident, obviating inefficient “break-fix” call outs and reducing the critical MTTR factor, and therefore overall cost of an outage.
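As a back-of-envelope sketch of that calculation (the function name and the $10,000/hour revenue rate are illustrative assumptions, not figures from the analysts cited above):

```python
def outage_cost(mttr_minutes, hourly_revenue, severity):
    """Estimated cost of a single outage.

    mttr_minutes   : mean time to repair, in minutes
    hourly_revenue : the business's revenue rate, in $/hour
    severity       : fraction of revenue generation impacted (0.0-1.0)
    """
    return (mttr_minutes / 60.0) * hourly_revenue * severity

# Using the IT Process Institute's 200-minute average MTTR, an assumed
# $10,000/hour revenue rate and a total (100% severity) outage:
print(round(outage_cost(200, 10_000, 1.0), 2))  # 33333.33
```

Halving MTTR through remote out-of-band access halves this figure directly, which is why the MTTR term dominates the business case.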
Additionally, remote management reduces the lifecycle operating costs of the network infrastructure. Full lifecycle cost savings come from making network provisioning simpler, less time consuming and less error prone; reducing the time and travel costs of maintenance, configuration and re-configuration; and providing reporting and troubleshooting capabilities to pre-empt, find and fix faults before they impact business earnings.
Network connection speeds in the 21st century have skyrocketed, whether on wired networks, wireless networks or mobile networks. The new 4G-LTE standard is a thousand times faster than 2G with up to a hundred times less latency. But how do you exploit next generation mobile data for accessing, monitoring and recovering critical IT resources?
Instead of the slow telemetry applications of 2G, modern 4G provides the basis for reliable primary wired-network fail-over, IP-VPNs over cellular, cloud-based end-point registration and management. If you’re a DIY engineer and want to create a hardware platform with fast cellular support, you may start with an embedded Linux system/board with a consumer-grade 3G or 4G USB carrier modem. But what does this platform really provide?
For starters, consumer-grade USB 3G/4G dongles have a life span of barely six months before the vendor replaces the model, making a consistent roll-out almost impossible. Device driver support is scant and poorly featured, and the embedded antennas offer poor performance (some have fragile external antenna options), especially for devices in fixed locations such as equipment racks. Cheap dongles sometimes overheat or cause the port to shut down from over-current conditions.
How reliable is such a platform for monitoring or providing fail-over for critical infrastructure? Using consumer-grade products can create a platform that is likely to fail more often than the monitored equipment or site itself. Instead, Opengear advocates that smart out-of-band management appliances should have dedicated embedded 3G and 4G connectivity, complete with the relevant RF and carrier certifications.
Without proper consideration to cellular connectivity, reliability and extended features such as cellular fail-over and VPN support, the cost saving from a dongle option can end up as a painful liability in the event of a failure!
What do you think?
Over the past year we have seen the notion of “fog computing” emerge to describe the dispersal of cloud technology out to the edges of the network. This term has particularly been embraced by Cisco to describe a new management paradigm for the Internet of Things (IoT).
In this IoT model, all the local devices and sensors deployed at the edge attach to local gateways, which connect through the service provider’s access and edge networks to the cloud. These local gateways are becoming increasingly smart and sophisticated – driven in part by the security imperative (to avoid the global IoT descending into chaos).
With the Fog model, the cloud retains its central “think-tank” role (analyzing data and making all the big decisions). When there are no resource constraints and there’s a flood of data with linkages among multiple data sources, it obviously makes sense to centralize and perform everything in the cloud.
However, the cloud can also delegate some tasks out to the smart gateways and access systems, as it often makes more sense to localize analysis and decision making at the edge. For example, when time is constrained, the Fog model enables the IoT to deliver a quick response at the edge, without being burdened by network latency. Also, while it’s not the mission of the smart edge device to undertake in-depth analytics, the device can actively filter local data and selectively relay it to the cloud (e.g. don’t transmit video of empty rooms) – for widely distributed sensor networks the traffic savings can be considerable.
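That filter-at-the-edge idea can be sketched in a few lines. Here a hypothetical gateway relays a sensor reading to the cloud only when it differs meaningfully from the last value relayed (the function name, sample data and threshold are illustrative assumptions, not any Opengear API):

```python
def filter_for_upload(readings, threshold):
    """Relay only readings that moved by >= threshold since the last relay.

    Steady-state chatter stays local at the gateway; only meaningful
    changes consume backhaul bandwidth to the cloud.
    """
    relayed = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            relayed.append(value)
            last_sent = value
    return relayed

# A temperature feed that mostly idles around 20 degrees, with one real event:
samples = [20.0, 20.1, 20.2, 25.0, 25.1, 20.1]
print(filter_for_upload(samples, 1.0))  # [20.0, 25.0, 20.1]
```

Six samples collapse to three uploads here; across thousands of sensors reporting around the clock, that ratio is where the bandwidth savings come from.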
So we see Fog computing will bring a whole new breed of applications and services … all built on cloud technologies but with selected applications distributed out to appliances at the edge of the network. Today at Opengear we refer to our smart ACM5000 and ACM5500s as remote site managers and gateways … but tomorrow we may be referring to them as Fog appliances.
New York is known as the city that never sleeps with restaurants, theaters and night life that rival the best in the world, but the focus last week was Interop 2013 at the Javits Center. Opengear is a proud sponsor of the Interop NOC providing support at the Las Vegas and New York shows.
The New York Interop show usually draws a smaller crowd, but this year proved different with higher than typical attendance. The aisles were crowded and everyone was hyper-focused on the hot topics of the show.
The show was a great one for Opengear. We had a continuous stream of people with OOB management needs visit our booth. They came for our product innovation, and their requirements had common themes.
Opengear is perfectly positioned to meet these needs as the only company continuing to innovate the products required to manage “old faithful” serial connections. Attendees visiting our booth told us about problems connecting to their remote locations. One visitor said “no one else has these connection options” and “this is exactly what I need”. Our recently announced ACM5000 enhancements were met with rave reviews as attendees relayed stories about the increasingly difficult job of managing environments with combined IT/OT requirements.
Dinner and a show in NY? $800.
Customer input and validation of product strategy at Interop 2013? Priceless!
There is something strange about showing up the week before a trade show, arriving at a mostly empty Convention Center to help build the network infrastructure. Firstly, this involves a lot of walking, because Convention Centers are large by nature and the awesome, sometimes crazy-loud technology that drives the network is hidden in far-flung corners. The strangeness I refer to is that the landscape changes rapidly over time as a maze of booths and other temporary structures is erected between you and what should be a well-worn path to the closet and/or rack you’ve visited many times. Like a rat in a maze, you are forced to solve the A-to-B problem over and over again while some mad scientist watches, rubbing their hands together and sinisterly muttering “they’re learning” under their breath.
Also, you don’t get a lot of sleep. The NOC team at Interop is a social lot, and the urge to “hang” after work until late is fairly strong. They do put in long hours with an early start each day, and when technical challenges arise there can be high levels of pressure. Despite this, if you ever get the chance to join the well-oiled machine of volunteers and vendors who design and implement the biggest/fastest/awesome-est temporary network around, I highly recommend it.
We’ve blogged about why out-of-band management via an external PSTN modem is not optimal, and noted how these shortcomings are driving the explosive growth we’re seeing in our 3G and 4G LTE out-of-band management solutions.
However, there are still sites where cellular may not be suitable. Take, for example, a remote office in semi-rural Scotland managed by one of our network MSP customers. I’m told this site is at the end of a muddy lane off a couple of B-roads, where their client might get a 2G signal on their mobile phones on a good day, if they wave them around enough.
Before discovering Opengear, the MSP managed to hose their client’s WAN connection in a classic fat-finger manoeuvre during routine maintenance. No bother – activate the contingency plan: dial in to the external modem hanging off the main gateway’s console port and roll back that last config change.
When I was in middle school, around the time of the boom box, I would pop in a cassette tape and record my favorite songs from a live radio station feed. It could take your entire summer to get all your favorite songs on both sides of a 90-minute tape, but when it was finished you had the right music – you knew exactly what was coming next – and you could drive your Datsun 610 down the road with a sense of musical accomplishment.
Now fast forward 30 years to internet radio that uses algorithms, codified genomes and musical genes to predict your musical preferences. My interest in internet radio comes from my experience working with the companies who provide these services. At Opengear we design the out-of-band management solutions they use to build redundant management LANs for their network infrastructure and power. As the footprint of high-speed cellular networks expands, so does pervasive access to these popular streaming services.