The Future of Edge – “demassification”

The Edge is anywhere the internet isn’t. We live on a hyperconnected planet where few places are without coverage – and yet the internet as-is wasn’t designed for the demands now placed upon it.

Microsoft’s new blog on using Azure Quantum to manage communication with space missions had me thinking about the Future of Edge, Space Networking, and the Internet of Things.

First, let me set the scene with IoT…

The Internet of Things – commonly known as IoT – describes the interconnection of everyday objects and spaces, using new and existing devices and leveraging improved connectivity and platforms to enable brand-new experiences. These experiences can be enterprise-focused, supporting optimized operations such as predictive maintenance or quality control, as well as consumer- and employee-focused, e.g. tracking worker fatigue or personalizing products for customers.

The Internet of Things – commonly known as IoT – describes the interconnection of everyday objects and spaces.

What’s driving this increase in IoT?

IoT is driven by the massive opportunity to unlock capabilities that weren’t previously possible, and to make better use of the data that existing devices were already generating.

The maturing of data platforms and the changing nature of network infrastructure have converged with the availability of cheap, easily deployable hardware modules.

Connectivity

These connectivity changes range from low-power wide-area network standards such as LoRa, to the increasing speeds available from WiFi 5 onwards (along with urban, city-wide adoption schemes), to the growing availability of 5G cellular networks.

Intelligence

In addition to connectivity and data platforms, it’s easier than ever to deploy intelligence – from specialized algorithms to tiny machine learning models – onto low-power devices. This supports processing outside of the cloud, close to where events are happening, and it also enables networking infrastructure itself to be better managed, for example through software-defined networking; a truly virtuous circle.
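
As a minimal sketch of what “tiny intelligence on a low-power device” can look like – the weights, feature names, and threshold below are hypothetical stand-ins for a model trained offline – here is a logistic-regression inference loop small enough for microcontroller-class hardware:

```python
import math

# Hypothetical weights for a tiny anomaly classifier, trained offline
# and baked into the device firmware at deployment time.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.3

def predict_anomaly(features):
    """Logistic-regression inference: a handful of multiply-adds,
    cheap enough for a low-power microcontroller."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability of an anomaly

# e.g. [vibration_rms, temperature_delta, duty_cycle] from local sensors
reading = [0.42, 1.8, 0.65]
if predict_anomaly(reading) > 0.9:
    print("flag for predictive maintenance")  # only this event leaves the device
```

The point is less the model than the shape of the system: raw sensor data stays on the device, and only the rare, meaningful event ever crosses the network.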

How does IoT relate to Edge?

Edge means a lot of different things to a lot of different people. The truth is, the internet as-is wasn’t designed for the current demands placed upon it; it grew from a high-trust environment of military and academic collaboration into a network supporting global hyperconnectivity between parties that know nothing about each other.

Our hyperconnected planet is a patchwork of internet coverage, of varying latency and bandwidth, full of “dead zones” and network dead-ends. As a result, the Edge is no longer just computing outside of the cloud or a datacenter – instead, the Edge is anywhere the internet isn’t.

The Edge is anywhere the internet isn’t.

How is networking changing the game?

Networking is increasing bandwidth, expanding up to space and down under the sea – but whilst bandwidth might increase, latency is a fundamental constraint, bounded by processing limitations and by the speed of light as the upper limit on transmission speed.
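
The speed-of-light floor is easy to make concrete. A back-of-envelope sketch, using approximate distances (real round trips add routing and processing overhead on top of this):

```python
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def round_trip_ms(distance_km):
    """Best-case round-trip time at the speed of light, in milliseconds."""
    return 2 * distance_km / C_KM_PER_S * 1000

links = {
    "LEO satellite (~550 km up)": 550,
    "geostationary satellite (~35,786 km)": 35_786,
    "antipodal points on Earth (~20,000 km)": 20_000,
    "Earth to Mars at closest (~54.6M km)": 54_600_000,
}

for name, km in links.items():
    print(f"{name}: >= {round_trip_ms(km):,.1f} ms")

# Mars comes out at roughly six minutes round trip at best -
# no protocol optimization can beat physics.
```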

The rise of new architectural patterns

Software engineers are building hyper-scale applications to serve the world – or at least many different locations; but the software engineering patterns and practices of the last five years coalesced in a period of co-located containers, burgeoning content delivery networks, and replicated databases (bounded by the CAP theorem’s trade-off between consistency, availability, and partition tolerance).

Our enterprise systems were often built upon synchronous assumptions, where availability was a given, common attitudes towards data usage and privacy were expected, and data transfers were outside the realm of nation-states.

To put it simply, those assumptions no longer hold true. Now, and over the next couple of years, we’re seeing devices proliferate to escape the constraints of latency and networking speeds, and we’re seeing companies figure out how to update and recycle those devices. Bandwidth might increase, but latency will remain the hard limitation.

Resolutions

Engineers will begin designing for the “demassified era” – understanding how to handle processes whose components may be separated by anything from infinitesimal to vast distances. Partition tolerance will become a guiding principle (see the sketch below). Companies will design for the edge by default, and focus on how to make better sense of the data. For the purposes of Cognitive Search or Machine Learning, this will likely involve bringing algorithms to the data and returning indexed results.

demassify (verb): divide or break up (a social or political unit) into its component parts.
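
As a minimal illustration of partition tolerance as a guiding principle – a sketch assuming nothing beyond the Python standard library, with hypothetical node names – here is a grow-only counter CRDT. Each replica counts independently while partitioned, and any two replicas can later merge without coordination and without losing data:

```python
class GCounter:
    """Grow-only counter CRDT: each node increments its own slot;
    merging takes the element-wise maximum, so replicas that diverged
    during a network partition always reconcile to the same total."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    @property
    def value(self):
        return sum(self.counts.values())

# Two edge nodes count events while cut off from each other...
a, b = GCounter("sensor-a"), GCounter("sensor-b")
a.increment(3)
b.increment(5)

# ...and converge as soon as connectivity returns, in either order.
a.merge(b)
b.merge(a)
assert a.value == b.value == 8
```

Conflict-free replicated data types like this trade immediacy for guaranteed convergence – a natural fit for components separated by unreliable links.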

Optimizing bandwidth usage

If the Edge is where the internet isn’t, the focus will be on optimizing bandwidth usage to avoid overage charges and the cost of shifting data from point to point. We’re shifting from the idea that data transmission is free to paying for the fuel that moves our data.
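
To put an order of magnitude on bandwidth as “fuel”, a back-of-envelope sketch – the flat per-gigabyte egress rate and the device fleet below are illustrative assumptions, not any provider’s actual pricing:

```python
# Illustrative assumptions - real cloud egress pricing is tiered and changes.
EGRESS_USD_PER_GB = 0.09      # assumed flat egress rate
SENSOR_SAMPLE_BYTES = 200     # one raw telemetry reading
SAMPLES_PER_SECOND = 10
DEVICES = 10_000

raw_gb_per_day = (
    SENSOR_SAMPLE_BYTES * SAMPLES_PER_SECOND * 86_400 * DEVICES / 1e9
)
print(f"raw telemetry: {raw_gb_per_day:,.0f} GB/day "
      f"-> ${raw_gb_per_day * EGRESS_USD_PER_GB:,.0f}/day")

# Summarize on the edge: ship one aggregate per device per minute instead.
summary_gb_per_day = SENSOR_SAMPLE_BYTES * (86_400 / 60) * DEVICES / 1e9
print(f"edge-summarized: {summary_gb_per_day:,.2f} GB/day "
      f"-> ${summary_gb_per_day * EGRESS_USD_PER_GB:,.2f}/day")
```

The exact numbers matter less than the ratio: shipping summaries instead of raw samples changes the bill by orders of magnitude.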

Instant Gratification

Systems will move away from “instant gratification” to reconciliation and retransmission – this means that they won’t expect a synchronous response, and that software will need to consider whether processes are “immediate”, “resolving”, or “long-running.”

Short bursts of data will happen on the edge, and data will be processed and used when needed, and when cheapest. Like charging an electric vehicle on your mains electricity overnight, autonomous systems such as in-car devices will drive all day and “report” when they return to a “home” wifi station.
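
A minimal sketch of that store-and-forward pattern – the class and function names here are hypothetical – where readings are buffered durably on the device and only flushed when a cheap “home” link is available:

```python
import json, sqlite3, time

class StoreAndForward:
    """Buffer readings durably on the device; transmit only when a cheap
    link (e.g. home wifi) is available, then reconcile the queue."""

    def __init__(self, path="buffer.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (ts REAL, payload TEXT)")

    def record(self, reading):
        # Always cheap: a local write, whatever the connectivity.
        self.db.execute("INSERT INTO outbox VALUES (?, ?)",
                        (time.time(), json.dumps(reading)))
        self.db.commit()

    def flush(self, send, on_cheap_link):
        """Drain the outbox through `send` (a hypothetical uplink callable)
        only when the link is cheap; returns the number of rows shipped."""
        if not on_cheap_link:
            return 0
        rows = self.db.execute("SELECT rowid, payload FROM outbox").fetchall()
        for rowid, payload in rows:
            send(json.loads(payload))   # may raise; the row is retried next flush
            self.db.execute("DELETE FROM outbox WHERE rowid = ?", (rowid,))
            self.db.commit()            # commit per row: at-least-once delivery
        return len(rows)

buf = StoreAndForward()
buf.record({"speed_kmh": 62, "battery": 0.81})       # out on the road
shipped = buf.flush(send=print, on_cheap_link=True)  # back on home wifi
print(f"shipped {shipped} buffered readings")
```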

Radical overhaul of the internet transport layer

If computing through the Cloud is what has made overage charges an everyday reality, then the internet transport layer itself will be radically overhauled for the future of networking systems. From Cloudflare here on Earth, computing at the Edge of the well-connected network, to OneWeb and Starlink, we’re seeing the rise of new paradigms that our messaging systems need to contend with. A few years ago, who would have considered that commercial space telecommunications and standards could be in our near future?

Second-order effects

As the network solidifies up to space, and down under the sea, the rise of standards could see prices dropping. This isn’t the first time that mass infrastructure has required global adoption. Telegrams followed the model established by the International Telecommunication Union in 1865 – hard problems generate collaboration.

From Space to Global Applications – further out on the Horizon

The Growth of Space Infrastructure

In the long term, even Space needs standards. When space travel was limited to the capabilities of nation-states, one or two spaceships noisily hogging the analogue spectrum was fine. As interference in space grows, we’ll need to consider how communications take place over longer distances – from the lensing of laser signals, to physically transporting data storage, to how we slice and use the available spectrum. It’s our current grappling with radio networking, writ large.

Shared Infrastructure?

Will we see the establishment of a solar-system-wide backhaul of cooperative satellites to store, buffer, and forward messages? Will spaceships carry electromagnetically hardened, standardized black boxes, capable of locating and contacting that backhaul – be it Starlink, OneWeb, or another consortium yet to be built?

The Sneakernet is still a competitor

Andrew Tanenbaum said in 1981, “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.” The SneakerNet phenomenon – aka putting on your sneakers and physically carrying your data – never really left us. Microsoft has Azure Data Box Heavy – a 500 lb appliance for transferring a petabyte of data into Azure – and AWS has Snowball.

And as XKCD naturally took this to its logical conclusion, the internet won’t surpass the “bandwidth” of FedEx until around 2040 – and that’s using today’s storage media.
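
The arithmetic behind the joke is easy to redo; the card count and transit time below are illustrative assumptions:

```python
# Illustrative assumptions: one courier box of 1 TB microSD cards,
# delivered overnight, versus a sustained network link.
CARDS = 1_000
CARD_TB = 1.0
TRANSIT_HOURS = 24

payload_bits = CARDS * CARD_TB * 1e12 * 8        # total bits shipped
effective_gbps = payload_bits / (TRANSIT_HOURS * 3600) / 1e9

print(f"sneakernet: ~{effective_gbps:,.0f} Gbit/s effective throughput")
# ~93 Gbit/s - faster than most leased lines, with latency measured in hours.
```

Huge bandwidth, terrible latency – exactly the trade-off that reconciliation-based systems are built to absorb.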

As we expand geographically, physically transferring data and then completing data reconciliation may remain an attractive option; particularly with those data ingress/egress fees…

Just let it crumble

One common problem with managing physical infrastructure is the requirement to update, secure, and track your IoT assets. Who is responsible when hardware needs to be upgraded?

One solution on the horizon could be taking a biological approach. Microchips may not always be silicon-based – nano-cellulose and other biodegradable alternatives are one way to avoid the burden of updates and replacement. Imagine growing your microchips with embedded software, leaving no long-lived attack surface to patch; and then simply letting them biodegrade.

Some ponderings and possible futures…

  • To avoid the constraints of latency and networking speeds, devices are proliferating to compute closer to where the activities are happening.
  • Engineers will begin designing for the “demassified era” – understanding how to handle processes whose components may be separated by anything from infinitesimal to vast distances.
  • Systems will move away from “instant gratification” to reconciliation and retransmission – they won’t expect a synchronous response, and software will need to consider whether processes are “immediate”, “resolving”, or “long-running.”
  • Microchips may not always be silicon-based – nano-cellulose and other biodegradable alternatives are one way to avoid the burden of updates and replacement.
  • A solar-system-wide backhaul of cooperative satellites will be established to store, buffer, and forward messages.
  • Spaceships will carry hardened, standardized black boxes, capable of contacting that backhaul – be it Starlink, OneWeb, or another consortium yet to be built.
  • The Edge is only where the internet isn’t – and the internet could be everywhere, with the right commercial impetus.
  • The internet transport layer is being radically overhauled – the Edge will include space, and laser links could proliferate.
  • Commercial space telecommunications will follow the model established by the International Telecommunication Union in 1865 – and nation-states’ role will shift to regulating space.

As NASA launches more frequent and complex missions into space, managing communications with the growing number of spacecraft is becoming increasingly challenging. NASA’s Jet Propulsion Laboratory (JPL) has turned to Azure Quantum to explore ways to communicate more efficiently with spacecraft exploring our solar system and beyond.