Old Ideas Redesigned for Edge and Hybrid Computing
It’s a data grab out there! It feels like everyone in every industry is trying to generate, store and process more data. Despite the spate of privacy breaches and data mishandling this past year, consumers are still more than happy to exchange their interaction data for social media and other services.
Industry after industry is redefining itself using data. Two example sectors make the case. Up to 30% of the world’s data is generated by the healthcare industry alone, and that volume continues to grow at a rapid pace. Healthcare is relying on everything from wearable health monitoring devices to drug management sensors to AI-based local medical image analysis platforms to generate new data and, with it, to bring new revenue-generating insights. In the traditionally conservative automotive sector, the connected car has really become one giant “edge” computing device. According to the former CEO of Intel, Brian Krzanich, one autonomous car will use 4,000 GB of data per day.
The explosion of data generation is being driven by the growth of smarter computing devices “at the edge” – or near the customer rather than in the cloud. Edge computing is expected to grow by 30% from 2018 to 2022. With these new local compute devices come two things: an opportunity to collect more data; and an opportunity to develop tools to manage that data locally instead of in the cloud.
The question that prompted me to write this post is: who are the new entrants taking the ideas we use to manage data and compute in the cloud and bringing them to the edge?
Peter Levine, of Andreessen Horowitz, is quoted as saying “There’s going to be a symbiotic relationship between the edge and the cloud.” That’s true. We’re not abandoning cloud computing – although we might be relegating it to our more resource-intensive, but less latency-sensitive, processing needs. But now there are opportunities to provide new services for both the edge and this hybrid compute environment.
Here are a few companies I think are tapping into those opportunities.
Machine Learning at the Edge
FogHorn Systems is one company that has built a machine learning solution with a tiny software footprint that can run on smaller compute power, which is more typical of edge computing. This enables advanced data analysis to happen locally instead of having to send everything up to the cloud, a particularly useful feature when you need analysis in real-time.
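The pattern at work here can be sketched in a few lines. This is a minimal, hypothetical illustration – not FogHorn’s actual stack – of why local analysis matters: a deliberately tiny “model” scores each sensor reading on the device, and only anomalies are forwarded upstream, saving both bandwidth and the cloud round-trip latency. The function names, thresholds, and readings are all invented for the example.

```python
# Hypothetical sketch of edge-side analytics (not FogHorn's actual product):
# score readings locally, forward only what needs attention.

def local_score(reading, mean=50.0, tolerance=10.0):
    """A deliberately tiny 'model': normalized distance from an
    expected operating mean. Real edge ML would load a trained model."""
    return abs(reading - mean) / tolerance

def process_at_edge(readings, threshold=1.0):
    to_cloud = []               # only anomalies ever leave the device
    for r in readings:
        if local_score(r) > threshold:
            to_cloud.append(r)  # in practice, an async upload
    return to_cloud

readings = [49.8, 51.2, 50.5, 78.3, 50.1]   # one obvious outlier
anomalies = process_at_edge(readings)
print(anomalies)  # [78.3]
```

The point of the sketch: four of the five readings never cross the network at all, and the one that matters is flagged in real time rather than after a round-trip to a cloud service.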
It wasn’t really that long ago that Google, Amazon and others popularized offering machine learning capabilities in their cloud environments. How quickly things become passé! While there are a lot of companies talking about machine learning at the edge, there are still very few that can actually do it well.
Data Silos at the Edge
AtScale actually looks like the opposite of an edge computing company. AtScale’s platform makes it easier to move data from local sources to the cloud, creating a virtual “data lake” that customers can run analytics against.
But as I said above, we’re not abandoning the cloud altogether but creating a hybrid of cloud + edge. Hybrid cloud environments have a data problem. Local compute devices and “micro clouds” create new silos of data. Data silos prevent businesses from drawing full insights from their data.
The question is whether companies like AtScale can expand their solutions to federate data silos from edge computing devices into a unified data lake.
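To make the federation idea concrete, here is a minimal sketch – emphatically not AtScale’s architecture – using SQLite from the standard library. Two tables stand in for two silos (one holding edge-collected readings, one holding cloud records), and a view presents them as a single virtual “data lake,” so analysis runs against one surface regardless of where each row physically lives. All table and column names are invented for the example.

```python
# Hypothetical illustration of federating data silos behind one query surface.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edge_readings (device TEXT, temp_c REAL)")
db.execute("CREATE TABLE cloud_readings (device TEXT, temp_c REAL)")
db.executemany("INSERT INTO edge_readings VALUES (?, ?)",
               [("sensor-1", 41.0), ("sensor-2", 39.5)])
db.executemany("INSERT INTO cloud_readings VALUES (?, ?)",
               [("sensor-3", 40.2)])

# The "federation" layer: a single view over both silos, so analytics
# code never needs to know which silo a row came from.
db.execute("""CREATE VIEW all_readings AS
              SELECT device, temp_c, 'edge'  AS origin FROM edge_readings
              UNION ALL
              SELECT device, temp_c, 'cloud' AS origin FROM cloud_readings""")

rows = db.execute("SELECT COUNT(*), AVG(temp_c) FROM all_readings").fetchone()
print(rows)  # three rows total, averaged across both silos
```

In a real product the “silos” would be remote systems and the view would be a virtualization layer, but the design choice is the same: unify at query time instead of physically copying everything into one place first.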
AWS at the Edge
We’ve had “edge devices” for a long time – we used to call them embedded systems and now we call them IoT devices. But in the past, these devices were very limited in what they did. Companies could deploy the devices and then leave them for years to operate without worrying much about updates.
Today’s devices are being asked to do much more, including generating and even processing all this data. As a result, updates are becoming a more frequent requirement – one which the existing infrastructure wasn’t designed to support.
The cloud offered agility to the enterprise. Companies like Zededa are bringing that agility to edge computing. Zededa’s technology creates an abstraction layer between the hardware and software on the edge, allowing the Zededa platform to deploy services to the edge device in a similar way to services that run in an Amazon Web Services environment in the cloud. Wind River’s StarlingX project is also tackling the need for common requirements for virtualizing edge computing servers.
These three examples only scratch the surface: many lessons from the rise of the cloud become opportunities in the new world of intelligent, data-generating edge devices.
For example, we also need new customer analytics solutions similar to how Mixpanel, Gainsight, and others currently support cloud solutions. However, in addition to “customer” analytics, we need “machine” analytics since many of these edge devices serve other machines rather than human users.
And obviously, we need new brands of security software that can handle the distributed nature of these devices and the large number of attack vectors that exist in edge computing compared with cloud computing. Happily, there is no shortage of companies and conversations thinking about that problem.
Will Smith is quoted as saying “Life is lived on the edge”. It used to be that data lived in the cloud. But today, like the rest of life, it too can live at the edge. What other cloud-like solution needs to be redefined for us to fully take advantage of life on the edge?