Cloud providers such as Amazon, Google, Facebook and Microsoft are already rolling out distributed cloud infrastructure. Whilst the central cloud is established as an integral part of current and future networks, there are key issues that make it unsuitable for several use cases.
- Latency, also known as the Laws of Physics: The longer the distance between two communicating entities, the longer it takes to move content between them. Whilst the delay of reaching out to the cloud today might be tolerable for some applications, this will not be the case for emerging applications that require nearly instantaneous responses (e.g. industrial IoT control, robots, machines, autonomous cars, drones, etc.).
- Data volume: The capacity of communication networks will simply not scale with the enormous volumes of raw data that are anticipated to need ferrying to and from a remote cloud center.
- Running costs: Carrying a truly massive computational and storage load in the cloud will simply not be economically sustainable over the longer term.
- Regulatory: There are, and will very likely continue to be, constraints (privacy, security, sovereignty, etc.) that restrict what data may be transferred to and processed in the cloud.
So it certainly does make sense to distribute the cloud and interconnect this distributed infrastructure with the central cloud. This process has already begun. One tangible example is Amazon’s launch of the AWS Greengrass (AWS for the Edge) product and their declared intention to use their Whole Foods stores (in addition to the small matter of selling groceries) as locations for future edge clouds/data centers. In general, cloud providers, perhaps driven by their real estate choices, have a relatively conservative view of the edge, restricting it to a point of presence typically 10 to 50 km from the consumer.
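The latency argument above is easy to quantify with back-of-the-envelope arithmetic. A sketch follows, assuming signals in optical fiber propagate at roughly two-thirds the speed of light (about 200,000 km/s); the 2,000 km remote-cloud distance is a hypothetical illustration, while the 10 to 50 km edge figures come from the text. Real round trips add queuing, switching and processing delays on top of this idealized floor.

```python
# Back-of-the-envelope propagation delay comparison: edge vs remote cloud.
# Assumption: signals in optical fiber travel at ~2/3 c, i.e. ~200,000 km/s.
FIBER_SPEED_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Idealized round-trip propagation delay in milliseconds
    (ignores queuing, switching and processing delays)."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# Edge point of presence at 10-50 km (figures from the text)
# versus a hypothetical remote cloud region 2,000 km away.
for label, km in [("edge, 10 km", 10),
                  ("edge, 50 km", 50),
                  ("remote cloud, 2000 km", 2000)]:
    print(f"{label}: {round_trip_ms(km):.2f} ms round trip")
```

Even under these generous assumptions, a 2,000 km round trip costs about 20 ms before any processing happens, whereas a 10-50 km edge site stays well under a millisecond, which is why near-instantaneous control loops push computation toward the edge.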