IDG Contributor Network: Challenges in realizing the promises of the holistic edge

Cloud providers such as Amazon, Google, Facebook and Microsoft are already rolling out distributed cloud infrastructure. Whilst the central cloud is established as an integral part of current and future networks, several key issues make the central cloud simply not the solution for a number of use cases.

  • Latency, also known as the Laws of Physics: The longer the distance between two communicating entities, the longer it takes to move content between them. Whilst the delay of reaching out to the cloud might be tolerable for some applications today, it will not be for emerging applications that require near-instantaneous responses (e.g. in industrial IoT control, robots, machines, autonomous cars, drones, etc.). A back-of-the-envelope calculation of this delay follows this list.
  • Data volume: The capacity of communication networks will simply not scale with the enormous volume of raw data that is anticipated to need ferrying to and from a remote cloud center.
  • Running costs: Hosting a truly massive computational and storage load in the cloud will simply not be economically sustainable over the longer term.
  • Regulatory: Existing constraints, and very likely new ones (privacy, security, sovereignty, etc.), will impose restrictions on what data may or may not be transferred to and processed in the cloud.
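
To make the latency point concrete, here is a minimal back-of-the-envelope sketch (not from the article) of the best-case round-trip propagation delay over distance. It assumes signals travel through optical fiber at roughly two-thirds of the speed of light in vacuum and ignores switching, queuing and processing delays; the three distances are illustrative placeholders.

    # Back-of-the-envelope propagation delay (assumption: signals travel
    # through optical fiber at roughly 2/3 the speed of light in vacuum;
    # switching, queuing and processing delays are ignored).
    C_VACUUM_KM_S = 299_792                    # speed of light in vacuum, km/s
    C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3       # typical speed in optical fiber

    def round_trip_ms(distance_km: float) -> float:
        """Best-case round-trip propagation delay in milliseconds."""
        return 2 * distance_km / C_FIBER_KM_S * 1000

    # Illustrative distances: nearby edge PoP, regional site, remote cloud region.
    for distance_km in (25, 500, 2000):
        print(f"{distance_km:>5} km -> {round_trip_ms(distance_km):6.2f} ms round trip")

Even in this best case, a cloud region 2,000 km away costs about 20 ms per round trip before any processing happens, which is why near-instantaneous control loops push computation toward the edge.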

So it certainly makes sense to distribute the cloud and interconnect this distributed infrastructure with the central cloud. This process has already begun. One tangible example is Amazon’s launch of the AWS Greengrass (AWS for the edge) product and its declared intention to use its Whole Foods stores (in addition to the small matter of selling groceries) as locations for future edge clouds/data centers. In general, cloud providers, perhaps driven by their real estate choices, have a relatively conservative view of the edge, restricting it to a point of presence typically 10 to 50 km from the consumer.

With Cloud Dataproc, Google promises a Hadoop or Spark cluster in 90 seconds

Getting insights out of big data is typically neither quick nor easy, but Google is aiming to change all that with a new, managed service for Hadoop and Spark.

Cloud Dataproc, which the search giant launched into open beta on Wednesday, is a new piece of its big data portfolio that’s designed to help companies create clusters quickly, manage them easily and turn them off when they’re not needed.
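
The article itself contains no code, but a minimal sketch of that create-then-delete lifecycle, using the current google-cloud-dataproc Python client (which postdates the beta described here), might look like the following; the project ID, region, cluster name and machine types are placeholder assumptions, not values from the article.

    # Minimal sketch of the Dataproc cluster lifecycle: create, then tear down.
    # Assumes the google-cloud-dataproc client library and application-default
    # credentials; PROJECT, REGION, NAME and machine types are placeholders.
    from google.cloud import dataproc_v1

    PROJECT, REGION, NAME = "my-project", "us-central1", "demo-cluster"

    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
    )

    cluster = {
        "project_id": PROJECT,
        "cluster_name": NAME,
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
        },
    }

    # create_cluster returns a long-running operation; block until it finishes.
    client.create_cluster(
        request={"project_id": PROJECT, "region": REGION, "cluster": cluster}
    ).result()

    # Turn the cluster off when it is no longer needed, so it stops billing.
    client.delete_cluster(
        request={"project_id": PROJECT, "region": REGION, "cluster_name": NAME}
    ).result()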

Enterprises often struggle with getting the most out of rapidly evolving big data technology, said Holger Mueller, a vice president and principal analyst with Constellation Research.

“It’s often not easy for the average enterprise to install and operate,” he said. When two open source products need to be combined, “things can get even more complex.”
