Wal-Mart tests direct-to-fridge delivery; Amazon ups restaurant game

(Reuters) – Wal-Mart Stores Inc is testing a service that stocks groceries directly in customers’ refrigerators as it seeks to take on e-commerce giant Amazon.com.

The delivery of groceries and meal kits is emerging as the next frontier of competition among retailers.

The world’s biggest brick-and-mortar retailer said on Friday it is partnering with August Home, a provider of smart locks and home accessories, to test the service with certain customers in Silicon Valley. (bit.ly/2ffqqvT)

The grocery business is set to be upended by Amazon’s acquisition of upmarket grocer Whole Foods last month, and the online retailer is also entrenching itself more deeply in the restaurant business.

Amazon Restaurants on Friday teamed up with online food-ordering company Olo, whose network of restaurants includes Applebee’s and Chipotle.

The partnership will help Olo’s restaurant customers connect with Amazon’s delivery services.

The competition in the meal-kits business is also heating up. Supermarket operator Albertsons Cos Inc said it would buy meal-kit delivery service Plated, while Kroger-owned rival Ralphs started selling meal kits in its stores this week.

ONE-TIME PASSCODE DELIVERY

As part of the test, Wal-Mart delivery workers would gain access to a customer’s house using a pre-authorized one-time passcode, putting away groceries in the fridge and leaving other items in the foyer.

Homeowners would receive notifications when a delivery is in progress and could also watch it in real time through their home security cameras via the August Home app.
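
Neither Wal-Mart nor August Home has published how the passcode scheme works, so the sketch below is purely illustrative; every name and parameter in it is an assumption. It shows one plausible shape for the mechanism the article describes: a short code derived with an HMAC, bound to a single delivery and rejected on reuse.

    import hmac
    import hashlib
    import secrets

    # Hypothetical sketch only: neither Wal-Mart's nor August Home's real
    # API is public, so every name and parameter here is an assumption.

    def issue_passcode(lock_secret: bytes, delivery_id: str) -> str:
        """Derive a short one-time code bound to a single delivery."""
        digest = hmac.new(lock_secret, delivery_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:6]

    redeemed = set()  # deliveries already used; a real lock would persist this

    def redeem(lock_secret: bytes, delivery_id: str, code: str) -> bool:
        """Accept a valid code exactly once, then invalidate it."""
        expected = issue_passcode(lock_secret, delivery_id)
        if delivery_id in redeemed or not hmac.compare_digest(code, expected):
            return False
        redeemed.add(delivery_id)
        return True

    secret = secrets.token_bytes(32)
    code = issue_passcode(secret, "delivery-42")
    assert redeem(secret, "delivery-42", code)      # first use unlocks
    assert not redeem(secret, "delivery-42", code)  # replay is rejected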

The Bentonville, Arkansas-based retailer has been exploring new methods of delivery and in June said it was testing using its own store employees to deliver packages ordered online.

Reporting by Vibhuti Sharma and Sruthi Ramakrishnan in Bengaluru; Editing by Martina D’Couto

IBM’s Cloud CTO: ‘We’re in this game to win’

IBM saw from the get-go that the cloud was going to cause a major disruption to its business.

“We knew it was a massive opportunity for IBM, but not in a way that necessarily fit our mold,” said Jim Comfort, who is now CTO for IBM Cloud. “Every dimension of our business model would change — we knew that going in.”

Change they have, and there’s little denying that the cloud business is now a ray of sunshine brightening IBM’s outlook as its legacy businesses struggle. In its second-quarter earnings report last week, IBM said cloud revenue was up 30 percent year over year, reaching $11.6 billion over the preceding 12 months. Revenue from systems hardware and operating systems software, on the other hand, was down more than 23 percent.

Google’s cloud benchmarking tool ups its game

Google’s PerfKit toolset for benchmarking cloud environments was originally released earlier this year in a pre-1.0 version. Today, it’s officially been bumped to a 1.0 release, with expanded support for various cloud providers and automation of 26 different benchmarks, up from the 20 originally provided.

Given how tough it can be to reliably benchmark any cloud, having an open source, cloud-agnostic toolkit to help make it happen is a net boon.
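
As a concrete illustration, here is a minimal sketch of driving the toolkit from a Python script. It assumes the PerfKitBenchmarker repository is checked out locally with its dependencies installed; the flag names follow the project’s documented command line but may vary by version, so check your copy’s --help before relying on them.

    import subprocess

    # Minimal sketch: assumes the PerfKitBenchmarker repo is checked out
    # in the current directory and its dependencies are installed. Flag
    # names follow the project's documented CLI but may vary by version.
    def run_benchmark(cloud: str, benchmark: str, machine_type: str) -> int:
        cmd = [
            "python", "pkb.py",
            "--cloud", cloud,            # e.g. GCP, AWS, Azure
            "--benchmarks", benchmark,   # e.g. iperf, fio, cluster_boot
            "--machine_type", machine_type,
        ]
        return subprocess.call(cmd)

    # Run the same network benchmark on two providers for a
    # like-for-like comparison.
    run_benchmark("GCP", "iperf", "n1-standard-1")
    run_benchmark("AWS", "iperf", "m3.medium")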

Public cloud providers’ end game shouldn’t surprise anyone

In the beginning, public cloud was the only choice. If you had an existing environment on-premises, colocated or with another web hosting company, you couldn’t connect it to the public cloud. You could set up your own site-to-site VPN across the internet, but that had its challenges and limitations.

Indeed, this was all part of the cloud provider’s strategy — you had to go all in. Everything should be deployed into the provider’s public cloud and nowhere else. It made sense when cloud providers were mostly focusing on new projects and new applications, but it has proven a big challenge to migrate existing workloads or run systems in parallel.

So we started to see features designed to connect to existing environments. These have ranged from storage appliances that cache files locally to official site-to-site VPN functionality and direct connect offerings. They have all been advertised as helping you connect existing environments so you can take advantage of the public cloud, and it’s true that they help with this. However, I don’t think that’s the end goal.

There are many cases where it’s an advantage to run your own environment and hook into public cloud for burst capabilities and flexibility. A site-to-site VPN allows you to do this, as does direct connect. However, while these types of features help the cloud providers in the short term, they mean that cloud resources merely complement existing environments for very specific use cases like bursting.
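
To make the bursting pattern concrete, here is a hedged sketch in Python using boto3. The AMI ID, subnet and capacity numbers are placeholders; the only premise taken from the paragraph above is that overflow work is sent to cloud instances reachable over the VPN or direct connect link.

    import boto3

    # Hedged sketch of the "burst" pattern: keep steady-state work in your
    # own environment and, when local capacity runs out, start extra
    # instances over the VPN / direct connect link. The AMI, subnet and
    # threshold values are placeholders, not real identifiers.

    LOCAL_CAPACITY = 50  # jobs your own servers can absorb

    def burst_if_needed(pending_jobs: int) -> None:
        overflow = pending_jobs - LOCAL_CAPACITY
        if overflow <= 0:
            return  # the private environment can handle it alone
        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-00000000",           # placeholder worker image
            InstanceType="m3.medium",
            MinCount=1,
            MaxCount=max(1, overflow // 10),  # ~10 jobs per instance
            SubnetId="subnet-00000000",       # subnet routed over the VPN
        )

    burst_if_needed(pending_jobs=120)  # would launch up to 7 cloud workers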

Examining the product portfolios of the big cloud providers (Amazon, Google and Microsoft) suggests that this isn’t the true goal. It’s very clear that these providers are competing to run your entire workload, from email to file storage to compute to data processing — they don’t just want to act as capacity insurance. The investment in the underlying core components has allowed these supporting services to be built, and it all contributes to convincing you to move ever more workloads into the public cloud.

Strange bedfellows make sense

Partnerships such as Google teaming up with VMware seem strange on the surface. Using vCloud to tap into Google Cloud Services seems like Google’s recognition of the fact that private deployments are here to stay. In reality, it has to be viewed as part of a long game to make existing on-site users comfortable with public cloud resources, with the end goal of moving more and more workloads into the public cloud.

Getting existing deployments to accept some public cloud components is a clever way to get into an existing environment, especially in large enterprises that are already familiar with the likes of VMware. For managers and CIOs, everything appears within existing frameworks, with support, SLAs and systems they recognize. For developers and sysadmins, the familiarization strategy is the same, which is why so much effort is being put into supporting common tools. If you can use Kubernetes to manage your existing private environments, you’ll be comfortable using it to manage public cloud resources too.
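
The point about common tooling is easy to demonstrate. The sketch below uses the official Kubernetes Python client; the context names are assumptions that would come from your own kubeconfig, but the code is identical whether the cluster sits in your data center or at a public cloud provider.

    from kubernetes import client, config

    # Same tooling, private or public: only the kubeconfig context
    # changes. The context names below are assumptions; substitute the
    # ones from your own kubeconfig.
    for ctx in ("on-prem-cluster", "gke-cluster"):
        config.load_kube_config(context=ctx)
        v1 = client.CoreV1Api()
        pods = v1.list_pod_for_all_namespaces()
        print(f"{ctx}: {len(pods.items)} pods")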

Docker here, Docker there, Docker everywhere

Why do you think there’s so much support for Docker, and containers generally? It’s certainly an interesting technology, but why is every cloud provider putting in so much effort to rapidly develop specialized services to support deployment and management of containers? Because the format makes it incredibly easy to deploy anywhere — it makes your applications completely portable, so applications can be moved to the public cloud much more easily than if they were VMs. The fact that it makes it easy to move between cloud providers is just a side effect — the biggest hurdle is getting into the public cloud in the first place. Once it’s there, each vendor can battle it out to compete for the workloads.
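
A small example of that portability argument, using the Docker SDK for Python: the identical image reference runs unchanged on a laptop, a private VM or a cloud host, because the container carries its own userland. The nginx image and port mapping are just illustrative; any image behaves the same way.

    import docker

    # Requires the `docker` Python SDK and a running Docker daemon.
    # The same image reference resolves and runs identically anywhere
    # Docker runs, which is the portability point made above.
    client = docker.from_env()
    client.images.pull("nginx:latest")  # same image from any registry mirror
    container = client.containers.run(
        "nginx:latest", detach=True, ports={"80/tcp": 8080}
    )
    print(container.short_id)  # the app runs unchanged wherever Docker runs
    container.stop()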

The strategy is transition. These features are transitional, with all eyes focused on the prize of every workload deployed in the public cloud. There will always be hybrid deployments, especially where it makes more sense to run your own environment, but the goal for the vast majority is not just public cloud first, but public cloud only. This is what the cloud providers are ultimately trying to achieve. Don’t let their partner announcements and discussions around hybrid clouds fool you.

David Mytton is the founder of Server Density, a SaaS tool that helps you provision and monitor your infrastructure. Based in the U.K., David has been programming in Python and PHP for over 10 years, was one of the earliest production MongoDB users (founding the London MongoDB User Group) and was one of the founding members of the Open Rights Group.

Public cloud providers’ end game shouldn’t surprise anyone originally published by Gigaom, © copyright 2015.
