Don’t let cloud providers kick you off like United

As everyone knows, last week a United Airlines passenger was asked to deplane because the airline had overbooked the flight and needed his seat for a staff member, and was then dragged off the plane by Chicago airport cops when he refused to leave. Yes, the passenger didn’t follow the rules, but the situation was ultimately United’s fault.

Believe it or not, what happened at United is an object lesson for any business that signs up for cloud services. I’ll explain shortly.

Back in 2007, I boarded a United flight that was overbooked, and I was asked to deplane as a result. It was inconvenient and humiliating. However, I didn’t go limp, and the cops didn’t drag me bleeding off the flight.

InfoWorld Cloud Computing

IDG Contributor Network: When the Big One hits Seattle, will cloud providers stay on?

The San Andreas Fault gets all the attention, media coverage and movies, but it’s not the fault line the tech sector needs to worry about. A much bigger problem lies to the north, and some of the most important tech firms are directly in its crosshairs.

The Cascadia subduction zone runs north-south from Canada to northern California and sits roughly 80 miles offshore. That’s the good news: it’s 80 miles out to sea, as opposed to the San Andreas and Hayward faults, which run right through Silicon Valley and the East Bay, respectively.

The bad news is that it is capable of a much more severe quake: the Cascadia fault is believed capable of a magnitude-9.4 event. Residents of the Pacific Northwest got quite a fright last year when The New Yorker published an article called “The Really Big One,” which detailed the potential impact of such an earthquake hitting the area. The article outlined projections of 13,000 immediate deaths, one million people left homeless, and the whole region without power and water for months.

Computerworld Cloud Computing

IDG Contributor Network: The downside of relying on social network providers for authentication

Like many people, I have a WordPress blog, and I ask my readers to identify themselves when they post comments by authorizing WordPress to access their Facebook or Twitter profiles. This helps me filter out anonymous trolls and spammers. 

But it is an option not all users appreciate. Twitter user @herminones reached out to me, saying:

“Oleg, I would have liked to comment on your most recent blog but for the pre-requisite WordPress Faustian contract”

The option to rely on Facebook or Twitter is not unique to WordPress. Authentication, authorization and password management are some of the key APIs of any useful application. As a result, many apps authenticate via Facebook, Twitter, or another third-party identity provider.
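
For those curious what that delegation actually involves under the hood, here is a minimal sketch of the OAuth-style authorization-code flow that most “log in with Facebook or Twitter” buttons boil down to. Every URL, client credential and route below is a placeholder for illustration, not any particular provider’s real endpoint.

    # Minimal sketch of third-party (social) login via an OAuth 2.0 authorization code.
    # Every URL, ID and secret below is a placeholder, not a real provider endpoint.
    import secrets
    from urllib.parse import urlencode

    import requests
    from flask import Flask, redirect, request, session

    app = Flask(__name__)
    app.secret_key = secrets.token_hex(16)

    AUTHORIZE_URL = "https://identity.example.com/oauth/authorize"  # placeholder
    TOKEN_URL = "https://identity.example.com/oauth/token"          # placeholder
    PROFILE_URL = "https://identity.example.com/me"                 # placeholder
    CLIENT_ID, CLIENT_SECRET = "my-app-id", "my-app-secret"         # placeholders
    REDIRECT_URI = "https://blog.example.com/callback"              # placeholder

    @app.route("/login")
    def login():
        # Step 1: send the commenter to the identity provider to approve access.
        session["state"] = secrets.token_urlsafe(16)
        params = urlencode({"response_type": "code", "client_id": CLIENT_ID,
                            "redirect_uri": REDIRECT_URI, "state": session["state"]})
        return redirect(f"{AUTHORIZE_URL}?{params}")

    @app.route("/callback")
    def callback():
        # Step 2: trade the one-time code for an access token, then read the profile.
        if request.args.get("state") != session.get("state"):
            return "State mismatch", 400
        token = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": request.args["code"],
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        }).json()["access_token"]
        profile = requests.get(PROFILE_URL,
                               headers={"Authorization": f"Bearer {token}"}).json()
        return f"Commenting as {profile.get('name', 'unknown user')}"

The convenience for the site owner is obvious, but so is the trade-off @herminones is pointing at: the reader never hands the blog a password, yet they do have to grant the blog, and WordPress, a window into their social profile.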

Computerworld Cloud Computing

IT pros should focus on largest public cloud providers

The cloud has seen massive adoption among IT professionals this year, and it will spread even deeper into entrenched industries over the next 16 months, according to a new report from Forrester Research. Despite ongoing consolidation, the research firm reports that the cloud vendor landscape is too crowded and that IT professionals should increasingly place their bets on major public cloud providers such as Amazon, IBM and Microsoft.

CIOs and IT leaders should be wary of small, specialized players due to their narrow focus and the increased risks these companies carry around longevity and security, Forrester reports. The market research firm predicts there will be a significant decline in the number of players providing infrastructure-as-a-service (IaaS) cloud services and management software by the end of 2016.

Network World Cloud Computing

Public cloud providers’ end game shouldn’t surprise anyone

In the beginning, public cloud was the only choice. If you had an existing environment on-premises, colocated or with another web hosting company, you couldn’t connect it to the public cloud. You could set up your own site-to-site VPN across the internet, but this had its challenges and limitations.

Indeed, this was all part of the cloud provider’s strategy — you had to go all in. Everything should be deployed into the provider’s public cloud and nowhere else. It made sense when cloud providers were mostly focusing on new projects and new applications, but it has proven a big challenge to migrate existing workloads or run systems in parallel.

So we started to see features released designed to connect into existing environments. These have ranged from storage appliances that cache files locally to official site-to-site VPN functionality and direct connect. These have all been advertised as helping you connect existing environments to take advantage of public cloud, and it’s true that they do help with this. However, I don’t think that’s the end goal.
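
To make that concrete, here is a rough sketch of the site-to-site VPN flavor, using AWS and the boto3 SDK as one example (the other large providers offer equivalents). The VPC ID, public IP and address ranges are invented placeholders.

    # Sketch: bridge an existing data center to a cloud VPC over a site-to-site VPN.
    # All IDs, IPs and CIDR ranges below are invented placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The on-premises side: the public IP of your existing firewall/router.
    cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)

    # The cloud side: a virtual private gateway attached to the existing VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
                           VpcId="vpc-0abc1234def567890")

    # The tunnel itself, plus a static route back to the data-center network.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
        Options={"StaticRoutesOnly": True},
    )
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
        DestinationCidrBlock="10.0.0.0/16",  # the on-premises address range
    )

A handful of API calls and the existing environment can reach cloud resources over private addressing, which is exactly why these features make such an effective foot in the door.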

There are many cases where it’s an advantage to run your own environment and hook into public cloud for burst capabilities and flexibility. A site-to-site VPN allows you to do this, as does direct connect. However, while these types of features help the cloud providers in the short term, they mean that cloud resources merely complement existing environments for very specific use cases like bursting.

Examining the product portfolios of the big cloud providers (Amazon, Google and Microsoft) suggests that this isn’t the true goal. It’s very clear that these providers are competing to run your entire workload, from email to file storage to compute to data processing — they don’t just want to act as capacity insurance. The investment in the underlying core components has allowed these supporting services to be built, and it all contributes to convincing you to move ever more workloads into the public cloud.

Strange bedfellows make sense

Partnerships such as Google teaming up with VMware seem strange on the surface. Using vCloud to tap into Google Cloud Services seems like Google’s recognition of the fact that private deployments are here to stay. In reality, it has to be viewed as part of a long game to make existing on-site users comfortable with public cloud resources, with the end goal of moving more and more workloads into the public cloud.

Getting existing deployments to accept some public cloud components is a clever way to get into an existing environment, especially in large enterprises that are already familiar with the likes of VMware. For managers and CIOs, everything appears within existing frameworks, with support, SLAs and systems they recognize. For developers and sysadmins, the familiarization strategy is the same, which is why so much effort is being put into supporting common tools. If you can use Kubernetes to manage your existing private environments, you’ll be comfortable using it to manage public cloud resources too.
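
As a small illustration, assume a kubeconfig with one context for an on-premises cluster and one for a cloud-managed cluster (the context names here are made up); with the official Python client, the code that talks to them is identical.

    # Sketch: the same tooling against an on-prem cluster and a public cloud cluster.
    # "onprem-vsphere" and "cloud-managed" are placeholder kubeconfig context names.
    from kubernetes import client, config

    def pod_count(context_name: str) -> int:
        # Point the client at whichever cluster the context names, then query it.
        config.load_kube_config(context=context_name)
        pods = client.CoreV1Api().list_pod_for_all_namespaces()
        return len(pods.items)

    for ctx in ("onprem-vsphere", "cloud-managed"):
        print(f"{ctx}: {pod_count(ctx)} pods")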

Docker here, Docker there, Docker everywhere

Why do you think there’s so much support for Docker, and containers generally? It’s certainly an interesting technology, but why is every cloud provider putting in so much effort to rapidly develop specialized services to support deployment and management of containers? Because the format makes it incredibly easy to deploy anywhere — it makes your applications completely portable, so applications can be moved to the public cloud much more easily than if they were VMs. The fact that it makes it easy to move between cloud providers is just a side effect — the biggest hurdle is getting into the public cloud in the first place. Once it’s there, each vendor can battle it out to compete for the workloads.

The strategy is transition. These features are transitional, with all eyes focused on the prize of every workload deployed in the public cloud. There will always be hybrid deployments, especially where it makes more sense to run your own environment, but the goal for the vast majority is not just public cloud first, but public cloud only. This is what the cloud providers are ultimately trying to achieve. Don’t let their partner announcements and discussions around hybrid clouds fool you.

David Mytton is the founder of Server Density, a SaaS tool which helps you provision and monitor your infrastructure. Based in the U.K., David has been programming in Python and PHP for over 10 years, was one of the earliest production MongoDB users (founding the London MongoDB User Group) and was one of the founding members of the Open Rights Group.

Public cloud providers’ end game shouldn’t surprise anyone originally published by Gigaom, © copyright 2015.
