What is Docker? Linux containers explained

Like FreeBSD Jails and Solaris Zones, Linux containers are self-contained execution environments—with their own isolated CPU, memory, block I/O, and network resources—that share the kernel of the host operating system. The result is something that feels like a virtual machine, but sheds all the weight and startup overhead of a guest operating system.

In a large-scale system, running VMs typically means running many duplicate instances of the same OS and many redundant boot volumes. Because containers are more streamlined and lightweight than VMs, you may be able to run six to eight times as many containers as VMs on the same hardware.
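
The isolation described above is enforced by Linux kernel features (control groups for CPU, memory, and block I/O; namespaces for networking and process visibility) that container runtimes expose as per-container limits. As a minimal sketch of those knobs, assuming the docker-py client library and a local Docker daemon:

```python
import docker  # pip install docker (the docker-py client; an assumption here)

client = docker.from_env()  # connects to the local Docker daemon

# Run a throwaway container with explicit resource ceilings. The limits
# are enforced by the host kernel's cgroups; the kernel itself is shared.
output = client.containers.run(
    "alpine:3.18",                      # small base image
    "echo hello from a container",
    mem_limit="256m",                   # cap memory at 256 MB
    nano_cpus=500_000_000,              # cap CPU at 0.5 cores
    remove=True,                        # destroy the container on exit
)
print(output.decode())
```

The container shares the host kernel throughout; only the resource ceilings and the filesystem view are its own.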

InfoWorld Cloud Computing

Containers will be a $2.6B market by 2020, research firm says

CIO Cloud Computing

2016’s top trends in enterprise computing: Containers, bots, A.I. and more

It’s been a year of change in the enterprise software market. SaaS providers are fighting to compete with one another, machine learning is becoming a reality for businesses at a larger scale, and containers are growing in popularity.

Here are some of the top trends from 2016 that we’ll likely still be talking about next year.

Everybody’s a frenemy

As more companies adopt software-as-a-service products like Office 365, Slack and Box, there is increasing pressure for companies that compete with each other to collaborate. After all, nobody wants to be stuck using a service that doesn’t work with the other critical systems they have.

Computerworld Cloud Computing

Lessons from launching billions of Docker containers

The Iron.io Platform is an enterprise job processing system for building powerful, job-based, asynchronous software. Simply put, developers write jobs in any language using familiar tools like Docker, then trigger the code to run using Iron.io’s REST API, webhooks, or the built-in scheduler. Whether the job runs once or millions of times per minute, the work is distributed across clusters of “workers” that can be easily deployed to any public or private cloud, with each worker deployed in a Docker container.

At Iron.io we use Docker both to serve our internal infrastructure needs and to execute customers’ workloads on our platform. For example, our IronWorker product has more than 15 stacks of Docker images in block storage that provide language and library environments for running code. IronWorker customers draw on only the libraries they need, write their code, and upload it to Iron.io’s S3 file storage. Our message queuing service then merges the base Docker image with the user’s code in a new container, runs the process, and destroys the container.
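
That run-then-destroy lifecycle is straightforward to sketch with the docker-py client. This is an illustration of the pattern only, not Iron.io’s actual implementation; the image name and paths below are hypothetical:

```python
import docker  # pip install docker; assumes a local Docker daemon

client = docker.from_env()

# Hypothetical placeholders: a language "stack" image and a directory
# holding user code fetched from file storage.
base_image = "python:3.11-slim"
user_code_dir = "/tmp/job-42"

# Combine the base image with the user's code in a fresh container,
# run the job, and remove the container once it exits.
logs = client.containers.run(
    base_image,
    command=["python", "/job/main.py"],
    volumes={user_code_dir: {"bind": "/job", "mode": "ro"}},
    remove=True,
)
print(logs.decode())
```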

InfoWorld Cloud Computing

Kubernetes – the platform for running containers – is getting more enterprisey

Application containers are all the buzz nowadays. They’re an easy way to package applications and their dependencies into Linux containers and run them anywhere – public cloud, a private data center or a developer’s laptop.

The problem comes when managing a whole lot of containers together.

There are a handful of platforms emerging for managing containers at scale. Docker – the company that is credited with generating much of the market buzz about containers – has its own tool called Swarm. Google – which has said that most of its internal apps run in containers – has open sourced its own container management platform named Kubernetes.
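
As a concrete sketch of what “managing a whole lot of containers together” means in Kubernetes, here is a minimal example using the official kubernetes Python client, assuming a reachable cluster and a configured kubeconfig. You declare the desired state, and the platform keeps it true:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # reads ~/.kube/config; assumes a reachable cluster

# Declare the desired state: five replicas of an nginx container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=5,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.21")]
            ),
        ),
    ),
)

# Kubernetes now schedules and supervises the containers for you.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```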

Network World Cloud Computing

Containers plant a flag in the enterprise

Containers are finally making their way into the real world. According to a survey out this week from Nginx, 20 percent of respondents say they use containers in production, and one-third of the respondents run containers for more than 80 percent of workloads. Both developments are huge.

I see this in my world as well, as large enterprises now do real work with Docker and CoreOS.

On the surface, the objective is portability. Containers let you move from cloud to cloud with little or no modification. However, the chances are low that a Global 2000 IT shop will move from Cloud A to Cloud B anytime soon.

InfoWorld Cloud Computing

Google’s new managed containers are brought to you by Red Hat

A new incarnation of Red Hat’s OpenShift Dedicated service for running containers will be available on Google Cloud Platform and could further Google’s plans to create a genuinely open-source hybrid cloud.

OpenShift Dedicated was originally hosted on Amazon EC2, but it’s based on technology that can theoretically allow it to run anywhere. Now that Google and Red Hat are teaming up, instances of OpenShift Dedicated will be available on Google Cloud Platform. There are no details about pricing or availability as yet. 

InfoWorld Cloud Computing

No, containers are not the future of the cloud

InfoWorld Cloud Computing

Why unikernels might kill containers in five years

Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.

Container technologies have received explosive attention in the past year – and rightfully so. Projects like Docker and CoreOS have done a fantastic job at popularizing operating system features that have existed for years by making those features more accessible.

Containers make it easy to package and distribute applications, which has become especially important in cloud-based infrastructure models. Being slimmer than their virtual machine predecessors, containers also offer faster start times and maintain reasonable isolation, ensuring that one application shares infrastructure with another application safely. Containers are also optimized for running many applications on single operating system instances in a safe and compatible way.

So what’s the problem?

Traditional operating systems are monolithic and bulky, even when slimmed down. If you look at the size of a container instance – hundreds of megabytes, if not gigabytes, in size – it becomes obvious there is much more in the instance than just the application being hosted. Having a copy of the OS means that all of that OS’s services and subsystems, whether they are necessary or not, come along for the ride. This massive bulk conflicts with trends in the broader cloud market, namely the trend toward microservices, the need for improved security, and the requirement that everything operate as fast as possible.
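
One way to see that bulk for yourself is to list local image sizes. A minimal sketch using the docker-py client, assuming a local Docker daemon:

```python
import docker  # pip install docker; talks to the local Docker daemon

client = docker.from_env()

# List local images and their sizes; base OS layers account for most of it.
for image in client.images.list():
    size_mb = image.attrs["Size"] / 1e6
    print(f"{image.tags or image.short_id}: {size_mb:.0f} MB")
```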

Containers’ dependence on traditional OSes could be their demise, leading to the rise of unikernels. Rather than needing an OS to host an application, the unikernel approach allows developers to select just the OS services from a set of libraries that their application needs in order to function. Those libraries are then compiled directly into the application, and the result is the unikernel itself.

The unikernel model removes the need for an OS altogether, allowing the application to run directly on a hypervisor or server hardware. It’s a model where there is no software stack at all. Just the app.

There are a number of extremely important advantages for unikernels:

  1. Size – Unlike virtual machines or containers, a unikernel carries with it only what it needs to run that single application. While containers are smaller than VMs, they’re still sizeable, especially if one isn’t careful about the underlying OS image. Applications that may have had an 800MB image size could easily come in under 50MB. This means moving application payloads across networks becomes very practical. In an era where clouds charge for data ingress and egress, this could not only save time, but also real money (see the rough comparison after this list).
  2. Speed – Unikernels boot fast. Recent implementations have unikernel instances booting in under 20 milliseconds, meaning a unikernel instance can be started inline to a network request and serve the request immediately. MirageOS, a project led by Anil Madhavapeddy, is working on a new tool named Jitsu that allows clouds to quickly spin unikernels up and down.
  3. Security – A big factor in system security is reducing surface area and complexity, ensuring there aren’t too many ways to attack and compromise the system. Given that unikernels compile only what is necessary into the application, the surface area is very small. Additionally, unikernels tend to be “immutable,” meaning that once one is built, the only way to change it is to rebuild it. No patches or untrackable changes.
  4. Compatibility – Although most unikernel designs have been focused on new applications or code written for specific stacks that are capable of compiling to this model, technology such as Rump Kernels offer the ability to run existing applications as a unikernel. Rump kernels work by componentizing various subsystems and drivers of an OS, and allowing them to be compiled into the app itself.
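
As a rough back-of-the-envelope illustration of the size point above, the bandwidth and egress figures here are assumptions for illustration, not measurements:

```python
# Assumed figures for illustration only; real bandwidth and pricing vary.
BANDWIDTH_MBPS = 100   # network throughput in megabits per second
EGRESS_PER_GB = 0.09   # cloud egress price in USD per GB

def transfer(size_mb):
    """Return (seconds to move, dollars of egress) for a payload of size_mb."""
    seconds = size_mb * 8 / BANDWIDTH_MBPS
    dollars = size_mb / 1000 * EGRESS_PER_GB
    return seconds, dollars

for label, size_mb in [("800 MB container image", 800), ("50 MB unikernel", 50)]:
    seconds, dollars = transfer(size_mb)
    print(f"{label}: {seconds:.0f} s to move, ${dollars:.3f} egress per copy")
```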

These four qualities align nicely with the development trend toward microservices, making discrete, portable application instances with breakneck performance a reality. Technologies like Docker and CoreOS have done fantastic work to modernize how we consume infrastructure so microservices can become a reality. However, these services will need to change and evolve to survive the rise of unikernels.

The power and simplicity of unikernels will have a profound impact during the next five years, which at a minimum will complement what we currently call a container, and at a maximum, replace containers altogether. I hope the container industry is ready.

Why unikernels might kill containers in five years originally published by Gigaom, © copyright 2015.

Red Hat should double down on containers

OpenStack is many things, but a runaway success it is not. Despite a community that measures in the thousands, Gartner still counts OpenStack deployments in the hundreds — on a good day.

This could change. OpenStack might, as Randy Bias urged the OpenStack faithful in his annual State of the Stack address, start streamlining development because “OpenStack is at risk of collapsing under its own weight.”

But if you’re a hyperfocused company like Red Hat, sinking even more resources into OpenStack development might not be the smart bet.

InfoWorld Cloud Computing