Five things AIs can do better than us

For millennia, we surpassed the other intelligent species with which we share our planet — dolphins, porpoises, orangutans, and the like — in almost all skills, bar swimming and tree-climbing.

In recent years, though, our species has created new forms of intelligence, able to outperform us in other ways. One of the most famous of these artificial intelligences (AIs) is AlphaGo, developed by DeepMind. In just a few years, it has learned to play the 4,000-year-old strategy game Go, beating two of the world’s strongest players.

Other software developed by DeepMind has learned to play classic eight-bit video games, notably Breakout, in which players must use a bat to hit a ball at a wall, knocking bricks out of it. CEO Demis Hassabis is fond of saying that the software figured out how to beat the game purely from the pixels on the screen, often glossing over the fact that the company first taught it how to count and how to read the on-screen score, and gave it the explicit goal of maximizing that score. Even the smartest AIs need a few hints about our social mores.
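That explicit score-maximization objective can be sketched in miniature. The toy "game" below is hypothetical (three actions with hidden average payoffs, nothing like Atari), but it shows the same principle: the agent is told only to maximize a running score, and settles on the best action by trial and error.

```python
import random

# Toy stand-in for a game: each "action" has an unknown average score payoff.
# The agent's only objective, as with the Atari-playing software, is to
# maximize the running score; it is never told what the actions "mean".
PAYOFFS = {0: 1.0, 1: 5.0, 2: 2.0}  # hidden from the agent

def play(action):
    """Return a noisy score for taking the given action."""
    return PAYOFFS[action] + random.gauss(0, 0.5)

def run(episodes=2000, epsilon=0.1, seed=42):
    random.seed(seed)
    estimates = {a: 0.0 for a in PAYOFFS}  # the agent's learned values
    counts = {a: 0 for a in PAYOFFS}
    score = 0.0
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(list(PAYOFFS))       # explore
        else:
            action = max(estimates, key=estimates.get)  # exploit best estimate
        reward = play(action)
        score += reward
        counts[action] += 1
        # incremental mean update of the action-value estimate
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates, score

estimates, score = run()
print(max(estimates, key=estimates.get))  # the agent settles on action 1
```

The real system learns values from raw pixels with a deep network rather than a lookup table, but the reward signal, the score, plays the same role.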


Network World Cloud Computing

Sponsored post: Building For Success on AWS: Five Best Practice Tips

As AWS increasingly becomes the preferred deployment model for enterprise applications and services, it’s never been more important for software and SaaS providers to work effectively with AWS. Many leading technology providers are therefore optimizing their software to run on AWS, as well as building globally available cloud services delivered through AWS’s worldwide regions.

Splunk has been very pleased with the success of our SaaS business on AWS, so we thought we’d share what we’ve learned in the form of best practices for you to keep in mind when developing your software or SaaS business on AWS.

1. Embrace the change

If you’ve attended the keynote at any recent AWS Summit, you’ve heard the message “cloud is the new normal.” Our advice is to take this to heart, and invest in your business knowing the momentum behind cloud will only continue to accelerate.

This is a boon to businesses of every size and in every location around the world—cloud makes it easier than ever before to innovate, rapidly bring an offering to market and serve your customer.

2. Relentlessly focus on the customer experience

Focusing your business on customer success is a must when building a business on AWS.

Why? Because the number one driving factor behind everything AWS does is to help its customers be successful and innovative. Tactically, this can mean many things for a SaaS Partner, but the one that stands out is building technology integrations that can provide additional value to AWS customers.

A good example involves AWS CloudTrail and AWS Config: CloudTrail logs AWS API calls, capturing user activity, while Config records changes to resource configurations. When properly harnessed, these services help enterprises ensure the security and compliance of their AWS deployments. A handful of SaaS Partners deliver integrations for these AWS services, and their importance is clear when you consider how central security and compliance are to any successful AWS deployment.
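To make the integration idea concrete, here is a minimal sketch of what a monitoring integration does with CloudTrail data. The sample record is hypothetical and heavily trimmed; real records follow the documented CloudTrail event schema, and the list of "sensitive" calls is an assumption for illustration.

```python
import json

# A hypothetical, trimmed CloudTrail record; real records carry many more
# fields (awsRegion, requestParameters, responseElements, ...).
RECORD = json.dumps({
    "eventTime": "2016-03-01T12:00:00Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "DeleteUserPolicy",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "sourceIPAddress": "203.0.113.42",
})

# API calls a monitoring integration might flag for compliance review
# (an illustrative selection, not an authoritative list).
SENSITIVE = {"DeleteUserPolicy", "PutBucketAcl", "StopLogging"}

def flag(record_json):
    """Return an alert string if the API call looks security-relevant."""
    event = json.loads(record_json)
    if event["eventName"] in SENSITIVE:
        who = event["userIdentity"].get("userName", "unknown")
        return f"ALERT: {who} called {event['eventName']} from {event['sourceIPAddress']}"
    return None

print(flag(RECORD))
```

A production integration would of course ingest these records continuously from S3 or a stream and correlate them across accounts, but the core value-add is the same: turning raw API logs into actionable security signals.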

3. Leverage your customers in your go-to-market strategy

When it comes to building your software or SaaS business on AWS, nothing beats customer validation. One of the most compelling stories is when a customer fully integrates your technology into their AWS strategy.

A great example of this is the Financial Industry Regulatory Authority (FINRA). FINRA is an independent regulator that examines all securities firms and their registered persons and monitors trading on U.S. markets. To respond to rapidly changing market dynamics, FINRA is moving its platform to Amazon Web Services (AWS) to analyze and store approximately 30 billion market events every day. FINRA uses Splunk Cloud to ensure security and compliance in its AWS deployment.

4. Choose AWS and go “all-in”

When building out your cloud strategy, you have to make choices. Our advice: When two roads diverge in the cloud, choose AWS.

This is a best practice because AWS has the richest and broadest set of services in the market. If your offering is storage intensive, there are specific solutions for that; if it’s compute intensive, there are specific solutions for that; if it’s I/O intensive, there are specific solutions for that as well. Regardless of what you need on the infrastructure stack, whether it’s automated provisioning, configuration, or management, AWS has a mature solution that fits the bill.

In addition, business today is global. To successfully grow your business you need the ability to rapidly expand around the world. AWS offers that through its 11 worldwide regions.

5. Leverage the ecosystem

If you’re building a service on AWS, chances are that other folks building on AWS will find it useful. That’s what makes AWS’s announcement of its SaaS Partner Program so exciting. If you’re building a SaaS storage solution, odds are we could use it for our SaaS operational and security monitoring solution; and since we’re building a SaaS operational and security monitoring solution, odds are you could use it for your SaaS storage solution.

We have the opportunity to be better together on AWS for the benefit of all of our customers.

To learn more about our cloud solutions, visit us here.

Building For Success on AWS: Five Best Practice Tips originally published by Gigaom, © copyright 2016.



Why unikernels might kill containers in five years

Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.

Container technologies have received explosive attention in the past year – and rightfully so. Projects like Docker and CoreOS have done a fantastic job of popularizing operating system features that have existed for years by making those features more accessible.

Containers make it easy to package and distribute applications, which has become especially important in cloud-based infrastructure models. Being slimmer than their virtual machine predecessors, containers also offer faster start times and maintain reasonable isolation, ensuring that one application shares infrastructure with another application safely. Containers are also optimized for running many applications on a single operating system instance in a safe and compatible way.

So what’s the problem?

Traditional operating systems are monolithic and bulky, even when slimmed down. If you look at the size of a container instance – hundreds of megabytes, if not gigabytes, in size – it becomes obvious there is much more in the instance than just the application being hosted. Having a copy of the OS means that all of that OS’s services and subsystems, whether they are necessary or not, come along for the ride. This massive bulk conflicts with trends in the broader cloud market, namely the trend toward microservices, the need for improved security, and the requirement that everything operate as fast as possible.

Containers’ dependence on traditional OSes could be their demise, leading to the rise of unikernels. Rather than needing an OS to host an application, the unikernel approach allows developers to select just the OS services from a set of libraries that their application needs in order to function. Those libraries are then compiled directly into the application, and the result is the unikernel itself.

The unikernel model removes the need for an OS altogether, allowing the application to run directly on a hypervisor or server hardware. It’s a model where there is no software stack at all. Just the app.
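The library-selection step at the heart of this model can be sketched as a transitive dependency closure: starting from what the application calls directly, pull in only the OS libraries those calls reach, and leave the rest of the OS out entirely. The library names and dependency edges below are made up for illustration; real library operating systems such as MirageOS resolve this at compile time.

```python
# Hypothetical OS library dependency graph: each library maps to the
# set of libraries it depends on. A full OS ships all of them; a
# unikernel compiles in only what the app actually reaches.
OS_LIBS = {
    "net/tcp":     {"net/ip"},
    "net/ip":      {"mem/alloc"},
    "fs/ext":      {"block/dev", "mem/alloc"},
    "block/dev":   set(),
    "mem/alloc":   set(),
    "gui/x11":     {"mem/alloc"},  # unused by this app, so left out
    "print/spool": set(),          # likewise
}

def closure(direct_deps):
    """Return every library transitively reachable from the app's direct deps."""
    needed, stack = set(), list(direct_deps)
    while stack:
        lib = stack.pop()
        if lib not in needed:
            needed.add(lib)
            stack.extend(OS_LIBS[lib])
    return needed

# A network app that never touches the filesystem or GUI:
app_needs = closure({"net/tcp"})
print(sorted(app_needs))  # only 3 of the 7 libraries get compiled in
```

Everything outside the closure (the filesystem, GUI, and print subsystems here) simply never exists in the resulting image, which is where the size and attack-surface wins in the next section come from.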

There are a number of extremely important advantages for unikernels:

  1. Size – Unlike virtual machines or containers, a unikernel carries with it only what it needs to run that single application. While containers are smaller than VMs, they’re still sizeable, especially if one isn’t careful with the underlying OS image. Applications that may have had an 800MB image size could easily come in under 50MB. This means moving application payloads across networks becomes very practical. In an era where clouds charge for data ingress and egress, this could not only save time, but also real money.
  2. Speed – Unikernels boot fast. Recent implementations have unikernel instances booting in under 20 milliseconds, meaning a unikernel instance can be started inline to a network request and serve the request immediately. MirageOS, a project led by Anil Madhavapeddy, is working on a new tool named Jitsu that allows clouds to quickly spin unikernels up and down.
  3. Security – A big factor in system security is reducing surface area and complexity, ensuring there aren’t too many ways to attack and compromise the system. Given that unikernels compile in only what is necessary, the attack surface is very small. Additionally, unikernels tend to be “immutable,” meaning that once built, the only way to change one is to rebuild it. No patches or untrackable changes.
  4. Compatibility – Although most unikernel designs have been focused on new applications or code written for specific stacks that are capable of compiling to this model, technologies such as Rump Kernels offer the ability to run existing applications as a unikernel. Rump kernels work by componentizing various subsystems and drivers of an OS and allowing them to be compiled into the app itself.

These four qualities align nicely with the development trend toward microservices, making discrete, portable application instances with breakneck performance a reality. Technologies like Docker and CoreOS have done fantastic work to modernize how we consume infrastructure so microservices can become a reality. However, these services will need to change and evolve to survive the rise of unikernels.

The power and simplicity of unikernels will have a profound impact during the next five years, which at a minimum will complement what we currently call a container, and at a maximum, replace containers altogether. I hope the container industry is ready.

Why unikernels might kill containers in five years originally published by Gigaom, © copyright 2015.




Dell expanding in China with $125B investment over five years

Dell plans to invest US$125 billion over the next five years in China, the company’s second largest market after the U.S.

The computers and IT services company is also collaborating with the state-controlled Chinese Academy of Sciences to set up an “Artificial Intelligence and Advanced Computing Joint-Lab,” and is expanding its own research and development team in the country to focus on technologies aimed at the Chinese market.

The company already employs nearly 2,000 senior engineers in its research and development team in the country.

Like many other U.S. technology companies, Dell appears to be making these investments in the country to win over large local government and private-sector customers.

