Splice Machine seeks to deliver hybrid RDBMS as a service

Splice Machine, which specializes in an open source relational database for hybrid workloads, wants to bring that database to the cloud as a service.

The company announced this week that it will release Cloud RDBMS, a database-as-a-service (DBaaS) offering, on Amazon Web Services (AWS) this spring. It noted that Cloud RDBMS will be able to power applications and perform analytics without the need for ETL or separate analytical databases.

CIO Cloud Computing

Nadella points to machine learning as battleground in cloud computing

Microsoft CEO Satya Nadella has identified machine learning as the firm’s key focus as cloud computing usage becomes more widespread.

It is an area that is fast becoming the battleground for the big cloud providers. Google and Amazon Web Services both offer a range of tools that make it easier for developers to create “intelligent” applications, while the likes of Salesforce are keen to incorporate artificial intelligence into their software services.

Speaking at an event in London’s Canary Wharf financial district, Nadella’s sales pitch placed emphasis on the role of machine learning across Microsoft’s range of cloud products – from its infrastructure- and platform-as-a-service offerings in Azure to its Dynamics and Office 365 cloud software.

CIO Cloud Computing

AWS machine learning VMs go faster, but not forward

Amazon Web Services has unveiled a new generation of GPU-powered cloud computing instances aimed squarely at customers running machine learning applications.

The P2’s a major step up from the previous generation of GPU-powered AWS instances, and it has plenty of memory to burn. But it’s built with an earlier generation of GPU, so it’s less suited for the bleeding-edge machine learning work that needs the most recent advances in GPU technology.

New hotness …

The prior variety of AWS instances with GPUs, the G2, maxed out at four GPUs with 4GB of video RAM apiece and 80GB of system memory per instance. Amazon is currently billing the G2 as suitable for “graphics-intensive applications,” rather than machine learning specifically.

InfoWorld Cloud Computing

The method behind Google’s machine learning madness

First there was TensorFlow, Google’s machine learning framework. Then there was SyntaxNet, a neural network framework Google released to help developers build applications that understand human language. What comes next is anyone’s guess, but one thing is clear: Google is aggressively open-sourcing the smarts behind some of its most promising AI technology.

Despite giving it away for free, however, Google is also apparently betting that “artificial intelligence will be its secret sauce,” as Larry Dignan details. That “sauce” permeates a bevy of newly announced Google products like Google Home, but it’s anything but secret.

InfoWorld Cloud Computing

Eric Schmidt sees a huge future for machine learning

The man who helped build Google from a search engine into one of the biggest and most influential companies in the world has predicted the emergence of a new computing architecture based on crowd-sourced data and machine learning.

Speaking at Google’s GCP Next cloud computing conference in San Francisco, Alphabet Chairman Eric Schmidt said the combination of crowd-sourced data and machine learning will be the basis of “every successful huge IPO” in five years.

He said the adoption of machine learning will allow companies to mine crowd-sourced data, which already provides a mass of information not previously available to companies, and improve on it.

InfoWorld Cloud Computing

Need machine learning? HPE just launched a new service with more than 60 APIs

CIO Cloud Computing

IBM Strengthens Effort to Support Open Source Spark for Machine Learning

IBM is providing substantial resources to the Apache Software Foundation’s Spark project to prepare the platform for machine learning tasks, such as pattern recognition and object classification. The company plans to offer Bluemix Spark as a service and has dedicated 3,500 researchers and developers to its maintenance and further development.

In 2009, AMPLab at the University of California, Berkeley developed the Spark framework, which went open source a year later and subsequently became an Apache project. The framework, which runs on a server cluster, can process data up to 100 times faster than Hadoop MapReduce. As data and analytics become embedded in corporate structures and society at large – from applications to the Internet of Things (IoT) – Spark provides essential advances in large-scale data processing.

First, it significantly improves the performance of data-dependent applications. Second, it radically simplifies the development of intelligent applications that are fed by that data. To accelerate innovation in the Spark ecosystem, IBM has decided to include Spark in its own predictive analytics and machine learning platforms.

IBM Watson Health Cloud will use Spark to give healthcare providers and researchers access to new population health data. At the same time, IBM will make its SystemML machine learning technology available as open source. IBM is also collaborating with Databricks to advance Spark’s capabilities.

IBM will dedicate more than 3,500 researchers and developers to Spark-related projects in more than a dozen laboratories worldwide. Big Blue also plans to open a Spark Technology Center in San Francisco for the data science and developer community, and will train more than one million data scientists and data engineers on Spark through partnerships with DataCamp, AMPLab, Galvanize, MetiStream, and Big Data University.

A typical large corporation has hundreds or thousands of data sets residing in different databases across its systems. A data scientist can design an algorithm to plumb the depths of any one of those databases, but developing it can take 90 working days, and adapting it to work against another system can take another quarter of effort. Spark cuts much of that time: a Spark-based system can access and analyze any of those databases without additional development or delay.

Another of Spark’s virtues is ease of use: developers can concentrate on designing the solution rather than building an engine from scratch. Spark advances large-scale data processing because it improves the performance of data-dependent applications, radically simplifies the development of intelligent solutions, and provides a platform capable of unifying all kinds of information across real workloads.

Many experts consider Spark the successor to Hadoop, but its adoption remains slow. Spark works very well for machine learning tasks, which normally require running large clusters of computers. The latest version of the platform, which recently came out, extends the set of machine learning algorithms it can run.


CloudTimes