Google open sourced TensorFlow (TF), a distributed machine learning library, in November. The basic idea is that you build your ML process into a graph and let TF handle running and distributing the work across cores, whether those cores are in your CPU or your GPU.
The dataflow graph works much like a SAS Enterprise Miner diagram, except there is no GUI to define it. The nodes of the graph represent operations, while the edges represent the data exchanged between nodes.
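To make the graph idea concrete, here is a toy dataflow graph in plain Python. This is not TensorFlow's actual API, just a minimal sketch of the concept: operations become nodes, data flows along edges, and nothing is computed until the graph is executed.

```python
# Toy dataflow graph (illustrative only, not TensorFlow's API):
# nodes are operations, edges carry data between them, and no work
# happens until the graph is explicitly run.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the operation this node performs
        self.inputs = inputs  # edges: upstream nodes feeding data in

    def run(self):
        # Execute upstream nodes first, then apply this node's operation
        return self.op(*(n.run() for n in self.inputs))

def const(value):
    # A source node with no inputs that simply emits a value
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4 -- nothing is computed at this point
graph = Node(lambda x, y: x * y,
             Node(lambda x, y: x + y, const(2), const(3)),
             const(4))

print(graph.run())  # -> 20
```

In TensorFlow the same separation applies: you first describe the whole computation as a graph, and only then hand it to the runtime, which decides how to schedule the nodes across the available cores.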
The response to TensorFlow has been overwhelmingly positive so far. The last time Google shared this kind of technology (MapReduce), we ended up with a new industry standard (Hadoop). Google itself stopped using MapReduce in 2014, and the sense in online circles is that TensorFlow may repeat the feat. The few critics point out that competitors like Theano and Caffe already do the same thing without the limitations Google placed on TF (distributing work across a cluster is not possible in the open-source version).
I experimented with TF a little for a possible EMBA project (which turned out to be too complex for an EMBA). The Python interface was a breeze to use; Google did a good job of developing the documentation and packaging to go with TF before releasing it. My only gripe was that TensorBoard (the module responsible for visualizing the graphs) only works properly in Chrome. I was unable to get Firefox to render more than the histograms and plots.