Phase II year
2015
(last award dollars: 2018)
Phase II Amount
$2,999,206
Our proposal includes the creation of a web-based platform that will benchmark GPU applications across numerous GPU cards hosted in the cloud. This is a useful tool for programmers with limited hardware resources at hand. Further, we will open source the ArrayFire software library, the broadest OpenCL library available. In addition, we will extend ArrayFire with additional functionality for image processing, computer vision, and social network analysis, all of which are crucial tools for data scientists.
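
As an illustrative sketch only (not part of the proposal text), the per-card measurement such a platform would report can be gathered with ArrayFire's built-in timing. The helper name blur_bench, the 1024x1024 image size, and the 7x7 Gaussian filter below are assumptions chosen for the example; only standard ArrayFire calls are used.

    #include <arrayfire.h>
    #include <cstdio>

    // Illustrative workload: one image-processing kernel to time on the active device.
    static void blur_bench() {
        af::array img  = af::randu(1024, 1024);     // stand-in for a real input image
        af::array kern = af::gaussianKernel(7, 7);  // 7x7 Gaussian filter
        af::array out  = af::convolve2(img, kern);  // filtering on the active device
        out.eval();                                 // force the kernels to launch
        af::sync();                                 // wait for the device to finish
    }

    int main() {
        af::info();                                 // report which card is being timed
        double seconds = af::timeit(blur_bench);    // elapsed seconds over repeated runs
        std::printf("7x7 blur on 1024x1024: %f s\n", seconds);
        return 0;
    }

Running the same small program under each cloud GPU card is the kind of comparison the web platform would automate for programmers without local hardware.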
----------
Numerous machine learning frameworks exist, each with highly variable support for accelerated computing (e.g., Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), etc.). A library of machine learning primitives could support the many disparate frameworks and boost the advancement of machine learning research and programs, such as those underway in the DARPA Data-Driven Discovery of Models (D3M) program. This proposal seeks to build accelerated machine learning primitives into the ArrayFire Machine Learning Library (ArrayFire-ML). Features of ArrayFire-ML include: 1) transparent hardware acceleration, owing to ArrayFire's ability to run on CUDA parallel computing platforms, within the Open Computing Language (OpenCL) framework, or on multi-core Central Processing Unit (CPU) devices; 2) support for many programming languages, including Python and Julia, via community language wrappers; 3) the robust and already deployed ArrayFire testing and documentation framework; 4) the active and broad existing ArrayFire open source community, which is seeking additional machine learning functionality; and 5) with the proposed D3M primitives in ArrayFire-ML, all of ArrayFire's existing math functions would become available for further research on those primitives. In this proposal, we seek to provide a significant leap forward for the machine learning field by building a robust, open source library of accelerated machine learning primitives available to frameworks, the D3M program, and the broader research community.
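
A minimal sketch of the transparent hardware acceleration described in feature 1: the same ArrayFire code runs on whichever backend (CPU, CUDA, or OpenCL) is selected at runtime, so a primitive written once is accelerated wherever a device is available. The dense-layer sizes and the sigmoid stand-in for an ArrayFire-ML primitive are assumptions made for illustration; only core ArrayFire calls are used.

    #include <arrayfire.h>
    #include <cstdio>

    int main() {
        // Bitmask of the backends compiled into this ArrayFire build.
        int avail = af::getAvailableBackends();
        af::Backend candidates[] = {AF_BACKEND_CPU, AF_BACKEND_CUDA, AF_BACKEND_OPENCL};

        for (af::Backend b : candidates) {
            if (!(avail & b)) continue;   // skip backends this build lacks
            af::setBackend(b);            // switch CPU / CUDA / OpenCL at runtime
            af::info();                   // report the active device

            // Stand-in for an ArrayFire-ML primitive: a dense-layer forward pass.
            af::array W = af::randu(256, 784);
            af::array x = af::randu(784, 1);
            af::array y = af::sigmoid(af::matmul(W, x));
            af::eval(y);
            std::printf("backend %d ran; output has %lld elements\n",
                        (int)b, (long long)y.elements());
        }
        return 0;
    }

Because the primitive is expressed in ArrayFire's array operations, the same source compiles and runs unchanged across the three backends, which is the property the proposed D3M primitives would inherit.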