Heterogeneous Parallel and Distributed Computing with Java (HPDCJ)

Lead Research Organisation: Queen's University of Belfast
Department Name: Electronics Electrical Eng and Comp Sci


Our proposal focuses on the ease of use and programmability of Java for distributed heterogeneous computing, in order to make it exploitable by the huge user base of mainstream computing. Based on previous work (the PCJ library, http://pcj.icm.edu.pl), we will introduce and transparently expose parallelism in Java, with minimal change to the specifics of the language, thus allowing programmers to focus on the application. We have demonstrated the power and scalability of the PCJ library on parallel systems, and we will extend it to cases where communication cost and latency may be higher.

We will extend the existing solution with the capability of running on heterogeneous systems, including GPUs and mobile devices. Users will be able to execute computationally intensive parts of the application on multiple GPUs. Since our solution is based on Java, it can easily run on mobile devices. Within the project we will extend the library with the optimised communication and scheduling mechanisms necessary to fully exploit such devices.

We will utilise the potential of the parallel Java library to process distributed data. The existing solution benefits from parallel I/O performed by multiple JVMs. We will use this solution to optimise data distribution and storage, including streaming of large data sets.

We will address dependability and resilience by adding fault tolerance mechanisms to the parallel Java library, including fault detection and rescheduling of the application execution. These mechanisms will extend the capabilities of the existing PCJ library and will be transparent to users.

We will show the applicability of our framework for distributed heterogeneous systems by a set of selected, key applications including data-intensive Big Data applications.
Our potential success will create a solution for Java programming that will be very attractive to a wide mainstream user base and will thus have a game-changing influence on the European computing industry.

We have assembled a carefully selected team with complementary focuses and the right degree of overlap. Most of the partners have worked in close collaboration in previous (EU) projects with remarkable success. We believe this will become a key pilot project that can open the way for future research with a profound impact on mainstream computing.

Planned Impact

The developed libraries will address problems of communication and computation scheduling on parallel and distributed systems. We will especially investigate problems arising in the efficient utilisation of multicore systems with heterogeneous architectures.

The developed solutions will be applied to selected applications from large-scale data analysis. We will therefore deliver concrete solutions for important parallel and distributed computing challenges, especially in the Big Data area, improving European competitiveness in areas of strategic importance for Europe.

Through its industrial partners, HPC centres and research groups in the area of parallel computing, the HPDCJ project will strengthen European industry and research in the supply, operation and use of heterogeneous parallel systems, helping Europe achieve world leadership. The development of new programming paradigms, libraries and applications will contribute to next-generation computing. Through its partners, the project will exploit synergies with ongoing EC-supported efforts in heterogeneous platforms and the deployment of leadership-class HPC systems under PRACE.

The main application area is data analysis. The data produced is growing at 50 percent a year, or more than doubling every two years, according to IDC estimates. Improved access to information is also fuelling the Big Data trend. For example, government data - employment figures and other information - has been steadily migrating onto the Web. In business, economics and other fields, decisions will increasingly be based on data and analysis rather than on experience and intuition.

The recent study Data Equity: Unlocking the value of big data (see reference [19], case for support) validates current industry thinking that big data will herald the next phase of technology-led business innovation, productivity and competition. As the amount of data continues to grow, compounded by the internet, social media, cloud computing and mobile devices, it poses both a challenge and an opportunity for businesses - how to manage, analyse and make use of the ever-increasing amount of data being generated. As a result, organisations are turning to big data analytics solutions such as high-performance analytics to unlock the value of data and reveal previously unseen patterns, sentiments and customer intelligence.

An investigation of 179 large companies found that those adopting "data-driven decision making" achieved productivity gains that were 5 to 6 percent higher than other factors could explain.

The parallel Java libraries, tools and skeletons developed by the HPDCJ project, together with demonstrations of their usability for big data analytics applications, will significantly influence this market. Adopting these solutions on state-of-the-art heterogeneous parallel and distributed systems will become easier and accessible to much larger user communities. The participation of a commercial partner already present in this market makes this prediction more credible.

The project opens the way for wide adoption of Java for scalable parallel simulations and data processing, which will significantly increase the number of applications. We believe that the HPDCJ project, through new tools, libraries, use cases and examples, will lower the entry barrier to parallel heterogeneous computing and therefore attract young developers from universities, academia and industry. Thus, the potential impact on European computing could be game-changing.


Description During our research we studied potential programming environments for heterogeneous computing using Java. GPU accelerators are of particular interest, and there are different Java programming environments that can exploit their capabilities. The Java programming environments that we have examined so far are JCuda, Aparapi and Jocl.
Assessing the performance and identifying potential overheads compared with the original C/C++ alternatives, CUDA and OpenCL, is one of the contributions of our research. To evaluate the performance of these Java libraries we ran comparison tests using different kernels: a) sparse matrix-vector multiplication (SpMV), b) sparse matrix-matrix multiplication and c) the Fast Fourier Transform. The performance results of the Java environments (JCuda, Aparapi and Jocl) were compared against similar kernel implementations in CUDA and OpenCL. The goal was to estimate any particular overheads that the Java libraries might impose. To compare the performance of the Java counterparts we also utilised the Scalable HeterOgeneous Computing (SHOC) suite, an established benchmark for CUDA and OpenCL. The SHOC benchmark provided us with three levels of performance testing: a) very low-level device characteristics such as bandwidth between host and a GPU device, b) device performance for low-level operations, and c) device performance for complex applications.
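For readers unfamiliar with the first kernel, a plain-Java CPU baseline for sparse matrix-vector multiplication in CSR format can be sketched as below. This is an illustrative sketch only, not the project's benchmark code; the class and array names (`CsrSpmv`, `rowPtr`, `colIdx`, `values`) are our own conventions. A GPU version of the same loop is what JCuda, Aparapi or Jocl would offload.

```java
import java.util.Arrays;

// Minimal sparse matrix-vector multiplication (SpMV) in CSR format.
// A CPU reference of this kind is the baseline against which GPU
// kernels (via JCuda, Aparapi or Jocl) are typically compared.
public class CsrSpmv {
    static double[] spmv(int[] rowPtr, int[] colIdx, double[] values, double[] x) {
        int rows = rowPtr.length - 1;
        double[] y = new double[rows];
        for (int i = 0; i < rows; i++) {
            double sum = 0.0;
            // Iterate only over the non-zero entries of row i.
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                sum += values[k] * x[colIdx[k]];
            }
            y[i] = sum;
        }
        return y;
    }

    public static void main(String[] args) {
        // 3x3 matrix [[2,0,1],[0,3,0],[4,0,5]] stored in CSR form.
        int[] rowPtr = {0, 2, 3, 5};
        int[] colIdx = {0, 2, 1, 0, 2};
        double[] values = {2, 1, 3, 4, 5};
        double[] x = {1, 1, 1};
        System.out.println(Arrays.toString(spmv(rowPtr, colIdx, values, x)));
        // prints [3.0, 3.0, 9.0]
    }
}
```

The irregular, indirect memory accesses through `colIdx` are what make SpMV a useful stress test for the memory-transfer and kernel-launch overheads of the Java GPU bindings.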

Our findings indicate that the Java programming environments can impose a measurable overhead compared with the original CUDA and OpenCL implementations. The overhead varies with both the size of the examined problem and the type of use case. The very small performance gap for low-level characteristics, such as bandwidth between host and device, indicates that the Java environments can provide a viable and portable solution for exploiting GPU capabilities. We will continue working in this direction to examine in detail the performance and any potential overheads using more complex use-case applications.

Further activities have concentrated on ways to improve the memory usage of the JVM, in particular garbage collection, when dealing with very large amounts of data. These included a survey of the literature on current issues with JVM heap usage on NUMA machines and the analysis of a few Spark applications (in particular TF/IDF and, to some extent, PageRank and BFS), which exhibited memory issues. Additionally, we looked into GC behaviour with big datasets of different sizes and aggregation patterns, and explored possible improvements by tuning the GC and modifying the source code.
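The kind of GC observation described above can be sketched with the standard `java.lang.management` API, which reports per-collector cycle counts. This is a generic illustration under our own assumptions (the class name `GcProbe` and the synthetic allocation loop standing in for a real workload), not the project's analysis code:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: count collector activity while allocating a
// large volume of short-lived objects, as a stand-in for observing
// GC pressure in a data-processing workload.
public class GcProbe {
    static long totalCollections() {
        long n = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) n += c;  // getCollectionCount() may return -1 if unsupported
        }
        return n;
    }

    public static void main(String[] args) {
        long before = totalCollections();
        List<byte[]> data = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            data.add(new byte[10_000]);       // roughly 100 MB allocated in total
            if (i % 1_000 == 0) data.clear(); // release batches, creating garbage
        }
        long after = totalCollections();
        System.out.println("GC cycles during allocation: " + (after - before));
    }
}
```

Runs of such a probe can then be compared across heap and collector settings (e.g. different `-Xmx` values, or `-XX:+UseG1GC` versus other collectors) to see how tuning changes collection frequency for a given allocation pattern.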
Exploitation Route The use of Java programming environments to exploit heterogeneous computing resources such as GPUs is an emerging research topic in an area traditionally dominated by C/C++. The need for Java programming environments that utilise GPU accelerators becomes more obvious given the rising use of Java in big data applications and widely used data analytics environments, as well as in other distributed and computationally intensive applications. Examples of such Java applications are the Apache Spark framework and the Apache HBase distributed database, where the use of GPU accelerators is currently a very actively investigated research topic. To achieve better-integrated support for GPU acceleration in these cases, within a unified Java programming environment, it is necessary to utilise Java APIs such as JCuda, Aparapi and Jocl. However, the advantages and weaknesses of these APIs across different use-case scenarios are not thoroughly documented, and our findings aim to address this gap.
Sectors Digital/Communication/Information Technologies (including Software)

Description EPSRC ICT Delivery Planning Workshops
Geographic Reach National 
Policy Influence Type Participation in a national consultation
Description EU Horizon2020 Programme: UniServer Project
Amount € 663,625 (EUR)
Funding ID 687628 
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 02/2016 
End 01/2019
Description Horizon2020 Programme
Amount € 5,999,510 (EUR)
Funding ID H2020-732631 
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 01/2017 
End 12/2020
Description SFI-DEL Investigators Programme: Meeting the Challenges of Heterogeneous and Extreme Scale Parallel Computing
Amount £521,947 (GBP)
Funding ID 14/IA/2474 
Organisation Science Foundation Ireland (SFI) 
Sector Charity/Non Profit
Country Ireland
Start 09/2015 
End 08/2020
Description Collaboration with IBM on Numerical Libraries for Graph Analytics 
Organisation IBM
Country United States 
Sector Private 
PI Contribution Optimisation of sparse matrix algorithms for large-scale graph processing problems emerging from citation and social graphs. QUB team has contributed techniques to exploit mixed precision arithmetic and dynamic concurrency control in order to minimise the energy footprint of sparse matrix computations in a range of heterogeneous systems for datacentres and HPC environments.
Collaborator Contribution IBM has contributed algorithmic techniques and evaluation systems for this work.
Impact A series of highly optimised implementations of sparse matrix-vector product operations for large-scale graph analytics on heterogeneous HPC servers, micro-servers, and custom computing platforms.
Start Year 2015
Description Collaboration with Maxeler on integrating dataflow accelerators in Big Data software stacks 
Organisation Maxeler Technologies Inc
Department Maxeler Technologies
Country United Kingdom 
Sector Private 
PI Contribution Integration of Maxeler's dataflow engines into the Spark, Storm and other Big Data software stacks, in collaboration with Maxeler Technologies and STFC Hartree.
Collaborator Contribution Programming APIs for Maxeler dataflow accelerators.
Impact No outputs yet, extensions of Spark and Storm with streaming APIs using Maxeler dataflow engines are currently under design.
Start Year 2016
Description NVTV Interview on Supercomputing 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Interview in NVTV's Behind the Science program on Supercomputing as a technology with impact on our everyday lives.
Year(s) Of Engagement Activity 2015
URL http://www.nvtv.co.uk/shows/behind-the-science-dimitrios-nikolopoulos/