Efficient Computing for Particle Physics (Lead Proposal)

Lead Research Organisation: University of Edinburgh
Department Name: Sch of Physics and Astronomy

Abstract

STFC science projects are highly data-intensive and depend crucially on high-performance software and computing, as well as on instruments and facilities. Software is used for: control and operation of experiments; filtering and data reduction; signal processing and reconstruction; statistical data analysis; and experiment simulation. The high-luminosity LHC upgrade (HL-LHC) in 2026 will require greater computing performance than any other current science facility. At present, there is no means of meeting the HL-LHC computing requirements within a reasonable funding envelope. In particular, running current software on a dataset ten times larger will not scale, since the "Moore's Law" decrease of computing costs with time has effectively ceased, owing to our inability to use modern computing platforms efficiently. This is an urgent issue that must be addressed now, while there is still time to put the required new technologies in place. We will engage with industry and train ourselves in new computing architectures, and we will identify the parts of our millions of lines of code that will benefit most strongly from adaptation to these new methods.
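
As an illustration only, the sketch below shows the kind of hotspot profiling such a code audit involves, here with a toy Python stand-in for experiment code; the function names and workload are hypothetical and not taken from any real HEP codebase.

```python
# Minimal sketch of hotspot profiling on a toy workload (hypothetical
# functions standing in for real experiment code).
import cProfile
import math
import pstats
import random


def simulate_event(n_hits=200):
    # Toy "simulation": random hit positions for one event.
    return [(random.random(), random.random()) for _ in range(n_hits)]


def reconstruct_track(hits):
    # Toy "reconstruction": a deliberately expensive inner loop.
    return sum(math.sqrt(x * x + y * y) for x, y in hits)


def run_campaign(n_events=2000):
    return [reconstruct_track(simulate_event()) for _ in range(n_events)]


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    run_campaign()
    profiler.disable()
    # Rank routines by cumulative time to identify candidates for
    # vectorisation or porting to accelerators.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

In a production codebase the equivalent measurement would more likely use native profilers over C++ code, but the principle is the same: rank routines by cumulative cost before deciding what to adapt.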

Planned Impact

This proposal will develop methods to yield greater scientific output per unit computing cost, both by leveraging recent trends in hardware and by taking a comprehensive, cross-experiment approach to software optimisation. The direct impact will be a real-terms increase in the scientific sensitivity of HEP experiments, together with the societal impact of better value for money for public funds spent on science, not only for the HEP experiments with which the proposal investigators are affiliated but for large-scale scientific infrastructure in general.

Over the next decade, several UKRI-supported programmes (SKA, LSST, DUNE and the HL-LHC upgrades, to name a few) will collect the largest datasets ever recorded in their respective fields, often by orders of magnitude. At the same time, the rate of increase of data-processing capability for standard (x86) computing architectures is slowing. Improved software performance, together with new software optimised for specialised (GPU, FPGA and ASIC) architectures, stands to benefit all of the UKRI programmes seeking to realise the potential of big data.

There are significant synergies with recent trends in industry, where FPGA-based machine learning solutions are in the ascendancy. Microsoft's cloud computing platform has started to offer FPGA coprocessor options for this task (https://www.eetimes.com/document.asp?doc_id=1334741), incentivising current cloud users to develop for these performant and energy-efficient technologies. This proposal will deliver a roadmap by which we will train the future users of these technologies through the Centres for Doctoral Training (CDTs) specialising in data-intensive science and through CASE studentships with industry partners. These training goals generate direct impact through industry placements, and secondary impact as trained students graduate and move into industry.

As society faces the real challenges of climate change and urban pollution, there is growing interest in the real-time processing of big data for the efficient use of resources (http://ximantis.com/). The intelligent use of big data to promote greener cities can only be realised with performant algorithms of the kind enabled by this proposal. We will involve industry in the UK workshops through existing links with universities and through CERN openlab, in addition to fostering new industry relations in the areas mentioned above.

Publications

 
Description Efficient solutions for delivering high-energy-physics computing were investigated. This was a pump-priming project that, together with other work, led to Swift-HEP.
Exploitation Route The partnerships with industry, the working group structure and the individual projects have all been taken forward.
Sectors Digital/Communication/Information Technologies (including Software)

 
Description Links have been set up between UK HEP academics and industry partners in computing.
First Year Of Impact 2020
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal, Economic

 
Description ST/V002562/1 STFC Swift-HEP PPRP project
Amount £1,500,000 (GBP)
Funding ID ST/V002562/1 
Organisation Science and Technology Facilities Council (STFC)
Sector Public
Country United Kingdom
Start 04/2021 
End 04/2024
 
Description NVIDIA
Organisation NVIDIA
Country Global 
Sector Private 
PI Contribution Farrington and a postdoc (Vishwakarma) are working with NVIDIA and CERN on adapting the ATLAS EM calorimeter geometry for a parallelised implementation.
Collaborator Contribution Mentoring of Vishwakarma by NVIDIA, including training to ambassador status.
Impact No outputs yet.
Start Year 2019
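
Purely as a hedged illustration of what adaptation for a parallelised implementation can mean (this is not the ATLAS, CERN or NVIDIA code), the sketch below contrasts a per-hit geometry lookup loop with an array-at-a-time version on a hypothetical uniform eta-phi grid; the batched form is the shape that maps naturally onto GPU execution.

```python
# Minimal sketch of restructuring a geometry lookup for data parallelism.
# The uniform eta-phi grid and all names are hypothetical stand-ins for
# the far more complex real calorimeter description.
import numpy as np

ETA_BINS, PHI_BINS = 100, 64
ETA_MIN, ETA_MAX = -2.5, 2.5
TWO_PI = 2.0 * np.pi


def cell_index_loop(eta, phi):
    # Scalar, per-hit version: easy to read, hard to parallelise.
    indices = []
    for e, p in zip(eta, phi):
        ieta = int((e - ETA_MIN) / (ETA_MAX - ETA_MIN) * ETA_BINS)
        iphi = int((p % TWO_PI) / TWO_PI * PHI_BINS)
        indices.append(ieta * PHI_BINS + iphi)
    return np.array(indices)


def cell_index_batched(eta, phi):
    # Array-at-a-time version: the whole batch of hits is mapped in one
    # step, a form that translates naturally to GPU kernels.
    ieta = ((eta - ETA_MIN) / (ETA_MAX - ETA_MIN) * ETA_BINS).astype(np.int64)
    iphi = ((phi % TWO_PI) / TWO_PI * PHI_BINS).astype(np.int64)
    return ieta * PHI_BINS + iphi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eta = rng.uniform(ETA_MIN, ETA_MAX, 1_000_000)
    phi = rng.uniform(0.0, TWO_PI, 1_000_000)
    # Both versions agree; only the batched one exposes data parallelism.
    assert np.array_equal(cell_index_loop(eta[:1000], phi[:1000]),
                          cell_index_batched(eta[:1000], phi[:1000]))
```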
 
Description ECHEP/Excalibur Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Workshop involving the wider UK particle physics community, addressing the need for modern software and computing solutions to deal with exascale datasets. The workshop informed the Swift-HEP proposal and collaboration.
Year(s) Of Engagement Activity 2020
URL https://indico.cern.ch/event/928965/