Machine Learning for Thread Level Speculation on Multicore Architectures

Lead Research Organisation: University of Edinburgh
Department Name: Sch of Informatics


Computer hardware has arrived in the era of multi-core systems. Processors with 2 and 4 cores are already on the high street, and chip manufacturers promise to deliver many more cores per chip in the coming years. The big research challenge is: how can we make the best use of all these resources? Existing programs and programming styles are unable to take real advantage of this hardware concurrency.

Thread-Level Speculation (TLS) is one viable solution. TLS works by making predictions about future computations and proceeding to execute programs 'speculatively' as if these predictions were true. As a backup, it checks the predictions in parallel with the speculative computation. If the predictions turn out to be correct, then the computer has done useful work earlier than it otherwise could have - ultimately meaning your programs run faster. On the other hand, if the predictions are false, then the system has to throw away the results, and the speculative work is wasted.

There are many different factors to consider in this new paradigm. TLS influences different parts of the system, including the processor, memory, operating system, programming language and compiler. At each of these levels there are various policies and heuristics to set. These affect things like how to make predictions about the future, how to stop different computational tasks from interfering with each other, how to decide which threads are more important, and how existing optimization techniques interact with speculation.

This research project will explore these factors using machine learning. We will use state-of-the-art feature selection and online machine learning techniques, developing the field where necessary, with the ultimate goal of creating a computer system that can automatically tune itself to run its programs as fast as the physical resources will allow.
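The predict/execute/validate cycle described above can be sketched in miniature. The following Python fragment is purely illustrative (all names are invented for this example, and real TLS operates on hardware threads and memory state, not function calls): a consumer runs ahead using a predicted value, and the result is either committed on a correct prediction or squashed and recomputed on a misprediction.

```python
def speculative_run(predict, produce, consume):
    """Toy model of thread-level speculation for one producer/consumer pair.

    predict -- guesses the value the producer will eventually yield
    produce -- the real (slow) upstream computation
    consume -- the downstream work that depends on the produced value
    Returns (result, prediction_was_correct).
    """
    predicted = predict()            # make a prediction about the future
    speculative = consume(predicted)  # run ahead as if the prediction holds
    actual = produce()               # meanwhile, compute the real value
    if actual == predicted:
        return speculative, True     # prediction correct: commit early work
    return consume(actual), False    # misprediction: discard and re-execute


# Correct prediction: the speculative result is kept.
result, hit = speculative_run(
    predict=lambda: 10,
    produce=lambda: 10,
    consume=lambda x: x * 2,
)
```

In a real system the producer and consumer would run concurrently on separate cores, and "discarding" means rolling back buffered memory writes; the sketch only captures the commit-or-squash decision that the project's learned policies would aim to make profitable.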

