At EPCC we run an MSc in HPC and data science, and I am course organiser of Parallel Design Patterns (PDP), a semester two module. The idea is to take a top-down approach to writing or optimising parallel codes: we discuss the different ways in which one can architect parallelism from a design perspective, and from this make decisions about how to structure parallel code and data. There are numerous attributes to consider, from performance to maintainability, portability, and bit reproducibility, and which matter most really depends on the application in question and what you are trying to achieve. However, the good news is that the HPC community has been developing and optimising parallel codes for over 30 years, so we already have a great deal of knowledge and experience of how to design such codes effectively. The course is therefore based around different game plans for parallelism (which we call our design patterns), and explores the problems they suit and how to specialise the patterns for specific codes and situations. It is very much a practical course, where real-world HPC examples motivate the content in lectures, and practicals then explore these hands-on. In addition to doing the majority of the teaching on the PDP course, I also teach some lectures on a couple of our other courses.

I also supervise MSc dissertation projects each year, some of which are industry-led dissertations where the student works on a specific problem that a company has encountered, with the student also supervised by the company's staff. The MSc projects I have supervised are wide ranging, covering areas including GPU and FPGA optimisation of codes, visual approaches to writing parallel codes, volunteer web-based computing, and the porting of Python onto RISC-V architectures.

PhD students

I am currently the primary supervisor of two PhD students, Maurice and Ludovic. Maurice Jamieson is exploring the programmability and performance challenges associated with micro-core architectures, specifically working on ePython. His research has focused on the ability to process data-sets and codes of arbitrary size on these architectures using Python, whilst also delivering high performance (as close to native as possible) along with the programmer productivity benefits. This work has involved the development of soft multi-core micro-architecture designs for both the MicroBlaze and RISC-V architectures, and Maurice created the Eithne framework, which allows one to benchmark these for performance, power efficiency, and resource usage.

My other student, Ludovic Capelli, started his research around type-oriented task-based programming using Mesham. However, after a preliminary MSc by Research year he undertook a very successful internship and decided to shift focus to vertex-centric processing. This is all about the processing of large graphs (millions of vertices and billions of edges), where the vertex-centric approach provides significant programmability benefits when it comes to writing the underlying algorithms. Prior to Ludovic's research, vertex-centric frameworks incurred significant runtime and memory overhead, severely limiting their practicality for real-world data-sets. His framework, iPregel, eliminates much of this overhead through the underlying techniques he developed, ultimately enabling high performance parallel processing of very large graphs using the vertex-centric abstraction.
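To give a flavour of why the vertex-centric abstraction is so programmable, here is a minimal Pregel-style sketch in Python (purely illustrative; this is not the iPregel API, and the function and field names are my own). The programmer only writes a small per-vertex compute function, here propagating the maximum value across the graph, and the framework handles supersteps, message delivery, and termination.

```python
# Illustrative Pregel-style vertex-centric sketch (NOT the iPregel API).
# Each vertex runs compute() every superstep it is active, reading messages
# from neighbours and optionally sending new ones; execution terminates
# once every vertex has halted and no messages remain in flight.

def compute(vertex, messages):
    # Example algorithm: propagate the maximum value across the graph.
    new_value = max([vertex["value"]] + messages)
    if new_value > vertex["value"] or vertex["superstep"] == 0:
        vertex["value"] = new_value
        # Send the (possibly updated) value to all neighbours.
        return [(target, new_value) for target in vertex["edges"]]
    vertex["halted"] = True  # No change: vote to halt.
    return []

def run(graph):
    superstep = 0
    inbox = {vid: [] for vid in graph}
    while True:
        outbox = {vid: [] for vid in graph}
        active = False
        for vid, vertex in graph.items():
            # A vertex runs if it has not halted, or a message woke it up.
            if inbox[vid] or not vertex["halted"]:
                vertex["halted"] = False
                vertex["superstep"] = superstep
                for target, msg in compute(vertex, inbox[vid]):
                    outbox[target].append(msg)
                active = True
        if not active:
            return graph
        inbox = outbox
        superstep += 1
```

A serial toy, of course: the appeal of the model is that because each vertex only sees its own state and incoming messages, a framework is free to distribute the vertices and parallelise the supersteps underneath this same programming interface.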


In the past I was also heavily involved in public engagement, and supervised six PRACE Summer of HPC (SoHPC) students over the years. These were mainly focused around the development of outreach activities, and some of the early projects worked on the development of a dinosaur racing game. The front-end visualisation was developed by the SoHPC student, which then used GaitSym to actually run the simulation of the animal on one of our supercomputers. Researchers had previously been using GaitSym on HECToR, the UK national supercomputer at the time, and so it was a natural choice to see whether we could turn this into an outreach demo illustrating the role of HPC in science. A number of dinosaurs were provided for the public to select from, for instance a Tyrannosaurus, Argentinosaurus, and Edmontosaurus, with participants then able to tweak and tune these as they desired (for instance making certain parts bigger or smaller). It took around 30 seconds to simulate a dinosaur in parallel (initially with a direct link to HECToR, and subsequently on Wee Archie), with the results then played out on screen against other dinosaur configurations to see who could design the fastest dinosaur. This worked well, was popular with the general public, and was our main outreach demo for a number of years.

Other outreach demos were also developed; for me probably the most interesting one I supervised was the ARCHER challenge, a web-based game that enables players to run their own HPC centre, juggling the cost of installing and maintaining machines against the scientific needs of their users. Leading the development, and working with an SoHPC student and another staff member, I found this especially interesting as the technologies were very different from what we commonly use in HPC. Not least, it is amazing how JavaScript libraries and browser support have progressed in recent years, enabling the quick development of such codes and a rich experience in the web browser. The most difficult part of the project was tuning the parameters to make the game playable, and this gave me a new appreciation of the skill involved in what game studios do! The game could get fairly complex, and so we also added a timed festival mode, which enabled participants at outreach events to play the game within a specific time frame; this has been used at the Big Bang Fair a few years running, as well as at some other more local science festivals.