Framework for planning, executing and monitoring cooperating jobs

Guest post by: Marta Cudova, Faculty of Information Technology, Brno University of Technology, Czech Republic

Marta was one of the early career presenters at the WHPC workshop at SC17. In this post she discusses the work she presented at the workshop.

What is this framework?

This framework, k-Dispatch, provides a service for the automated scheduling, execution, and monitoring of cooperating large-scale computations. It acts as a middle layer between a user application and remote computational resources.

k-Dispatch is being developed as part of the k-Plan software, which is used for offline model-based planning of therapeutic ultrasound treatments such as tissue ablation and ultrasonic neurostimulation.

However, the modular and generic design of k-Dispatch enables its use in other scientific fields and in industry as well. It unifies access to various computational resources and allows connection to different user applications, since all communication is based on SSH and standard web services.
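The post does not describe k-Dispatch's internal interface, but the SSH-based communication it mentions can be sketched in a few lines. The helper below is purely illustrative: the host, script path, and the assumption of a PBS-style scheduler (whose `qsub` prints the job id) are mine, not k-Dispatch's.

```python
import subprocess

def submit_remote_job(host, script_path):
    """Submit a batch script on a remote cluster over SSH and return the job id.

    Hypothetical helper: it only illustrates the kind of SSH-based
    communication the post describes, assuming a PBS-style scheduler
    whose qsub command prints the job id on stdout.
    """
    result = subprocess.run(
        ["ssh", host, "qsub", script_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

A middle layer built this way needs no agent installed on the cluster, which is one reason SSH is a common transport for such job-dispatch services.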

Why use this framework?

Nowadays, realistic simulations require very powerful computers to run, and a computation usually consists of several cooperating tasks. This places higher demands on both users and developers, since more sophisticated computational techniques are needed to take full advantage of the machine's power. It is therefore very desirable to develop software that provides automated planning, submission, and monitoring of cooperating jobs. This approach could reduce the burden of daily administrative tasks and bring HPC closer to people without advanced IT knowledge.

Where is this framework going to be used?

k-Dispatch can easily be extended to support other user applications and computational resources, which makes it applicable wherever an HPC service or some form of computation automation is needed. Take treatment planning for therapeutic ultrasound as an example: this approach can be used for breast imaging, or for cancer treatment in the breast, kidneys, prostate, and so on. Research in these areas is currently ongoing.

Why is the computation so demanding?

To determine whether the affected tissue will be fully destroyed by a HIFU (High Intensity Focused Ultrasound) transducer, coupled ultrasound, thermal, and tissue models have to be executed. However, the human body is not homogeneous, so the focus point of the transducer cannot be computed analytically. There are many other factors in the human body that the computation must account for: bones (which reflect ultrasound waves), vessels and the heartbeat (which carry heat away), and breathing (which may slightly move the affected tissue).

Planning the execution of these models so that they communicate effectively, with minimal latency and minimal data stored, is one of the challenges for k-Dispatch. Other challenges include support for heterogeneous architectures, model run configuration, and the selection of appropriate computational resources.

How does the framework know what model to run and when?

The execution planning of the whole simulation is based on a task graph. This task graph is specific to a simulation type and is the only part of the software that cannot be generalized. The task graph defines which models are executed concurrently at a particular level of the graph, together with their mutual dependencies. In the picture below, yellow rectangles represent the coupling interfaces between models, which may differ in computational demands.
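To make the level-by-level idea concrete, here is a minimal sketch of such a task graph. The model names and the chain of dependencies are hypothetical (the post does not publish k-Dispatch's actual graphs); the point is only that tasks whose dependencies are satisfied at the same depth form one level and can run concurrently.

```python
from collections import defaultdict

# Hypothetical task graph for one simulation: each task maps to the tasks
# it depends on (its predecessors). Names are illustrative only.
dependencies = {
    "ultrasound_model": [],
    "thermal_model": ["ultrasound_model"],   # needs the acoustic field
    "tissue_model": ["thermal_model"],       # needs the temperature map
    "dose_report": ["tissue_model"],
}

def execution_levels(deps):
    """Group tasks into levels; tasks on the same level can run concurrently."""
    level = {}

    def depth(task):
        # A task's level is one more than the deepest of its dependencies.
        if task not in level:
            level[task] = 1 + max((depth(d) for d in deps[task]), default=-1)
        return level[task]

    for task in deps:
        depth(task)
    grouped = defaultdict(list)
    for task, lvl in level.items():
        grouped[lvl].append(task)
    return [sorted(grouped[lvl]) for lvl in sorted(grouped)]

print(execution_levels(dependencies))
# [['ultrasound_model'], ['thermal_model'], ['tissue_model'], ['dose_report']]
```

With this toy chain every level holds a single model; a graph where, say, two independent transducer simulations feed one thermal model would place both on the same level, which is exactly the concurrency the task graph is meant to expose.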


About the author: Marta Cudova

  • Marta Cudova is a PhD student at the Faculty of Information Technology, Brno University of Technology. She received her MSc in Computer Science from the Brno University of Technology in 2016. In 2016, she attended the PRACE Summer of HPC and spent two months at the Edinburgh Parallel Computing Centre in the UK, where she worked under Dr. Neelofer Banglawala. She also received the HPC Ambassador Award within the PRACE Summer of HPC. She is now a member of the Supercomputing Technologies Research Group, where she focuses on cluster management systems and multiphysics model coupling. This group closely collaborates with the Biomedical Ultrasound Group at University College London.
