ISC 2016 Workshop Speakers



Speakers & Poster Presenters

  • Speaker & Chair: TONI COLLIS

    Co-Founder, Women in HPC, EPCC at the University of Edinburgh

    Toni Collis is an Applications Consultant in HPC Research and Industry, providing consultancy and project management on a range of academic and commercial projects at EPCC, the University of Edinburgh Supercomputing Centre.

    Toni has a wide-ranging interest in the use of HPC to improve the productivity of scientific research, in particular developing new HPC tools, improving the scalability of software and introducing new programming models to existing software. Toni is also a consultant for the Software Sustainability Institute and a member of the ARCHER team, providing ARCHER users with support and resources for using the UK national supercomputing service as effectively as possible. In 2013 Toni co-founded Women in HPC (WHPC) as part of her work with ARCHER. WHPC has now become an internationally recognized initiative, addressing the under-representation of women working in high performance computing.

    Toni is SC17 Inclusivity Chair and a member of the Executive committee for the conference. Toni is also a member of the XSEDE Advisory Board and has contributed to the organization and program of a number of conferences and workshops over the last five years including as an Executive Committee member of the EuroMPI 2016 conference and leading seven WHPC workshops around the world.

  • Speaker: Mrs ALISON KENNEDY

    Director, Hartree Centre, UK

    Alison Kennedy is Chair of the Board of Directors of the Partnership for Advanced Computing in Europe (PRACE). She joined the STFC Hartree Centre in the UK as Director in March 2016.

    The Hartree Centre provides collaborative research, innovation and development services that accelerate the application of HPC, data science, analytics and cognitive techniques, working with both businesses and research partners to gain competitive advantage. Prior to joining Hartree, she worked in a variety of managerial and technical HPC roles at EPCC for more than 23 years.

  • Speaker: LORNA RIVERA

    I-STEM Senior Research Specialist, University of Illinois at Urbana-Champaign

    Lorna Rivera serves as an I-STEM Senior Research Specialist working on a nationwide project funded by the National Science Foundation: the Extreme Science and Engineering Discovery Environment (XSEDE). XSEDE seeks to help scientists and engineers around the world use its collection of integrated advanced digital resources and services to advance research, in order to make us all healthier, safer, and better off.

    Lorna Rivera received her Bachelor of Science in Health Education and her Master of Science in Health Education and Behavior from the University of Florida. In 2011, she was certified as a Health Education Specialist by the National Commission for Health Education Credentialing, Inc. Prior to joining I-STEM in Illinois, Lorna worked with various organizations, including the March of Dimes, Shands HealthCare, and the University of Florida College of Medicine. Her research interests include the evaluation of innovative programs and their sustainability, and health education and promotion programs.


    PhD Candidate, University of Southampton

    I have been a PhD student in the Faculty of Engineering and the Environment at the University of Southampton since November 2015. I graduated with first-class honours in Aerospace Engineering from the same university in 2015.

    My research topic is Advanced Internet of Things (IoT) for Engineering. My main interests are ubiquitous computing, indoor positioning and context-aware computing, coupled with applications of IoT in engineering and science that may involve large-scale computing and data handling.

    Abstract: Advanced Internet of Things for Engineering

    In the last few decades the Internet of Things (IoT) has developed at an exponential rate. A dynamic area that captivates the attention of both industry and academia, the Internet of Things is considered by some to be the second technological revolution – the first being the invention of the personal computer – and there are predictions that by 2020 approximately 25–50 billion smart devices will be connected to the Internet.

    Over the last few years hardware capabilities have improved significantly, while price and physical size have decreased sharply. This research looks at the next-generation IoT capabilities that these advances enable.

    We identify indoor positioning as a key enabler for the Internet of Things. Although a number of indoor localisation systems have been developed recently, further work is still needed to achieve a highly cost-effective, practical indoor positioning system. It is important that such systems can be deployed easily and used by anyone, and that they do not impose hardware or software restrictions on the user. Such a system would be an excellent ubiquitous computing platform that can be deployed at large scale to further enable context-aware applications.

    We have developed an indoor positioning system that is ubiquitous, can auto-calibrate, and does not require tracked devices to be connected to a WiFi infrastructure. Further work will look at deploying the system across our University campus, with the aim of performing advanced analytics and providing underpinning infrastructure for future research projects across our estate. Large amounts of data are likely to be collected; these will be combined with other available data sources and analysed using high performance computing.
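    The positioning step at the heart of such a system can be illustrated with a toy trilateration example. This is a minimal sketch only: the abstract does not specify the algorithm the system actually uses, and the anchor positions and measured distances below are invented for illustration.

```python
import math

# Known anchor (beacon) positions and distances measured to the device;
# in this toy example the device is actually at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]

# Linearise by subtracting the first circle equation from the others,
# giving a 2x2 linear system A [x, y]^T = b.
(x0, y0), d0 = anchors[0], dists[0]
A, b = [], []
for (xi, yi), di in zip(anchors[1:], dists[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0)])
    b.append(d0**2 - di**2 + xi**2 + yi**2 - x0**2 - y0**2)

# Solve the 2x2 system by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

    A real deployment would use noisy distance estimates and solve an over-determined least-squares version of the same system, but the geometry is identical.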

  • Poster presenter: JULITA INCA CHIROQUE

    Systems Engineer, University of Callao, and Master in Computer Science, Pontificia Universidad Católica del Perú

    I have been working with Linux-related technologies for the last seven years. I am an enthusiastic contributor to Linux projects such as GNOME (member of the GNOME Foundation) and Fedora (Fedora Peru Ambassador), and thanks to these projects I have been able to travel to more than twelve countries. I have also worked as a Linux Administrator and IT Linux Specialist at companies such as GMD and IBM. In academia, I have lectured at universities including PUCP (Assistant Professor), USIL (Professor) and UNI (Professor), teaching courses that let me spread Linux knowledge, such as Operating Systems, Networking, Security and Linux. Recently I have become involved in HPC research and belong to the HPC group at CTIC-UNI. Parallel computing and heterogeneous parallel programming are now my research topics.


    The starting point of this HPC experiment is the use of a commodity HPC cluster. Instead of using thousands of servers working together to solve a single problem, we decided to use an educational system, the Cluster Cruz II, on which to install and configure Hadoop and MapReduce. This platform is attractive for its low price, portability and flexibility; it also offers key features such as physical access to the SD cards, which makes it easy to try many variations and changes.

    We finally succeeded in installing the required software when all the nodes of the cluster were the same model, the Raspberry Pi 2 Model B. The Raspbian OS (Jessie) was installed on the four nodes, and all the packages were installed without any compatibility issues. With a cluster architecture in which a Raspberry Pi 1 served as the master node and all the slaves were Raspberry Pi 2s, we had some failures installing the software. The SD cards used were 32 GB; it did not matter that they were not all of the same brand or type.

    Linux played an important role during the formatting and labelling stage. Before installing and configuring Hadoop 1.2.1 with MapReduce, we tested MPI by calculating the value of pi and installed NFS (a service to share files throughout the cluster). The next challenge was the installation of Java (a requirement for Hadoop; we used OpenJDK 8 in this experiment) and the construction of the file hierarchy for the Hadoop installation. After configuring the XML files and the environment, you must learn the Hadoop commands (especially those for managing input and output files) and the Java libraries for running parallel programs in order to succeed. The next step will be Spark on the Raspberry Pi.

    This experiment was based on the book by Andrew K. Dennis, “Raspberry Pi Super Cluster”.
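    The MapReduce model that Hadoop executes across the Pi nodes can be sketched in a few lines of plain Python. This single-machine word count is purely illustrative – it is not the cluster code itself – but it shows the map, shuffle and reduce phases that Hadoop distributes over the nodes.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in an input split.
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

# Two "splits", as if read from separate HDFS blocks on different nodes.
splits = ["hadoop runs on the cluster", "the cluster runs hadoop"]
pairs = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle_phase(pairs))
print(counts["hadoop"], counts["on"])  # 2 1
```

    On the real cluster, the map and reduce functions would be written against the Hadoop Java API and each phase would run in parallel across the Raspberry Pi nodes.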

  • Poster presenter: LARISA STOLTZFUS

    PhD Candidate, University of Edinburgh

    I did my undergraduate degree in physics at Grinnell College and many years later pursued an MSc in HPC at the University of Edinburgh. In the meantime, I worked as a software engineer for biotech, geophysical and financial companies. I am currently part of the CDT in Pervasive Parallelism, researching performance-portable solutions for room acoustics simulations.

    Abstract: Performance, Portability and Productivity for Room Acoustics Codes

    Currently a wide range of methods is available for parallelising code on different architectures; however, these strategies are becoming more complex and less portable, while rewriting and re-tuning code is a time-consuming task. OpenCL solves some of these problems, but its performance is not portable and the framework is very low-level for non-experts.

    Parallel abstraction layers offer one way of decoupling parallel programming expertise from simulation modelling. A wide range of such solutions exists (including skeletons, code generators and low-level libraries); however, many of them are still in early stages of development or have been tested primarily on simple benchmarks. There remains no straightforward path between real-world HPC applications and higher-level abstraction frameworks for the purpose of writing simplified, performant, portable code.

    This gap could be bridged by determining the limitations of current methods and adding new functionality and data abstractions with real simulation codes in mind. In the first instance, this will involve a case study of stencil applications, in particular the room acoustics simulations developed by the NeSS project. These room acoustics models use a finite difference time domain discretisation to simulate the behaviour of sound waves from different sources in a room. First, the performance, optimisations and alternative data abstractions of these room acoustics codes have been investigated across different platforms to identify portability, performance-portability and abstraction issues. These results will then be compared with more advanced room simulations to see whether similar or new problems occur. Finally, attempts will be made to implement the room acoustics benchmarks in existing higher-level frameworks.

    From this initial study, the intention is to ascertain if and where current parallel frameworks need more functionality for room acoustics simulations and further the development of parallelisable stencil abstractions able to fit a wider range of physical codes.
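    The finite difference time domain scheme underlying these room acoustics codes can be sketched by reducing it to a 1-D wave equation. This is a minimal illustration: the grid size, wave speed and time step below are assumptions for the sketch, not values from the NeSS codes, which work in three dimensions.

```python
import math

N = 64                        # grid points (illustrative)
c, dx, dt = 1.0, 1.0, 0.5     # wave speed, grid spacing, time step (illustrative)
courant2 = (c * dt / dx) ** 2 # squared Courant number; must be <= 1 for stability

# Pressure at the previous, current and next time levels; start from a
# Gaussian pulse with zero initial velocity.
curr = [math.exp(-0.25 * (i - N // 2) ** 2) for i in range(N)]
prev = curr[:]
nxt = [0.0] * N

for step in range(200):
    # Stencil update: each interior point reads only its two neighbours --
    # the data-access pattern a parallel framework must map to each platform.
    for i in range(1, N - 1):
        nxt[i] = (2.0 * curr[i] - prev[i]
                  + courant2 * (curr[i - 1] - 2.0 * curr[i] + curr[i + 1]))
    nxt[0] = nxt[-1] = 0.0    # rigid (reflecting) boundaries
    prev, curr, nxt = curr, nxt, prev

peak = max(abs(p) for p in curr)
print(peak)   # stays bounded when the Courant condition holds
```

    The neighbour-only access pattern of this inner loop is exactly what stencil abstraction frameworks target, and it is where the portability and performance-portability questions above arise.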