Note!
This year, due to the exceptional circumstances of the coronavirus pandemic and the special protective measures in force, the conference will be conducted as a virtual event. Participants will be able to follow the full conference programme remotely from anywhere in the world with internet access. The registration fee has been drastically reduced to allow the widest possible participation.
...will explore visionary trends and innovations in high performance computing and computational science
Testimonials
"During these pandemic times, it is a challenge to organize virtual conferences and invite participants across the globe. To that end, SCFE21 did a fantastic job with all the organization!! The conference covered a broad range of topics including HPC, Quantum, AI/DL/ML and interdisciplinary science! I thank SCFE21 for inviting me to give a talk at their conference and it was fun to present on the best practices of software development!!"
"Supercomputing Frontiers Europe 2021 was a well organised conference providing an excellent balance of high performance computing “trade-craft” talks and interesting scientific talks. The talks were all informative and I learnt something from each one of them. Some topic areas were new for me and challenged my thinking – which is exactly the purpose of this Conference I believe. Many of the speakers were very engaging and have prompted me to continue to learn about the topics after the conference. This is the essence of attending a conference – virtual or otherwise!"
Clement Lau, Senior Consultant and Founder, XLink Media
"The conference was exceptionally organised, covering diverse, current and relevant topics in relation to cutting edge data and compute intensive theory, technologies, ecosystems and applications. The conference was attended by a globally diverse audience and provided invaluable networking opportunities to foster collaborations. It provided industry engagement, panel discussions and hands-on training workshops"
Tshiamo Motshegwa, Lecturer, Department of Computer Science, University of Botswana
"SFE21 was an excellent conference covering a broad range of topics including HPC initiatives, Cloud, Edge, AI and Quantum Computing. Bringing the international community together during the pandemic and creating a good experience using a virtual platform is hard and the organizing team did a fantastic job to this end. I am glad I had the opportunity to attend this conference and enjoyed giving a talk on how to get insight into HPC code behavior. Thank you SFE21 team!"
Fouzhan Hosseini, Performance Optimisation and Productivity Manager, NAG
"Now that SCFE21 is over, I want to thank the conference organizers for a perfect conference, with many great talks about different HPC Initiatives, HPC architectures and components, including Claud, Edge, AI, and Quantum Computing, amazing and novel HPC applications, and outstanding speakers. And last but not least a perfectly smooth organization with an excellent digital conference platform."
Wolfgang Gentzsch, President, The UberCloud
Marek Michalewicz
Director, Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw, Chairman of the Organising and Scientific Committee
Title: Closing words
Dr. Marek Michalewicz is a scientist, entrepreneur and inventor. In 2018 he became the Director of the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw. From 1987 to 2000 he worked in research institutions in the USA and Australia, where he focused on the use of supercomputers in scientific research. From 2009 to 2016 he was the managing director of the A*STAR Computational Resource Center (A*CRC), the largest scientific supercomputing center in Singapore, and was one of the people responsible for planning and creating Singapore’s National Supercomputing Center (NSCC). In 2014, Dr. Michalewicz initiated the InfiniCortex project: a global 100 Gb/s computer network based on InfiniBand transport technology, enabling the creation of a single concurrent supercomputer spanning four continents. Since 2015, Dr. Michalewicz has organized the “Supercomputing Frontiers” conference series, first in Singapore; the first European edition was held in Warsaw in March 2018. He was also the founder of the startup Quantum Precision Instruments, one of the first companies in the world specializing in nanoscale products. From October 2016 he served as deputy director of ICM UW.
Marek Michalewicz
Director, Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw, Chairman of the Organising and Scientific Committee
Title: Opening remarks
Maciej Brzeźniak
Poznan Supercomputing and Networking Center
Title: National Data Storage
Abstract: The aim of the project is to deliver a production-level infrastructure for data storage, access and protection services, and to integrate solutions for the efficient analysis and processing of large and complex data sets, based on a distributed ecosystem of HPC, Big Data and AI platforms, supporting the approach of providing the European Open Science Cloud at national and international levels.
Ken O’Brien
Senior Research Scientist at Xilinx
Title: Innovative Computing Architectures with FPGAs
Abstract: A spectrum of innovative hardware architectures is emerging to tackle modern compute- and memory-intensive workloads. Specialized hardware architectures are critical to performance scaling in key domains such as HPC and ML. In this talk, we will explore FPGA-specific hardware innovations such as spatial processing and custom arithmetic, and, using a network intrusion detection example, we will demonstrate the benefits of these solutions.
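As a rough illustration of the custom-arithmetic idea mentioned above, the sketch below emulates a reduced-precision fixed-point dot product in Python. On an FPGA, narrower word lengths shrink multiplier logic and allow more parallel processing elements on the device; the word lengths and sample data here are illustrative assumptions, not taken from the talk.

```python
# Illustrative only: emulate FPGA-style custom fixed-point arithmetic in
# software. On an FPGA, narrow word lengths shrink multipliers and let more
# processing elements fit on the device; here we just model the rounding.

def to_fixed(x: float, frac_bits: int, word_bits: int) -> int:
    """Quantize x to a signed fixed-point integer with saturation."""
    scale = 1 << frac_bits
    q = round(x * scale)
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, q))

def fixed_dot(a, b, frac_bits=6, word_bits=8):
    """Dot product computed in reduced-precision fixed-point arithmetic."""
    acc = 0  # a wider accumulator, as an FPGA DSP block would provide
    for x, y in zip(a, b):
        acc += to_fixed(x, frac_bits, word_bits) * to_fixed(y, frac_bits, word_bits)
    return acc / (1 << (2 * frac_bits))  # undo the double scaling

a = [0.11, -0.52, 0.73]
b = [0.40, 0.25, -0.10]
exact = sum(x * y for x, y in zip(a, b))
print(f"float64: {exact:.6f}  8-bit fixed point: {fixed_dot(a, b):.6f}")
```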
Ken O’Brien is a senior research scientist at Xilinx research labs. He graduated in 2019 from University College Dublin, Ireland, with a PhD in computer science focused on energy efficient high performance heterogeneous computing. In the past he has worked in the areas of reduced precision machine learning, bioinformatics, and performance modelling on reconfigurable platforms. He is currently researching heterogeneous distributed computing solutions for reconfigurable platforms.
Gaurav Kaul
Senior Solutions Architect – AI & HPC at Hewlett Packard Enterprise
Title: Memory and Interconnect Interplay in System Architecture
Abstract: Technology changes in the key building blocks of system architecture are under way that impact system design at large scale, specifically in AI/ML acceleration and exascale computing. In this talk, we cover how these changes respond to existing scaling challenges and how they will impact computer architecture in the next 3-5 years. We will also look at some recently launched novel hardware architectures and how they respond to the trends we discuss. To scale computation for AI/ML acceleration, architects can draw on the following building blocks:
– Compute: chiplets with tensor cores, systolic arrays, or full processing elements (a la Ponte Vecchio)
– Memory: HBM, eDRAM, LPDDR, smart memory
– CPU-accelerator-memory interconnect: NVLink, CXL, CAPI
– Node interconnect: InfiniBand, SmartNICs
How these building blocks are combined is a function of the performance, scale and cost that architects want to optimize. The purpose of the talk is to:
– build a system model that factors in these building blocks, enabling a design-space exploration of how the elements are combined in current architectures and how they may evolve as the building blocks themselves evolve (a toy sketch follows below);
– examine memory dataflow and orchestration across chips, from cores and memory elements (HBM, GDDR, LPDDR, DRAM…) to the network and across systems;
– consider possible tradeoffs across these system parameters; and
– contrast hyperscale with HPC/exascale deployments.
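As a companion to the system-model bullet above, here is a toy sketch of the kind of roofline-style design-space model such an exercise might start from; all hardware and workload numbers are hypothetical placeholders, not vendor figures.

```python
# A toy roofline-style system model: given hypothetical building-block
# parameters (compute throughput, memory bandwidth, interconnect bandwidth),
# estimate which resource bounds a workload. All numbers are illustrative
# placeholders, not vendor specifications.

from dataclasses import dataclass

@dataclass
class Node:
    peak_flops: float      # FLOP/s of the compute chiplets
    mem_bw: float          # byte/s from HBM/LPDDR/DRAM
    link_bw: float         # byte/s of the node interconnect

@dataclass
class Workload:
    flops: float           # total floating-point work
    mem_bytes: float       # bytes moved to/from memory
    net_bytes: float       # bytes exchanged over the network

def runtime_estimate(n: Node, w: Workload) -> dict:
    """Each resource gives a lower bound on time; the max dominates."""
    times = {
        "compute": w.flops / n.peak_flops,
        "memory": w.mem_bytes / n.mem_bw,
        "network": w.net_bytes / n.link_bw,
    }
    times["bound_by"] = max(times, key=times.get)
    return times

node = Node(peak_flops=100e12, mem_bw=2e12, link_bw=25e9)   # hypothetical
job = Workload(flops=1e15, mem_bytes=4e13, net_bytes=1e11)  # hypothetical
print(runtime_estimate(node, job))
```

Swapping in different memory or interconnect parameters immediately shows which building block becomes the bottleneck, which is the essence of the design-space exploration described above.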
Gaurav Kaul is an experienced IT professional with extensive knowledge of Cloud Computing platforms and Intelligent Edge services, particularly zero-downtime, scalable microservices architectures. He has focused on HPC and AI platforms designed to support both Cloud-native and non-Cloud-native applications running on on-premise infrastructure, Cloud-based private, public and hybrid environments, or at the Edge, which has enabled him to manage, develop and build successful software-defined enterprise systems integration and service operations. Gaurav obtained his MSc in Computer Science (High Performance Computing) at the University of Manchester. He has also published technical papers on HPC and AI in industry sessions and forums. Gaurav is based out of London, UK.
Valentin Plugaru
Chief Technical Officer, LuxProvide High Performance Computing Center
Title: MeluXina – a new generation supercomputer
Abstract: LuxProvide is home to the MeluXina supercomputer, built as one of the new generation European supercomputers and part of the EuroHPC network. MeluXina is designed as a modular system to offer world-class HPC, HPDA and AI services for a wide variety of workloads and application domains. This talk will focus on MeluXina’s architecture and technologies, software ecosystem and platform services, as well as look into future use cases that will take advantage of the system’s capabilities.
With over a decade of passion and experience in all things HPC, Valentin Plugaru has worked as part of national and European HPC initiatives, helping shape the roadmap for the European HPC ecosystem and creating Luxembourg’s first national HPC center and supercomputing platform. Today he shares his time between boosting LuxProvide’s HPC, Data and AI capabilities, growing highly expert groups, and supporting the development of competence networks, in particular through the EuroCC/National Competence Centers and CASTIEL projects.
Tomasz Wazny and Rafal Tymkow
Huawei
Title: The biggest HPC implementation in Central and Eastern Europe: the PCSS business case
Abstract: PraceLab is a project implemented in Poland, offering advanced computing and data storage services and supporting the scientific community in the country and in Europe, as well as industrial research in the economy. It delivers the most modern and largest such computing infrastructure, with the highest standard of reliability.
The main goal of the project is to increase the competitiveness of the scientific community and the economy, with particular emphasis on SMEs, on international markets. Through the implementation of development work, the project is expected to improve the position of the Polish ICT sector by supporting and strengthening the development of innovative solutions.
The direct objective of the project is to build a widely available High Performance Computing (HPC) infrastructure consisting of high-performance computing servers, specialized processing units and flexible data management systems, and to provide scientific units and enterprises with services based on this infrastructure for research, development and commercial activities.
The implementation of the project is planned for 2019-2023 and includes the construction of specialized laboratories that guarantee the highest quality of services.
The project partners are: Institute of Bioorganic Chemistry of the Polish Academy of Sciences – Poznan Supercomputing and Networking Center, Academic Computer Center CYFRONET AGH, Bialystok Technical University, Czestochowa University of Technology, Gdansk University of Technology CI TASK, Technical University of Lodz, Kielce University of Technology, and Wroclaw University of Technology – Wroclaw Network and Supercomputing Center.
Tomasz Wazny is an Account Manager at Huawei, in charge of key customers in the banking and public sectors. He has been in the IT industry for 10 years and is an enthusiast of supercomputers, artificial intelligence, robotic process automation and cybersecurity.
Rafal Tymkow is Head of the Public Sector in EBG Huawei Poland and is directly responsible for the PCSS project. He has been in the IT industry for 20 years and is a passionate HPC expert.
Akshara Jayanand Kaginalkar
Senior Director at Centre for Development of Advanced Computing (C-DAC), India
Title: Connecting the dots: urban environment models, HPC-Cloud, climate resilient Indian smart cities
Abstract: Globally, rapid urbanisation is contributing to economic growth, and the majority of the population are becoming urban dwellers. This growth comes at a cost in terms of environmental vulnerability and cascading health and economic impacts. Science-driven urban resilience is crucial for improving quality of life and for mitigating climate change effects. To that end, it is imperative to understand, simulate and disseminate urban-scale impact forecasts for extreme events, routine city operations and planning decisions. There is an urgent need for integrated-modeling-based city services to address environmental issues such as extreme pollution, heavy rainfall, flooding, heat waves and their cascading impacts. Neighbourhood-scale simulations are compute and resource intensive due to complex parametrization, large-scale data assimilation, and multi-model ensemble executions.
Addressing these needs, the talk highlights urban-scale cross-sector modeling with weather, air quality and hydrology models, and the HPC-Cloud cyberinfrastructure development under way for Indian smart cities. The Urban Environment Science to Society (UES2S) system, under the aegis of the ‘National Supercomputing Mission’ programme, will be presented.
Akshara Kaginalkar is a Senior Director at the Centre for Development of Advanced Computing (C-DAC), India. She works on HPC applications in weather, climate and the environment. She has executed a number of projects translating science and computational modeling on supercomputers into end-user services in sectors such as agriculture, air quality, defence, energy, and disaster management.
Warsaw Team
Patrycja Krzyna, Marek Masiak, Marek Skiba
University of Warsaw, University College London
Title: Warsaw Team: Student participation in HPC competitions amidst a global pandemic
Abstract: Warsaw Team is a student cluster competition team assembled from undergraduate students. The team operates with the support and supervision of the Interdisciplinary Centre for Mathematical and Computational Modelling UW. The main objective of the team is to qualify for and take part in the finals of the three most significant student cluster competitions: the Student Cluster Competition (SCC) during the SC conference in the USA, the SCC during the ISC conference in Germany, and the Asia Supercomputer Challenge in China. Normally, the competitions demand, apart from the competition tasks, building a cluster on-site, staying vigilant for possible power outages, and making sure that power usage doesn’t exceed a predefined threshold. Due to the ongoing pandemic, the competitions had to transition into online hackathons. Mounting nodes into a rack cabinet has been replaced by choosing the right AWS or Azure cluster instance. Despite the new challenges, the Warsaw Team managed to take part in ISC20, SC20 and ASC20-21 remotely. The team has embraced the situation: the remote working conditions meant that members could be anywhere in the world and still participate in preparing for the competitions.
The Warsaw Team is a student cluster competition team, which was first assembled in 2016 on the initiative of the acting director of the ICM UW. Since that time, the team has been acquiring professional knowledge in the HPC field. Five years have passed since the team was created, which means that some team members who graduated had to be substituted by first- and second-year students. The younger members are very well-prepared, having gained experience by participating in the student cluster competitions at SC19 and SC20 and academic research in HPC. The graduate veterans of the SCCs are guiding and teaching more recent members about what they have learned in their time as a part of the Warsaw Team. This combination of hard-earned experience and eagerness to learn is our formula for success.
The team gathers students from the University of Warsaw, the University of Oxford and University College London. All team members are enrolled in degree programmes related to computer science and physics.
Phuti Ngoepe
Professor at University of Limpopo
Title: Simulated synthesis, characterisation and performance of nanostructured metal oxide electrodes for energy storage
Abstract: Lithium-ion batteries are increasingly used in consumer electronic devices, electric vehicles and energy storage for renewables. The simulated amorphisation and recrystallisation method, which is based on molecular dynamics and used in conjunction with classical forcefields, illustrates the simulated synthesis and characterisation of different nano-architectures, i.e. nanospheres, sheets, rods, porous and bulk forms of binary metal oxides such as MnO2 and TiO2 [1]. Furthermore, the nano-architectures are lithiated, to imitate charging and discharging, and their structural aspects and performance are characterised by simulated X-ray diffraction patterns. In particular, the relationship between mechanical properties, microstructural features and electrochemical activity in nanoporous and bulk structures is highlighted [2]. Such a connection is extended to why the ternary nano Li2MnO3, an end member of high-voltage composite cathodes, is electrochemically active whilst its bulk form is inactive. Lastly, nano-architectures associated with the Li-Mn-O ternary were synthesised from an amorphous spinel nanosphere. The resulting crystallised nano-architectures are characterised, and the presence of a composite consisting of the layered Li2MnO3 and spinel LiMn2O4, together with a variety of defects including grain boundaries and ion vacancies, is observed from XRDs and microstructural features [3]. This is a step towards addressing the challenge of voltage fade in composite layered-spinel cathodes, which have high capacity and energy density. Preliminary work beyond Li-ion batteries, especially the role of catalytic metal oxides in metal-air batteries, will be discussed [4].
References:
[1] M.G. Matshaba, D.C. Sayle, T.X.T. Sayle and P.E. Ngoepe, J. Phys. Chem. C 2016, 120, 14001-14008.
[2] T.X.T. Sayle, F. Caddeo, N.O. Monama, K.M. Kgatwane, P.E. Ngoepe and D.C. Sayle, Nanoscale 2014, 7, 1167-1180.
[3] R.S. Ledwaba, D.C. Sayle and P.E. Ngoepe, ACS Appl. Energy Mater. 2020, 3, 1429-1428.
[4] K.P. Maenetja and P.E. Ngoepe, J. Electrochem. Soc., accepted for publication.
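For readers unfamiliar with the simulated diffraction mentioned in the abstract, the sketch below shows one standard way an X-ray pattern can be computed from simulated atomic coordinates, via the Debye scattering equation I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij). The random coordinates and unit form factors are placeholders; the cited studies would use MD-derived structures and element-specific form factors.

```python
# Minimal sketch: simulated X-ray diffraction of a nanoparticle via the
# Debye scattering equation, I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij).
# Placeholder random coordinates and f = 1 stand in for real MD output.

import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 20.0, size=(200, 3))   # placeholder cluster, angstroms

# all pairwise distances (i == j terms handled by sinc below)
diff = coords[:, None, :] - coords[None, :, :]
r = np.sqrt((diff ** 2).sum(-1))

q = np.linspace(0.5, 8.0, 400)                   # scattering vector, 1/angstrom
intensity = np.empty_like(q)
for k, qk in enumerate(q):
    x = qk * r
    # np.sinc(t) = sin(pi t)/(pi t), so sinc(x/pi) = sin(x)/x, and it
    # correctly yields 1 for the r = 0 diagonal terms
    intensity[k] = np.sinc(x / np.pi).sum()

print(q[np.argmax(intensity)], intensity.max())
```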
Phuti E. Ngoepe is a Professor of Physics and holds the South African Research Chair in Computational Modelling of Materials at the University of Limpopo, South Africa. He has previously served as Dean of the Faculty of Science and Acting Deputy Vice-Chancellor at the University. He is a founder member of the Academy of Science of South Africa and was a CSIR Fellow. He is a recipient of several honours, including the Order of Mapungubwe (Silver), awarded by the President of South Africa for excellent contributions to science. He has published widely on computational modelling of energy storage, mineral processing and alloy development, and has given invited lectures at many local and international conferences and summer schools, several of which he organized or chaired; he also serves on some of their advisory committees. He has successfully supervised over 80 MSc and PhD students to completion. Lastly, he has served on several boards, mainly of science councils, and participated in many science strategy committees, including reviews of government institutions.
Attila Cangi
Center for Advanced Systems Understanding (CASUS)
Title: Data-driven Surrogate Modeling of Matter under Extreme Conditions
Abstract: The successful diagnostics of phenomena in matter under extreme conditions relies on a strong interplay between experiment and simulation. Understanding these phenomena is key to advancing our fundamental knowledge of astrophysical objects and has the potential to unlock future energy technologies with great societal impact. A great challenge for accurate numerical modeling is the persistence of electron correlation, which has hitherto impeded our ability to model these phenomena across multiple length and time scales at sufficient accuracy. In this talk, I will summarize our recent efforts on devising a data-driven workflow to tackle this challenge. Based on first-principles data, we generate machine-learning surrogate models that replace traditional electronic-structure algorithms. Our surrogates both predict the electronic structure and yield thermo-magneto-elastic materials properties of matter under extreme conditions highly efficiently while maintaining accuracy. This opens up the path towards multiscale materials modeling for matter under ambient and extreme conditions at a computational scale and cost that is unattainable with current algorithms.
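To give a flavour of the surrogate-modeling idea described above, here is a minimal sketch under generic assumptions: a cheap regression model is trained on descriptor/observable pairs so that the expensive first-principles solver is only needed to generate training data. The synthetic data and random-feature model are stand-ins, not the actual architecture used by the speaker's group.

```python
# Hedged sketch of the surrogate-modeling idea: learn a cheap map from
# atomic-environment descriptors to an electronic-structure quantity, so the
# expensive first-principles solver is only needed for training data.
# The Gaussian toy data and ridge regression are illustrative stand-ins.

import numpy as np

rng = np.random.default_rng(1)

# toy "first-principles" dataset: descriptor vectors -> scalar observable
X = rng.normal(size=(500, 16))                          # placeholder features
w_true = rng.normal(size=16)
y = np.tanh(X @ w_true) + 0.01 * rng.normal(size=500)   # synthetic target

# random-feature ridge regression as a minimal nonlinear surrogate
W = rng.normal(size=(16, 256))
phi = np.tanh(X @ W)
alpha = 1e-3
coef = np.linalg.solve(phi.T @ phi + alpha * np.eye(256), phi.T @ y)

# the trained surrogate now predicts at the cost of two matrix products
X_new = rng.normal(size=(5, 16))
y_pred = np.tanh(X_new @ W) @ coef
print(y_pred)
```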
Attila Cangi is a theoretical and computational physicist interested in various aspects of quantum many-body theory, quantum dynamics, materials science, and machine learning for the modeling of phenomena in matter induced by extreme electromagnetic fields, temperatures, and pressures. He is currently the acting research team leader of the research area “Matter under Extreme Conditions” at the Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf, Germany. He began his academic career with a Ph.D. in Chemistry from the University of California, Irvine in 2011. From 2011 to 2017 he was a postdoctoral researcher at the Max Planck Institute of Microstructure Physics, Halle, Germany. Until 2020 he was a staff member at the Center for Computing Research, Sandia National Laboratories, USA where he developed modeling frameworks for matter under extreme conditions.
Daudi Jjingo
Programme Director/PI for the Ugandan NIH H3Africa bioinformatics training programme (BRECA)
Title: A review of computational data science applications in Uganda
Abstract: In low-resourced environments like Uganda, high performance computing and computational science come to life through locally relevant applications. Several such applications are not heavy on computational theory, but are locally relevant and impactful. In this talk, I will provide an overview of local problems to which high performance computing and computational science are being brought to bear in Uganda, ranging from the biomedical and public health domains to agriculture. The talk will then consider some of the efforts being taken to build suitable high performance computing environments and the attendant challenges.
Dr. Daudi Jjingo is a Principal Investigator of the NIH/Fogarty Bioinformatics training grant in Uganda and Co-Principal Investigator of the Ugandan H3BioNet node. He serves as the Director of the African Center of Excellence (ACE) in bioinformatics and data intensive sciences, whose mandate involves providing cutting-edge computational platforms for biomedical research. Dr. Jjingo is a Senior Scientist and Lecturer at Makerere University in the College of Computing, where he is also a senior member of the Artificial Intelligence laboratory (AI-lab). He earned his PhD in Bioinformatics as a Fulbright Scholar at the Georgia Institute of Technology in Atlanta, USA, preceded by an MSc in Bioinformatics from the University of Leeds, UK, and a BSc in Biochemistry from Makerere University, Uganda. His research interests broadly lie in the application of computational tools and methods to biomedical and public health problems.
Thomas Blum
Pre-Sales Systems Engineer DDN® Storage
Title: Taking a closer look at AI I/O
Abstract: With the latest applications using AI and ML algorithms, there is a need to optimize data paths to provide the best possible performance for the large data sets being used. If data cannot be delivered in time for processing, expensive compute resources run underutilized, cannot sustain their full capacity, and are effectively wasted. In this presentation we analyze the I/O patterns of AI/ML workloads and show optimization strategies and technologies to improve their I/O efficiency.
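One common optimization of the kind this talk concerns is overlapping storage I/O with computation. The sketch below is a minimal illustration with placeholder timings, not DDN's actual technology: batches are prefetched on a background thread so the compute step never waits on a cold read.

```python
# Illustrative sketch: overlap storage I/O with computation by prefetching
# batches on a background thread, so the accelerator is never idle waiting
# for data. File layout and timings are placeholders.

import queue
import threading
import time

def load_batch(i):
    time.sleep(0.05)          # stand-in for reading from the filesystem
    return f"batch-{i}"

def prefetcher(n_batches, q):
    for i in range(n_batches):
        q.put(load_batch(i))  # blocks when the buffer is full
    q.put(None)               # sentinel: no more data

buf = queue.Queue(maxsize=4)  # bounded prefetch depth
threading.Thread(target=prefetcher, args=(16, buf), daemon=True).start()

while (batch := buf.get()) is not None:
    time.sleep(0.05)          # stand-in for the training/compute step
    print("processed", batch) # I/O for the next batch overlaps this step
```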
Thomas is an IT engineer with over 15 years of experience in High Performance Computing and storage systems. Since his computer science thesis on distributed and parallel filesystems, he has focused on parallel filesystems and storage technologies for HPC, designed solutions for academia and industry, and been involved in several Top500 projects throughout Europe. He joined DDN in 2017 and architects storage solutions for academic and industrial HPC, media, life sciences and other data-intensive applications.
Santosh Ansumali
Associate Professor, Jawaharlal Nehru Centre for Advanced Scientific Research
Title: Towards CFD at Exa-scale
Abstract: Computational fluid dynamics in the petascale era is largely memory bound, and memory bandwidth and data-structure optimization play an important role in code performance. This trend is expected to worsen in the exascale era. This talk will survey these trends, discuss progress at our end, and offer some suggestions for emerging hardware.
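A minimal sketch of the data-structure point, under illustrative assumptions: for a bandwidth-bound update, a structure-of-arrays (SoA) layout streams contiguous memory, whereas an array-of-structures (AoS) layout strides over unused fields. The update rule and sizes are invented for the demonstration.

```python
# Minimal sketch of the data-structure point for a bandwidth-bound
# CFD-style update: SoA streams contiguous memory, AoS strides over
# unused fields. Sizes and the update rule are illustrative.

import numpy as np
import time

n = 10_000_000

# AoS: each "cell" packs (rho, u, v, w) together; updating rho strides by 4
aos = np.zeros((n, 4))
# SoA: one contiguous array per field
rho = np.zeros(n)
u = np.zeros(n)

t0 = time.perf_counter()
aos[:, 0] += 0.1 * aos[:, 1]          # strided access pattern
t_aos = time.perf_counter() - t0

t0 = time.perf_counter()
rho += 0.1 * u                        # unit stride, streams at memory bandwidth
t_soa = time.perf_counter() - t0

print(f"AoS update: {t_aos:.3f}s  SoA update: {t_soa:.3f}s")
```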
Prof. Santosh Ansumali is an Associate Professor at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR). He is also the founding CTO of Sankhya Sutra Labs Pvt Ltd, Bangalore (https://www.sankhyasutralabs.com). His research interests are mesoscale methods for computational fluid dynamics, high performance computing and computational kinetic theory. His work primarily focuses on developing algorithms and physical models that are fundamentally suited to high performance computing. Before joining JNCASR, he was an Assistant Professor at the School of Chemical and Biomedical Engineering, NTU Singapore. Prof. Ansumali received his PhD from the Institute of Energy Technology, ETH Zurich in 2005. His complete list of publications is available on Google Scholar.
Edward Hsu
Chief Product Officer at Rescale
Title: High Performance Computing Built For Cloud Using an Intelligent Control Plane Approach
Abstract: Applying the cloud architecture and operating model to High Performance Computing means it is finally possible to start from the workflow and the workload and optimize the infrastructure used, rather than the other way around. Cloud computing means near-unlimited scale, an unprecedented variety of accessible architectures, and persistent connectivity to a broad range of platform services.
The only way organizations can successfully navigate all the possibilities to find the best approach per workload is through automation. We will discuss how a control-plane-based approach to cloud high performance computing can help deliver the optimal software, specialized architecture, and other relevant services, so that researchers and engineers can focus on discovering breakthroughs.
This control plane automates ~700 of the most popular commercial and open-source simulation and machine learning applications, on specialized architectures from all major cloud providers, including AWS, GCP, Azure, and Oracle Cloud Infrastructure. Control-plane intelligence on performance enables automation to identify the optimal cloud hardware architecture by workload, from machine learning to fluid dynamics or finite element analysis. By combining global regional pricing information, regional capacity and maturity, and workload-specific performance intelligence, we can help researchers and IT leaders optimize for time-to-solve, lowest cost, or a balance in between.
Edward Hsu is Chief Product Officer at Rescale, which delivers an intelligent computing platform to help organizations move towards Digital R&D operations. Previously, Edward led product and marketing at D2IQ which pioneered production container infrastructure operations with technologies like Docker, Apache Mesos, and later Kubernetes as it became mainstream.
At VMware, Edward worked as the Sr. Director of Product Marketing, where he was responsible for pricing and packaging for the majority of VMware products by sales. Edward led the first launch of groundbreaking products including storage virtualization (VSA), private cloud software (vCloud Suite), and hyperconverged integrated systems (Cloud Foundation).
At McKinsey, Edward sourced and oversaw engagements with Fortune 500 companies, helping senior executives dramatically improve the performance of technology and service organizations. Edward led engineering at Oracle, where he delivered Oracle’s first browser- and workflow-based CRM products and invented patented software technologies. Edward holds Master’s and Bachelor’s degrees in Electrical Engineering and Computer Science from MIT, and an MBA from NYU Stern School of Business.
Miguel Terol Palencia
HPC Solutions Architect Lenovo Infrastructure Solutions Group
Title: Lenovo Scalable Architectures for Genomics Analytics
Abstract: A major pain point for researchers seeking insights from genome sequencing is the bottleneck in the genomics analytics step that produces genome variants. With a standard solution, analyzing a Whole Genome Sequence (WGS) usually takes up to 150 hours. This can cause, for example, delays in the development of a drug for a specific disease, or of a vaccine against a new virus.
Lenovo has developed GOAST (Genome Optimization And Scalability Tool), an optimized hardware and software platform that dramatically reduces the time to results of genomics analytics to as little as 19 minutes, accelerating the productivity of bioinformaticians with a general-purpose architecture that allows savings in acquisition and reuse for other Life Sciences workloads.
Miguel Terol will explain the main features of this solution.
Miguel Terol is a professional with more than 20 years of experience in different areas of the IT industry, at companies such as Siemens, Silicon Graphics, IBM and Lenovo, with the last 14 years focused on HPC. Miguel holds a BSc in Mathematics.
He works as an HPC & AI architect for the EMEA organization at Lenovo Infrastructure Solutions Group. Besides designing HPC & AI hardware architectures, he works on the design of solutions targeting specific workloads in Life Sciences and other segments.
Tara Madhyastha
Principal Research Scientist at RONIN and affiliate faculty in the Department of Psychology at the University of Washington
Title: RONIN: Secure Self-service High Performance Research Computing in the Cloud
Abstract: In many areas of computational research, the pace of innovation creates pressure to store, process and analyze ever-larger data sets using ever more computationally demanding methods. To meet these challenges, researchers can benefit from the flexibility of cloud computing to create their own high performance computing clusters or to use specialized hardware (GPUs, FPGAs, memory-optimized machines) for their workloads.
RONIN is a user-friendly web application that gives researchers easy self-service control to create high-throughput, tightly coupled, and accelerated computing clusters in the cloud in minutes. This enables researchers with time-sensitive workloads to avoid queues and provides extra HPC capacity for smaller jobs or those with unique computational demands. For example, the ability to create auto-scaling clusters based on multiples of highly available small instance types is a cost-effective approach to running unoptimized research codes or scripts. Budget monitoring and resource scheduling in RONIN enable researchers to easily track and control the costs of their cloud resources. Included HPC package managers such as Spack can be used to facilitate software installation, and optimized installations of popular scientific applications can easily be packaged for future use or shared with others; this feature enables reproducibility at scale. Abstractions for managing object-based storage facilitate simple data access control and collaboration, while other features such as remote desktops assist with data visualisation and analysis. Finally, the RONIN architecture supports secure and compliant research environments (such as trusted research environments and HIPAA-compliant workloads) with HPC capabilities that are as easy for researchers to use as on-premise systems.
This talk will introduce RONIN and describe how it is currently being used by research groups to provide self-service research computing on the cloud.
Dr. Tara Madhyastha is a Principal Research Scientist at RONIN and affiliate faculty in the Department of Psychology at the University of Washington. Trained in high performance computing, she is an interdisciplinary scientist. Prior to coming to RONIN she led HPC research strategy at Amazon Web Services (AWS). In her earlier academic career she worked in neuroimaging, developing new methods to study changes to cognitive networks that occur with aging and neurodegenerative disease. She is the author of over 100 peer-reviewed publications in the fields of computer science, educational technology, psychology and neuroscience. Tara did her PhD in high performance computing at the University of Illinois at Urbana-Champaign and has held faculty positions at the University of California and the University of Washington, Seattle. She has received numerous innovation awards in these positions and at AWS, and extramural funding from the National Science Foundation, National Institutes of Health, and Department Of Education.
Addison Snell
CEO of Intersect360 Research
Title: Round Table panel on industry trends for HPC and AI
Abstract: The Roundtable Panel is an open discussion on industry trends for HPC, AI and beyond. Addison will be joined by Irene Qualters (Los Alamos National Laboratory), Anders Dam Jensen (European High Performance Computing Joint Undertaking) and Tshiamo Motshegwa (University of Botswana) to share their thoughts on topics drawn from their previous talks and Intersect360 Research.
Addison Snell is a veteran of the High Performance Computing industry and the co-founder and CEO of Intersect360 Research, now in its 15th year delivering forecasts and insights for high-performance markets. Addison is a competent bridge player, an excellent Scrabble player, and a puzzle and game enthusiast, particularly word puzzles. His life “bucket list” dream is to compose a crossword published in the New York Times.
Pinaki Chaudhuri
Professor at The Institute of Mathematical Sciences, India
Title: Studying amorphous materials via large-scale computing
Abstract: Amorphous materials, in the form of glasses, gels, emulsions, foams and granular materials, are ubiquitous in our daily lives and in natural phenomena. Understanding the properties of these materials from a microscopic perspective has been a challenging exercise, both for fundamental science and for materials design. Numerical techniques have proven very valuable in this context, providing insights into the diverse processes at play. In this talk, I will discuss, using some examples, how large-scale computing has become relevant for investigating phenomena at different length-scales, providing multi-scale descriptions.
Pinaki obtained his PhD from the Physics department of Indian Institute of Science, Bangalore, India, followed by postdoctoral tenures at the universities of Montpellier, Lyon and Duesseldorf. Since 2014, he has been at the Theoretical Physics group at the Institute of Mathematical Sciences, Chennai, India. Pinaki’s research is primarily focused on using computational tools to study soft matter, specifically amorphous materials of diverse kinds.
Tomi Ilijas
Arctur’s CEO
Title: FF4EuroHPC: Enabling SMEs to benefit from HPC – Open Call 2
Abstract: SMEs are the backbone of the European economy and drive innovation, so the European Commission is encouraging SMEs to adopt novel technologies. Still, many companies are not aware of the potential that HPC brings to their business.
This presentation will describe in detail the FF4EuroHPC project, a successor of the Fortissimo and Fortissimo2 projects, and its Open Call 2. The key concept behind FF4EuroHPC is to demonstrate to European SMEs ways to optimize their performance with the use of advanced HPC services (e.g., modelling & simulation, data analytics, machine learning and AI, and possibly combinations thereof) and thereby take advantage of these innovative ICT solutions for business benefit. The open calls can lower the barriers for participating SMEs to commence HPC-related innovation in their existing or newly identified markets, and help SMEs develop unique products and innovative business opportunities. Two open calls will be organised by the project, with the aim of creating two tranches of application experiments across diverse industry sectors. Additionally, highlights from previous Fortissimo success stories will be presented to inspire the community.
Tomi Ilijaš is founder and CEO of Arctur and holds an MSc degree from Ljubljana University, Faculty of Electrical Engineering. Mr. Ilijaš is an entrepreneur focusing on hi-tech innovation who has shared his knowledge and experience with many start-ups and spin-offs in the region. He has participated in several EU-funded R&D projects, inventing new business models in HPCaaS and successfully breaking down the barriers to bringing HPC to manufacturing SMEs. He is also a member of the EuroHPC Research & Innovation Advisory Group (RIAG) and a member of the PRACE Industrial Advisory Committee.
Karl Podesta
EMEA HPC Technical Specialist @ Microsoft
Title: Supercomputing on Azure Powered By AMD EPYC
Abstract: From optimising vehicle safety, to simulations in autonomous driving, to analysing risk in global financial markets, to life science researchers investigating new therapies, to understanding more about our global climate: these are real scenarios where supercomputing on the public cloud (Microsoft Azure) is currently making a difference in our world. Public cloud has a huge role to play in providing large-scale compute and simulation resources (and HPC, aka “Supercomputing”) to everyone. Powered by AMD EPYC processors, Azure HPC has produced world-class and “world first” HPC application benchmarks at scale, proving real supercomputing scale and performance. In this presentation we will give a short overview of the topic, covering the why, what, who, where, and how of doing supercomputing in the public cloud with Microsoft Azure and AMD EPYC, and leaving you with insight, proof points, and next steps for how you could get started with your own project.
Karl is a Technical Specialist for High Performance Computing (HPC) on Microsoft Azure. He is part of Microsoft’s Global Black Belt Team – a specialist team focused on advanced workloads – and works with customers, partners, and colleagues in the EMEA (Europe, Middle East, and Africa) region. Prior to joining Microsoft in 2016, he spent 15+ years in technical roles, including Linux & HPC engineer, architect, and trainer, working in the Oil & Gas, Financial, and Life Science domains. He tweets as @karlpodesta and is from Dublin, Ireland.
Erich Focht
Heading the Research & Development group, NEC
Title: Programming Heterogeneity
Abstract: The slow-down of Dennard scaling has led to a significant increase in processor innovations and hardware heterogeneity. The journey to exascale, as well as the explosion of artificial intelligence applications, reflects the widening range of computer architectures and forces us to adjust our approach to programming and portability. The talk sketches various paths we are exploring for dealing with the increasingly heterogeneous hardware landscape.
Dr. Erich Focht studied physics and did his PhD in theoretical physics in Aachen and at the John von Neumann Institute for Computing, Juelich. His work on quantum field theories on the lattice required programming simulations on supercomputers of various architectures, re-connecting him to an older passion for electronics and computers. He started working for NEC’s European HPC division in 1996, optimizing CFD and structural mechanics algorithms for the NEC SX-4 and subsequent parallel vector supercomputers. At the advent of Itanium NUMA machines, Erich switched to Linux kernel development, focussing on NUMA scheduling and memory management and working in cross-vendor collaborations to prepare Linux for large machines. Shortly after Beowulf clusters appeared, Erich’s involvement in various open source projects led to research and development work in Linux cluster system software, distributed systems, single-system-image and grid software. He worked on the design and early implementation of the parallel file system XtreemFS and for several years built parallel file system products based on Lustre for NEC’s HPC customers. Currently Erich leads a research and development group at NEC HPC Europe. Nowadays his work topics cover system software and compilers for heterogeneous computing with NEC’s SX-Aurora Vector Engine, supporting cooperation projects related to the Vector Engine, augmenting HPC simulations with AI, and integrating cloud technologies into HPC clusters.
Nicolas Dubé
Fellow and Vice-President, Chief Technologist High-Performance Computing Business Unit, HPE
Title: A Vision for the Post-Exascale Era
Abstract: With imminent exascale capability, the scientific community is about to tackle challenges that were considered too large and too complex just a few years ago. But how will this new generation of supercomputing technology impact the broader IT market over time? Being at the forefront of such key innovations, can the HPC community drive an open workflow ecosystem to provide an alternative to vertically locked-in solutions? Join this talk to entertain a more open, accessible and decentralized vision of the IT industry for the next decade, in which workloads and data can flow fluidly across system architectures and organizations.
As the Chief Technologist for HPC at Hewlett Packard Enterprise (HPE), Nic leads the team building the largest supercomputers in the world, with a focus on energy efficiency, usability and application performance at scale. He is also driving HPE’s post-exascale strategy, which will couple exascale supercomputers to sensors at the edge, further enable the booming silicon landscape, provide a redefined data model that spans traditional filesystems, and enable workflow fluidity through new software runtime environments and alternative provisioning systems. He has served as the technical lead for the Advanced Development Group, architected HPE’s exascale PathForward program in collaboration with the US Dept. of Energy, spearheaded ARM enablement in HPC and designed many leadership HPC systems. Nic regularly advocates for a greener IT industry and for open source software projects, with the goal of constantly enhancing and broadening the impact of leadership computing.
Phil Murphy
Co-Founder, Chief Executive Officer at Cornelis Networks
Title: Cornelis Networks Omni-Path: Purpose Built High-Performance Fabrics for HPC/HPDA/AI
Abstract: The convergence of traditional HPC modeling/simulation, HPDA and AI on a single compute cluster brings new challenges to fabric design, but the required fundamental interconnect performance characteristics of low latency, extreme message rate, and scalable bandwidth remain paramount. Phil will discuss the fabric design trade-offs required to deliver these fundamentals. He will also discuss how Cornelis Networks is leveraging libfabric/OpenFabrics Interfaces to improve real application performance while taking advantage of industry-wide innovations in communications libraries and programming frameworks.
As CEO of Cornelis Networks, Phil is responsible for the overall management and strategic direction of the company. Prior to co-founding Cornelis Networks, Phil served as a director at Intel Corporation, responsible for fabric platform planning and architecture, product positioning, and business development support. Prior to that role, Phil served as vice president of engineering and vice president of HPC technology within QLogic’s Network Solutions Group, responsible for the design, development, and evangelizing of all high-performance computing products, as well as all storage area network switching products. Before joining QLogic, Phil was vice president of engineering at SilverStorm Technologies, which he co-founded in 2000 and which was acquired by QLogic in 2006. SilverStorm’s core focus was on providing complete network solutions for high performance computing clusters. Prior to co-founding SilverStorm, Phil served as director of engineering at Unisys Corporation and was responsible for all I/O development across the company’s diverse product lines.
Phil holds a BS in Mathematics from St. Joseph’s University and an MS in Computer and Information Science from the University of Pennsylvania.
Wolfgang Gentzsch
President and Co-founder of UberCloud
Title: Using distributed HPC technology for building an automated, self-service, truly multi-cloud simulation platform
Abstract: Many companies are finding that replicating an existing on-premise HPC architecture in the cloud does not lead to the desired breakthrough improvements. With this in mind, a fully automated, self-service, multi-cloud Engineering Simulation Platform has been built with cloud computing in mind from day one, resulting in greatly increased productivity of HPC engineers, significantly improved IT security, cloud costs and administrative overhead reduced to a minimum, and full control for engineers and corporate IT over their HPC cloud environment and corporate assets.
This platform has been implemented on Google Cloud Platform (GCP) for 3DT Holdings for their highly complex Living Heart Project and machine learning, with the final result of reducing times from many hours per simulation to just a few seconds for a highly accurate prediction of optimal medical device placement during heart surgery.
The team ran the 1500 simulations needed to train the ML algorithm. The whole simulation process took a multi-cloud approach, with all computations running on Google GCP, and management, monitoring, and health checks orchestrated from Azure Cloud through SUSE’s Kubernetes management platform Rancher, implemented on Azure.
Technology used: UberCloud Engineering Simulation Platform, multi-node HPC Docker, Kubernetes, SUSE Rancher, Dassault Abaqus, TensorFlow, preemptible GCP instances (c2_standard_60), managed Kubernetes clusters (GKE), Google Filestore, Terraform, and DCV remote visualization.
Co-authors: Wolfgang Gentzsch, President, UberCloud; Daniel Gruber, Director of Architecture, UberCloud; Yaghoub Dabiri, Scientist, 3DT Holdings; Julius Guccione, Professor of Surgery, UCSF Medical Center, San Francisco; and Ghassan Kassab, President, California Medical Innovations Institute, San Diego.
Wolfgang Gentzsch is president and co-founder of UberCloud, which offers an automated, self-service, on-demand Engineering Simulation Platform and high-performance engineering simulation containers for manufacturing, energy, financial services, life sciences and other compute- and data-intensive applications. Wolfgang was the chairman of the International ISC Cloud Conference series from 2010 to 2015. Previously, he was an advisor to the EU projects EUDAT and DEISA. He directed the German D-Grid Initiative and the North Carolina Statewide Grid, and was a member of the Board of Directors of the Open Grid Forum and of the US President’s Council of Advisors on Science and Technology, PCAST, from 2005 to 2008.
Previously, Wolfgang was a professor of computer science and mathematics at several universities in North Carolina, USA, and Regensburg, Germany, and held leading positions at the MCNC North Carolina Grid and Data Center in Durham, Sun Microsystems in California, the DLR German Aerospace Center in Gottingen, and the Max Planck Institute for Plasma Physics in Munich. In the 90s, he founded the HPC software companies Genias and Gridware, which developed the well-known distributed HPC workload management system Grid Engine. Gridware was acquired by Sun Microsystems in 2000, and Grid Engine (via Sun, Oracle, and Univa) is now part of Altair Engineering.
Fouzhan Hosseini
Project lead and Technical Manager at NAG Ltd
Title: Meet the POP CoE: Getting insight into HPC code behavior
Abstract: High Performance Computing (HPC), including in the cloud, is now accessible to a much wider range of users. However, the majority of new and traditional HPC users still struggle to understand the performance bottlenecks of their applications and the associated computation cost in terms of time, money or energy. The efficient use of HPC facilities remains a major challenge.
The Performance Optimisation and Productivity (POP) Centre of Excellence, funded by the EU, has established a quantitative methodology for the performance assessment of HPC applications. This methodology uses a standard set of hierarchical metrics, applicable across domains and scales, where each metric represents the relative impact of one cause of inefficiency in HPC applications. These metrics have proven invaluable in identifying inefficient kernels for code refactoring. In this talk, I will review the POP MPI metrics as well as their extension, which gives unique insight into the performance of hybrid OpenMP and MPI codes. We will see examples of how this methodology is being used to help organizations across Europe improve their HPC codes.
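To make the metric hierarchy concrete, here is a small worked sketch using the top-level efficiencies as the POP project documents them publicly: parallel efficiency factors into load balance and communication efficiency. The per-rank timings are invented for illustration.

```python
# Worked sketch of the top-level POP metrics, assuming the publicly
# documented definitions. Per-rank "useful computation" times are invented
# for illustration; the runtime is the same on every rank.

useful = [8.0, 7.5, 6.0, 4.5]   # seconds of useful compute on each MPI rank
runtime = 9.0                   # wall-clock time of the parallel region

avg_useful = sum(useful) / len(useful)
max_useful = max(useful)

load_balance = avg_useful / max_useful   # spread of work across ranks
comm_eff = max_useful / runtime          # time lost to communication/waiting
parallel_eff = load_balance * comm_eff   # equals avg_useful / runtime

print(f"Load balance:             {load_balance:.2f}")
print(f"Communication efficiency: {comm_eff:.2f}")
print(f"Parallel efficiency:      {parallel_eff:.2f}")
```

In this invented example the load balance of 0.81 is the dominant inefficiency, so the methodology would point at work distribution, not communication, as the first refactoring target.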
Pekka Manninen
Director of the LUMI Leadership Computing Facility
Title: LUMI: Europe’s flagship supercomputer
Abstract: The EuroHPC initiative is a joint effort by the European Commission and 32 countries to establish a world-class supercomputing ecosystem in Europe (read more at https://eurohpc-ju.europa.eu/). One of its first concrete efforts is to install three leadership-class supercomputers, each of which will be among the top 10 systems in the world. We will discuss one of these systems, LUMI, to be located in Kajaani, Finland. LUMI will be the fastest supercomputer in Europe and, in general, one of the most powerful and advanced computing systems on the planet at the time of its installation. In this talk, we will walk through the technical architecture of the LUMI infrastructure.
Andrew King
Director of Performance Research, D-Wave
Title: What a Computational Performance Advantage Means for the Future of Practical Quantum Computing
Abstract: Earlier this year, D-Wave published a milestone peer-reviewed study in Nature Communications in collaboration with scientists at Google, demonstrating a computational performance advantage for a quantum computer, increasing with both simulation size and problem hardness, of over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications: simulating the topological phenomena behind the 2016 Nobel Prize in Physics.
This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step on the journey toward applications advantage in quantum computing. To date, this work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors.
In this session, Dr. Andrew King, Director of Performance Research at D-Wave and one of the authors of the paper, will share more on the milestone study and paper, and explain what it means for the future of practical quantum computing.
Herbert Huber
Head of High Performance Systems Division at Leibniz Supercomputing Centre
Title: Energy efficient supercomputing at LRZ
Abstract: The lecture presents the activities of the Leibniz Supercomputing Centre (LRZ) regarding energy-efficient computing. In particular, the “4 Pillar Framework for Energy Efficient HPC Data Centers” will be introduced, which LRZ uses to identify areas for further improvement and research. The four pillars are: 1. building infrastructure; 2. HPC hardware; 3. HPC system software; and 4. HPC applications. The advantages of energy-efficient system cooling and of system hardware and software technologies will be demonstrated using the current LRZ supercomputer, SuperMUC-NG.
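As a minimal illustration of the building-infrastructure pillar, the snippet below computes Power Usage Effectiveness (PUE), a standard data-centre efficiency metric; the energy figures are hypothetical, not LRZ measurements.

```python
# Quick illustration of a standard data-centre efficiency metric relevant
# to the first pillar: PUE = total facility energy / IT equipment energy.
# The figures below are hypothetical, not LRZ numbers.

it_energy = 100.0                 # GWh/year consumed by the HPC systems
cooling_air = 50.0                # hypothetical conventional air cooling
cooling_warm_water = 10.0         # hypothetical direct warm-water cooling

pue_air = (it_energy + cooling_air) / it_energy
pue_water = (it_energy + cooling_warm_water) / it_energy
print(f"PUE air cooling: {pue_air:.2f}, warm-water cooling: {pue_water:.2f}")
```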
Tshiamo Motshegwa
BEng, PhD, Department of Computer Science, University of Botswana
Title: Developments in African Cyber-infrastructure to Support Open Science & Open Data
Abstract: Globally, there is movement toward developing a Global Open Science Cloud (GOSC) aimed at supporting research collaborations across continents to help address global science challenges – for example, the UN Sustainable Development Goals (SDGs), climate change, infectious diseases, and the coordination of global disaster risk reduction.
Continents, regions, and countries are also actively developing Open Science platforms and investing in the underlying cyberinfrastructures to advance their Research, Science, Technology and Innovation (RSTI) ecosystems, enhance collaboration, increase their competitiveness and, critically, use RSTI as a driver for national and continental priorities.
This talk highlights the movement toward the development of a pan-African cyberinfrastructure to support the advancement of the continent’s science enterprise through open science and open data. Furthermore, the cyberinfrastructure will promote collaboration and support addressing higher-level African priority areas and challenges by leveraging research, science, technology and innovation, thereby contributing to African advancement and integration and helping deliver on the African vision, Agenda 2063: The Africa We Want.
To this end, a discussion of the African Open Science Platform (AOSP) is given. The AOSP pilot study conducted an audit and provided frameworks to guide countries in developing the requisite policies, infrastructure, incentives, and human capital to leverage open science and open data amidst the digital revolution, with all the challenges and opportunities it presents.
Furthermore, African regional blocs also have initiatives aligned with the AOSP. For example, the Southern African Development Community Cyberinfrastructure Framework (SADC CI) has been approved by governments; it is currently supporting some regional projects and was consulted in the AOSP pilot project. The SADC CI facilitates a regional collaborative ecosystem for research, innovation, and teaching by creating a shared commons for data, computational platforms and human capital development over a fabric of high-speed connectivity afforded by National Research and Education Networks (NRENs).
The development of these cyberinfrastructures provides a basis, bedrock and capacity for African participation and contribution to the wider global science enterprise and endeavour, especially given ongoing and upcoming projects of consequence on the continent, such as the Square Kilometre Array (SKA), H3Africa (Human Heredity and Health in Africa), and weather and climate change projects.
Dr Tshiamo Motshegwa is an academic in the Department of Computer Science, Faculty of Science, at the University of Botswana, where he is a lecturer and leads the High-Performance Computing and Data Science research cluster and the University-Industry-Government co-creation platform. He received a BEng 1:1 (Hons) in Computer Systems Engineering and a PhD in Computer Science, both from the School of Mathematics, Computer Science and Engineering at City, University of London, UK.
He has held visiting fellowships at the British Telecom Research & Innovation Labs (BTexact Technologies) in the Intelligent Business Systems Group, Adastral Park, Ipswich, UK, and at the UNESCO International Centre for Theoretical Physics (ICTP), Trieste, Italy, and completed an internship at British Energy Plc, Lancashire, UK.
Dr Motshegwa serves on the Botswana Government’s Ministry of Tertiary Education, Science and Technology (MOTE) task team for the Botswana space science strategy, overseeing developments and opportunities in space sciences and technologies. He also engages with the Ministry on the National Digital Transformation Initiative for research, science, technology and innovation ecosystems, covering cyberinfrastructure for the digital revolution, capacity building and digital skills.
Mary-Jane Bopape
Senior manager: Research at the South African Weather Service
Title: Implementation of the SADC Cyber-Infrastructure Framework: focus on weather modelling
Abstract: Weather forecasting beyond the nowcasting timescale relies on numerical weather and climate models. The use of these models is limited on the African continent because running them over large domains at high resolution requires supercomputing facilities. The availability of supercomputing facilities on the continent is, however, increasing thanks to national initiatives and regional initiatives such as the Southern African Development Community (SADC) Cyber-Infrastructure (CI) Framework, which was approved by the relevant ministers in 2016. The presentation will give an overview of a project on numerical weather prediction (NWP) in six SADC countries, funded by the South African Department of Science and Innovation and through the Climate Research for Development (CR4D) fellowship. Through the project, three workshops were held, and the sensitivity of simulations of heavy rainfall events in all six countries to different aspects of the Weather Research and Forecasting (WRF) model was investigated. A number of peer-reviewed articles have resulted from the project; for some countries these were the first ever papers on NWP, and some authors led the writing of papers for the first time. The presentation will also touch on lessons learnt and the proposed way forward.
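To make the sensitivity-study workflow concrete, here is a minimal, hypothetical sketch of how different model configurations might be scored against observations offline; the configuration names and rainfall fields below are invented stand-ins, not the project’s actual WRF setups or data:

import numpy as np

rng = np.random.default_rng(0)

def rmse(simulated, observed):
    """Root-mean-square error between a modelled rainfall field and observations."""
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

# Stand-in 24-h accumulated rainfall fields (mm); in a real study these would come
# from WRF output and a gridded observation product.
observed = rng.gamma(shape=2.0, scale=5.0, size=(100, 100))
configs = {name: observed + rng.normal(0.0, sigma, observed.shape)
           for name, sigma in [("config_a", 2.0), ("config_b", 4.0), ("config_c", 6.0)]}

for name, simulated in configs.items():
    print(name, "RMSE:", round(rmse(simulated, observed), 2), "mm")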
Mary-Jane Bopape is a senior manager: Research at the South African Weather Service. She holds a PhD degree in meteorology has worked as a Postdoctoral Research Fellow at the University of Reading. She also worked at the Council for Scientific and Industrial Research (CSIR) Natural Resources and Environment (NRE) and Centre for High Performance Computing (CHPC) with a focus on climate change studies. She has received a number of awards including the African Institute for Mathematical Sciences (AIMS) Next Einstein Initiative (NEI) Fellowship for Women in Climate Change Science, 2019 Climate Research for Development (CR4D) fellowship and 2008 World Meteorological Organization (WMO) award for young researchers. She was recognised by the President of South Africa as a pathfinder in his 9 August 2019 speech on women’s day. She served on the Executive council of the South African Society for Atmospheric Sciences (SASAS) for a number of terms and is currently the co-president of the society. She co-supervises a number of postgraduate students in different universities in South Africa.
Onur Mutlu
Professor of Computer Science, Department of Information Technology and Electrical Engineering, ETH Zurich
Title: Intelligent Architectures for Intelligent Systems
Abstract: Computing is bottlenecked by data. Large amounts of data overwhelm storage capability, communication capability, and computation capability of the modern machines we design today. As a result, many key applications’ performance, efficiency and scalability are bottlenecked by data movement. We describe three major shortcomings of modern architectures in terms of 1) dealing with data, 2) taking advantage of the vast amounts of data, and 3) exploiting different semantic properties of application data. We argue that an intelligent architecture should be designed to handle data well. We show that handling data well requires designing system architectures based on three key principles: 1) data-centric, 2) data-driven, 3) data-aware. We give several examples for how to exploit each of these principles to design a much more efficient and high-performance computing system. We will especially discuss recent research that aims to fundamentally reduce memory latency and energy, and practically enable computation close to data, with at least two promising novel directions: 1) performing computation in memory by exploiting the analog operational properties of memory, with low-cost changes, 2) exploiting the logic layer in 3D-stacked memory technology in various ways to accelerate important data-intensive applications. We discuss how to enable adoption of such fundamentally more intelligent architectures, which we believe are key to efficiency, performance, and sustainability. We conclude with some guiding principles for future computing architecture and system designs.
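To make the data-movement argument concrete, a back-of-the-envelope sketch follows; the energy figures are ballpark orders of magnitude often quoted in the architecture literature, not measurements from the talk:

# Ballpark energy costs (orders of magnitude only; actual values depend on the process node).
FLOP_PJ = 20.0           # ~tens of pJ per double-precision operation
DRAM_ACCESS_PJ = 2000.0  # ~nJ per 64-bit off-chip DRAM access

# A streaming kernel doing 1 flop per 8-byte operand fetched from DRAM:
flops = 1e9
dram_accesses = 1e9
compute_energy_j = flops * FLOP_PJ * 1e-12
movement_energy_j = dram_accesses * DRAM_ACCESS_PJ * 1e-12
print(f"compute: {compute_energy_j:.3f} J, data movement: {movement_energy_j:.3f} J")
# Data movement dominates by ~100x, which is why computing near or in memory pays off.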
Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google.
He received the IEEE High Performance Computer Architecture Test of Time Award, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the ACM SIGARCH Maurice Wilkes Award, the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, the US National Science Foundation CAREER Award, the Carnegie Mellon University Ladd Research Award, faculty partnership awards from various companies, and a healthy number of best paper or “Top Pick” paper recognitions at various computer systems, architecture, and security venues. He is an ACM Fellow, an IEEE Fellow, and an elected member of the Academy of Europe (Academia Europaea).
His computer architecture and digital logic design course lectures and materials are freely available on YouTube (https://www.youtube.com/OnurMutluLectures), and his research group makes a wide variety of software and hardware artifacts freely available online (https://safari.ethz.ch/). For more information, please see his webpage at https://people.inf.ethz.ch/omutlu/.
Bryce Meredig
Chief Science Officer and co-founder of Citrine Informatics
Title: Designing next-generation materials with machine learning
Abstract: The development of improved materials is critical to human progress, in areas ranging from space exploration to environmental sustainability. Traditional materials development efforts struggle to deliver breakthrough materials at the accelerating pace demanded by consumers, regulators, and society as a whole. In this rapidly changing environment, data-driven approaches such as machine learning (ML) offer the possibility of shortening the timescale of materials innovation. In this talk, I will discuss how ML can accelerate materials design, and how ML can be used in conjunction with physics-based simulations to improve our ability to predict materials properties in silico. I will also address some of the materials-science-specific methodological considerations that arise when applying ML to materials design.
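As a minimal illustration of the data-driven approach (my sketch, not Citrine’s platform), one can fit an off-the-shelf regressor to composition-derived features; the dataset below is synthetic and the feature names are hypothetical:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical dataset: 200 alloys described by simple composition-derived features
# (e.g., mean atomic radius, mean electronegativity, valence-electron count).
X = rng.random((200, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)  # stand-in property

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(2))

In practice, the materials-specific considerations the abstract alludes to (for example, grouping-aware cross-validation to avoid leakage between closely related compositions) often matter more than the choice of regressor.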
Dr. Bryce Meredig is co-founder and Chief Science Officer of Citrine Informatics, a materials informatics platform company, where he leads the External Research Department (ERD). ERD conducts publishable research with collaborators in academia, government, and industry. Dr. Meredig’s research interests include the development and validation of physics-informed machine learning methods for applications in materials science and chemistry; the integration of physics-based simulations with machine learning; and data infrastructure for materials science. Dr. Meredig received his PhD from Northwestern University and his BAS and MBA from Stanford University.
Kate Keahey
Senior Scientist, MCS, Argonne National Lab CASE, University of Chicago
Title: Chameleon: Taking Science from Cloud to Edge
Abstract: New research ideas require an instrument where they can be developed, tested – and shared. To support Computer Science experiments, such an instrument has to support a diversity of hardware configurations, deployment at scale, deep reconfigurability, and mechanisms for sharing so that new results can trigger further innovation. Most importantly – since science does not stand still – such an instrument requires constant adaptation to support an ever-increasing range of experiments driven by emergent ideas and opportunities.
The NSF-funded Chameleon testbed for systems research and education (www.chameleoncloud.org) has been developed to provide all those capabilities. The testbed provides many thousands of cores and over 5 PB of storage hosted at three sites (University of Chicago, Texas Advanced Computing Center, and Northwestern) connected by 100 Gbps networks. The hardware consists of large homogeneous partitions to facilitate experiments at scale, alongside a diverse set of hardware comprising accelerators, storage hierarchy nodes with a mix of HDDs, SSDs, and NVMe, high-bandwidth I/O storage, SDN-enabled networking hardware, and edge devices. To support experiments ranging from work in operating systems through networking to edge computing, Chameleon provides a range of reconfigurability options from bare metal to virtual machine management. To date, the testbed has supported 5,000+ users and 700+ research and education projects and has just been renewed until the end of 2024.
This talk will describe the goals, design strategy, and the existing and future capabilities of the testbed, as well as some of the research and education projects our users are working on. I will also describe how Chameleon is evolving to support new research directions, in particular edge- and IoT-based research and applications. Finally, I will introduce the services and tools we created to support the sharing of experiments, curricula, and other digitally expressed artifacts that allow science to be shared via active involvement and foster reproducibility.
Kate Keahey is one of the pioneers of infrastructure cloud computing. She created the Nimbus project, recognized as the first open source Infrastructure-as-a-Service implementation, and continues to work on research aligning cloud computing concepts with the needs of scientific datacenters and applications. To facilitate such research for the community at large, Kate leads the Chameleon project, providing a deeply reconfigurable, large-scale, and open experimental platform for Computer Science research. To foster the recognition of contributions to science made by software projects, Kate co-founded and serves as co-Editor-in-Chief of the SoftwareX journal, a new format designed to publish software contributions. Kate is a Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute at the University of Chicago.
Sunita Chandrasekaran
Assistant Professor, Department of Computer & Information Sciences, University of Delaware
Title: Best practices for productive (yet performant) software development
Abstract: This talk will take the audience through the journey of following best practices for creating productive yet performant software. The journey draws on experiences gathered from porting real-world applications from one system to another, from interactions with scientists across various domains, and from challenges encountered in multidisciplinary, multi-team collaborative projects. The talk will also cover strategies for preparing the next-generation workforce as they go on to join or lead teams developing research software.
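As one concrete example of such a practice (my illustration, not necessarily one from the talk), an automated regression test can pin down a kernel’s numerical behaviour before it is ported to a new system:

import numpy as np

def stencil_3pt(u):
    """Simple 1-D three-point averaging stencil, the kind of kernel often ported across systems."""
    return 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]

def test_stencil_matches_reference():
    """Regression test: results on a fixed input must match a known reference within tolerance."""
    u = np.linspace(0.0, 1.0, 1001)
    result = stencil_3pt(u)
    reference = u[1:-1]  # this stencil leaves a linear profile unchanged at interior points
    assert np.allclose(result, reference, rtol=1e-12)

test_stencil_matches_reference()
print("stencil regression test passed")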
Sunita Chandrasekaran is an Assistant Professor with the Department of Computer and Information Sciences at the University of Delaware, USA. She received her Ph.D. in 2012 on Tools and Algorithms for High-Level Algorithm Mapping to FPGAs from the School of Computer Science and Engineering, Nanyang Technological University, Singapore. Her research spans High Performance Computing, interdisciplinary science, machine learning and data science. She is a recipient of the 2016 IEEE-CS TCHPC Award for Excellence for Early Career Researchers in High Performance Computing. Chandrasekaran has been involved in the technical program and organization committees of several conferences and workshops including SC, ISC, IPDPS, IEEE Cluster, CCGrid, WACCPD, AsHES and P3MA.
Neil Thompson
Innovation Scholar, MIT’s Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy
Title: The approximate future of computing
Abstract: Technical and economic trends in hardware are pushing computing toward specialization, opening up new opportunities to customize hardware to particular algorithms. But that is only one part of the story. There are also new trends in algorithms re-shaping computing. This talk will survey these big trends and offer some suggestions for what they will mean for the future of computing.
Dr. Neil Thompson is an Innovation Scholar at MIT’s Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy. He is also an Associate Member of the Broad Institute. Previously, Neil was an Assistant Professor of Innovation and Strategy at the MIT Sloan School of Management, where he co-directed the Experimental Innovation Lab (X-Lab), and a Visiting Professor at the Laboratory for Innovation Science at Harvard. Neil has advised businesses and government on the future of Moore’s Law and has served on National Academies panels on transformational technologies and scientific reliability. He did his PhD in Business and Public Policy at Berkeley, where he also earned master’s degrees in Computer Science and Statistics. Neil also has a master’s in Economics from the London School of Economics, and undergraduate degrees in Physics and International Development. Prior to academia, he worked at organizations such as Lawrence Livermore National Laboratory, Bain and Company, the United Nations, the World Bank, and the Canadian Parliament.
Bogdan Rosa
Professor in the Centre of Numerical Weather Prediction at the Institute of Meteorology and Water Management – National Research Institute (IMWM-NRI).
Title: Computational challenges in modelling cloud microphysical processes
Abstract: Reliable weather forecasts require an accurate description of cloud microphysical processes. These phenomena cannot be fully resolved in numerical weather prediction (NWP) systems because their characteristic length scales are several orders of magnitude smaller than those defining large-scale atmospheric flows. A common strategy to overcome this problem is to represent only some statistical features of the droplet dynamics by using so-called parameterisations. Particularly important are kinematic and dynamic collision statistics and the droplet settling velocity. Inaccuracies in the parameterisation schemes of NWP systems are one of the main sources of uncertainty in numerical forecasts. Developing more realistic schemes requires detailed knowledge of these processes at the droplet scale. As a multiscale phenomenon, cloud simulation represents one of the most difficult scientific and computational challenges.
Rapid progress in high-performance supercomputers has opened new perspectives for quantifying the statistical properties of atmospheric clouds. From the numerical point of view, the problem comes down to modelling multiphase flows, i.e., turbulent flows with a disperse phase. Typically, the continuous phase (the air flow) is solved in the Eulerian frame employing Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES). The dispersed phase (the droplets) is treated using the Lagrangian approach along with the point-particle assumption. Most previous studies (DNS or LES) were limited to one-way or two-way momentum coupling. Important physical phenomena, such as hydrodynamic interactions between droplets, have often been neglected. The reasons for these simplifications are the lack of efficient computational methods and the high computing cost of such simulations. In turn, highly accurate fully resolved simulations of turbulence with finite-size particles have become possible only in recent years and are limited to a relatively small number of droplets. Here I discuss the applicability of different computational approaches and their limitations in the context of computing power demand.
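Under the point-particle assumption mentioned above, each droplet reduces to an ordinary differential equation for its velocity. A minimal sketch follows, assuming Stokes drag, gravity, and a quiescent air field standing in for the resolved DNS/LES flow:

import numpy as np

# Droplet and fluid parameters (SI units; cloud-droplet scale).
RHO_W = 1000.0   # droplet (water) density, kg/m^3
MU_AIR = 1.8e-5  # dynamic viscosity of air, Pa s
G = 9.81         # gravitational acceleration, m/s^2
d = 20e-6        # droplet diameter, m
tau_p = RHO_W * d**2 / (18.0 * MU_AIR)  # Stokes response time

def step(v, u_air, dt):
    """One forward-Euler step of dv/dt = (u_air - v)/tau_p - g (vertical component only)."""
    return v + dt * ((u_air - v) / tau_p - G)

v, dt = 0.0, 1e-4
for _ in range(10_000):  # integrate for 1 s, long enough to reach terminal velocity
    v = step(v, u_air=0.0, dt=dt)
print(f"tau_p = {tau_p:.2e} s, settling velocity ≈ {-v*1000:.2f} mm/s")

For a 20 μm droplet this recovers the familiar settling velocity of roughly 12 mm/s; in a real DNS or LES, u_air would be interpolated from the Eulerian flow field at each droplet position, and collision statistics would be gathered over many such trajectories.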
Dr. Bogdan Rosa is a Professor in the Centre of Numerical Weather Prediction at the Institute of Meteorology and Water Management – National Research Institute (IMWM-NRI).
Bogdan Rosa gained his M.Sc. in physics at the University of Warsaw in 2000, followed by his Ph.D. in 2005. His doctoral research concerned the numerical modelling of high-Reynolds-number flows around an airborne ultra-fast thermometer. Afterwards, he spent three years as a postdoctoral fellow at the University of Delaware, where he was involved in developing parallel computational tools to study cloud microphysical processes. In the meantime, he collaborated with the National Center for Atmospheric Research on modelling multiphase flows and droplet dynamics. Since 2009, Dr. Rosa has been working at IMWM-NRI. Apart from the modelling of turbulent clouds, his current projects involve the adaptation of the numerical model EULAG into an operational weather prediction model of the European COSMO consortium.
In 2018, Dr. Rosa received his habilitation in technical sciences from the Wroclaw University of Science and Technology. His scientific achievements include over 20 articles published in peer-reviewed scientific journals and more than 100 papers in conference proceedings. He has built on his expertise in atmospheric processes by working with scientists from Germany, Iran, China, the USA, Venezuela, and Japan. Bogdan Rosa has led several national and international projects, including three grants from the National Science Centre. Additionally, he was a laureate and leader of several computational projects awarded by the Interdisciplinary Centre for Mathematical and Computational Modelling UW.
Jean-Marc Denis
Chair of the Board, European Processor Initiative
Title: Future Supercomputers are game changers for processor architecture. Why?
Abstract: The rise of artificial intelligence in HPC and the data deluge, combined with the transition from monolithic applications toward complex workflows combining classical HPC models with AI, lead the HPC community, and especially hardware architects, to reconsider how next-generation supercomputers are designed. The recent evolution of the HPC hardware technology landscape, with more and more accelerators and the end of x86 dominance, is also a key parameter to be taken into account by the HPC ecosystem, from hardware designers to end-users.
In this talk, the transition from existing homogeneous to future modular architectures is discussed, and the consequences for the general-purpose processor are addressed. These considerations guided the design of the European low-power HPC processor that will be used in future European exascale and post-exascale supercomputers. We also share the first information related to Rhea, the first European HPC processor.
In July 2021, Jean-Marc joined SiPearl, the company designing the European HPC microprocessor, as Chief Strategy Officer.
After five years of research as a mathematician developing new solvers for the Maxwell equations at Matra Defense (France) from 1990 to 1995, Jean-Marc Denis held several technical positions in the HPC industry between 1995 and 2004, from HPC pre-sales to Senior Solution Architect.
From mid-2004, Jean-Marc worked at the Bull SAS headquarters (France), where he started the company’s HPC activity. In less than 10 years, HPC revenue at Bull grew from nothing in 2004 to 200M EUR in 2015, making Bull the undisputed leader of the European HPC industry and the fourth largest in the world. From 2011 to the end of 2016, Jean-Marc led the worldwide business activity with the goal of consolidating the Atos/Bull position in Europe and making Atos/Bull a worldwide leader in extreme computing, with a footprint in the Middle East, Asia, Africa and South America.
From 2018 to 2020, Jean-Marc was head of Strategy and Plan at Atos/Bull, in charge of the global cross-business-unit strategy and of defining the three-year business plan. In 2016 and 2017, he was in charge of defining the strategy for the Big Data Division at Atos/Bull. From the beginning of 2020 to mid-2021, he was Chief of Staff of the Innovation and Strategy Division at Atos. In addition, since mid-2018, Jean-Marc has served as the elected Chair of the Board of the European Processor Initiative (EPI).
In parallel to his activities at Atos/Bull, from 2008 to 2015, Jean-Marc Denis taught “Supercomputer Architecture” at the Master 2 level at the University of Reims Champagne-Ardenne (URCA), France.
Education
- Master’s degree in Mathematics (U. of Tours, France, 1989)
- Master’s degree in Computer Science (U. of Toulouse, France, 1990)
- Engineering diploma in Computer Science (ENSEEIHT, Toulouse, France, 1990)
Anders Jensen
Executive Director of the European High Performance Computing Joint Undertaking (EuroHPC JU)
Title: EuroHPC JU at full throttle
Abstract: The European High Performance Computing Joint Undertaking (EuroHPC JU) is a joint initiative between the European Union, European countries, and private partners to develop a world-class supercomputing ecosystem in Europe. The EuroHPC JU allows the EU and the EuroHPC participating countries to coordinate their efforts and pool their resources with the objective of deploying world-class exascale supercomputers in Europe.
By making Europe a world leader in high performance computing (HPC), the EuroHPC JU seeks to provide computing capacity, improve cooperation in advanced scientific research, boost industrial competitiveness, and ensure European technological and digital autonomy.
In this talk, Anders Dam Jensen, Executive Director of the EuroHPC JU, will offer an update on the current state of play and his insights into the future of European HPC.
Anders Dam Jensen has had a lifelong interest in supercomputers, dating back to his time at university. He started his professional career with 10 years of engineering work developing computing hardware, firmware, and software for embedded systems. He pioneered IEEE 802.11 wireless network technology in the nineties while working for Symbol Technologies.
After a decade working with hardcore engineering and product development, Anders shifted into management as he joined Cargolux Airlines International as Director IT. Over the next decade, Anders was instrumental in the spin-off of the Cargolux IT department into CHAMP Cargosystems S.A. With Anders as CTO, CHAMP Cargosystems grew to become the largest supplier of IT services to the air cargo industry.
In 2011, Anders was selected as the Director ICTM for the North Atlantic Treaty Organization (NATO), responsible for all Information and IT services as well as one of the largest classified networks in Europe. In 2020, Anders was appointed as Executive Director for the European High Performance Computing Joint Undertaking, a joint initiative between the EU, European countries and private partners to develop a World Class Supercomputing Ecosystem in Europe with a proposed budget of 8 billion euro. Anders holds a Master of Science Degree as well as a Master of Business Administration degree from Technical University of Denmark. He is based in Luxembourg, married to Mette, and has two children.
Irene Qualters
Associate Laboratory Director for Simulation and Computation at Los Alamos National Laboratory
Title: The Enduring Role of HPC: Advancing Science and Engineering
Irene Qualters serves as the Associate Laboratory Director for Simulation and Computation at Los Alamos National Laboratory, a U.S. Department of Energy national laboratory. She previously served as a Senior Science Advisor in the Computing and Information Science and Engineering (CISE) Directorate of the National Science Foundation (NSF), where she had responsibility for developing NSF’s vision and portfolio of investments in high performance computing, and has played a leadership role in interagency, industry, and academic engagements to advance computing.
Prior to her NSF career, Irene had a distinguished 30-year career in industry, with a number of executive leadership positions in research and development in the technology sector. During her 20 years at Cray Research, she was a pioneer in the development of high performance parallel processing technologies to accelerate scientific discovery. Subsequently as Vice President, she led Information Systems for Merck Research Labs, focusing on software, data and computing capabilities to advance all phases of pharmaceutical R&D.
Hiroaki Kitano
President at The Systems Biology Institute, Tokyo, a Professor at Okinawa Institute of Science and Technology Graduate University, Okinawa, a President & CEO at Sony Computer Science Laboratories, Inc., Tokyo, a Representative Director and CEO, Sony AI Inc., Tokyo and an Executive Vice President at Sony Group Corporation, Tokyo
Title: Nobel Turing Challenge – Creating the Engine of Scientific Discovery
Abstract: One of the most exciting and disruptive research directions in AI is the development of AI systems that can make major scientific discoveries by themselves, with a high level of autonomy. In this talk, I propose the “Nobel Turing Challenge” as a grand challenge bridging AI and other scientific communities. The challenge calls for the development of AI systems that can make major scientific discoveries, some of which are worthy of the Nobel Prize, such that the Nobel Committee, and the rest of the scientific community, may not be able to distinguish whether a discovery was made by a human scientist or by AI (Kitano, H., AI Magazine, 37(1) 2016). This challenge is particularly significant in the biomedical domain, where the progress of systems biology (Kitano, H., Science, 295, 1662-1664, 2002; Kitano, H., Nature, 420, 206-210, 2002) has resulted in an overflow of data and knowledge far beyond human comprehension. After 20 years of my journey in systems biology research, I have concluded that the next major breakthrough in systems biology requires AI-driven scientific discovery. Initially, it will be introduced as AI-assisted science, but it will ultimately result in AI scientists with a high level of autonomy. This challenge poses a series of fundamental questions on the nature of scientific discovery, the limits of human cognition, the implications of individual paths toward major discoveries, the computational meaning of serendipity or scientific intuition, and many other issues that may bring AI research into the next stage.
Hiroaki Kitano is a President at The Systems Biology Institute, Tokyo, a Professor at Okinawa Institute of Science and Technology Graduate University, Okinawa, a President & CEO at Sony Computer Science Laboratories, Inc., Tokyo, a Representative Director and CEO, Sony AI Inc., Tokyo and an Executive Vice President at Sony Group Corporation, Tokyo.
He received a B.A. in physics from the International Christian University, Tokyo, and a Ph.D. in computer science from Kyoto University, Kyoto. Since 1988, he has been a visiting researcher at the Center for Machine Translation at Carnegie Mellon University, USA. His research career includes a Project Director at Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corporation, Tokyo, followed by a Project Director at Kitano Symbiotic Systems Project, ERATO-SORST, Japan Science and Technology Agency, Tokyo, a Group Director of Laboratory for Disease Systems Modeling at RIKEN Center for Integrative Medical Sciences, Kanagawa, a visiting professor of Keio University, Kanagawa, a visiting professor of the University of Tokyo, Tokyo, and so on.
Kitano is also an Editor-in-Chief of npj Systems Biology and Applications, and a Founding Trustee of The RoboCup Federation.
Kitano served and is currently serving as scientific advisor for numerous companies and research institutions internationally including ALSTOM, Mitsubishi Chemical Holdings, European Molecular Biology Laboratory (EMBL), Imperial College London, Univ. Manchester, and Swiss Systems-X Program.
Kitano received The Computers and Thought Award from the International Joint Conferences on Artificial Intelligence in 1993, the Prix Ars Electronica 2000, the Japan Design Culture Award 2001, the Good Design Award 2001, and Nature’s 2009 Japan Mid-career Award for Creative Mentoring in Science, as well as being an invited artist for the Biennale di Venezia 2000 and the Museum of Modern Art (MoMA) New York Workspheres exhibition in 2001. He was named a fellow of the Association for the Advancement of Artificial Intelligence in 2021. His research interests include computational biology, artificial intelligence, massively parallel computing, autonomous robots, systems biology, and open energy systems.
Roberto Car
Recipient of the 2020 ACM Gordon Bell Prize, Professor at Department of Chemistry, Princeton University
Title: Machine Learning Based Ab-initio Molecular Dynamics
Abstract: Computational cost severely limits the range of ab initio molecular dynamics simulations. Machine learning techniques are rapidly changing this state of affairs. Deep neural networks that learn the interatomic potential energy surface from ab-initio data make possible simulations with quantum-mechanical accuracy at the cost of empirical force fields. These approaches can model not only the atomistic dynamics but also the dielectric response properties measured in experiments. I will discuss, in particular, the deep potential method developed at Princeton. In combination with incremental learning techniques, this approach makes it possible to construct, with minimal learning cost, reactive potentials that are accurate over a vast range of thermodynamic conditions, such as the pressure and temperature regimes underlying the molecular and ionic phases of water. The methodology will be illustrated with applications to physical chemistry.
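As a toy illustration of the idea (deliberately much simpler than the Deep Potential architecture, which uses symmetry-preserving embedding networks), one can fit a small neural network to the energies of a dimer as a function of interatomic distance; the “ab initio” data below are generated from a Lennard-Jones stand-in rather than real quantum-mechanical calculations:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic training data: a Lennard-Jones-like energy curve for a dimer (reduced units).
r = rng.uniform(0.9, 2.5, size=(500, 1))  # interatomic distance
E = 4.0 * (r**-12 - r**-6).ravel()        # stand-in for ab-initio energies

# Small neural network learning E(r), the 1-D analogue of a learned potential energy surface.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(r, E)

r_test = np.array([[1.12]])  # near the Lennard-Jones minimum
print("predicted:", model.predict(r_test)[0], " reference:", 4.0 * (1.12**-12 - 1.12**-6))

A production scheme would also learn forces (gradients of the energy) for use in molecular dynamics and would employ descriptors invariant to rotation, translation, and permutation of identical atoms.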
Roberto Car is a theoretical condensed matter physicist and physical chemist. He is known for the ab-initio molecular dynamics method that he introduced with Michele Parrinello, and for electronic structure and simulation studies of disordered systems. Car was born in Trieste (Italy) and graduated from the Milan Politecnico (Technical University of Milan). After postdoctoral appointments at EPFL (Switzerland) and at the IBM T.J. Watson Research Center, he held physics professor positions at SISSA and at the University of Geneva (Switzerland).
Since 1999 he has been a professor of chemistry at Princeton University and a member of the Princeton Institute for the Science and Technology of Materials, where he also holds associated appointments in physics and in the Program in Applied and Computational Mathematics.
Car has been awarded numerous prizes, including the 1990 Europhysics Prize, the 1995 APS Rahman Prize, the 2009 Dirac Medal of the ICTP, the 2009 IEEE Fernbach Award, the 2010 Berni J. Alder CECAM Prize for Computational Physics, the 2012 Fermi Prize of the Italian Physical Society, the 2016 ACS Theoretical Chemistry Award, and the 2021 Franklin Medal for Chemistry. Car holds honorary degrees from Italian and Swiss universities. He has been a member of the National Academy of Sciences of the USA since 2016.