Speaker
Dr
Krzysztof Kurowski
(Poznan Supercomputing and Networking Center)
Description
With the constant growth of scientific data that has to be analysed and visualised in near real-time, in-situ visualisation offered as a user-facing service is attracting increasing attention. The basic idea behind an in-situ visualisation service is to perform data analysis and advanced visualisation during the execution of a simulation, so that users can react to, or even steer, running computations at any stage, for instance to check whether application parameters were set correctly or have to be modified on the fly due to instability. Such a scenario is common in many scientific areas, especially in CFD. Additionally, there is a strong need for different groups (e.g. scientists, engineers, government institutions) to use in-situ visualisation in order to improve collaboration based on a shared, interactive view of the generated results. However, providing such capabilities, where a high-speed and reliable network for heavy visualisation data transfer is available on demand, along with computational resources booked in advance (for both simulation and data analysis), is not trivial.
In this paper we propose a new approach to facilitate running in-situ visualisation in the cloud using the high-speed GEANT network in conjunction with the EGI e-infrastructure. The CFD application is managed on the EGI e-infrastructure using the QosCosGrid (QCG) middleware [1]. In a nutshell, QCG is an integrated system that offers advanced job and resource control features and provides a virtualised cross-infrastructure environment. By connecting many heterogeneous local HPC queuing systems, and being integrated via a portal layer with OpenStack, it can be considered a highly efficient management platform for a variety of demanding HTC/HPC applications, including parameter sweeps, workflows, hybrid MPI-OpenMP codes, distributed multi-scale simulations and, recently, large-scale in-situ experiments [2,3].
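As an illustration only, the sketch below shows how a CFD run might be handed to QCG from a user's machine. The #QCG directive names and the qcg-sub invocation follow QCG-SimpleClient conventions but are assumptions here, not a verbatim copy of the middleware's interface, and the OpenFOAM solver line is just an example workload.

```python
# Illustrative sketch: prepare and submit a CFD job through the QCG
# command-line client. Requires the QCG client tools on PATH; directive
# names below are assumptions modelled on QCG-SimpleClient conventions.

import subprocess
import textwrap


def submit_cfd_job(grant: str, walltime: str, nodes: str) -> str:
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #QCG note=in-situ CFD demo
        #QCG walltime={walltime}
        #QCG nodes={nodes}
        #QCG grant={grant}

        module load openfoam   # example solver environment
        mpirun simpleFoam -parallel
        """)
    with open("cfd_job.qcg", "w") as handle:
        handle.write(script)

    # qcg-sub returns the job identifier used later for monitoring or cancelling.
    result = subprocess.run(["qcg-sub", "cfd_job.qcg"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == "__main__":
    print(submit_cfd_job(grant="plgcfd", walltime="PT2H", nodes="2:24"))
```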
In our approach the following actions are required to set up all the necessary services from the EGI and GEANT e-infrastructures for in-situ visualisation. First, the QCG middleware is responsible for remote submission and control of CFD simulations, which in turn, prior to their execution, create a virtual machine with rendering services and an appropriate visualisation environment. If required, it also establishes a network connection of the requested quality between all components involved in the scenario, using the GEANT “Bandwidth on Demand” service to ensure sufficient bandwidth and efficient communication between remote and local sites. Then, the QCG-Coordinator service is used to synchronise execution and to exchange parameters between the remote services provided by EGI and GEANT. In our case, the CFD code has to be recompiled with an external in-situ visualisation library to enable remote communication with the QCG-Coordinator.

For data analysis and visualisation we use the VAPOR tool [4], a visual data discovery environment tailored towards the specialised needs of the geosciences CFD community. VAPOR is a desktop solution capable of handling terascale data sets, providing advanced interactive 3D visualisation together with data analysis. With functionality proven in NWP (Numerical Weather Prediction) environments, this tool is run in the cloud i) to ensure that the best underlying hardware for data analysis and visualisation is used, ii) to avoid the need to install the required software on the machines of the users involved in the collaboration scenario, and iii) to minimise network traffic in the collaboration scenario. In our approach, users interact with VAPOR and share web sessions by means of an ordinary web browser. This is possible thanks to various improvements to our rendering service called Vitrall, which proxies all user interactions and returns a real-time display of the visualisation produced by VAPOR. Currently, Vitrall supports HTML5 capabilities such as WebSockets and built-in video streaming [5]. Additionally, Vitrall provides useful collaboration features and enables many users from potentially distributed locations to share the same in-situ visualisation session. The complete advanced visualisation service has been tested in different configurations on the high-speed and reliable network connections provided by GEANT and Future Internet experiments [6, 7]. Consequently, it offers user-friendly and interactive visual interfaces to end users, hiding the complexity of the underlying cloud, data processing, communication and on-demand services.
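To make the coupling pattern concrete, the following minimal Python sketch mimics the interaction between an instrumented simulation loop and a coordinator service: snapshots are published every few steps and steering updates are polled and applied. The CoordinatorClient class, its endpoint URL and the parameter names are hypothetical stand-ins for illustration, not the actual QCG-Coordinator or in-situ library API (which in our setup is a compiled library linked into the CFD code).

```python
# Minimal sketch of the in-situ coupling pattern described above, assuming a
# hypothetical CoordinatorClient wrapper; the real QCG-Coordinator is not
# exposed through this exact Python API.

import time


class CoordinatorClient:
    """Stand-in for a QCG-Coordinator connection (hypothetical API)."""

    def __init__(self, endpoint: str, session_id: str):
        self.endpoint = endpoint
        self.session_id = session_id

    def publish_snapshot(self, step: int, diagnostics: dict) -> None:
        # In the real setup the instrumented CFD code hands data to the
        # in-situ library, which forwards it to the rendering VM.
        print(f"[{self.session_id}] step {step}: published {diagnostics}")

    def poll_steering(self) -> dict:
        # The real coordinator would return parameter updates issued by users
        # through the shared Vitrall/VAPOR web session; here nothing changes.
        return {}


def run_simulation(steps: int, snapshot_every: int) -> None:
    coordinator = CoordinatorClient("https://coordinator.example.org", "cfd-demo")
    params = {"inlet_velocity": 1.0, "viscosity": 1e-3}

    for step in range(steps):
        # ... advance the CFD solver by one time step here ...
        time.sleep(0.01)  # placeholder for the real compute kernel

        if step % snapshot_every == 0:
            coordinator.publish_snapshot(step, {"max_pressure": 101325.0})
            params.update(coordinator.poll_steering())  # apply steering, if any


if __name__ == "__main__":
    run_simulation(steps=100, snapshot_every=10)
```

In the real deployment the snapshot data travels over the on-demand GEANT link to the rendering virtual machine, where VAPOR performs the analysis and Vitrall streams the interactive view to the collaborating users' browsers.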
[1] Bosak, B., Komasa, J., Kopta, P., Kurowski, K., Mamoński, M. & Piontek, T. (2012). New capabilities in QosCosGrid middleware for advanced job management, advance reservation and co-allocation of computing resources – quantum chemistry application use case. In Building a National Distributed e-Infrastructure – PL-Grid, Springer Berlin Heidelberg, pp. 40-55.
[2] Borgdorff, J., Mamonski, M., Bosak, B., Kurowski, K., Ben Belgacem, M., Chopard, B., Groen, D., Coveney, P.V. & Hoekstra, A. (2014). Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment. Journal of Computational Science, 5(5): 719-731. http://dx.doi.org/10.1016/j.jocs.2014.04.004
[3] QosCosGrid middleware and tools - www.qoscosgrid.org
[4] VAPOR Visualisation and Analysis Platform - www.vapor.ucar.edu
[5] Śniegowski, P., Błazewicz, M., Grzelachowski, G., Kuczyński, T., Kurowski, K. & Ludwiczak, B. (2012). Vitrall: Web-Based Distributed Visualization System for Creation of Collaborative Working Environments. Lecture Notes in Computer Science, 7203, pp. 337-346. DOI: 10.1007/978-3-642-31464-3_34
[6] Vitrall in TEFIS project: http://tv.pionier.net.pl/Default.aspx?id=1831
[7] Vitrall in CoolEmAll project: http://youtu.be/PyBF8a0ej3M
Primary author
Dr
Krzysztof Kurowski
(Poznan Supercomputing and Networking Center)
Co-authors
Mr
Bartosz Bosak
(Poznan Supercomputing and Networking Center)
Mr
Michał Kulczewski
(Poznan Supercomputing and Networking Center)
Mr
Piotr Sniegowski
(Poznan Supercomputing and Networking Center)
Mr
Tomasz Kuczynski
(Poznan Supercomputing and Networking Center)
Mr
Tomasz Piontek
(Poznan Supercomputing and Networking Center)