EGI Conference 2022

Prague, Czech Republic

Description

 

EGI2022: Together for Tomorrow. Innovative Computing for Research

The annual EGI conference is widely recognised as the venue that brings together like-minded professionals from the world of science and scientific computing. During EGI2022, we will look towards the future - how will innovative computing services and solutions help to build a better research ecosystem? How can we work towards innovation? And what opportunities are there for new collaborations?

Together for Tomorrow - join us on a journey of turning ideas into reality, recognising the fact that big ideas require collaboration.

 

Thank you!

    • 11:00 17:00
      InterTwin Project Kick Off meeting (invitation only) RUBY I & II (Vienna Andel Prague)

      RUBY I & II

      Vienna Andel Prague

      Conveners: Andrea Manzi (EGI.eu), Gwen Franck, Malgorzata Krakowian (EGI.eu)
    • 11:00 18:00
      Training: FitSM Foundation - a pragmatic standard for IT Service Management: Part 1 Ruby III

      Ruby III

      IT Service Management is a discipline that helps provide services with a focus on customer needs and in a professional manner. It is widely used in the commercial and public sectors to manage IT services of all types, but current solutions are very heavyweight with high barriers to entry.

      FitSM is an open, lightweight standard for professionally managing services. It brings order and traceability to a complex area and provides simple, practical support in getting started with ITSM. FitSM training and certification provide crucial help in delivering services and improving their management. It provides a common conceptual and process model, sets out straightforward and realistic requirements and links them to supporting materials.

      In addition, participants who successfully pass the exam (20 multiple-choice questions, 13 correct answers required) have the opportunity to receive a formal certification backed by the certification authority ICO-Cert.

      The training is co-located with the EGI Conference taking place on 20-21 Sep 2022, ensuring no overlap with the main programme.

      This Foundation level course provides a framework-agnostic introduction to the basic IT service management concepts and terms, outlines the purpose and structure of the FitSM standards and their relationship to other standards, and details the formal requirements defined in FitSM-1.

      Costs/Fees

      The all-inclusive cost per person is €540 and covers one day of training by an accredited trainer, the exam/certification fee from APMG (value €80), and lunch and coffee breaks.

      Duration
      8 hours (plus 30-minute exam)

      Target audience
      - All individuals involved in the provisioning of (federated) IT services
      - Candidates who wish to progress to the advanced level of the qualification and certification scheme

      Entry requirements
      - None

      Contents
      - Basic IT service management concepts and terms (based on FitSM-0)
      - Purpose and structure of FitSM standards and their relationship to other standards
      - Full process framework underlying FitSM
      - Requirements defined in FitSM-1

      Exam
      - 30 minutes, at the end of the training
      - Closed book, i.e. no aids are allowed
      - 20 multiple choice questions (four possible answers for each question, one correct answer per question)
      - At least 65% of correct answers (13 of 20) are required to pass the examination

      Training Outputs
      - Exam scores
      - Certificates for those passing the exam with a unique certificate license number

      This training event is jointly organized in collaboration with the EOSC Future project.

      More information: https://www.egi.eu/service/fitsm-training/

      Convener: Sy Holsinger (OPERAS Aisbl)
      • 11:00
        Intro to ITSM and FitSM Standard incl. General Aspects and Requirements 1h 20m
      • 12:20
        Brief Break 10m
      • 12:30
        Service Management Processes: Service Portfolio Management (SPM), Service Level Management (SLM) 30m
      • 13:00
        Lunch 1h
      • 14:00
        Service Management Processes: Service Reporting (SRM), Service Availability and Continuity (SACM), Capacity (CAPM), Information Security (ISM), Customer (CRM) and Supplier Relationship Management (SUPPM) 1h 20m
      • 15:20
        Coffee Break 20m
      • 15:40
        Service Management Processes: Incident and Service Request (ISRM), Problem (PM), Configuration (CONFM), Change (CHM), Release and Deployment Management (RDM) 1h 20m
      • 17:00
        Coffee Break 20m
      • 17:20
        Continual Service Improvement (CSI), Benefits/Risks/Challenges of implementing ITSM, and Related standards/frameworks 40m
    • 13:00 14:00
      Lunch 1h
    • 18:00 20:00
      InterTwin Welcome Reception (invitation only) 2h Bar Oscar's (Vienna Andel Prague)

      Bar Oscar's

      Vienna Andel Prague

    • 08:25 09:00
      Registration 35m
    • 09:00 11:00
      InterTwin Project Kick Off meeting (invitation only): Continued RUBY I & II (Vienna Andel Prague)

      RUBY I & II

      Vienna Andel Prague

      Conveners: Andrea Manzi (EGI.eu), Gwen Franck, Malgorzata Krakowian (EGI.eu)
    • 09:00 11:00
      Platform for distributed, big computing - DIRAC User Group meeting Topaz (Vienna Andel Prague)

      Topaz

      Vienna Andel Prague

      The DIRAC interware is a complete software solution for communities of users that need to exploit distributed, heterogeneous compute resources for big data processing.
      DIRAC forms a layer between the user community and the compute resources to allow optimized, transparent and reliable usage. DIRAC can connect to various types of computing (grids, clouds, HPC and batch systems), storage and data catalogue resources. DIRAC is used by several of the most compute-intensive research infrastructures. The EGI installation of DIRAC is used by diverse communities, such as WeNMR (structural biology), VIP (medical imaging) and the Pierre Auger Observatory (astrophysics).
      This session will provide a forum for information sharing about new DIRAC features and user community experiences.

      Conveners: Andrei Tsaregorodtsev (CNRS), Yin Chen (EGI.eu)
    • 09:00 11:00
      Training: FitSM Foundation - a pragmatic standard for IT Service Management: Part 2 Ruby III (Vienna Andel Prague)

      Ruby III

      Vienna Andel Prague

      IT Service Management is a discipline that helps provide services with a focus on customer needs and in a professional manner. It is widely used in the commercial and public sectors to manage IT services of all types, but current solutions are very heavyweight with high barriers to entry.

      FitSM is an open, lightweight standard for professionally managing services. It brings order and traceability to a complex area and provides simple, practical support in getting started with ITSM. FitSM training and certification provide crucial help in delivering services and improving their management. It provides a common conceptual and process model, sets out straightforward and realistic requirements and links them to supporting materials.

      In addition, participants who successfully pass the exam (20 multiple-choice questions, 13 correct answers required) have the opportunity to receive a formal certification backed by the certification authority ICO-Cert.

      The training is co-located with the EGI Conference taking place on 20-21 Sep 2022, ensuring no overlap with the main programme.

      This Foundation level course provides a framework-agnostic introduction to the basic IT service management concepts and terms, outlines the purpose and structure of the FitSM standards and their relationship to other standards, and details the formal requirements defined in FitSM-1.

      Costs/Fees

      The all-inclusive cost per person is €540 and covers one day of training by an accredited trainer, the exam/certification fee from APMG (value €80), and lunch and coffee breaks.

      Duration
      8 hours (plus 30-minute exam)

      Target audience
      - All individuals involved in the provisioning of (federated) IT services
      - Candidates who wish to progress to the advanced level of the qualification and certification scheme

      Entry requirements
      - None

      Contents
      - Basic IT service management concepts and terms (based on FitSM-0)
      - Purpose and structure of FitSM standards and their relationship to other standards
      - Full process framework underlying FitSM
      - Requirements defined in FitSM-1

      Exam
      - 30 minutes, at the end of the training
      - Closed book, i.e. no aids are allowed
      - 20 multiple choice questions (four possible answers for each question, one correct answer per question)
      - At least 65% of correct answers (13 of 20) are required to pass the examination

      Training Outputs
      - Exam scores
      - Certificates for those passing the exam with a unique certificate license number

      This training event is jointly organized in collaboration with the EOSC Future project.

      More information: https://www.egi.eu/service/fitsm-training/

      Convener: Sy Holsinger (OPERAS Aisbl)
      • 09:00
        Exam Prep / Training Wrap-up / Discussion 50m Ruby III

        Ruby III

      • 09:50
        Brief Break 10m Ruby III (Vienna Andel Prague)

        Ruby III

        Vienna Andel Prague

      • 10:00
        Exam setup / log-in 15m Ruby III

        Ruby III

        Vienna Andel Prague

      • 10:15
        Exam 45m Ruby III

        Ruby III

    • 09:00 11:00
      Training: Infrastructure as Code to deploy scientific applications in EOSC Opal (Vienna Andel Prague)

      Opal

      Vienna Andel Prague

      Infrastructure as code is the process of managing and provisioning computing infrastructures through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. This is the mantra of IM and EC3 tools, two services that allow users to automate the deployment and configuration process of virtual infrastructures on top of cloud resources. While IM facilitates the deployment of a cluster with a fixed number of nodes, EC3 provides the cluster with elasticity capabilities, where the number of nodes is automatically adapted to the workload of the cluster.
      In this training session, we will show both the IM Dashboard[1] and the EC3 CLI[2] tools in action.
      With these tools (see the illustrative sketch after the references below):
      * You can automate your entire provisioning and deployment process, which makes it much faster and more reliable than any manual process.
      * You can use the same definition templates to provision the very same virtual infrastructure in different Cloud providers.
      * You can use the OASIS TOSCA Simple Profile in YAML standard to describe your cloud topologies.
      * You can store those source files in version control, which means the entire history of your infrastructure is now captured in the commit log, which you can use to debug problems, and if necessary, roll back to older versions.

      [1] IM Dashboard: https://appsgrycap.i3m.upv.es:31443/im-dashboard/
      [2] EC3 CLI: https://github.com/grycap/ec3
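
      A minimal sketch, in Python, of how such an infrastructure-as-code workflow could look: a small TOSCA-style topology is built as a dictionary and submitted to an Infrastructure Manager REST endpoint. The endpoint URL, header names, authorisation string, node type and sizes shown here are assumptions for illustration only; the IM documentation defines the exact API of a concrete deployment.

      # Illustrative sketch only: describe a single-node topology in a TOSCA-like
      # YAML document and submit it to an Infrastructure Manager (IM) REST endpoint.
      # Endpoint path, headers and the authorisation string are assumptions.
      import requests
      import yaml

      topology = {
          "tosca_definitions_version": "tosca_simple_yaml_1_0",
          "topology_template": {
              "node_templates": {
                  "worker": {
                      "type": "tosca.nodes.Compute",
                      "capabilities": {
                          "host": {"properties": {"num_cpus": 2, "mem_size": "4 GB"}},
                          "os": {"properties": {"type": "linux"}},
                      },
                  }
              }
          },
      }

      IM_URL = "https://example.org/im"          # hypothetical IM endpoint
      AUTH = "id = ost; type = OpenStack; ..."   # placeholder credential line(s)

      resp = requests.post(
          f"{IM_URL}/infrastructures",
          data=yaml.safe_dump(topology),
          headers={"Content-Type": "text/yaml", "Authorization": AUTH},
      )
      resp.raise_for_status()
      print("Infrastructure created:", resp.text)   # IM returns the infrastructure reference

      Because the same template can be resubmitted to different cloud back-ends (only the credential line changes), the topology file can live in version control alongside the rest of the project.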

      Conveners: Amanda Calatrava (UPVLC), Miguel Caballer (UPVLC)
    • 09:00 11:00
      Training: The Virtual Imaging Platform: tutorial on the use and delivery of scientific applications as a service Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      Convener: Sorina POP (CNRS)
      • 09:00
        The Virtual Imaging Platform: tutorial on the use and delivery of scientific applications as a service 2h

        The Virtual Imaging Platform (VIP) [https://vip.creatis.insa-lyon.fr/] is a web portal for medical simulation and image data analysis. It leverages resources available in the EGI biomed Virtual Organization to offer an open service to academic researchers worldwide. As of May 2022, VIP counts more than 1,400 registered users and about 20 publicly available applications.

        After a quick overview of the platform, the tutorial will cover two main aspects: (i) the use of VIP for executing one of the scientific applications already available in the platform and (ii) importing a new application into VIP.

        The first part will allow participants to get familiar with the portal, create a standard VIP account for themselves, launch an execution, monitor its status and retrieve the results. All of this is done through the web browser and requires no prerequisites from the participants.

        The second part will allow participants to obtain an administrator VIP account on a demo instance and import a new application into the VIP demo instance themselves. This requires editing a Boutiques [https://github.com/boutiques] JSON descriptor that points to the Docker image of the application to be imported. A complete example will be available to the participants, making it possible to complete the tutorial without any prerequisites.
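
        As a rough idea of what such a descriptor contains, the Python sketch below writes a minimal Boutiques-style JSON file. The tool name, command line, Docker image and field values are hypothetical; the field names follow the general Boutiques schema and the result should be validated (for example with the Boutiques "bosh validate" command) before importing it into VIP.

        # Minimal, illustrative Boutiques-style descriptor generated from Python.
        # All names and values are hypothetical examples.
        import json

        descriptor = {
            "name": "demo-image-tool",                 # hypothetical application name
            "description": "Example analysis tool wrapped for import into VIP",
            "tool-version": "1.0.0",
            "schema-version": "0.5",
            "command-line": "run_tool [INPUT_FILE] results/",
            "container-image": {"type": "docker",
                                "image": "example/demo-image-tool:1.0.0"},
            "inputs": [
                {"id": "input_file", "name": "Input image", "type": "File",
                 "value-key": "[INPUT_FILE]"}
            ],
            "output-files": [
                {"id": "results", "name": "Result directory",
                 "path-template": "results"}
            ],
        }

        with open("demo-image-tool.json", "w") as handle:
            json.dump(descriptor, handle, indent=2)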

        Speaker: Sorina POP (CNRS)
    • 10:00 11:00
      EGI101 - EGI Introduction to newbies Crystal Room (Vienna Andel Prague)

      Crystal Room

      Vienna Andel Prague

      Are you new to EGI? And maybe a bit confused about what EGI is, what we do, how we are structured and how we can collaborate? Then this is the session for you!
      The EGI Federation is an international e-Infrastructure set up to provide advanced computing and data analytics services for research and innovation. EGI federates compute resources, support teams and various online services from over 300 international institutes, and makes those available to researchers, innovators and educators. We deliver, we enable, we innovate. This session will give a basic introduction to the EGI Federation, covering all the fundamental topics that you need to know before deep-diving into the conference.
      The session will include time for Q&A as well.

      Convener: Gergely Sipos (EGI.eu)
    • 11:00 12:00
      Coffee Break and Registration 1h
    • 12:00 13:00
      Opening Plenary Plenary Rooms 1st floor (Vienna Andel Prague)

      Plenary Rooms 1st floor

      Vienna Andel Prague

      Welcome by Arjen van Rijn EGI Council Chair (15')
      Local host welcome and short presentation, CZ e-Infrastructure perspectives and role in the EGI Federation (20')
      EGI keynote by Tiziana Ferrari, EGI Foundation Director (20')

      https://www.youtube.com/watch?v=UYPMeMnnSg8

      Conveners: Arjen van Rijn (NIKHEF), Ludek Matyska (CESNET), Tiziana Ferrari (EGI.eu)
      • 12:00
        Welcome by Arjen van Rijn EGI Council Chair (15') 15m
        Speaker: Arjen van Rijn (NIKHEF)
      • 12:15
        Welcome by Mr. Lukáš Levák, Director of the Department for Research and Development, Czech Ministry of Education, Youth and Sports 5m
      • 12:20
        e-Infra CZ: national plans for EGI and EOSC 20m

        Ludek Matyska will present how e-INFRA CZ, the Czech national e-infrastructure, plans its future in the context of the EOSC implementation in Czechia. We will show how the long involvement in the EGI Federation helps define this role, and the implications the EOSC implementation could have for e-INFRA CZ and its continued existence within the EGI ecosystem.

        Speaker: Ludek Matyska (CESNET)
      • 12:40
        The EGI Federation and its support to open science 20m

        How did research communities change the way the EGI Federation is operating over the last 12 years? Who are the scientific communities that EGI supports and what is the contribution to the advancement of scientific computing EGI is making?

        You will hear about all of this from Tiziana Ferrari, Director of the EGI Foundation.

        Speaker: Tiziana Ferrari (EGI.eu)
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:00
      Plenary: Welcome to EGI's new members ACOnet (Austria) and University of Vilnius (Lithuania)

      Two new NGIs joined the EGI Council in 2022: ACOnet and the University of Vilnius. In this session we will meet the Council representatives of our two new participants. We will hear about their national initiatives in support of scientific computing and open science, and discuss how the EGI Federation can leverage such initiatives to improve its support to international user communities.

      ACOnet represents Austrian organisations that contribute to facilitating access to general and specialised ICT resources at pan-European scale, and to providing high-quality research data (e.g. earth observation and climate data sets) according to the FAIR principles, in order to facilitate interdisciplinary research and education.

      https://www.youtube.com/watch?v=UYPMeMnnSg8

      Vilnius University is the reference provider for the Lithuanian academic community, with HPC resources amounting to 0.3 PFlops that were procured and set up by the end of 2020 across three different university institutes: the mathematics, informatics and physics departments. Vilnius University also participates in EuroCC, and Lithuania is a funding organization of EuroHPC.

      Convener: Tiziana Ferrari (EGI.eu)
      • 14:00
        Welcome address 10m
        Speaker: Tiziana Ferrari (EGI.eu)
      • 14:10
        ACOnet contribution to scientific computing 25m

        the "Association for the Promotion of an Austrian National Research and Education Network (NREN)" – was founded in 1986 as a consortium of Austrian universities. Today, all Austrian public universities, the majority of universities of applied sciences, and some federal universities participate in the ACONET association. It does act as a strategic user advisory board for ACOnet – the Austrian Academic Computer Network – a high-performance network infrastructure and identity federation, using 10 resp. 100 Gbps Ethernet over DWDM technology. Currently, an Austrian Quantum Fiber Network is set up; its connection to a German network is planned in the short term. ACONET plans to provide access to HPC capacities and ICT capabilities of de-centralised IT departments.

        Speaker: Matthias Schramm (TU Wien)
      • 14:35
        Vilnius University contributions to scientific computing 25m

        The Lithuanian Open Access National Computational Centre is located at Vilnius University across two faculties: the Faculty of Physics (Saulėtekis) and the Faculty of Mathematics and Informatics. The supercomputer "VU HPC" has over 0.5 PFlops of installed capacity, and its Saulėtekis site has achieved 0.3 PFlops of real HPLinpack performance. The most important research communities come from universities, research institutes and the public sector in Lithuania, and occasionally from business. Federating research data and computing facilities at the European level through EGI could benefit small and medium-sized businesses and the public sector. For Lithuania, the main reason to join EGI was to bring local HPC infrastructures to the EU level and to provide HPC knowledge and international experience locally. The EGI Federation can help user communities from various Lithuanian sectors carry out research by providing higher-level HPC access and by gathering and sharing HPC-related information.

        Speaker: Mindaugas Mačernis (University of Vilnius)
    • 15:00 15:45
      Break - coffee and changing of rooms 45m
    • 15:45 17:15
      Authentication and Authorisation in federated environments Opal (Vienna Andel Prague)

      Opal

      Vienna Andel Prague

      The EGI Community is using and offering the Check-in service to enable authenticated access for users across providers, resources and countries. EGI Check-in already serves more than 100 connected services and 4,000 identity providers, facilitating 2,500 logins per month from 3,200 registered users.
      This session will provide an overview of the new Check-in features and a forum to share user and provider experiences.

      Convener: Valeria Ardizzone (EGI.eu)
    • 15:45 18:45
      Disruptive technologies accelerating data-driven policymaking in the public sector - jointly organised by PolicyCloud, DECIDO, AI4PublicPolicy, IntelComp, DUET Ruby I, II, III (Vienna Andel Prague)

      Ruby I, II, III

      Vienna Andel Prague

      Disruptive technologies accelerating data-driven policymaking in the public sector

      The convergence of Cloud, Big Data and AI has already resulted in major transformation across Government services, yet the process of policy making itself is often left behind. Digital technologies have changed the world. Today people expect faster, seamless, on-demand services from their providers, and the Government is no exception. For effective urban operations which make life easier for residents, workers and visitors, Public Sector decision making needs to become more agile, breaking down data silos to combine day-to-day tactical decisions with longer term policies and strategies. Disruptive technologies such as Digital Twins, Artificial Intelligence (AI) and High Performance Computing (HPC) unlock new opportunities for sustainable decision making through visualisations, simulations and predictions that enhance transparency, increase public support and involvement, and optimise resources.

      To support this transformation, the Policy Cloud, Decido, AI4PublicPolicy, DUET and Intelcomp pan-European projects and initiatives, all dedicated to using the cloud for data-driven policy, have joined forces in the Data Driven Policy Cluster. Together they explore major challenges, trends and opportunities to improve public sector decision making that will deliver healthier, happier places to live and work.

      During the EGI 2022 workshop the cluster will demonstrate technologies developed to advance decision making in the public sector. The cluster will engage the EGI2022 attendees in discussion on the state of the art of these technologies and their adoptability.

      This session aims to raise awareness of the cluster of projects and the disruptive technologies they develop for the public sector. In addition, the session will foster collaboration between EGI researchers and public authorities on decision-making with the use of research data and advanced tools for the benefit of society.

      The Data Driven Policy Cluster showcases the joint network of:

      AI4PublicPolicy: the AI for Public Policy (AI4PP) project is a joint effort of policy makers and Cloud/AI experts to unveil AI's potential for automated, transparent and citizen-centric development of public policies. The project will deliver, validate and promote the AI4PublicPolicy Platform, offering innovative policy management based on unique AI technologies. The AI4PublicPolicy Virtualized Policy Management Environment (VPME), integrated with EOSC, facilitates access to the Cloud and HPC resources required to enable the project's AI tools and a wider use of the project's developments.
      DECIDO: the eviDEnce and Cloud for more InformeD and effective pOlicies (DECIDO) project aims to boost the use of EOSC by public authorities, enabling innovation in the policy-making sector and allowing cross-support and cross-collaboration using secure compute- and data-intensive services. DECIDO involves citizens and local communities through co-creation activities for better targeted policies.
      DUET: the Digital Urban European Twins (DUET) project is an EU initiative which leverages the advanced capabilities of cloud, sensor data and analytics in Digital Twins to develop more democratic and effective public sector decision-making. DUET Digital Twins provide virtual city replicas which simplify the understanding of the complex interrelations between traffic, air quality, noise and other urban factors. Powerful analytics predict the impacts of potential changes to support better evidence-based operational decisions and long-term policy choices.
      IntelComp: develops a Competitive Intelligence Cloud/HPC Platform for AI-based Science, Technology and Innovation (STI) policy-making. Multi-disciplinary teams will co-develop analytics services, Natural Language Processing pipelines and AI workflows, exploiting EOSC open data and resources, HPC environments and federated operations at the EU, national and regional level. In a cooperative environment, different actors visualise, interact with and analyse information. Through co-creation, IntelComp will adopt a living-labs approach, engaging public policy makers, academia, industry, SMEs, local actors and citizens to explore, experiment with and evaluate STI policies. IntelComp targets domains aligned with the European Agenda and the Horizon Europe Missions: AI, Climate Change and Health.
      PolicyCloud: exploits the potential of digitisation, big data and cloud to improve the modelling, creation and implementation of policies. By delivering a unique, integrated environment of datasets, data management and analytic tools, it addresses the full lifecycle of policy management in four thematic areas (radicalisation, food value chain, city environment, city services), leveraging the data management capabilities of the EOSC initiative. The project empowers citizens to contribute to data and policies related to their everyday life. The onboarding of these solutions in the EOSC Portal offers a great opportunity to reach a wide audience.

      Convener: Ilaria Fava (EGI.eu)
      • 15:45
        Session: Disruptive technologies accelerating data-driven policymaking in the public sector 2h 30m


        https://docs.google.com/document/d/1yLkTPN91TIIaXxFK8N7xiAgrV6Rno_hN5qnCn865c9U/edit#

    • 15:45 18:15
      Integration of health initiatives with tools and services under the European Open Science Cloud (LETHE, FEMaLe and HealthyCloud) Quartz

      Quartz

      The aim of the workshop is to showcase the use of EOSC tools and services as an instrument for increasing the effectiveness of health initiatives. Health initiatives will act as intermediaries between the public and private organisations managing care services, the scientific world and the European Cloud Infrastructure (ECI), collaborating directly with EOSC and drawing storage capacity and processing power from the EGI infrastructure. The IT environments of initiatives such as LETHE and HealthyCloud will guide health institutions in reusing the wealth of services and data available on EOSC and, at the same time, enable them to publish new open data sets that can be moved, shared and reused seamlessly across borders, institutions and research disciplines, helping health and research institutions grasp the full potential of EOSC services and data.

      Schedule and presenters:

      15:45 Welcome and overview of EOSC services and their potential for use in health initiatives
      Ville Tenhunen, EGI

      16:00 Concrete case: integration of LETHE project with EOSC
      Presenter: Francesco Mureddu

      16:30 Concrete case: integration of HealthyCloud project with EOSC
      Presenter: Juan Gonzalez-Garcia, IACS

      17:00 Prospective case: potential for the use of EOSC in the FEMaLe project
      Presenters: Dmitrijs Bļizņuks, Riga Technical University and Ulrik Bak Kirk, Aarhus Universitet

      17:30 Structured question & discussion
      Moderator: Francesco Mureddu

      18:00 Wrap up

      Introduction to LETHE (λήθη) – A personalized prediction and intervention model for early detection and reduction of risk factors causing dementia, based on AI and distributed Machine Learning
      Dementia is the most severe expression of cognitive impairment and the main cause of disability in elderly people, currently affecting nearly 50 million individuals worldwide. LETHE is a Horizon 2020 project designed to prevent cognitive decline in an ageing population at an early time point through a multi-domain lifestyle intervention built on a person-centred digital solution. In LETHE, a broad approach to the prevention of dementia is built at the intersection of the clinical and technological disciplines. In that regard, LETHE is developing a data-driven risk factor prediction model for older individuals at risk of cognitive decline, novel digital biomarkers, and a digitally enabled intervention based on the evolution of the FINGER study. FINGER is a 2-year multi-centre randomised controlled intervention trial carried out in Finland (coordinated by the Finnish Institute for Health and Welfare, Helsinki). The project builds on existing clinical observation and intervention data from the 11-year follow-up of the FINGER study and re-uses validated sensing and interaction technologies from former EU projects.

      Introduction to HealthyCloud - Health Research & Innovation Cloud
      The need for advances in health and biomedical sciences requires that health research is performed in a timely and efficient manner and is oriented towards high-quality results. To meet this need, health data must be geared towards access, sharing and secondary use in support of translational, clinical and epidemiological population-level research. In other words, to maximise the impact of health research, it is necessary to adopt best practices on how to efficiently manage health data. In that regard, the objective of HealthyCloud is to generate a set of guidelines, recommendations and specifications that will enable distributed health research across Europe in the form of a ready-to-implement roadmap. This roadmap, together with the feedback gathered from a broad range of stakeholders, will form the basis of the final HealthyCloud Strategic Agenda for the European Health Research and Innovation Cloud (HRIC).

      Introduction to FEMaLe – Finding Endometriosis Using Machine Learning
      Healthcare tools for predicting and preventing diseases as well as personalising treatment and patient management offer great clinical benefits and cost reduction. The EU-funded FEMaLe project is working on a machine-learning multi-omics platform that can analyse omics data sets and feed the information into a personalised predictive model. The main focus of the project is to improve intervention for individuals with endometriosis, a condition where tissue normally lining the uterus grows outside the uterus. A combination of tools such as a mobile application and augmented reality surgery software will be developed, facilitating improved disease management and the delivery of precision medicine. The FEMaLe project will build bridges across disciplines and sectors to translate genetic and epidemiological knowledge into clinical tools that support decision-making in terms of diagnosis and care aimed at both general practice and highly specialised endometriosis clinics – all via machine learning and artificial intelligence.

      Convener: Ville Tenhunen
    • 15:45 17:00
      Launching the EGI Digital Innovation Hub Topaz (Vienna Andel Prague)

      Topaz

      Vienna Andel Prague

      EGI was created to support research communities in the use of advanced computing and data services. The EGI DIH has now been created to share this experience and knowledge and to support SMEs in their digital transformation.
      The EGI DIH is a virtual space where companies and technical service providers meet to test solutions before investing. It offers a range of advanced computing services to help companies improve their productivity through digitalisation.

      The EGI DIH acts as a one-stop shop providing technical assets, knowledge, expertise and support on business, market and finance, leading to sustainable innovation.
      The session will introduce the EGI DIH and the services it offers, and will provide an opportunity to engage with EGI for business purposes.

      Speakers:
      Carlos Fernandez
      Ladislav Hluchy
      Smitesh Jain

      Convener: Elisa Cauhe (EGI.eu)
      • 15:45
        Coffee Break 30m
    • 17:30 19:00
      Lightning Talks: Security, Trust & Identity Opal (Vienna Andel Prague)

      Opal

      Vienna Andel Prague

      Convener: Matthew Viljoen (EGI.eu)
      • 17:30
        Secret management service for EGI Infrastructure 8m Topaz

        Topaz

        Applications in the EGI Infrastructure may need different secrets (credentials, tokens, passwords, etc.) during deployment and operation. The secrets are often stored as clear text in configuration files or code repositories, which exposes security risks. Furthermore, secrets stored in files are static and difficult to change or rotate. The secret management service for the EGI Infrastructure was developed to solve these issues.

        The secret management service is designed as follows:

        • Non-intrusive: operates as a stand-alone service; no extra effort from site admins is needed to support it, and no additional permissions are needed for users.
        • Simple usage: authentication via OIDC tokens from EGI Check-in, so no extra credentials are required. The service is based on HashiCorp's Vault, which is well known in industry, with many client tools and libraries.
        • High availability: service instances are distributed across different sites, without a single point of failure. A generic endpoint https://vault.services.fedcloud.eu:8200 is dynamically assigned to a healthy instance via the Dynamic DNS service.

        At the moment the service is in public beta testing; full production operation is expected in September 2022.

        The service is available at the generic endpoint https://vault.services.fedcloud.eu:8200/. The detailed design of the service is available at [1], and the user guide at [2].

        1. https://docs.google.com/document/d/18uqpZ2AkdAm9WMsDfQgDnv4Y4qMyoUpBilsLiHPrfvk/edit?usp=sharing

        2. https://docs.google.com/document/d/11QKGQjJFGiTYCrs2fLazrFBEg2lfOgzpcJIuIKq02CE/edit?usp=sharing
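
        To give a feel for how a client could use the service, here is a minimal Python sketch that exchanges an OIDC access token (e.g. from EGI Check-in) for a Vault token over Vault's HTTP API and then reads a secret. The auth mount name, role and secret path shown are assumptions for illustration; the actual mounts and paths of the EGI deployment are described in the service documentation referenced above.

        # Illustrative sketch: log in to a Vault-based secret store with an OIDC
        # access token and read a secret. Mount names, role and path are assumed.
        import os
        import requests

        VAULT_ADDR = "https://vault.services.fedcloud.eu:8200"
        ACCESS_TOKEN = os.environ["OIDC_ACCESS_TOKEN"]   # Check-in access token

        # 1. Exchange the OIDC token for a Vault token (JWT auth method, assumed mount).
        login = requests.post(
            f"{VAULT_ADDR}/v1/auth/jwt/login",
            json={"jwt": ACCESS_TOKEN, "role": "default"},   # role name is hypothetical
        )
        login.raise_for_status()
        vault_token = login.json()["auth"]["client_token"]

        # 2. Read a secret with the Vault token (the KV path is hypothetical).
        secret = requests.get(
            f"{VAULT_ADDR}/v1/secrets/my-app/database",
            headers={"X-Vault-Token": vault_token},
        )
        secret.raise_for_status()
        print(secret.json()["data"])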

        Speaker: Viet Tran (IISAS)
      • 17:40
        Exploring trust for Communities - Building trust for research and collaboration 8m Topaz

        Topaz

        When exploring the (sometimes) intimidating world of Federated Identity, research communities can reap considerable benefit from using common best practices and adopting interoperable ways of working. EnCo, the Enabling Communities task of the GEANT 4-3 Trust and Identity Work Package, provides the link between those seeking to deploy Federated Identity Management and the significant body of knowledge accumulated within the wider community. Individuals from EnCo aim to ensure that outputs from projects (e.g. AARC) and groups (e.g. WISE, FIM4R, IGTF, REFEDS) are well known, available and kept up to date as technology changes. Since many of these groups are non-funded, it’s vital for their survival that projects such as GN4-3 sponsor individuals to drive progress and maintain momentum. The ultimate aim is to enhance trust between identity providers and research communities/infrastructure, to enable researchers’ safe and secure access to resources.

        In this lightning talk we will focus on assurance. EnCo has on one hand been leading and participating in several activities on assurance like the REFEDS Assurance Suite. On the other hand, EnCo is active in the Federated Identity Management for Research (FIM4R) community. FIM4R is a forum where Research Communities meet to establish common requirements, combining their voices to send a strong message to FIM stakeholders. In 2021 FIM4R started work on requirements specific on assurance. We will provide a short overview of all these activities.

        Our target audience are the communities and the infrastructures providing their services.

        Aims of the Lightning Talk:

        • Raise awareness of the availability of common resources, including
          those owned by WISE, FIM4R, REFEDS, IGTF;
        • Focus on work available on assurance in Federated Identity Management
          (like the REFEDS Assurance Framework) and raise awareness for the
          FIM4R requirements work on Assurance.
        Speaker: David Kelsey (STFC)
      • 17:50
        OIDC support for Windows using PuTTY 8m Topaz

        Topaz

        Relying on OpenID Connect (OIDC) for identity and access management can significantly simplify the process of providing access to users, especially for non-web applications such as Secure Shell (SSH) where the management of typically used SSH keys is often laborious and error-prone.

        As a counterpart to the server-side components that enable SSH via OIDC [1], the client-side tools allow users to directly log into a server with their federated credentials via valid OIDC Access Tokens, without any prior application for an account:

        • oidc-agent is a set of command-line tools that enable users to obtain and manage OIDC Access Tokens. It follows the design of the ssh-agent and, as such, it can be easily integrated into the user's flow.
        • mccli is a command-line wrapper for the SSH client that is able to retrieve OIDC tokens and use them to log into the SSH server without further user interaction.

        These tools are developed for Linux and macOS. This contribution aims to present the efforts to fill in the gap of missing OIDC client functionality for Windows, with potentially major impact due to the widespread use of Windows in the target user communities (e.g. HPC).

        The project consists of two parts. First, the oidc-agent was ported to Windows. This subtask is significant since the oidc-agent is a tool with broad applicability, for any use case that involves programmatic use of OIDC tokens. In the second part of the project, we integrated the oidc-agent with PuTTY, one of the most popular SSH clients for Windows. Users can choose between using SSH with Pageant (PuTTY's SSH key manager) or using SSH with OIDC tokens against an OIDC-capable SSH server.
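
        As a rough illustration of the kind of interaction a client such as the PuTTY integration performs, the Python sketch below asks a running oidc-agent for an access token through the liboidcagent bindings. The account short name is hypothetical, and the exact function signature is an assumption based on the liboidcagent package documentation.

        # Minimal sketch: request an OIDC access token from a running oidc-agent.
        import liboidcagent as agent

        # "egi" is a hypothetical account short name previously configured with oidc-gen.
        token = agent.get_access_token("egi", 60, application_hint="ssh-demo")

        # An OIDC-capable SSH server (e.g. one running the server-side components
        # mentioned in [1]) could then accept this token in place of a password or key.
        print("Access token obtained, length:", len(token))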

        Speakers: Gabriel Zachmann (Karlsruhe Institute of Technology), Diana Gudu (KIT), Marcus Hardt (KIT-G), Jonas Schmitt (Karlsruhe Institute of Technology)
      • 18:00
        Collaborative Operational Security 8m Topaz

        Topaz

        Our modern cybersecurity landscape requires that we work collaboratively to effectively defend our community. In this talk we explore activities in this area and ways in which sites, organisations and infrastructures can get involved in our shared response to cyberattacks.

        Speaker: David Crooks (STFC)
      • 18:10
        Cybersecurity in state of emergency: technical and legal issues 8m Topaz

        Topaz

        A state of emergency is a legal regime designed for extraordinary circumstances that enables the government to act in ways that it could not under the ordinary legal framework.
        The measures adopted in emergency situations must, in accordance with international law, meet certain criteria, including a legislative basis, the pursuit of a legitimate goal, and necessity. This last requirement inevitably implies a balance between the right being limited and the interest that the limitation intends to safeguard.
        Cyberspace has become a theatre of cyber-attacks, which directly raises problems connected to the cybersecurity of computer systems.
        In Italy, Decree-Law 21 March 2022, n. 21 includes measures to strengthen the cyber discipline, made necessary by the protracted Ukrainian-Russian conflict. In particular, the National Cybersecurity Agency, having consulted the Cybersecurity Nucleus, recommends urgently analysing the risk deriving from the IT security solutions in use and considering appropriate diversification strategies, in particular for anti-virus, anti-malware and endpoint detection and response applications; web application firewalls; e-mail protection; protection of cloud services; and managed security services.
        Following the European Commission's Statement on Research of 3 March 2022, the note of the MUR (Ministry of Universities and Research) prescribes "to suspend all activities aimed at the activation of new double degree or joint degree programs" and recalls "those research projects underway with institutions of the Russian Federation and Belarus that involve transfers of goods or dual use technologies or are otherwise affected by the sanctions adopted by the EU".
        The objective of this work is to examine the limits and characteristics these measures may have, in order to strike a balance between the protection of rights and cybersecurity, with particular reference to cloud-based infrastructures, networks and HPC.

        Speaker: Nadina Foggetti
      • 18:20
        A Brief Overview of Token Based AAI Development at STFC 8m Topaz

        Topaz

        STFC’s Scientific Computing Department is currently engaged in the development and operation of several different token-based authentication and authorization services, using OpenID Connect.

        Central to this is the development of the IRIS IAM (Identity and Access Manager), an implementation of the INDIGO IAM software which forms a core component of the IRIS digital research infrastructure. The IRIS IAM provides centralised AAI tools to services and science communities, allowing for granular, community-managed authorization and single sign-on with institutional identities.

        The team at STFC also participate in the development of the new WLCG Authorization infrastructure and have recently taken a leading role within the prototyping of the AAI solution for the SKA SRCnet.

        This lightning talk will give a brief overview of the AAI implementations at STFC, as well as progress and updates in the recent months.

        Speaker: Thomas Dack (STFC)
      • 18:30
        The EGI Software Vulnerability Group - evolving 8m Topaz

        Topaz

        The purpose of the EGI Software Vulnerability Group (SVG) is “To minimise the risk of security incidents due to software vulnerabilities.”

        The EGI SVG and its predecessors have been dealing with software vulnerabilities for about 15 years. Initially, the group was set up to address the lack of vulnerability management in Grid Middleware, and its tasks included fixing security issues and ensuring that all sites in the relatively uniform EGI environment addressed the most serious vulnerabilities.

        Now, things are different: the inhomogeneity has increased within the infrastructure, there is a greater proliferation of software installed, and the majority of software vulnerabilities affecting EGI infrastructure are announced by software vendors. This means that the methods for dealing with software vulnerabilities have been changing and need further change.

        One extreme is to say that service providers are wholly responsible for ensuring their software is up to date, which to the first order is true. Rather like people's mobile phones, we just assume that sites update themselves.

        But EGI can do better than that.

        EGI helps sites become aware of and address serious vulnerabilities that are within the scope of the EGI portfolio of distributed computing services, so that all parties concerned have confidence in the security of the infrastructure. Vulnerabilities may be reported by EGI participants or become known through third party reports. Analysis of the impact of a vulnerability within EGI may lead to its risk level being elevated or reduced compared to conclusions applicable elsewhere.

        This short talk will briefly describe the evolving software vulnerability management for EGI.

        Speaker: David Crooks (STFC)
    • 19:00 21:00
      Opening Reception - hosted by EGI-ACE and iMagine 2h 1st floor (Vienna Andel Prague)

      1st floor

      Vienna Andel Prague

    • 19:00 20:00
      Posters (presenters at poster) 1st floor (Vienna Andel Prague)

      1st floor

      Vienna Andel Prague

      • 19:00
        Access EGI resources through the ESCAPE developed ESFRI Science Analysis Platform 1h

        The EU ESCAPE project is developing ESAP, the ESFRI Science Analysis Platform, as an API gateway that enables the seamless integration of independent services accessing distributed data and computing resources. Within ESCAPE we are exploring the possibility of exploiting EGI's OpenStack cloud computing services through ESAP. As a use case, we are considering one of the studies known as Data Challenges, used to prepare the community to work with the data to be generated by the Square Kilometre Array (SKA).

        In our contribution, we describe the technical steps performed: we registered with the Virtual Organisation vo.access.egi.eu to obtain the necessary development and test resources, and we automated the creation of a Virtual Machine through the EGI fedcloud client. We also automated the installation of the appropriate analysis software on a cloud virtual machine instance through ira-init, a software framework developed at IRA-INAF. We plan to provide ESAP users with access to these resources by writing an ESAP connector.
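
        As a rough sketch of what the VM-creation step could look like, the Python snippet below provisions a server with the generic openstacksdk library, used here only as a stand-in for the EGI fedcloud client mentioned above. The cloud entry, image, flavor and VM name are hypothetical.

        # Illustrative only: create a VM on an OpenStack site with openstacksdk.
        import openstack

        # "egi-site" is a hypothetical clouds.yaml entry holding the site credentials
        # (for EGI FedCloud these would typically be based on a Check-in access token).
        conn = openstack.connect(cloud="egi-site")

        server = conn.create_server(
            name="esap-analysis-vm",      # hypothetical VM name
            image="ubuntu-20.04",         # hypothetical image at the site
            flavor="m1.medium",           # hypothetical flavor
            wait=True,
        )
        print("VM created:", server.id)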

        In this first prototype, data access is simplified through NFS-mounted storage or a cloud data volume. Data transfer tests are being conducted using storm-webdav to provide users with the ability to analyse the data both locally and remotely.

        Speaker: Sara Bertocco (INAF)
      • 19:00
        An Open Ecosystem for European Computing Continuum 1h

        To regain European competitiveness in Internet infrastructures, Europe cannot simply try to catch up with current Cloud hyperscalers: a bolder, forward-looking approach is needed.
        Both Cloud technologies and IoT technologies have steadily moved towards a technical and business convergence often labelled as the Cloud-Edge-IoT continuum, with Edge Computing becoming both the target and the battleground of this convergence process. Under the name of computing continuum, an even broader scope is understood, including HPC, hardware devices, and 5G/6G networks.
        The Digital Compass emphasises clear and measurable targets for 2030 that are however systemic and challenging (e.g., 75% of European enterprises have taken up cloud computing services, or 10,000 climate neutral, highly secure edge nodes have been deployed in the EU).
        There is a need to guide computing continuum stakeholders towards defining and addressing the conceptual, technical, and community challenges raised by such targets (e.g., how will the thousands of edge nodes interoperate? How will their climate neutrality goal affect the energy industry?).

        In this poster, we present OpenContinuum, an open ecosystem for European strategic autonomy and interoperability across the computing continuum industry.
        OpenContinuum will:
        - Promote the establishment of a European industrial Open Ecosystem based on Open Source and Open Standards
        - Map and analyse the supply-side landscape of the European emerging Computing Continuum
        - Engage the EU industrial and research actors to create a supply-side community that spans the whole Computing Continuum
        - Coordinate the relevant EU project portfolio towards an open European ecosystem for the cloud-edge-IoT continuum

        Speaker: Giovanni Rimassa (Martel Innovate)
      • 19:00
        Analysis of Pierre Auger Observatory open data using EGI Jupyter notebooks 1h

        Secondary school students learn about astroparticle physics in the scope of the Open Science project of the Czech Academy of Sciences. Examples of analyses of the open data published by the Pierre Auger Observatory are provided on the Kaggle platform. We compare this platform with a local desktop environment and with EGI Jupyter notebooks. The effort required to gain access, ease of use, stability, performance and the availability of hardware resources will be presented. The full dataset of published Auger events, consisting of 22,731 showers measured with the surface detector array and 3,156 hybrid events in pseudo-raw JSON format, was used in this work, together with a more compact summary file in CSV format.
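
        A minimal, notebook-style Python sketch of how the compact CSV summary could be explored in an EGI Jupyter notebook. The file name and column names are hypothetical; the real ones are defined by the released Auger open dataset.

        # Illustrative sketch: load and filter the CSV summary of Auger open data.
        import pandas as pd

        events = pd.read_csv("auger_summary.csv")          # hypothetical file name
        print(len(events), "events loaded")

        # Example selection: keep high-energy showers (column name is hypothetical).
        high_energy = events[events["sd_energy"] > 10.0]
        print(high_energy[["id", "sd_energy"]].head())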

        Speakers: Jiri Chudoba (CESNET), Mrs Jana Maršálková (Gymnázium Jana Nerudy), Mr Filip Neubauer (Akademické gymnázium), Václav Zajac (SPŠST Panská)
      • 19:00
        Augmenting the EGI Monitoring based on the ARGO Monitoring framework with functionalities such as Service Trends and Status pages. 1h

        EGI Monitoring is the key service needed to gain insight into the services that are part of the EGI Infrastructure. It is based on the ARGO Monitoring Service, which provides a flexible and scalable framework for monitoring the status, availability and reliability of a wide range of services, and is able to quickly detect, correlate and analyse data for the detection of errors. Service providers can make use of the EGI Monitoring Service via various sources of truth (e.g. CMDB, EOSC Resource Catalogue), so that they receive notifications when a problem occurs and can use ARGO reports to advertise with confidence the stability and reliability of their services. Similarly, researchers or research communities can gain insight into the services they want to use.
        Two new functionalities will enable even better insight into services: Service Trends and Status Pages. Through the constant monitoring of services, we have the ability to analyse service trends and provide insights such as lists of the top services with Critical, Warning or Unknown status, or the top services with authentication problems. Whether it is a server issue or a bug in production, the simple truth is that problems happen. The main idea of Status Pages is to build communities' trust and to inform them in real time about the status of the services in one simple view.
        We plan to streamline the process of registering new metrics and probes, thus allowing faster inclusion of new metrics into ARGO reports. We provide a new all-inclusive report that includes all deployed metrics by default. Finally, EGI Monitoring is capable of exporting monitoring results via API or ARGO Messaging to third-party dashboards and to EOSC Exchange Monitoring, so as to further promote the availability and reliability of the services that comprise the EGI Service Portfolio.
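
        To illustrate the kind of export a third-party status page could perform, here is a hedged Python sketch that pulls status results from an ARGO-style monitoring API. The base URL, endpoint layout, report/group names, API-key header and response shape are all assumptions for illustration; the ARGO Web API documentation should be consulted for the real interface.

        # Illustrative sketch: fetch status results for display on a status page.
        import requests

        ARGO_API = "https://api.example.org/api/v2"        # hypothetical base URL
        HEADERS = {"x-api-key": "<token>", "Accept": "application/json"}

        resp = requests.get(
            f"{ARGO_API}/status/Default/SITES/MY-SITE",    # hypothetical report/group
            headers=HEADERS,
            params={"start_time": "2022-09-01T00:00:00Z",
                    "end_time": "2022-09-02T00:00:00Z"},
        )
        resp.raise_for_status()

        for group in resp.json().get("groups", []):        # response shape is assumed
            for status in group.get("statuses", []):
                print(group["name"], status["timestamp"], status["value"])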

        Speakers: Emir Imamagic (SRCE), Konstantinos Kagkelidis (GRNET), Themis Zamani (GRNET)
      • 19:00
        Blue-Cloud: Your Open Science platform for collaborative marine research 1h

        The H2020 project Blue-Cloud is developing the thematic EOSC for ocean science, through a collaborative virtual environment to enhance FAIR and Open Science.

        Blue-Cloud federates leading European marine Research Infrastructures and e-Infrastructures, allowing researchers to combine, reuse, and share quality data across disciplines and countries with their existing MarineID account.

        The project has developed three main assets:

        • The Blue-Cloud Data Discovery and Access Service (DD&AS) facilitates access to 10+ million multi-disciplinary datasets. The DD&AS functions as a broker both for metadata and for data access, interacting with web services and APIs from each of the Blue Data Infrastructures federated in Blue-Cloud. This way, it enables users to discover first at the collection level which infrastructures might have data sets interesting for their use case, and next, to identify and download relevant data sets at granule level from those selected infrastructures, by means of a common interface.
        • The Blue-Cloud Virtual Research Environment (VRE) enhances collaborative research. Services include Data Analytics (Data Miner, Software and Algorithms Importer (SAI), RStudio, JupyterHub) that make it easy to build and run analytical pipelines, a Spatial Data Infrastructure to store, discover, access, and manage vectorial and raster georeferenced datasets, and services for provenance and documentation, and for sharing any generated product (e.g. analytical methods, workflows, data products, publications, notebooks) with selected colleagues or making it available online. The VRE is also accessible through the EOSC federated login.
        • This innovation potential is explored by a series of domain-specific Virtual Labs developed by five teams of experts, addressing societal challenges related to biodiversity, genomics, marine environment, fisheries, and aquaculture.

        These assets, including specific services developed within the VLabs, are also available via the EOSC Marketplace.

        The poster highlights key services developed within the Blue-Cloud technical framework and their potential impact on marine research, ultimately promoting a sustainable and data-driven ocean management.

        Speaker: Dick Schaap (Mariene Informatie Service MARIS BV)
      • 19:00
        Cloud development at INCDTIM 1h

        What exactly is cloud computing? Do we mean a cloud in the sky that could be hanging over us, or a cloud of processing power that we are riding, hoping to get a special price for our request?
        Or does it no longer matter, and the cloud is simply a place on the Internet that helps us store, process or run any simulation online without knowing what lies behind the process and action?
        However we look at this problem or solution, the point is that it is developing around the clock and around the globe. In our case, at the National Institute for Research and Development of Isotopic and Molecular Technologies, we are developing a hybrid cloud and an interface that will be accessed not only privately by our colleagues but also from external sources. The paper will discuss the steps toward the implementation and programming of the future clouditim.

        Speaker: Dr Farcas Felix
      • 19:00
        DBRepo: A Database Repository to Support Research Activities 1h

        Many institutions are starting to have dedicated data stewards curate data in close collaboration with the researchers who collect, compute or distribute the data (e.g. as part of supplementary material to a journal article). In contrast to traditional data dumps, this is a challenge for structured data in databases, where data evolves over time as tuples arrive in data streams, are updated or are deleted. Outside of large-scale infrastructures designed to host e.g. climate or genome data, researchers usually have to maintain their own local database and take care of regular software updates, configuration and data ingestion before being able to do research. Curation activities such as collecting metadata or preservation happen, if at all, only after the project is finished, when the database is exported to a file repository, turning it into a static dump that can no longer be trivially queried.

        We present DBRepo, a repository for relational databases in a private cloud setting to support research activities in four dimensions: (1) keep research data in relational databases from the beginning of a project and offer application programming interfaces to access the data; (2) provide separation of concerns that allows experts to handle database management tasks and lets researchers focus on conducting their research work; (3) improve the FAIRness of data (Findability by collecting ontology-mapped metadata centrally and issuing persistent identifiers to queries; Accessibility by providing HTTP/AMQP/JDBC protocols; Interoperability by mapping to controlled vocabularies; and Reusability by offering metadata and attaching a license to each database); and (4) support reproducibility and persistent identification of arbitrary subsets of data by implementing the RDA WGDC recommendations. DBRepo's source code is available on GitLab (https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services); we also operate a public demo instance (https://dbrepo.ossdip.at).

        Speaker: Martin Weise (TU Wien)
      • 19:00
        Deploying container-based applications on EGI with VIP 1h

        The Virtual Imaging Platform (VIP) leverages resources available in the EGI biomed Virtual Organisation to offer open services for medical image data analysis to academic researchers worldwide. VIP relies on Boutiques to facilitate application installation and sharing. Boutiques applications are installed through software containers described in a rich and flexible JSON language.

        Docker containers are nowadays very popular, but the Docker daemon requires root privileges, preventing its support on HPC and HTC infrastructures. Singularity has emerged as an alternative allowing users to run containers without root privileges. However, on a very large and heterogeneous infrastructure such as EGI, resource providers may have different Singularity versions and configurations which may hinder the seamless deployment of container-based applications. Another alternative is udocker, which is a tool that can be installed on the fly for the execution of containers in user space without requiring root privileges.
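
        As a small sketch of the user-space approach, udocker can be driven from a job wrapper without any root privileges. The image name below is illustrative, and the example assumes udocker is already available on the worker node's PATH.

        ```python
        # Minimal sketch: running a container in user space with udocker from a
        # Python job wrapper. Assumes "udocker" is on PATH; the image is illustrative.
        import subprocess

        IMAGE = "alpine:3.18"   # illustrative container image

        subprocess.run(["udocker", "pull", IMAGE], check=True)
        subprocess.run(["udocker", "create", "--name=vip-job", IMAGE], check=True)
        subprocess.run(["udocker", "run", "vip-job", "echo", "hello from user space"], check=True)
        ```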

        Last but not least, the availability of the image/container on the EGI worker node is also important. The common/standard image pull from a central hub may cause network issues when a large number of jobs pull images at the same time on the same computing cluster. One alternative is to pre-deploy images/containers on the biomed CVMFS (CernVM File System) shared folder, commonly used for software deployment in EGI. Another alternative (not yet available at the time of writing) would be a dedicated EGI hub. Both have advantages and limitations, which will be further discussed in the poster.

        The poster will thus present the work and conclusions of the VIP team with respect to efficiently deploying and executing container-based applications on EGI HTC resources.

        Speaker: Sandesh PATIL (INRIA, CNRS)
      • 19:00
        EGI-ACE webODV - Online extraction, analysis and visualization of SeaDataNet and Argo data 1h

        In the framework of the EGI-ACE project, we are deploying the webODV application on EGI infrastructure at https://webodv-egi-ace.cloud.ba.infn.it/ and provide large temperature and salinity datasets from the SeaDataNet (https://www.seadatanet.org/) project and the international Argo (http://www.argo.net/) program. webODV is the online version of the widely used ODV (Ocean Data View, https://odv.awi.de/) software for working with marine observation datasets. The idea is to provide clients with user-friendly interfaces in their web-browser and give access to datasets centrally maintained and administered on a server. Users will always work with the latest version of the datasets and will not have to download and store the data on the local computer. webODV is integrated with the EGI Check-in service and will have a promotional impact on EOSC, broadening and improving the service in cloud environments.
        Presently, webODV provides two complementary services, webODV Data Extractor and webODV Data Explorer. Users select between these services after choosing a dataset. The goal of the extraction service is to provide an easy and intuitive data subsetting procedure, where data can be downloaded as text files, ODV collections or netCDF files. The Explorer service provides “ODV-like” functionality in the user’s web-browser for creating maps, surface-plots, section-plots, scatter-plots, filtering data etc. Users can download high-resolution images of the entire canvas or individual windows and can export the data of the current station set or of individual data windows. Analyses and visualizations can be fully reproduced by using the so-called xview files e.g. for sharing.
        In addition to the public webODV version, we are working on a prototype of a webODV-on-demand solution, integrated with the EOSC PaaS orchestrator (https://marketplace.eosc-portal.eu/services/paas-orchestrator). Users can request private webODV instances and workspaces to create their own ODV data collections or to work with private data in read-write mode.

        Speaker: Sebastian Mieruch-Schnuelle (Alfred-Wegener-Institut)
      • 19:00
        European computing infrastructures for digital twins: the EGI project ‘interTwin’ 1h

        Modern data-intensive and compute-intensive science from all domains involves modelling and simulation at very high resolution for prediction and inference workflows. Given the complexity of the computing workflows and the data required by the models, which can range from gigabytes to petabytes of information to be processed per day, ready-to-use tools are required that federate access to heterogeneous and distributed computing architectures in order to run complex AI-based processing workflows. This calls for ground-breaking innovation in computational and data-handling capacity. The EGI Federation's ambition is to develop a Digital Twin blueprint architecture and an interdisciplinary Digital Twin Engine (DTE) that will deliver generic capabilities for high-volume and high-speed data acquisition and pre-processing, big-data assimilation into models, forecast production by different simulation models, real-time processing of data, and validation of the accuracy of modelling and simulation. These functions, delivered by generic Digital Twin Engine modules, will be demonstrated in the context of different DT applications, and their modularity will be demonstrated with a set of specific simulation and modelling capabilities tailored to the needs of multiple adjacent scientific communities in four different scientific domains. In this context, the interTwin project will demonstrate the federation of research data from High Energy Physics, Radio Astronomy, Gravitational-wave Astrophysics, Climate research and Environmental monitoring.

        Speaker: Gwen Franck
      • 19:00
        Exploring reference data through existing computing services for the bioinformatics community : an EOSC-Pillar use-case 1h

        Galaxy is a widely adopted workflow management system for bioinformatics, aiming to make computational biology accessible to research scientists who do not have computer programming or systems administration experience.
        How can scientists connect this useful, reproducibility-oriented tool seamlessly with many data sources? How can they do so in a coherent way using different instances of Galaxy? Can they run it locally or on a secured infrastructure that handles patient data? Can they compare the results of those different scenarios?
        The proposed poster presents the work done as part of one of the EOSC-Pillar project’s scientific use-cases to address those questions, achieving the following objectives:
        ● Allow access to reference data from different Galaxy deployments to all EOSC users.
        ● Facilitate the deployment of Galaxy instances in the same infrastructure hosting the data to analyse
        ● Provide coherency in the deployment of different Galaxy instances
        ● Ensure sensitive (e.g., health) data security requirements are met throughout the process
        The poster describes four scientific scenarios based on concrete needs from the ELIXIR community. It also describes the technical services the use-case relies on, namely: Laniakea (Galaxy as a service provided by IBIOM-CNR and INFN), the Inserm data repository, IFB cloud Galaxy instances and the INDIGO-IAM authentication service provided by INFN. It demonstrates the value of EOSC-Pillar's Federated Data Space (F2DS) for connecting different data sources to Galaxy in a simple and coherent way.
        The poster also highlights the need to conform to data protection regulations concerning health personal data, by deploying Galaxy in a private, secured environment while still ensuring the data analysis workflow remains similar to its public counterpart.
        Finally, it shows proposed solutions to provide access to the service to all users within the EOSC community through roles management and by integrating it into a global authentication framework.

        Speaker: Gilles Mathieu (INSERM)
      • 19:00
        iMagine: Imaging data and services for aquatic science 1h

        iMagine is an EU-funded project providing a portfolio of image datasets, high-performance image analysis tools empowered with Artificial Intelligence (AI), and Best Practice documents for scientific image analysis ‘free at point of use’. These services and materials enable better and more efficient processing and analysis of imaging data in marine and freshwater research, accelerating our scientific insights about processes and measures relevant to healthy oceans, seas, and coastal and inland waters.
        By building on the Compute platform of the European Open Science Cloud (EOSC) the project delivers a generic framework for AI model development, training, and deployment, which can be adopted by researchers for refining their AI-based applications for water pollution mitigation, biodiversity and ecosystem studies, climate change analysis and beach monitoring, but also for developing and optimising other AI-based applications in this field.
        The iMagine compute layer consists of providers from the pan-European EGI federation infrastructure, collectively offering over 132,000 GPU hours, 6,000,000 CPU hours and 1500 TB-month for image hosting and processing.
        The iMagine AI framework offers neural networks, parallel post-processing of very large data, and analysis of massive online data streams in distributed environments. Twelve RIs will share over 9 million images and 8 AI-powered applications through the framework. With representatives of so many RIs and IT institutes developing a portfolio of compelling image processing services together, the work will also give rise to Best Practices. The synergies between aquatic use cases will lead to common solutions in data management, quality control, performance, integration, provenance, and FAIRness, contributing to harmonisation across RIs and providing input for the iMagine Best Practice guidelines. The project results will be integrated into EOSC and AI4EU, bringing important contributions from RIs and e-infrastructures.

        Speaker: Gergely Sipos (EGI.eu)
      • 19:00
        jUMP-Modeling-Portal: a new service for simulating sound propagation in the ocean 1h

        The effects of anthropogenic noise on marine species have already been recognized as a threat by the United Nations. Therefore, considering the 14th Sustainable Development Goal (conservation and sustainable use of the oceans and marine resources), the proper management and reduction of underwater noise is a relevant contribution to the intended sustainable management and protection of marine and coastal ecosystems. By avoiding significant adverse impacts, strengthening their resilience, and taking action for their restoration, it is possible to achieve healthy and productive oceans.
        The Portuguese coast is subjected to increasing pressure due to maritime transport, recreational and touristic activities, fishing efforts, and operating industrial units. Underwater noise is one of the adverse sub-products of these activities, with detrimental consequences to noise-sensitive species and the related ecosystems. Therefore, to address this theme and in the scope of the project "jUMP - Joint Action: A Stepping-stone for underwater noise monitoring in Portuguese waters", LNEC has developed a modeling portal to simulate the sound propagation in the ocean and support the monitoring activities along the Portuguese Exclusive Economic Zone (EEZ).
        In the present publication, the authors introduce a web portal that enables users to set up sound propagation simulations on demand, with specific configurations such as the depth of the sound source, the frequency, and the source and receptor positions. The jUMP modeling platform retrieves oceanic stratification and bathymetry data from European data services like Copernicus and EMODnet to establish the underwater sound velocity profiles used by the model. The service will be freely available to the research community and incorporates several technologies and services from the European Open Science Cloud (e.g., federated authentication, workload managers, Infrastructure Managers, and computational resources). The authors believe that the platform can enhance research on underwater sound propagation in our oceans.

        Speaker: Dr Anabela Oliveira (LNEC)
      • 19:00
        Managed Kubernetes — Next Gen Academic Infrastructure? 1h

        Academic infrastructures and institutions continuously develop new computing services to support research and education. These services are traditionally based on HPC batch systems and cloud services. Recently, a new computing paradigm based on containerization of applications has been adopted across the scientific community. Computations executed in containers are becoming increasingly popular because of their ease of use – the user encapsulates the entire environment (including software, its dependencies, and optionally data) into a single package that can be run independently of hardware and operating system. Such computations can be run on traditional HPC systems and virtual servers. However, running containerized computations in the Kubernetes (K8S) orchestration tool simplifies the execution and management of containers significantly.

        Running the Kubernetes infrastructure is a challenging task that requires non-negligible know-how and resources dedicated to its operation and maintenance. Therefore, it is reasonable to offload this line of work to dedicated IT professionals positioned within research infrastructures, NRENs, and other similar institutions providing IT environments to support research.

        Czech NREN "CESNET" embraced the opportunity presented by containerization by offering its own managed Kubernetes platform. Resources required to develop and maintain the platform, together with the operation of all the underlying IT layers such as hardware and networking, are fully realized by CESNET. Such an environment allows the researchers to focus solely on executing the containerized computation workflows.

        The viability of the Kubernetes infrastructure for research was verified on several use cases traditionally run on HPC or IaaS, demonstrating the advantages of the managed K8s infrastructure in research applications. These cover scalable Jupyter notebooks, RStudio servers, personalized storage, true 3D game streaming (low-latency virtual desktops), and more. Such use cases make a strong argument for establishing federated managed Kubernetes sites, which could be provided within EGI to the broad scientific community.
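
        To illustrate how little a researcher has to care about the underlying layers on such a managed platform, the sketch below submits a containerised computation as a Kubernetes Job with the official Python client. The namespace, image and resource requests are illustrative, and the kubeconfig credentials are assumed to be supplied by the platform operator.

        ```python
        # Minimal sketch: submitting a containerised computation as a Kubernetes Job
        # with the official Python client. Namespace, image and resources are
        # illustrative; the kubeconfig is assumed to be provided by the platform.
        from kubernetes import client, config

        config.load_kube_config()

        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name="demo-analysis"),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[
                            client.V1Container(
                                name="analysis",
                                image="python:3.11-slim",  # illustrative container image
                                command=["python", "-c", "print('hello from Kubernetes')"],
                                resources=client.V1ResourceRequirements(
                                    requests={"cpu": "1", "memory": "1Gi"}
                                ),
                            )
                        ],
                    )
                )
            ),
        )

        client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
        ```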

        Speakers: Mrs Viktória Spišaková (Masaryk University), Mr Adrian Rosinec (CESNET)
      • 19:00
        Metrics Framework: Measuring the Success of a Recommendation System 1h

        A Recommender System (RS) is designed to suggest relevant content or products that users might like or purchase. RSs are growing more popular both commercially and in the research community, offering personalized experiences to individual users. EOSC also uses a modern RS in the EOSC Marketplace.
        Measuring the success of an RS is a very important and laborious task. We introduce an independent metrics framework as a service to support the evaluation and adaptation of recommendation mechanisms. The evaluation is performed quantitatively by processing information such as resources, user actions, ratings, and recommendations in order to measure the impact of the AI-enhanced services and user satisfaction, and to incorporate this feedback to improve the services provided, via a user-friendly API and dashboard UI. The framework consists of three components. The Preprocessor, whose tasks are: (i) data retrieval through a connector module that collects and transforms data from various sources, (ii) service-associated knowledge, (iii) removal of dissociated and dummy data, (iv) dispatch of relation tags to information that marks various associations in the data, i.e. registered or anonymous users and the related services, and (v) statistics information. RSmetrics, responsible for processing the data, computing the designated evaluation metrics, and producing the necessary information in a homogenized manner. And a web service presenting reports through a rich UI/dashboard and a REST API. This work is part of the developments for the EOSC Core RS by WP5 of the EOSC Future project.
        The current version of the implementation features: (i) simple metrics and statistics, and (ii) complex ones, such as diversity, which indicates whether services are recommended equally often; novelty; hit rate; and click-through rate. The RS evaluation framework is constantly expanding with new features, metrics, and utilities, in order to lead to more robust, data-adaptable, good-quality RS designs.
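
        As a small, self-contained illustration of two of the metrics named above, the sketch below computes a hit rate and a click-through rate from logged user actions. The data structures and field names are illustrative; in the real framework this information is prepared by the Preprocessor and computed by RSmetrics.

        ```python
        # Illustrative sketch of two evaluation metrics; data layout is invented for the example.
        from typing import Dict, List, Set

        def hit_rate(recommended: Dict[str, List[str]], accessed: Dict[str, Set[str]]) -> float:
            """Fraction of users for whom at least one recommended service was later accessed."""
            users = list(recommended)
            hits = sum(1 for u in users if set(recommended[u]) & accessed.get(u, set()))
            return hits / len(users) if users else 0.0

        def click_through_rate(clicks: int, shown: int) -> float:
            """Recommendations clicked divided by recommendations shown."""
            return clicks / shown if shown else 0.0

        # Toy usage
        recs = {"alice": ["svcA", "svcB"], "bob": ["svcC"]}
        used = {"alice": {"svcB"}, "bob": {"svcD"}}
        print(hit_rate(recs, used))          # 0.5
        print(click_through_rate(42, 1000))  # 0.042
        ```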

        Speaker: Themis Zamani (GRNET)
      • 19:00
        Network technologies in the I.Bi.S.Co. Napoli HPC hybrid cluster 1h

        The work aims to describe the architectural characteristics, and especially the local network, of a new hybrid cluster of 128 GPUs set up in the Data Center of the Monte Sant’Angelo complex of the "Federico II" University of Naples. Its hybrid features allow its resources to be used in different scenarios: from parallel computing to GP-GPU accelerated workloads and their combinations. The cluster consists of 36 nodes and 2 switches that perform two functions: computing and storage. To maximize the efficiency of the cluster and to accommodate the multiple needs of users, the local network uses two distinct architectures:
        • intra-node: characterized by the combination of NVLink and PCI-e
        • inter-node: characterized by the combination of InfiniBand and Ethernet
        The setup was made possible by funding from the I.Bi.S.Co. project (Infrastructure for Big data and Scientific Computing) under the PON 2017-2022 programme.
        This infrastructure will become part of the National Center for Computing currently being established, financed by PNRR funds.

        Speaker: Silvio Pardi (INFN)
      • 19:00
        Perun Comprehensive AAI solution 1h

        Perun covers all technical and non-technical aspects of an Authentication and Authorization Infrastructure (AAI). The technical part is designed in a modular way, so it can be deployed as a standalone solution or can be incorporated into existing infrastructure. Different components provide technical solutions, as well as easy to use functions for dealing with non-technical features, like policy handling, trust models or consent management. 

        The poster illustrates the depth and broad reach of the AAI area by listing individual topics that AAI needs to solve in seven categories. Perun AAI provides support and tools for handling activities in the mentioned areas. The content is attuned to a broad audience. Therefore, the technical details are omitted. The poster also provides a small listing of significant scientific communities, infrastructures and projects where Perun AAI solution is successfully deployed.

        Speakers: Dominik Frantisek Bucik (Masaryk University), Slavek Licehammer (CESNET)
      • 19:00
        Poster: The Training Portal for Photon and Neutron Data Services 1h

        Education is becoming an increasingly important topic in helping scientists work at photon and neutron sources. Other relevant areas such as advanced quantum technologies will also play a key role in the future. One of the goals of ExPaNDS (European Open Science Cloud (EOSC) Photon and Neutron Data Service) is to train research scientists so that they better understand the issues, methods and available computational RI infrastructures needed to address critical research questions.
        The ambitious ExPaNDS and PaNOSC projects are a collaboration between 16 national Photon and Neutron Research Infrastructures (PaN RIs). The projects are delivering standardised, interoperable, and integrated data sources and data analysis services for Photon and Neutron facilities. Our PaN-training portal provides a one-stop shop for trainers and trainees to discover online information and content: for trainers, the catalogue offers an environment for sharing materials and event information; for trainees, it offers a convenient gateway via which to identify relevant training events and resources, and to perform specific, guided analysis tasks via training workflows in support of FAIR research.
        Our associated e-learning platform hosts free education and training for scientists and students, with integrated Jupyter notebooks. It includes courses on both the theory of photon and neutron scattering and on how to use Python code or software for data reduction and modeling.

        Speakers: Oliver Knodel (Helmholtz-Zentrum Dresden-Rossendorf), Marta Gutierrez David (EGI.eu)
      • 19:00
        Processing large datasets using EGI-ACE EOSC resources for the climate community 1h

        Many end users of climate change information often need specialized products to perform their research, impact study or data analysis. For example, climate indices, like the standard ones defined by ECA&D and ETCCDI, cover most of the general needs. However, datasets provided on the climate data infrastructure ESGF are climate model output and only provide standard variables, such as temperature and precipitation, and not climate indices, such as the number of “Summer days” or the “Maximum consecutive dry days”, for example.

        A Python package to calculate climate indices, called icclim, is currently being developed within the H2020 IS-ENES3 project. This package uses xarray and Dask for fast parallel execution and a smaller memory footprint. But with data volumes as well as the number of datasets increasing very rapidly, calculating specific climate indices becomes very time consuming and uses a lot of computing and storage resources, even with icclim.

        Providing those users with datasets of climate indices pre-computed on CMIP6 simulations would be very valuable. Of course, not all specific needs can be taken into account (such as specific seasons, specific reference periods, etc.), but the most general ones can be fulfilled. The European Open Science Cloud (EOSC) provides computing and storage resources through the EGI-ACE project and its periodic Use Case calls, making it possible to compute all those climate indices. In this EGI-ACE Use Case, icclim will be used to compute 49 standard climate indices on a large number of CMIP6 simulations and experiments, starting with the most popular ones. It could also be extended, time permitting, to the ERA5 reanalysis, CORDEX and CMIP5 datasets. The resulting climate indices datasets will later also be made available in the IS-ENES3 climate4impact (C4I) portal.
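
        As a conceptual sketch of what such a pre-computation looks like, the example below computes the ETCCDI "Summer days" (SU) index, the annual count of days with a daily maximum temperature above 25 °C, directly with xarray and Dask on a CMIP6-style file; icclim wraps this and the other standard indices behind a single call. The file path and chunking below are illustrative.

        ```python
        # Conceptual sketch of the SU ("Summer days") index with xarray/dask.
        # The input path is illustrative; "tasmax" is the standard CMIP6 daily
        # maximum temperature variable, in Kelvin.
        import xarray as xr

        ds = xr.open_dataset("tasmax_day_CMIP6_example.nc", chunks={"time": 365})
        su = (ds["tasmax"] > 273.15 + 25.0).resample(time="YS").sum(dim="time")
        su.name = "SU"
        su.to_netcdf("su_yearly.nc")
        ```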

        Speaker: Christian Page (CECI, Université de Toulouse, CNRS, Cerfacs, Toulouse, France)
      • 19:00
        ReproVIP: Enhancing Reproducibility of Scientific Results in Medical Imaging 1h

        Background – VIP (the Virtual Imaging Platform) is a web portal for medical imaging (MI) data analysis (Glatard et al. 2013). By leveraging computational and storage resources from the EGI e-infrastructure, VIP provides MI researchers with end-user services to run MI applications on this large-scale computing infrastructure.

        Research Issue – Medical imaging is facing a reproducibility crisis: the increasing complexity of current data processing methods weakens our ability to produce the same results twice by applying the same treatments to the same sets of inputs. Beyond the trivial influence of the data exploration process on any scientific result (Botvinik-Nezer et al. 2020), there is mounting evidence that computing environments (e.g., library calls, OS kernels, hardware infrastructures) also play a significant role by adding numerical uncertainty (Glatard et al. 2015). Relying on distributed computing resources, the VIP platform is highly exposed to potential variability in its digital outcomes.

        Project Outline – The ReproVIP project, funded by the French National Research Agency (ANR-21-CE45-0024-01), addresses this reproducibility issue at every level of data analysis, from the exploration process to the computing environment. It is structured around two complementary goals: (i) evaluate the uncertainty of digital outcomes after EGI-based distributed computing, and (ii) enhance the numerical reproducibility of scientific results obtained through the VIP platform.

        Available Options – To achieve numerical reproducibility at the computing-environment level, we will explore a few different solutions. In addition to the already well-known containers, we also aim to use Guix – a GNU-based Linux distribution for advanced package management (Guix-HPC s. d.). To control the exploration process for MI data analysis, we are interested in the EGI-based solution that offers Jupyter notebooks able to call EGI resources through the DIRAC framework.

        Speaker: Gael VILA (CNRS - CREATIS)
      • 19:00
        The AI4PublicPolicy Virtualized Policy Management Environment (VPME) with fully-fledged policy development/management functionalities based on AI technologies 1h

        AI4PublicPolicy is a joint effort of policymakers and Cloud/AI experts to unveil AI’s potential for automated, transparent and citizen-centric development of public policies. To this end, the project will deliver a novel Open Cloud platform for automated, scalable, transparent and citizen-centric policy management. The AI4PublicPolicy platform, i.e. the Virtualized Policy Management Environment (VPME) will provide fully-fledged policy development/management functionalities based on AI technologies such as Machine Learning, Deep Learning, NLP and chatbots, while leveraging citizens’ participation and feedback.
        More specifically, within the framework of the VPME, the following components are being developed:
        • Data Management Toolkit (UPM)
        • Policy & Dataset Catalogue (UNP)
        • Policy Extraction Toolkit (GFT)
        • Policy Interpretation Toolkit (INTRA)
        • Interoperability Toolkit (UNP)
        • Policy Evaluation Toolkit (GFT)
        The abovementioned technologies will be deployed in the scope of the five real life pilots of the project, i.e.:
        • Athens, Greece: Policies for Infrastructures Maintenance and Repair, Parking Space Management and Urban Mobility
        • Genoa, Italy: Policies for Citizens and Business Services Optimization
        • Nicosia, Cyprus: Policies for Holistic Urban Mobility and Accessibility
        • Lisbon, Portugal: Energy Management and Optimization Policies
        • Burgas, Bulgaria: Data-Driven Water Infrastructure Planning and Maintenance Policies
        The VPME will be integrated with EOSC, first to facilitate access to the Cloud and HPC resources of EOSC/EGI that are required to enable the project’s AI tools, and second to boost the sustainability and wider use of the project’s developments.

        Speaker: Alessandro Amicone (GFT Italy)
      • 19:00
        The Jonas VRE: a science gateway for noise map visualization and data processing in the North-East Atlantic region 1h

        The objective of this poster is to summarize the process of development of the JONAS VRE, which consisted of three main parts:
        1. Capitalization requirements
        2. Capitalization development
        3. Legacy
        For developing the capitalization requirements, we started by presenting the definition of a VRE and its main characteristics. The four steps described in the literature for VRE development were explained, and some of the key requirements of VREs were described. The use cases derived from the JONAS workshop were developed, the functional and non-functional requirements were detailed, and the architecture of the JONAS VRE and the development plan were presented. Finally, the binding and optional requirements of the VRE were described.
        The capitalization development began with some background information on Kubernetes, k3s, JupyterHub and QGIS, which are the layers underlying the JONAS VRE. Then, a brief description of the deployment of the Kubernetes cluster, JupyterHub and QGIS on the EGI infrastructure was presented. The development of the Jupyter notebooks, with emphasis on processing the netCDF files, was described, as was the folder structure of the VRE. Finally, the functional requirements defined in the capitalization requirements were revisited and it was explained how the VRE fulfilled each of them.
        For the JONAS VRE legacy, we first cited its overall objective, and a simplified diagram of the JONAS VRE was presented. This simplified diagram displays the inputs to the VRE, the two VRE applications (JupyterHub and QGIS), and the VRE outputs. Each of the VRE inputs was described, emphasizing the data format, what was provided to the VRE by the different JONAS WPs, and the required user inputs. The VRE processing capabilities were explained, and each of the VRE outputs was detailed, showing typical images of the VRE graphic outputs. Finally, a summary of the results of the VRE pilot test was provided.

        Speaker: José Antonio Díaz (PLOCAN)
    • 09:00 10:30
      Plenary: Setting the future of digital infrastructures for data-intensive computing Plenary 1st floor (Vienna Andel Prague)

      Plenary 1st floor

      Vienna Andel Prague

      • Sergio Andreozzi (EGI)
      • Jana Klanova (Masaryk University)
      • Christian Cuciniello (EC)
      Convener: Gergely Sipos (EGI.eu)
      • 09:00
        EGI Service Strategy 2022-2024 30m

        Today’s EGI services deliver advanced computing services to support scientists, multinational projects and research infrastructures. They are organised in two catalogues: the internal catalogue delivers coordination and federation services to the data centres that are part of the EGI infrastructure, while the external catalogue addresses researchers, scientific communities, and innovators by delivering data-intensive computing, storage, data management, data analytics, trust and identity management, and training. The external catalogue services are also promoted via the EOSC Portal.

        The EGI Service Strategy sets the priorities for the investigation of new or improved services considering the needs in data-intensive scientific computing gathered from research infrastructures, scientific collaborations, communities of practice and innovators. The creation of a Service Strategy is part of a larger effort to update the EGI Federation strategy and to define specific plans in selected areas. The EGI Service Strategy supports and complements day-by-day service portfolio management activities of the EGI Federation.

        Speaker: Sergio Andreozzi (EGI.eu)
      • 09:30
        EIRENE RI – European Research Infrastructure for Human Exposome Research 30m

        Research infrastructures (RIs) are key entities enabling high-level research and fostering innovation in any research area. They provide access to necessary capacities, innovative technologies, and expert human resources. Numerous research infrastructures have been developed in Europe during the last decades, but none of them addressed chemical exposures. EIRENE RI (Research Infrastructure for EnvIRonmental Exposure assessmeNt in Europe) fills this gap in the European infrastructural landscape and pioneers the first EU infrastructure on the human exposome. EIRENE will provide harmonised workflows covering all processes from sample collection and data acquisition to the knowledge provided to end users, accessible to academic researchers, private companies, public authorities, and citizens. Linking interdisciplinary (environmental, clinical, socio-economic) data coming from multiple sources will enable large-scale research activities and advance new scientific developments. This will lead to an improved understanding of the impact of the exposome on the European population, characterization of the risk factors behind the development of chronic conditions, and the discovery of novel tools for their prevention and treatment.

        Speaker: Prof. Jana Klanova (Masaryk University)
      • 10:00
        EC commitment to Open Science 30m

        Science today is in full digital transition, producing massive quantities of data and digital outputs. It is facing several challenges connected to criteria that do not match this transformation.

        Open Science has the potential to increase quality and efficiency of research and innovation, enhance creativity, collaboration and transparency, bringing back the trust in science through openness.

        The European Commission policy and commitments on Open Science (set out in the ERA policy agenda, in the Pact for R&I in Europe and in different EU Council conclusions) aim at supporting the digital transition of science and making Open Science the new normal. They foresee a process to reform research assessment, a future for scholarly communication that is fit for modern research, and a wider promotion of citizen and societal engagement. Moreover, the Open Science policy fosters the consolidation of the OS enablers that are the key foundation for making Open Science the modus operandi for researchers. The main enabler is the European Open Science Cloud, accompanied by support actions to implement FAIR data management in the research process.

        Speaker: Christian Cuciniello (European Commission)
    • 10:30 11:00
      Compute continuum for Open Science - Achievements of the EGI-ACE project Plenary 1st floor (Vienna Andel Prague)

      Plenary 1st floor

      Vienna Andel Prague

      EGI-ACE is the current flagship project of the EGI community, with a mission to empower researchers from all disciplines to collaborate in data- and compute-intensive Open Science through free-at-point-of-use services that are delivered through EOSC. During this session we will provide a short update about the main achievements of the project after 21 months.
      The session will cover:
      - Project concept and value for Open Science (Gergely Sipos, EGI-ACE Project Technical Coordinator)
      - Supporting scientists and use case achievements (Giuseppe La Rocca, Community Support Lead)
      - Evolving federated compute and storage technologies for EOSC (Enol Fernandez, Cloud Solutions Manager)

      Conveners: Gergely Sipos (EGI.eu), Giuseppe La Rocca (EGI.eu)
    • 10:30 11:00
      Demonstrations Ruby, Crystal

      Ruby, Crystal

      • 10:35
        FAIR EVA: Evaluator, Validator & Advisor 25m Ruby III

        Ruby III

        FAIR EVA (Evaluator, Validator and Advisor) has been developed to check the FAIRness level of digital objects from different repositories or data portals. Developed within the EOSC-Synergy project, it aims at helping data producers and data managers to evaluate the adoption of the FAIR principles based on the Research Data Alliance indicators, although the architecture is capable of adapting to new indicators. It requires the object identifier and the repository to check, and it can be adapted to different contexts and environments. This demo will present the tool itself, how it can be deployed, how the different tests work and how it can be adapted to different data systems using the plugin system.

        Speaker: Fernando Aguilar (CSIC)
      • 10:35
        openEO Platform: Large-scale Earth Observation Analysis on a federated compute infrastructure 25m Crystal Room

        Crystal Room

        Earth Observation satellites create a growing data archive enabling environmental monitoring services which significantly advance our knowledge about planet Earth. openEO Platform builds upon this data archive and allows users to access and process Earth Observation data for their needs on a federated infrastructure. This approach has several advantages. Firstly, the user does not need to download, store, and handle large amounts of Earth Observation data. Secondly, the federated compute platform enables the user to process data quickly and facilitates computations at large scale. Lastly, users can easily share their analyses with other users, which simplifies the reproducibility of scientific projects.
        openEO Platform builds on the successful development of the openEO Application Programming Interface (API), which was developed in the Horizon 2020 project openEO (2017–2020, see https://openeo.org/).
        The openEO project defined a common set of analytic operators for Earth Observation analysis which was implemented by several back-ends. This common architecture was expanded by an aggregation layer into openEO Platform, an operational, federated service running at EODC, VITO and Sinergise. openEO Platform is currently built with a strong focus on user co-creation and input from several use cases from a variety of disciplines. The use cases include CARD4L-compliant ARD data creation with user-defined parameterisation, forest dynamics mapping including time-series fitting and prediction functionalities, crop type mapping including EO feature engineering supporting machine-learning-based crop mapping, and forest canopy mapping supporting regression-based fraction cover mapping. Three programming interfaces (R, Python and JavaScript) are available to interact with openEO Platform and perform an Earth Observation analysis. EGI Check-in is implemented as the authentication mechanism to enable easy access for users. In this demonstration session we will showcase the use of the platform via Python Jupyter notebooks and a graphical user interface. The session will cover: sign-up, sign-in, submitting first small jobs, and a short introduction to larger-scale processing.
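
        The notebook part of the demonstration boils down to a few client calls. The sketch below is a minimal example with the openEO Python client; the platform URL, collection identifier, extents and band names are illustrative, and the OIDC login corresponds to the EGI Check-in based authentication mentioned above.

        ```python
        # Minimal sketch with the openEO Python client; URL, collection, extents and
        # bands are illustrative placeholders.
        import openeo

        connection = openeo.connect("https://openeo.cloud").authenticate_oidc()

        cube = connection.load_collection(
            "SENTINEL2_L2A",                                   # illustrative collection id
            spatial_extent={"west": 16.1, "south": 48.1, "east": 16.6, "north": 48.4},
            temporal_extent=["2022-06-01", "2022-06-30"],
            bands=["B04", "B08"],
        )

        ndvi = cube.ndvi(nir="B08", red="B04")   # (B08 - B04) / (B08 + B04)
        ndvi.max_time().download("ndvi_june_2022.tiff")
        ```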

        Speaker: Benjamin Schumacher (EODC)
      • 10:35
        Using your MATLAB license on EGI services 25m Ruby I, II

        Ruby I, II

        This demonstration will provide tutorials on how MATLAB users can connect to various EGI services with their own licenses to share, collaborate and access data and compute across the European Open Science network.

        1. MATLAB on EGI JupyterHub: Users will learn how they can use their own MATLAB licenses to access and analyze public datasets from hundreds of data providers via the EGI JupyterHub. Users can share research output between diverse user groups, call other languages (e.g. Python) from MATLAB and save data in widely accessible formats. Research communities can also leverage this service to have their own custom-built JupyterHub with MATLAB to allow users access to their cloud data. One such community, EISCAT3D, will demonstrate how they are successfully taking advantage of MATLAB on this service.

        2. MATLAB on EGI HPC services: Users will also learn how they can scale up their computing needs by using the High Performance Computing services offered by the various EGI Council members. Using MATLAB Parallel Computing Toolbox and MATLAB Parallel Server, users can access multiple compute nodes at their HPC provider of choice. Ghaith Makey and Michaël Barbier from the Simply Complex Lab at Bilkent University in Turkey will present research and demonstrate their use of parallel computing workflows with MATLAB.
        Speakers: Dr Shubo Chakrabarti (MathWorks), Ingemar Haggstrom (EISCAT), Dr Ghaith Makey (Simply Complex Lab, Bilkent University), Dr Michaël Barbier (Simply Complex Lab, Bilkent University)
    • 11:00 11:30
      Coffee Break 30m
    • 11:30 13:10
      Demonstrations Ruby III, Sapphire

      Ruby III, Sapphire

      • 11:30
        motley-cue: SSH access with OIDC tokens 25m Ruby III

        Ruby III

        OpenID Connect (OIDC), an authentication protocol that allows users to be authenticated by an external trusted identity provider, is becoming the de-facto standard for modern Authentication and Authorisation Infrastructures (AAI). Although typically used for web-based applications, there is an increasing need for integrating shell-based services, such as Secure Shell (SSH), with federated AAIs.

        SSH requires local identities that need prior provisioning, and additional credentials such as SSH keys. Using OIDC for SSH can simplify user management for service administrators, and eliminate the need for SSH key management for users.

        Our solution for SSH access via OIDC enables on-the-fly account provisioning and provides a flexible authorisation concept, without modifying existing SSH software or requiring additional service credentials. We developed a set of client and server-side tools that seamlessly integrate with existing SSH software and local identity management policies.

        The client-side tools allow users to directly log into a server with their federated credentials via valid OIDC tokens, without any prior application for an account.

        This contribution aims to present the server-side component, its architecture, and latest developments. The server-side consists of a custom PAM module and a daemon for mapping OIDC identities to local identities (motley-cue). motley-cue uses federated authorisation models for configuring user access, based on Virtual Organisation membership and assurance levels. Moreover, it provides an extensible interface able to forward provisioning events into any local user management system --- support exists for Unix accounts, LDAP, and KIT user management, but admins can extend this to plug in their custom systems. Most recent developments include LDAP integration and support for approval-based provisioning of local accounts.

        All software is free to use and is available on GitHub under MIT license, with support for the major Linux distributions. The software was tested with several major AAIs, such as EGI-Checkin or Helmholtz AAI.

        Speakers: Diana Gudu (KIT), Marcus Hardt (KIT-G), Gabriel Zachmann (Karlsruhe Institute of Technology)
      • 11:45
        Automatic storing, sharing and archiving datasets with Onedata 25m Sapphire

        Sapphire

        In many scientific disciplines, expensive equipment is nowadays shared. The users – scientists – request specific experiments from facilities that perform them on their behalf. The outcome of such an experiment is a dataset, which in many cases can be very large. The system we introduce provides an easy way to make data produced by such specialized devices available to the scientific community. It is used to manage the storage of experimental data across several tiers of physical data storage, consisting of the experimental facilities where data are acquired, national or scientific-domain data storage services, and computing facilities provided at both national and European levels.

        The software is built on top of the Onedata system. It supports the whole process: storing the data produced by the device, setting up all necessary Onedata options, publishing the datasets, and archiving them in permanent storage. It implements varying data-handling policies, e.g., expiration at the acquisition facility, archiving in multiple copies, and data publication after an embargo period. It can also export datasets to supported repositories or metadata to metadata catalogues. The demonstrated application automatically controls the whole data workflow according to the defined Data Management Plan, which is attached to the dataset as a YAML file.
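
        To make the idea concrete, the sketch below shows what such a YAML-attached Data Management Plan could look like and how the controlling application might read it. The field names are invented for illustration only and do not reflect the actual format used by the tool.

        ```python
        # Illustrative sketch only: a hypothetical YAML Data Management Plan covering
        # the policies described above (expiration, archive copies, embargo). Field
        # names are invented and do not reflect the tool's real format.
        import yaml

        dmp_text = """
        dataset: experiment-2022-0042
        acquisition_facility:
          expiration_days: 30          # remove from instrument storage after 30 days
        archive:
          copies: 2                    # keep two permanent copies
        publication:
          embargo_months: 12           # publish metadata now, data after the embargo
        """

        dmp = yaml.safe_load(dmp_text)
        print(dmp["publication"]["embargo_months"])
        ```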

        We are going to cover in our demonstration:
        - briefly set up and run Oneprovider,
        - set up our application,
        - create a test dataset with metadata,
        - run the data workflow with several configuration possibilities,
        - access the dataset through Onedata web interface and CLI Oneclient,
        - a presentation of processing CryoEM data in Scipion, adapted to run with Onedata in containers and Kubernetes.

        Speaker: Tomáš Svoboda (Masaryk University)
      • 12:00
        What's New with Globus 25m Ruby III

        Ruby III

        Globus is a widely used platform for research data management among EU institutions. While many institutions have traditionally used Globus primarily for reliable file transfer, the platform has evolved to provide a comprehensive set of data management capabilities.

        We will describe and demonstrate the major enhancements made over the past two years, and illustrate how new features and APIs can support the development of applications in service of research. Topics covered will include the new architecture for Globus Connect Server, support for additional storage systems, services for data search and discovery, and various capabilities for automating large-scale data flows. We will also provide a preview of services that support remote computation and an overview of our product roadmap for the coming year.
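
        For developers in the audience, the sketch below shows the flavour of the Globus platform APIs by submitting a transfer with the Globus Python SDK (globus-sdk). The endpoint UUIDs and paths are placeholders, and the interactive OAuth2 login shown is only one of several possible flows.

        ```python
        # Minimal sketch of a transfer with globus-sdk; endpoint UUIDs, paths and the
        # client ID are placeholders, and the interactive login is abbreviated.
        import globus_sdk

        auth_client = globus_sdk.NativeAppAuthClient("YOUR_APP_CLIENT_ID")
        auth_client.oauth2_start_flow()
        print("Login here:", auth_client.oauth2_get_authorize_url())
        tokens = auth_client.oauth2_exchange_code_for_tokens(input("Paste auth code: "))
        transfer_tokens = tokens.by_resource_server["transfer.api.globus.org"]

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(transfer_tokens["access_token"])
        )

        tdata = globus_sdk.TransferData(tc, "SOURCE-ENDPOINT-UUID", "DEST-ENDPOINT-UUID",
                                        label="EGI2022 demo transfer")
        tdata.add_item("/source/path/dataset/", "/dest/path/dataset/", recursive=True)
        task = tc.submit_transfer(tdata)
        print("Task ID:", task["task_id"])
        ```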

        Speaker: Vas Vasiliadis (University of Chicago)
      • 12:15
        FAIR and reproducible data management and analysis with openBIS 25m Sapphire

        Sapphire

        Research data management (RDM) in line with the FAIR (Findable, Accessible, Interoperable and Reusable) data principles is increasingly becoming an important aspect of good scientific practice. In experimental disciplines, FAIR RDM is challenging because every step of the research process needs to be accurately documented, and data needs to be securely stored, backed up, and annotated with sufficient metadata to ensure reusability and reproducibility. The use of an integrated Electronic Lab Notebook (ELN) and Laboratory Information Management System (LIMS), with data management capabilities, can help researchers to achieve this goal. In close collaboration with scientists, the Scientific IT Services (SIS) of ETH Zürich have developed and operated such an integrated solution, openBIS, for more than 10 years. As part of the EGI-ACE project, SIS has offered the openRDM.eu service since 2021. openRDM.eu supports European research groups with the installation, on-boarding and use of openBIS.
        Recently, SIS has been collaborating with scientists from experimental labs at ETH Zürich to enable analysis of their research data managed with openBIS in a reproducible, scalable and collaborative way. To this end, we have developed a platform that provides a connection between openBIS and established open-source tools such as Git for code management, Binder for reproducible computing environments, JupyterLab for interactive computational notebooks, and Kubernetes for scalability.
        This presentation will provide an overview of the openBIS software as well as the openRDM.eu service, followed by a demonstration of how data stored with openBIS can be processed and analysed in a FAIR-compliant and reproducible way.
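
        As a minimal sketch of the notebook side of this workflow, the example below connects to an openBIS instance with the pybis client and lists registered objects. The server URL, credentials and space name are illustrative placeholders.

        ```python
        # Minimal sketch using the pybis client; URL, credentials and space are
        # illustrative placeholders for a real openBIS instance.
        from pybis import Openbis

        o = Openbis("https://openbis.example.org")
        o.login("username", "password")

        samples = o.get_samples(space="MY_LAB")   # overview of registered objects
        print(samples.df.head())                  # pandas DataFrame view

        o.logout()
        ```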

        Speakers: Dr Caterina Barillari (Scientific IT Services, ETH Zürich), Dr Henry Lütcke (Scientific IT Services, ETH Zurich)
      • 12:30
        Serverless workflows along the computing continuum with OSCAR/SCAR: Use cases from AI/ML inference 25m Ruby III

        Ruby III

        OSCAR is an open-source platform that supports serverless computing for event-driven data-processing applications. It abstracts away the deployment and management of computing resources through elastic Kubernetes clusters. Thanks to its integration with the Infrastructure Manager (IM), deployed as part of the European Open Science Cloud (EOSC), users can self-deploy these clusters on public and on-premises Clouds, including the EGI Federated Cloud.

        It supports object-storage systems such as MinIO to trigger the execution of container-based services on file uploads, and the EGI DataHub, for mid-term data storage based on Onedata. Moreover, OSCAR can run on minified ARM-based clusters via K3s, thus making it possible to run it on the Edge.

        Last year, we created new use-case examples and added new functionalities, such as the integration with Knative to improve auto-scaling in synchronous invocations, integration with Apache YuniKorn, support for private registries, and the ability to re-schedule jobs to service replicas.

        OSCAR is integrated with SCAR, an open-source tool that pioneered the use of containers within AWS Lambda. A common YAML-based Functions Definition Language (FDL) is available to define workflows, with a composer tool that simplifies its creation. The latest releases of SCAR include support for mounting EFS volumes and for using Amazon ECR to support larger container images.

        In this contribution, we plan to demonstrate the benefits of the combination of OSCAR/SCAR to support event-driven data-processing workflows along the computing continuum, where partial processing can take place in the Edge and additional compute-intensive processing can take place on the EGI Federated Cloud and AWS. For this, several use cases will be demonstrated from the field of AI/ML, such as text-to-speech conversion, synchronous inference of ML models existing in the Deep Open Catalog or mask detection in public crowds, the latter exemplified as part of the AI-SPRINT project.

        Speaker: Sebastián Risco (Universitat Politècnica de València)
      • 12:45
        How to access and use the PaaS Orchestrator Service in the EOSC Marketplace 25m Sapphire

        Sapphire

        The PaaS Orchestrator is one of the services available to research communities through the EOSC Marketplace: it allows access to distributed cloud compute and storage resources in a transparent and federated way. Users can easily deploy services without having to worry about where the resources are available and how to create and configure them: all these problems are automatically solved by the PaaS Orchestrator, which is able to identify the federated providers where the user is entitled to consume resources, thanks to the agreements established between the Virtual Organisations and the providers. Moreover, a set of “pre-cooked” service templates is available through the Orchestrator Web Dashboard: once logged in, the user can access a service portfolio that covers different categories of services, from the instantiation of virtual machines (with or without additional block storage), to the automatic installation of software like Docker, Docker Compose, Elasticsearch and Kibana, to the deployment of complex architectures such as Kubernetes clusters. In addition, new functionalities have recently been implemented in the PaaS to provide solutions for delivering trusted environments for data analysis (e.g. deployments on private networks, automatic disk encryption).
        The demo will highlight the main functionalities of the PaaS and will show how users can easily interact with the orchestration system using both the command line interface and the web dashboard.

        Scientific communities are encouraged to explore the PaaS Dashboard: if the available services do not fit their requirements, they can contact the support team explaining their use-case. The development team will help them to exploit the PaaS Orchestrator functionalities and, if feasible, new services will be included in the catalogue as a result of these interactions with the research communities.

        Speaker: Marica Antonacci (INFN)
    • 11:30 13:00
      Supporting innovation in Europe for the benefits of SMEs: the EUHubs4data experience Ruby I&II

      Ruby I&II

      Most of Europe’s SMEs lag in data-driven innovation. To tackle this problem, the EU-funded EUHubs4Data project has started building a European federation of Data Innovation Hubs based on existing critical players in this area. To this end, it connects with data incubators and platforms, SME networks, AI communities, skills and training organisations and open data repositories.

      This session will showcase the engagement with European startups and SMEs through practical examples from the project’s experiments, showing that solving real-world problems with cross-border DIH collaboration is possible. Participants will hear about the challenges and solutions from both the experiments and the DIH side, and learn about the third open call for experiments that EUHubs4Data will launch in early September. There will also be the chance to discuss the implications of the Data Governance Act for DIHs and research infrastructures from the perspective of the Federation.

      • 11:30-11:50 Introduction on EUH4D, the experiments and the 3rd open call, Daniel Alonso (ITI)
      • 11:50-12:00 EUH4D Federated catalogue implementation, Andrea Manzi (EGI)
      • 12:00-12:30 Experiments in action: the perspective of an SME (Andrei Costin, BINARE) and of a DIH supporting the experiments in practice (Marcin Plociennik, PSNC)
      • 12:30-12:50 The Data Governance Act: practical implications for DIHs and research infrastructures, Natalie Bertels (KUL)
      • 12:50-13:00 Q&A with the participants, facilitated with Mentimeter
      Conveners: Andrea Manzi (EGI.eu), Ilaria Fava
    • 11:45 13:15
      EGI-ACE Lightning Talks: Compute continuum use cases Quartz

      Quartz

      EGI-ACE is a 30-month project (Jan 2021 - June 2023) with a mission to empower researchers from all disciplines to collaborate in data- and compute-intensive research enabled by free-at-point-of-use services. By building on the EGI federation and its providers, EGI-ACE delivers the EOSC Compute Platform (ECP), a federated system of compute and storage infrastructure extended with platform services to support diverse types of data processing and data analytics cases. The ECP includes High Throughput Compute (HTC), Cloud Compute, High Performance Compute and Container Compute services. The ECP empowers scientific data spaces, data analysis platforms and thematic services.
      The session will include presentations of the ECP efforts, highlighting how a compute continuum can be built and used in EGI.

      Convener: Gianni Dalla Torre (EGI.eu)
      • 11:55
        Towards a European e-infrastructure for plant phenotyping 10m

        In recent years, technological progress has been made in plant phenomics, with major improvements in imaging and sensor technologies. Various initiatives have helped to structure the European phenotyping landscape (EMPHASIS, EPPN) and enable researchers to use facilities, resources and services for plant phenotyping across Europe.

        The EGI-ACE project has given us the opportunity to develop data services and build a federated and interoperable e-infrastructure allowing researchers to share and analyze phenotyping data. Access to the services operated by the EGI Federation made it possible to set up this infrastructure.

        We have taken advantage of the EGI Cloud service to host the open-source Phenotyping Hybrid Information System PHIS (Neveu et al, 2019; www.phis.inra.fr). The information system is connected to the EGI Check-In service for federated authentication.

        We also plan to use other services provided by the EGI-ACE project, such as DataHub, the distributed storage service, and Deep Hybrid DataCloud, the deep learning and machine learning portal for EOSC.

        The European plant phenotyping community will benefit from this e-infrastructure. Early adopter users are researchers using the phenotyping platforms at UCPH, UHEL and other universities participating in the NordPlant hub (a climate and plant phenomics university hub for sustainable agriculture and forest production in future Nordic climates) and researchers from the French plant phenomic Infrastructure PHENOME-EMPHASIS.

        Speaker: Vincent Negre (INRAE)
      • 12:05
        MATRYCS: A Big Data Platform for Advanced Services in the Building domain 8m

        MATRYCS is a European Commission co-funded project, started in October 2020, with a duration of three years; its goal is to design and develop an ICT platform for Big Data management in the building domain. The MATRYCS platform allows stakeholders to create new business models and business opportunities relying on the value extracted from shared data.

        The platform is deployed by leveraging the cloud capabilities of the EGI infrastructure, provided in the context of the Call for Use Cases in the EGI-ACE project. The possibility to use the EGI infrastructure allows a better allocation of resources and an effective definition of the MATRYCS architecture built on the EGI infrastructure; this architecture is based on three software layers on top of the physical layer, which can be directly mapped to the different stages of the Big Data Value Chain.
        - MATRYCS Governance layer: the services that realise the middleware needed for acquiring, managing and exposing the data. It includes the services required to guarantee data interoperability, cleaning, validation and storage.
        - MATRYCS Processing layer: the components needed for the modelling, training, testing and validation of AI- and ML-based algorithms.
        - MATRYCS Analytics layer: the set of services and tools offered to end-users for implementing complex building management applications. As the architecture aims at supporting end-users in the creation of innovation and business, the available services/tools are exposed through the MATRYCS toolbox via different business model options, which include SaaS, PaaS and IaaS.
        The MATRYCS experience demonstrates how it is possible to develop industry-oriented applications based on the EGI infrastructure, creating new value-added business opportunities.

        Speaker: Dario Pellegrino (Engineering)
      • 12:15
        Using European Open Science Cloud infrastructure for rapid simulations of large-scale global reservoirs 8m

        A.H. Weerts, Jaap Langemeijer, Pieter Hazenberg

        Water reservoirs play an important role in relation to water security, flood risk, agricultural production, hydropower, hydropower potential, and environmental flows. However, long-term daily information on reservoir volume, inflow and outflow dynamics is not publicly available. Deriving long-term reservoir dynamics for many reservoirs across the globe with a distributed hydrological model requires large amounts of computing power, so these types of simulations are generally performed on supercomputers. Nowadays, public cloud computing infrastructure offers an interesting alternative and allows one to quickly access hundreds to thousands of compute nodes. The current work presents an example of making use of the EOSC by simulating the dynamics of 3236 headwater reservoirs on a Kubernetes cluster. Within the cloud, distributed model forcing and hydrological parameters at a 1-km grid resolution are derived using HydroMT, which are subsequently used by wflow_sbm to perform long-term hydrological simulations over the period 1970-2020. To enable operation in the cloud, use is made of the Argo workflow engine, which is able to effectively schedule the sequential execution of the HydroMT and wflow_sbm containers. We will present the modelling setup executed within the public cloud, as well as some of the results derived in this manner, comparing model results with in situ and satellite observations.
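
        As a rough illustration of how such a two-step workflow can be submitted programmatically, the Python sketch below creates an Argo Workflow custom resource with the official Kubernetes client. It is a minimal sketch under stated assumptions: the container image names and commands are placeholders, not the project's actual images or configuration.

            from kubernetes import client, config

            # Assumes a kubeconfig for the target cluster with Argo Workflows installed.
            config.load_kube_config()

            # Hypothetical two-step workflow: build forcing/parameters with HydroMT,
            # then run the wflow_sbm simulation on the result.
            workflow = {
                "apiVersion": "argoproj.io/v1alpha1",
                "kind": "Workflow",
                "metadata": {"generateName": "reservoir-sim-"},
                "spec": {
                    "entrypoint": "simulate",
                    "templates": [
                        {
                            "name": "simulate",
                            "steps": [
                                [{"name": "build-model", "template": "hydromt"}],
                                [{"name": "run-model", "template": "wflow"}],
                            ],
                        },
                        {
                            "name": "hydromt",
                            "container": {
                                "image": "example/hydromt:latest",  # placeholder image
                                "command": ["hydromt", "build", "wflow", "/data/model"],
                            },
                        },
                        {
                            "name": "wflow",
                            "container": {
                                "image": "example/wflow:latest",  # placeholder image
                                "command": ["wflow_cli", "/data/model/wflow_sbm.toml"],
                            },
                        },
                    ],
                },
            }

            api = client.CustomObjectsApi()
            api.create_namespaced_custom_object(
                group="argoproj.io", version="v1alpha1",
                namespace="default", plural="workflows", body=workflow,
            )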

        Speaker: Bjorn Backeberg (DELTARES)
      • 12:25
        Bridging Cloud and HPC towards High Performance Data Analytics for climate science 8m

        D. Elia1, F. Antonio1, C. Palazzo1, A. D’Anca1, S. Fiore2 and G. Aloisio1,3

        1 Euro-Mediterranean Center on Climate Change (CMCC) Foundation, Lecce, Italy

        2 University of Trento, Trento, Italy

        3 University of Salento, Lecce, Italy

        The Big Data revolution that started at the beginning of this century has been propelled also by the advent of cloud computing solutions, which provide an efficient and cost-effective model for accessing resources on demand according to the application workload and functional requirements. These new technologies have been gradually exploited in several scientific domains to address the issues associated with large data volumes, besides the more traditional use of High Performance Computing (HPC), which is still required for several compute-intensive applications. The next natural step in this evolution concerns the integration of the Big Data (cloud-based) and HPC software ecosystems to support High Performance Data Analytics (HPDA) scientific scenarios at extreme scale. However, the two software ecosystems rely on very different service usage models and target different application requirements, making their mixed usage complicated. Software containers can represent the layer for supporting software portability and transparent deployment of scientific HPDA solutions over multiple platforms, allowing developers to bundle the application and all its dependencies (including data dependencies) into a single software image. In this regard, the recent emergence of HPC-friendly container technologies (e.g. udocker, Singularity, Sarus) can enable the use of this model also on HPC infrastructures, thus providing a bridge between Cloud- and HPC-based solutions and enabling new paradigms such as HPC as a Service (HPCaaS).

        In the context of the EGI-ACE project, an HPC pilot concerning the use of data science, data management and HPDA solutions for climate science applications is being developed. The pilot is aimed at understanding how containerisation technologies can support the integration of cloud and HPC infrastructures for large-scale data analytics and management. This contribution presents the container-based solutions explored and implemented in the context of the HPC pilot, towards transparent and portable deployment of HPDA solutions for climate science on top of the resources made available in the EGI infrastructure.
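
        As a simple illustration of the container-based approach on an HPC system where no privileged container runtime is available, the Python sketch below drives udocker from a login or batch node. It is only a sketch: the image name and analysis command are hypothetical placeholders, not the pilot's actual software.

            import subprocess

            IMAGE = "registry.example.org/climate/hpda-analytics:latest"  # hypothetical image
            CONTAINER = "hpda"

            def run(cmd):
                """Run a shell command and fail loudly if it returns non-zero."""
                print("+", " ".join(cmd))
                subprocess.run(cmd, check=True)

            # Pull the image and create a named container (no root privileges required).
            run(["udocker", "pull", IMAGE])
            run(["udocker", "create", f"--name={CONTAINER}", IMAGE])

            # Execute the containerised analysis, mounting a scratch directory for data.
            run([
                "udocker", "run",
                "--volume=/scratch/cmip6:/data",
                CONTAINER,
                "python", "/opt/analysis/run_analysis.py", "--input", "/data",
            ])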

        Speakers: Donatello Elia (CMCC Foundation), Fabrizio Antonio (CMCC Foundation), Cosimo Palazzo (CMCC Foundation), Alessandro D'Anca (CMCC Foundation), Sandro Fiore (University of Trento, Trento, Italy), Prof. Giovanni Aloisio (Advanced Scientific Computing Division, Centro Euro-Mediterraneo sui Cambiamenti Climatici), Enol Fernandez (EGI.eu)
      • 12:35
        The PHIRI project: advances towards an infrastructure for population health research 8m

        The secondary use, for research purposes, of health data originating in health systems poses a series of challenges: access is constrained legally and organisationally because of the data's high sensitivity; since the primary purpose of the data is caregiving, there is a lack of semantic and syntactic interoperability, i.e. of common data models and codifications, making it hard to combine and exploit the datasets; and, last but not least, technical interoperability is rarely addressed when defining the analysis tools and environments.

        The technological work packages of the PHIRI project are producing a series of prototypes of a federated analysis infrastructure for population health research that address the above challenges in a privacy-by-design manner, following a data-centric approach, i.e. minimising the movement of personal data. The prototypes' design and implementation are driven by four use cases that use real-world data from health systems to study the effects of the COVID-19 pandemic on four aspects of population health: mental health, delayed cancer treatments, inequalities in access to treatments, and perinatal health. The prototypes will set the basis of a fully operational research infrastructure to be adopted in the ESFRI Roadmap.

        The first successfully delivered PHIRI prototype is based on a containerised solution. Containers encapsulate the analytics algorithms using a common data model defined for each use case. The containers are deployed at the premises of the partners that act as or interact with data processors (in GDPR terms) and produce "local" results that are manually sent to a coordination node performing the meta-analysis.

        Current work focuses on the automation of container deployment and node coordination, the selection of a single common data model and the development of federated learning algorithms. For its production version, the PHIRI infrastructure is intended to be deployed using the EOSC infrastructure and to interact with the future EHDS infrastructure.

        Speaker: Dr Juan Gonzalez-Garcia (Instituto Aragonés de Ciencias de la Salud)
      • 12:45
        A drug discovery pipeline integrating the processing and analysis of NMR spectra and the identification of lead compounds. 8m

        In this work we present a web service that allows users to execute the full workflow of a Nuclear Magnetic Resonance (NMR)-based drug discovery pipeline. NMR spectroscopy has been widely used in the early steps of drug discovery. It is especially suited to the structure-based approach in lead design and is the most powerful method for studying the structure, dynamics, and interactions of molecules in solution. The NMR-based drug discovery pipeline starts with the acquisition of 2D 1H-15N heteronuclear single quantum correlation (HSQC) spectra on the free protein target, followed by the acquisition of the same experiments for the protein in the presence of different ligands. The changes in the chemical environment of the protein nuclei near the drug binding site induce detectable chemical shift perturbations (CSPs). The measurement of CSPs indicates whether a binding event has occurred at all and, if so, can provide information on the ligand's affinity for the target. Here we have developed a workflow that takes as input an HSQC peak assignment of the free protein and a series of raw experimental HSQC data for the screening of a library of candidate ligands. The spectra are automatically processed and assigned; these data are then evaluated to identify the peaks shifted due to the presence of the ligand. The workflow is made available via a user-friendly, publicly available web interface. It was developed as a Nextflow pipeline and all software was packaged as Docker images. For the front-end and back-end services we have used the React JS JavaScript framework and the Java Spring Boot framework, respectively.
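
        For illustration, the short Python function below shows one common way to compute a combined chemical shift perturbation from the free and bound HSQC peak positions; the nitrogen scaling factor of 0.14 is a frequently used convention, and the exact weighting used by this pipeline may differ.

            import math

            def csp(dH_free, dN_free, dH_bound, dN_bound, alpha=0.14):
                """Combined 1H/15N chemical shift perturbation (ppm).

                alpha rescales the 15N shift change to the 1H scale; 0.14 is a
                commonly used value, but other weightings exist.
                """
                d_h = dH_bound - dH_free
                d_n = dN_bound - dN_free
                return math.sqrt(d_h ** 2 + (alpha * d_n) ** 2)

            # Example: a residue whose peak moves slightly upon ligand binding.
            print(round(csp(8.25, 120.4, 8.31, 121.1), 3))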

        Speaker: Dr Andrea Giachetti (CIRMMP)
      • 12:53
        Big data in livestock genomics can feed new concepts in One Health 8m

        The epidemiological, biological and virological characteristics of many viruses, including their potential ability to cross species barriers and become zoonoses, suggest that livestock species living close to humans should be considered as part of a global control effort in a renewed One Health concept. In this context, it is important to comprehensively evaluate whether animals could represent risk factors for human health (and vice versa), considering their genetic susceptibility to diseases and their potential role as reservoirs of infectious agents. Here, we mined more than 30 TB of DNA sequences from 1471 animals, including cattle, pigs, rabbits, and avian species, analysing these datasets from two perspectives: (i) evaluating the variability in genes that are directly involved in the progress of host infections by viruses; (ii) obtaining a first, unconventional global picture of the animal virome contained in these datasets. Genomics data were from publicly available resources and derived from several breeds/populations and different sequencing projects around the world. Variants from the host genome datasets were compared with those present in humans to infer susceptibility/resistance to virus infections. The results can help to design genetic conservation strategies for animal genetic resources. Moreover, the virome characterisation from these whole-genome sequencing datasets of the host livestock species can help to evaluate viruses that silently circulate, supporting the establishment of a risk evaluation system. Overall, the possibility to rapidly obtain, store and process genomics data, as in the AnGen1H project, led to the discovery of new elements to consider as potential risk factors to be included in One Health perspectives.

        Speaker: Dr Samuele Bovo (Department of Agricultural and Food Sciences, Division of Animal Sciences, University of Bologna)
    • 11:45 12:30
      Support for Ukraine Topaz

      Topaz

      Military attacks on Ukraine starting in February 2022 have caused immense destruction and human suffering and have led to a refugee crisis in Europe. Research infrastructures and the Ukrainian data centres are no exception; the war caused loss and destruction also to the technical infrastructures in the EGI Federation.

      Many scientists and researchers are among the millions of Ukrainians who have fled their country. Sergiy Svistunov from the Ukrainian Grid Infrastructure is not one of them. He stayed in Ukraine, determined to keep the infrastructure services up and running, deeply committed to science and his country’s independence.

      We will meet Sergiy Svistunov, a Ukrainian representative in the EGI Council, who will tell us how the digital infrastructures supporting research in the country have been coping during the past months of the war. With Sergiy, we will discuss how we can join our forces to support the local research community and the restoration of the computing facilities that were lost during the past months of the war.

      The session will be split into two parts:
      A Q&A part, where participants can directly ask Sergiy, who will share his experiences and provide insights into the current situation, and
      A Discussion on how we can support the rebuilding effort and ensure the continuity of Ukraine's science as well as the well-being of our colleagues.

      Conveners: Magdalena Brus (EGI Foundation), Tiziana Ferrari (EGI.eu)
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:00
      EGI-ACE Lightning Talks: Technologies for a Compute Continuum Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      Convener: Levente Farkas
      • 14:00
        An Efficient Distributed Storage Solution for Edge Computing Environments 8m

        Due to the continuous development of the Internet of Things (IoT), the volume of data these devices generate is expected to grow dramatically in the future. As a result, managing and processing such massive amounts of data at the edge becomes a vital issue. Edge computing moves data and computation closer to the client, enabling latency- and bandwidth-sensitive applications that would not be feasible using cloud and remote processing alone. Nevertheless, implementing an efficient edge-enabled storage system is challenging due to the distributed and heterogeneous nature of the edge and its limited resource capabilities. To this end, we propose a lightweight hybrid distributed edge/cloud storage framework which aims to improve the Quality of Experience (QoE) of end-users by migrating data close to them, thus reducing data transfer delays and network utilization.

        Speaker: Dr Antonios Makris (Harokopio University of Athens)
      • 14:10
        Computing at INCDTIM and beyond 8m

        Grid Computing, Cloud Computing and High-Performance Computing are three different fields with the same underlying idea, namely the processing and storing of data. Grid is standardized, Cloud is one step toward standardization, and HPC is an ongoing effort around the world with a lot of in-house possibilities. At the National Institute for Research and Development of Isotopic and Molecular Technologies (INCDTIM) we have been processing data for the last 15 years at the Grid site RO-14-ITIM, and for the last 8 years at a 7 TFlops HPC system, but we would like to add a cloud computing system at our Institute. This paper describes what we have at the Institute and what projects we have to fulfill our long-lasting dream of having a public/private cloud at our location.

        Speaker: Dr Felix Farcas (INCDTIM)
      • 14:20
        FedCloud client: the powerful client for EGI Federated Cloud 8m

        The EGI Federated Cloud consists of many OpenStack sites from different organizations. In the past, users were often advised to access the IaaS services via the official endpoints of the individual sites, so a universal client tool that can operate with all sites in the federation is highly desirable.

        The FedCloud client is a high-level Python package providing a command-line client designed for interaction with the OpenStack services in the EGI infrastructure. The client can access various EGI services and perform many tasks for users, including managing access tokens, listing services and, most importantly, executing commands on OpenStack sites in the EGI infrastructure.

        Although it uses the OpenStack client as its backend, the FedCloud client works with high-level abstractions of the federation: site and VO names are the main inputs for most operations. From the user's point of view, site and VO names are much friendlier and more memorable than the site endpoints and project IDs required by plain OpenStack commands. Furthermore, the FedCloud client can perform federation-wide operations, e.g. listing all VMs in a VO across all sites.

        The FedCloud client can be considered the shell for the EGI Federated Cloud. It is designed to be used in scripts for automation or called directly from Python code. With native support for JSON output, the results can easily be processed in scripts, which enables powerful tools such as listing all owned VMs in a simple way.
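
        As a flavour of what such scripting can look like, the Python sketch below calls the fedcloud command line tool to list virtual machines in a VO; the VO, site and option names are illustrative, so the FedCloud client documentation should be consulted for the exact syntax.

            import subprocess

            VO = "vo.access.egi.eu"   # example VO name
            SITE = "ALL_SITES"        # illustrative: iterate over every site in the federation

            # Check the validity of the current access token
            # (command layout is illustrative; verify against the FedCloud client docs).
            subprocess.run(["fedcloud", "token", "check"], check=True)

            # List servers owned in the VO across the selected site(s).
            result = subprocess.run(
                ["fedcloud", "openstack", "server", "list", "--site", SITE, "--vo", VO],
                check=True, capture_output=True, text=True,
            )
            print(result.stdout)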

        Speaker: Viet Tran (IISAS)
      • 14:30
        DPM to dCache migration 8m

        DPM storage support is gradually declining and will be discontinued in the coming years. Computing sites with this grid storage must decide what to use as their future storage technology, and each migration strategy comes with different requirements in terms of site administrator expertise, operational effort and expected downtime. We will describe the migration to dCache, which relies on tools distributed with the latest version of DPM and does not need to copy any data files. This method provides a quick and easy way to make a one-to-one grid storage replacement transparent to client applications with less than a day of downtime. The DPM to dCache migration tools have already been successfully used for production sites with storage sizes up to 5PB and 50M objects, and we will briefly describe the experience from the actual migration.

        Speaker: Petr Vokac (Czech Technical University in Prague (CZ))
      • 14:40
        Service migration and high availability via Dynamic DNS service 8m

        Nowadays, more and more services are dynamically deployed in Cloud environments. Usually, the services hosted on virtual machines in the Cloud are accessible only via IP addresses or pre-configured hostnames given by the target Cloud providers, making it difficult to provide them with meaningful domain names. The Dynamic DNS service for the EGI Infrastructure was developed to solve this problem.

        The Dynamic DNS service provides unified, federation-wide Dynamic DNS support for VMs in the EGI infrastructure. Users can register meaningful and memorable DNS hostnames in given domains (e.g. my-server.vo.fedcloud.eu) and assign them to the public IPs of their servers.

        By using Dynamic DNS, users can host services in the EGI Cloud under meaningful service names, can request valid server certificates in advance (critical for security), and gain many other advantages.

        This talk is devoted to special use cases of the Dynamic DNS service: service migration and high availability. There are many software solutions for building highly available services, but they are mostly designed for a single site or rely on load balancers. If the entire site hosting the services is down, e.g. due to a power outage, software solutions like keepalived/haproxy cannot help.

        The Dynamic DNS service can be used to achieve high availability for critical services that need to operate even when the whole cloud site hosting them is down. Critical services may have backup instances deployed at other sites located in other regions, to minimize the risk that all instances of the service are down at the same time. Simple scripts check the health of the instances and assign the service endpoint to a working instance via the Dynamic DNS service. Implementing such a solution via Dynamic DNS is very simple and has no single point of failure. The EGI secret management service [1] is an example of this solution.

        1. https://docs.google.com/document/d/18uqpZ2AkdAm9WMsDfQgDnv4Y4qMyoUpBilsLiHPrfvk/edit?usp=sharing
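
        A minimal failover script of this kind could look like the Python sketch below. It assumes an nsupdate.info-style update endpoint protected by per-host credentials; the URL, hostname, IP addresses and health-check path are hypothetical placeholders rather than the actual service configuration.

            import requests

            SERVICE_HOST = "my-service.vo.fedcloud.eu"               # hypothetical registered hostname
            UPDATE_URL = "https://nsupdate.example.org/nic/update"   # assumed update endpoint
            UPDATE_SECRET = "<per-host update secret>"
            INSTANCES = ["203.0.113.10", "198.51.100.20"]            # primary and backup public IPs

            def healthy(ip):
                """Return True if the service instance at this IP answers its health check."""
                try:
                    # Certificate validation is relaxed here because the probe targets a raw IP.
                    r = requests.get(f"https://{ip}/health", timeout=5, verify=False)
                    return r.status_code == 200
                except requests.RequestException:
                    return False

            # Point the DNS name at the first healthy instance.
            for ip in INSTANCES:
                if healthy(ip):
                    requests.get(
                        UPDATE_URL,
                        params={"hostname": SERVICE_HOST, "myip": ip},
                        auth=(SERVICE_HOST, UPDATE_SECRET),
                        timeout=10,
                    )
                    break
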
        Speaker: Viet Tran (IISAS)
    • 14:00 15:00
      Integrated High Performance Compute services for national and international customers Opal (Vienna Andel Prague)

      Opal

      Vienna Andel Prague

      EGI-ACE and several other projects in the EOSC context are addressing issues around interoperability of the infrastructures (AAI, data movement, etc.), looking at resource allocation and access policies, and aiming to ultimately enrich the EOSC offering and increase the adoption of HPC.

      In this session we will show some of the ongoing work on HPC integration in Europe and will discuss possible collaboration paths.

      Convener: Enol Fernandez (EGI.eu)
      • 14:00
        HPC in EGI-ACE 15m

        The EOSC Compute Platform, delivered by the EGI-ACE project, is a system of federated compute and storage facilities, complemented by diverse access, data management and compute platform services.
        This session will focus on how HPC systems are made available by EGI-ACE to serve both national and international open science projects. The content will cover:
        - EOSC-compliant federated user access management on HPC systems.
        - Availability and reliability monitoring of federated HPC sites.
        - Integrated usage accounting across HPC, cloud and HTC sites.
        - Access to distributed, federated data from HPC systems.
        - Portable container-based applications for cloud compute, HTC and HPC systems.

        Speaker: Enol Fernandez (EGI.eu)
      • 14:17
        Bridging Fenix and EOSC from Data Transfer Interoperability 15m

        Enabling accessibility of HPC resources to its end users is one of the strategic aims of EOSC, and Fenix, a collaboration of six leading European HPC centers, aims to harmonize and federate e-infrastructure services to support a variety of user communities. In this context, connecting Fenix archival data repositories to the ESCAPE data lake was proposed as a first step towards a broader collaboration between EOSC and Fenix. This presentation will briefly introduce the AAI integration behind the scenes and demonstrate a Fenix-ESCAPE data transfer using FTS.

        Speaker: Mrs Shiting Long (FZJ)
      • 14:34
        C-SCALE HPC 10m
        Speaker: Raymond Oonk (SURFsara BV)
      • 14:46
        Collaborative experiments with HPC Tier-2 sites in Netherlands 10m

        HPC federation is an important topic for the Tier-2 centres here in the Netherlands. Therefore, SURF (Dutch Cooperative for Education & Research) has made an effort to understand what federation might mean for the SURF member institutes and for the SURF organization as a whole.

        Our vision is to enable researchers to conduct their research using a federated HPC ecosystem in which they can easily run and migrate their scientific applications & workflows to and from the most appropriate platform, while ensuring that the infrastructure is deployed optimally and efficiently to create maximum impact on research. At the same time, we would like to optimize the use of the aggregate compute available at the regional, national & European levels.

        To realize the above vision, it is necessary to work on all the aspects of the HPC ecosystem: Technology, Expertise and Governance, in collaboration with stakeholders at different levels.

        The Innovation Labs at SURF provide an excellent environment for expertise building (capability) by conducting technical experiments, and proof of concepts in a collaborative setting. In this talk, we will share our approach, ideas for collaborative exploration, challenges & strategy for the topic and also for the future. We would also like to invite international partners to become part of this initiative with Innovation labs at SURF.

        Speakers: Sagar Dolas (SURF), Mr Peter Hinrich (SURF), Mr Valeriu Codreanu (SURF)
    • 14:00 15:00
      Operational tools and processes - NGIs reports and discussion Ruby I,II (Vienna Andel Prague)

      Ruby I,II

      Vienna Andel Prague

      The session is a meeting point for the 'EGI Operation Managers', the operational representatives from all national infrastructures (NGIs) represented in EGI. The session is open to all conference attendees who want to learn about EGI/NGI operational tools and practices.

      The focus of the meeting will be on the tools and processes used in EGI for federated service delivery.

      The EGI Service Delivery and Information Security team recently created the NGI Liaison role to hold 1-to-1 meetings with the NGI representatives. During 2022, the liaisons gathered feedback from the NGIs on a number of service operation topics, such as the EGI operational tools and processes, the status of NGI operations and resources, priority user communities, and the usage of and issues with the EGI Core Services.
      During this session a brief report about the outcome of these meetings will be provided, and some NGI representatives will share their experience and report on the status of their national infrastructures.

      • EGI Infrastructure status and internal services (Alessandro) (10 min)
      • NGI Liaison meetings summary (Matt) (10 min)
      • NGI_IT status (15 Min)
      • NGI_TR status (15 Min)
      • discussion
      Conveners: Alessandro Paolini (EGI.eu), Matthew Viljoen (EGI.eu)
      • 14:00
        EGI Infrastructure status and internal services 10m
        Speaker: Alessandro Paolini (EGI.eu)
      • 14:10
        NGI Liaison meetings summary 10m
        Speaker: Matthew Viljoen (EGI.eu)
      • 14:20
        NGI_IT status 15m
        Speakers: Diego Michelotto (INFN), Doina Cristina Duma (INFN)
      • 14:35
        NGI_TR status 15m
        Speaker: Hakan Bayindir (TUBITAK)
      • 14:50
        Discussion 10m
    • 15:00 15:30
      Coffee Break 30m
    • 15:30 17:00
      Combining Copernicus data and EGI services for Earth Observation at scale Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      This session is organised by the C-SCALE project to bring together Earth Observation and distributed computing communities in Europe.
      Copernicus is a leading provider of Earth observation data, which is used by service providers, public authorities and other international organisations to improve the quality of life of European citizens. EOSC adds several federations of service providers, research initiatives and solution providers to this shared innovation space.
      C-SCALE combines relevant data and services from these two initiatives and provides 'Big Copernicus Data Analytics services' that streamline the integration of models, projects and observation programmes.
      The session will present recent achievements in the field, and will provide a forum to discuss exploitation paths for the combined services.

      Convener: Enol Fernandez (EGI.eu)
      • 15:30
        C-SCALE project introduction 10m
        Speaker: Charis Chatzikyriakou (EODC Earth Observation Data Center for Water Resources Monitoring GmbH)
      • 15:40
        Earth System Simulation and Data Processing Platform 25m
        Speakers: Enol Fernandez (EGI.eu), Raymond Oonk (SURFsara BV)
      • 16:10
        Metadata Query Service 20m
        Speaker: Zdenek Sustr (CESNET)
      • 16:35
        Demonstration of a C-SCALE workflow solution 25m

        The C-SCALE project is leveraging cross-disciplinary open-source technologies available through the European Open Science Cloud to develop an open federation of compute and data providers offering homogeneous access to resources, thereby enabling its users to generate meaningful results quickly and easily.

        To facilitate community co-design of the open compute and data federation, its functional specifications are derived from community use cases that determine user requirements for the federation members to implement collaboratively with its users.

        Additionally, the use cases test the efficacy of the federation tools and services, thereby providing feedback to the federation members on improvement opportunities to ensure the infrastructure meets its users' needs.

        Here, we demonstrate the first release of the hydrodynamic and water quality modelling workflow solution which is intended to give users a template and reusable components to develop coastal ocean modelling and forecasting applications for their area of interest.

        Speaker: Dr Bjorn Backeberg (Deltares)
    • 15:30 17:00
      Data and compute services for the food sector Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      FNS-Cloud aims to overcome European research infrastructure fragmentation by federating food nutrition security (FNS) data, tools, and services (resources) essential for addressing diet, health, and consumer behaviour as well as sustainable agriculture and the bioeconomy. The implemented cloud solution (FNS Cloud, no hyphen) will promote FAIRification of FNS resources and support exploitation by user communities. Sustainability of FNS Cloud (solution) will be achieved largely through added value, linking existing and emerging research infrastructures as well as data, tools, and services. FNS Cloud comprises a web service, linking resources and delivering services via a user interface, regardless of location. However, whilst FNS-Cloud (project) assets (datasets, tools, and services) have been built with interoperability in mind, the majority of FNS resources across the EU have developed separately, often organically, making mapping, merging, and exploitation challenging both technically and in terms of the research questions that can be posed and answered.
      The session will bring together representatives from EuroFIR, METROFOOD, Blue Cloud, COMFOCUS, Elixir, ECRIN, FNHRI, EOSC TBD to explore future FAIRification of food sector data, tools and services.

      Speakers:
      - Presentation title: "Service and Semantic Interoperability in EOSC" Speaker: Mark Dietrich, Senior Advisor, EGI Foundation
      - Presentation title: “Multidisciplinary working for a shared goal – education, community and patience” Speaker: Prof. Annette Fillery-Travis BSc, MA, PhD, CChem, FRSC, Professor at University of Wales Trinity Saint David
      - Presentation title: “Challenges and solutions; Fairification of Food & Health Data" Speaker: Prof. Eileen Gibney, University College Dublin
      - Presentation title: “FNS-Cloud Microbiome Demonstrator - The FAIR-ification of Diet & Microbiome Data” Speaker: Duncan Ng, Bioinformatician at Food Databank National Capacity, Quadram Institute Bioscience
      - Presentation title: “Standardisation and interoperability of food and nutrition data" Speaker: Prof. dr. Barbara Koroušić Seljak, Senior researcher at Jozef Stefan Institute, Computer Systems Department
      - Presentation title: “FNS-Cloud and METROFOOD-RI to support food traceability and transparency” Speaker: Sian Astley for Claudia Zoani - Researcher at the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA).
      - Presentation title: “FAIR Fish Data and Tools” Speaker: Anton Ellenbroek, Fisheries Projects Development Consultant, FAO of the UN, Fisheries and Aquaculture Statistics and Information Branch (NFIS).

      After presentations a moderated panel discussion will take place.

      Conveners: Ana Povh, Gergely Sipos (EGI.eu), Sian Astley
    • 15:30 17:00
      Extending EOSC coordination at the National level - Achievements and lessons learnt in the EOSC Synergy project Opal

      Opal

      The EOSC Synergy project (https://www.eosc-synergy.eu/), which extends EOSC coordination at the national level in seven European countries, is reaching its end in October 2022, and this session is a timely opportunity to highlight, in a tangible and integrated way, some of the project’s key results and how they contribute to expanding the capacity and capabilities of EOSC by leveraging the experience, effort and resources of national publicly-funded digital infrastructures.

      After an introduction providing a bird’s eye view of the major project results, the session continues with presentations on two key elements of the EOSC Synergy service integration: the Infrastructure Integration Handbook and the Software Quality Assurance as a Service (SQAaaS) tool. These elements are then presented in a real-case environment via a comprehensive presentation showing an example of the MSWSS thematic service integration for both SQAaaS and infrastructure. Finally, a presentation showing how to work with the EOSC Synergy Learn platform, concludes this session.

      Session Programme

      Chair: Valentino Cavalli (EGI.eu)

      15:30 “EOSC Synergy Key Results, an Overview” - Elisa Cauhé (EGI.eu)
      15:40 “Handbook on EOSC Infrastructure Integration” - Viet Tran (IISAS)
      15:55 “Software Quality Assurance as a Service (SQAaaS)” - Pablo Orviz (CSIC) / Samuel Bernardo (LIP)
      16:10 “Thematic Service integration in EGI Fedcloud” - Miguel Caballer (UPV)
      16:30 “The EOSC Synergy Learn Platform and its Capabilities” - Marcin Plociennik (PSNC)
      16:50 Discussion, Q&A

      Conveners: Elisa Cauhe (EGI.eu), Gwen Franck, Valentino Cavalli (EGI.eu)
    • 15:30 17:00
      Green Computing Ruby I, II (Vienna Andel Prague)

      Ruby I, II

      Vienna Andel Prague

      This session brings together EGI service providers and external experts to discuss current status and ways forward to lower the environmental impact of advanced compute services, and distributed data analysis workflows operated in the EGI context.

      Convener: Catalin Condurache (EGI.eu)
      • 15:30
        Introduction 10m
        Speaker: Catalin Condurache (EGI.eu)
      • 15:40
        Identifying the impact of policy decisions on energy consumption in the Energy Data Centre 20m

        Catherine Jones leads the UK-based Energy Data Centre (EDC), a capability of the UK Energy Research Centre. The EDC holds a variety of research outputs relevant to energy research, some of which have a long-term preservation remit. The EDC undertook a short project in 2021 to understand the impact of preservation and collection management policies on EDC procedures, and whether the energy used for routine activities was detectable in the energy consumption of the EDC. This talk discusses the approach, findings, lessons learnt and what we did after the project had reported.

        Speaker: Catherine Jones (UKRI-STFC)
      • 16:00
        Jisc Industrial Internet of Things (IIoT) & Building Management Services (BMS) PoC with Honeywell Forge 15m
        Speaker: Paul Stokes (Jisc)
      • 16:15
        Moving to a more eco-friendly data center in France: possibilities and constraints 15m

        The management of data centres in French public research institutes is subject to a regulatory framework that defines and sets out several policies, such as equipment purchasing procedures, the supply of fluids, etc.
        This presentation will briefly describe this framework and how a policy to improve the environmental footprint of a data centre can be implemented in France.

        Speaker: Jerome Pansanel (CNRS)
      • 16:30
        Q&A session 30m

        This session will be moderated by a panel of experts:
        - Catherine Jones (UKRI-STFC)
        - Sagar Dolas (SURF)
        - Shaun de Witt (UKAEA)

    • 17:00 18:00
      Community Managers networking meeting: Community Managers Crystal

      Crystal

      The session is a meeting point for the 'EGI Community Managers', the information-sharing network about user engagement, training, technical support and partner liaison activities in the EGI context. The session aims at presenting the status of the activities of the Community Managers. It offers an occasion to discuss any support needs with the EGI Shepherds and to list practical actions for the following months.

      The session is open to any other attendee.
      https://confluence.egi.eu/pages/viewpage.action?pageId=82382404

      Convener: Ilaria Fava (EGI.eu)
    • 17:00 18:00
      Demonstrations Ruby

      Ruby

      • 17:00
        EOSC DIH Demo's: OiPub and 4Science 1h Ruby I&II

        Ruby I&II

        Speaker: Elisa Cauhe (EGI.eu)
      • 17:00
        Onedata and OpenFaaS Lambdas for data processing 25m Ruby III

        Ruby III

        Distributed data management is becoming more and more critical in EOSC environments, and long-term preservation of distributed data has recently become more important than ever. This demo will therefore showcase new functionalities of the Onedata platform for executing custom processing workflows based on FaaS services.

        The processing will be demonstrated in the context of long-term preservation and archiving tasks. Still, the solution is generic and can be used for other purposes, especially for data reproducibility problems, where the ease of distributing data processing tools is a crucial challenge these days.
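
        To give an idea of how lightweight such a FaaS function can be, the Python sketch below follows the shape of a generic OpenFaaS-style handler that computes a checksum for a piece of content, a typical building block in preservation workflows. It is a generic illustration, not the actual interface Onedata uses to invoke its lambdas.

            import hashlib
            import json

            def handle(req: str) -> str:
                """Generic OpenFaaS-style handler: return a checksum record for the input.

                In a preservation workflow the input could be (a reference to) file
                content; here the request body itself is hashed for simplicity.
                """
                digest = hashlib.sha256(req.encode("utf-8")).hexdigest()
                return json.dumps({"algorithm": "sha256", "checksum": digest})

            # Local test of the handler.
            if __name__ == "__main__":
                print(handle("example archive payload"))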

        Speaker: Lukasz Dutka (CYFRONET)
      • 17:00
        OPENCoastS+: on-demand forecast of circulation and water quality in coastal regions 25m Sapphire

        Sapphire

        OPENCoastS+ (https://opencoasts.a.incd.pt/) is an online service that assembles on-demand coastal dynamics forecast systems for selected areas and keeps them running operationally for a period defined by the user. This service provides a tool that targets the needs of different users, from researchers to coastal managers, anticipating natural disasters and contamination events from anthropogenic sources, helping in search and rescue operations, and supporting a better understanding of the physical and ecosystem dynamics in coastal areas, among other applications.
        OPENCoastS+ extends OPENCoastS to integrate water quality, and generates 2-day forecasts of water circulation variables (water levels, velocities, temperature, salinity, wave parameters) and water quality variables (Escherichia coli and enterococcus, or a user-specified generic tracer). The relevant physical and water quality processes are simulated using the modeling suite SCHISM.
        The service integrates three main features: i) “Configuration Assistant”, guiding the user in the creation of a new forecast system following 7-8 simple steps; ii) “Forecast Systems”, which allows the users to manage their forecast systems; and iii) “Outputs Viewer”, where the user visualizes the daily predictions for each forecast and compares model predictions with observations from EMODnet monitoring stations.
        OPENCoastS+ service is provided through the European Open Science Cloud (EOSC) computational resources. All software pieces of the OPENCoastS+ service are open-source (Apache license) and available in the https://gitlab.com/opencoasts repositories.
        Herein, we will demonstrate the main features of OPENCoastS+ through selected coastal applications, as well as details about the service automated deployment using an Infrastructure as Code (IaC) approach.

        Speaker: Marta Rodrigues (LNEC - Laboratório Nacional de Engenharia Civil)
      • 17:30
        ENES Data Space: an EOSC-enabled Data Space Environment for Climate Science 25m Sapphire

        Sapphire

        The scientific discovery process has been deeply influenced by the data deluge started at the beginning of this century. This has caused a profound transformation in several scientific domains which are now moving towards much more open and collaborative approaches.

        In the context of the European Open Science Cloud (EOSC) initiative launched by the European Commission (EC), the ENES Data Space represents a domain-specific implementation of the data space concept, a digital ecosystem supporting the climate community towards a more sustainable, effective, and FAIR use of data.
        More in detail, the ENES Data Space aims to provide scientists with an open, scalable and cloud-enabled science gateway for climate data analysis on top of the EGI Federated Cloud infrastructure. The service, developed in the context of the EGI-ACE EU project, provides ready-to-use compute resources and datasets, as well as a rich ecosystem of open source Python modules and community-based tools (e.g., CDO, NCO, Xarray, Dask, PyOphidia, Cartopy, Matplotlib, etc.), all made available through the user-friendly JupyterLab interface. In particular, the ENES Data Space provides access to a multi-terabyte set of specific variable-centric collections from large-scale global experiments to support researchers in realistic climate model analysis experiments. The data pool consists of a mirrored subset of the CMIP (Coupled Model Intercomparison Project) climate model datasets from the ESGF (Earth System Grid Federation) federated archive. Results and output products as well as experiment definitions (in the form of Jupyter Notebooks) can be easily shared among users through data sharing services integrated in the infrastructure.

        This demonstration will showcase how scientific users can benefit from the ENES Data Space and practically exploit its main features and capabilities for research purposes.
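
        As a taste of the kind of analysis users run in the JupyterLab environment, the Python sketch below uses Xarray to open a hypothetical CMIP6 NetCDF file from the data pool and compute a global-mean temperature time series; the file path and variable name are placeholders rather than actual holdings of the data space.

            import numpy as np
            import xarray as xr

            # Placeholder path: in the ENES Data Space the CMIP data pool is mounted
            # locally, so a real analysis would point at one of its NetCDF files.
            PATH = "/data/cmip6/tas_Amon_EXAMPLE_historical_r1i1p1f1_185001-201412.nc"

            ds = xr.open_dataset(PATH)

            # Latitude weights approximate grid-cell area on a regular lat/lon grid.
            weights = np.cos(np.deg2rad(ds["lat"]))

            # Area-weighted global mean of near-surface air temperature ("tas"),
            # then an annual mean of the resulting time series.
            global_mean = ds["tas"].weighted(weights).mean(dim=("lat", "lon"))
            annual = global_mean.resample(time="YS").mean()

            print(annual.to_series().head())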

        Speaker: Mr Fabrizio Antonio (Advanced Scientific Computing Division, Centro Euro-Mediterraneo sui Cambiamenti Climatici)
      • 17:30
        Towards Reference Architectures: A Cloud-agnostic Data Analytics Platform Empowering Autonomous Systems 25m Ruby III

        Ruby III

        In this demo we would like to present a scalable, cloud-agnostic and fault-tolerant data analytics platform for state-of-the-art autonomous systems that is built from open-source, reusable building blocks. As a baseline for further new reference architectures [2,3], it represents an architecture blueprint for processing, enriching and analyzing various feeds and streams of structured and unstructured data from advanced Internet-of-Things (IoT)-based use cases.

        Reference architectures have the potential to increase the efficiency and reliability of the development process in many application domains. In our demo, we would like to present how Reference Architectures can be applied to develop cloud-based applications. High abstraction-level reference architectures typically incorporate state-of-the-art approaches and system design principles but lack references to particular implementations. Contrary, low-level architectures focus on the implementation details. An example of a high-level reference architecture is the Lambda Architecture [4] and the Kappa Architecture [5], whereas [6] is an example for a low-level architecture offered by a public cloud provider promoting its services. Our approach [10] was developed within the DIGITbrain [1] European project.

        Our platform builds on industry best practices, leverages solid open-source components in a reusable fashion and is based on our experience gathered from numerous IoT and Big Data research projects [7,8,9]. Our platform is container-based, built using orchestration tools and utilizes reusable building blocks. The platform consists of two main parts. The first part contains the custom components of the different use cases, whereas the second part hosts the core components shared between all use cases.

        The platform is currently used in the framework of the National Laboratory for Autonomous Systems in Hungary (abbreviated as ARNL). We would like to demonstrate the platform through a selected use case from ARNL involving data collection from autonomous vehicles.

        Speaker: Dr Attila Csaba Marosi (SZTAKI)
    • 17:00 18:00
      Lightning Talks: EOSC Compute Platform 1 Quartz

      Quartz

      Convener: Andrea Cristofori (INFN)
      • 17:00
        Enabling quantum computation for EOSC users 8m

        Quantum computing is a new, emerging paradigm allowing the solution of problems not resolvable with traditional computing approaches. With hardware resources becoming available, interested researchers have the possibility to experiment with quantum resources at small scale. Providers like D-Wave (Leap) or AWS (Braket) offer cloud-like access to their quantum resources. Different types of quantum hardware are available: annealing systems, trapped-ion quantum computers (gate-based machines), or computers using superconducting qubits. Access to these resources is usually provided through some sort of API or SDK, depending on the provider. For example, D-Wave offers the Python-based Ocean SDK, while AWS has the Python-based Braket SDK. Beyond the APIs and SDKs offering access to these services, additional libraries have been created to support a given scientific domain on quantum resources. For example, PennyLane is a Python library for differentiable programming of quantum computers.
        The presentation gives an overview of the above technologies and shows a container-based reference architecture providing a playground for quantum computing. The RA has JupyterLab deployed with a number of quickstart examples showing the usage and advantages of quantum computing, along with all the necessary dependencies. Using this RA, EOSC users can start experimenting with quantum resources within minutes.
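
        As a flavour of what such quickstart notebooks can contain, the snippet below is a minimal PennyLane example (one of the libraries mentioned above) that evaluates a one-qubit variational circuit on a local simulator; it is a generic textbook example rather than material taken from the reference architecture itself.

            import pennylane as qml
            from pennylane import numpy as np

            # Local state-vector simulator with a single qubit.
            dev = qml.device("default.qubit", wires=1)

            @qml.qnode(dev)
            def circuit(theta):
                """Rotate the qubit and measure the expectation value of Pauli-Z."""
                qml.RX(theta, wires=0)
                return qml.expval(qml.PauliZ(0))

            # Evaluate the circuit and its gradient at a sample parameter value.
            theta = np.array(0.3, requires_grad=True)
            print("expectation:", circuit(theta))
            print("gradient:   ", qml.grad(circuit)(theta))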

        Speaker: Zoltan Farkas (SZTAKI)
      • 17:10
        E-Science Centre with EGI resources for the Plasmasphere, Ionosphere and Thermosphere research community 15m

        PITHIA-NRF (Plasmasphere Ionosphere Thermosphere Integrated Research Environment and Access services: a Network of Research Facilities) is a project funded by the European Commission’s H2020 Programme to build a distributed network of observing facilities, data processing tools and prediction models dedicated to ionosphere, thermosphere and plasmasphere research. One of the core components of PITHIA-NRF is the PITHIA e-Science Centre that supports access to distributed data resources and facilitates the execution of various models on local infrastructures and remote cloud computing resources. The University of Westminster team, together with EGI is responsible for the development of the e-Science Centre within the project. Resources in the e-Science Centre are registered using a rich set of metadata that is based on the ISO 19156 standard on Observations and Measurements (O&M), and specifically augmented and tailored for the requirements of space physics. When it comes to the execution of Models, the PITHIA e-Science Centre supports three main types of model execution and access scenarios: models can be executed on resources of the various PITHIA nodes, can be deployed and executed on EGI cloud computing resources, or can also be downloaded and executed on the users’ own resources. This presentation will report on the current state of the development work, after the first year of the project and will also outline the development roadmap. A first prototype of the e-Science Centre is now available supporting resource registration and ontology-based search functionalities. Additionally, proof of concepts of the various execution mechanisms have also been implemented.

        Speaker: Tamas Kiss (University of Westminster, London, UK)
      • 17:25
        Quantum-notebook: a Docker stack for quantum computing 8m

        Activities in quantum computing are increasing thanks to the push of large investments promoted by governments, industry and international research actors. This environment stimulates the creation and integration of tools and components to design and simulate quantum circuits.
        At the current state of the art, there are several different languages and frameworks for programming quantum computers; among the most famous are Qiskit, Cirq, QASM and Q#.

        In this work we present a Quantum-Notebook built as a ready-to-deploy Docker image, based on JupyterHub technologies, which bundles a set of widely used tools for simulation and quantum programming.

        Built on top of the Jupyter Docker Stacks, the Quantum-Notebook provides a ready-to-use web app for programming directly in the preferred language, simplifying the installation steps. The image can be pulled and run on any device, such as a laptop, a server or a cloud VM, thanks to the versatility of Docker.
        Finally, the Quantum-Notebook is easy to extend with additional libraries and is reusable in different contexts for development, simulation or training sessions.

        The goal of this work is to help researchers, students, teachers and other interested people approach quantum programming.
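
        As an example of what can be run inside such a notebook, the short Cirq snippet below (Cirq being one of the frameworks listed above) builds and samples a two-qubit Bell state on the built-in simulator; it is a generic example and does not come from the Quantum-Notebook image itself.

            import cirq

            # Two qubits prepared in a Bell state and measured.
            q0, q1 = cirq.LineQubit.range(2)
            circuit = cirq.Circuit(
                cirq.H(q0),                     # superposition on the first qubit
                cirq.CNOT(q0, q1),              # entangle the two qubits
                cirq.measure(q0, q1, key="m"),  # measure both qubits
            )

            # Sample the circuit on the local simulator.
            result = cirq.Simulator().run(circuit, repetitions=1000)
            print(result.histogram(key="m"))    # expect roughly equal counts of 0 (00) and 3 (11)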

        Speaker: Silvio Pardi (INFN)
      • 17:35
        Expanding the capacity and capabilities of an Earth Observation application by means of the European Open Science Cloud 8m

        Scientific services are becoming increasingly data intensive, not only in terms of computationally intensive tasks but also in terms of storage resources. In this scenario, Earth observation applications handle huge amounts of data, mainly large satellite imagery, to perform a wide variety of studies: from the monitoring of different land and water variables to the prediction of the evolution of an area of the Earth over a given period of time. For these kinds of services, the Cloud Computing paradigm makes it possible to meet such demands. However, adapting applications and services to this set of complex technologies and solutions is not trivial.

        In the context of the EOSC-Synergy project, there has been an effort with ten different thematic services in refactoring their architecture and integrating EOSC services from the EOSC marketplace, leading to increased performance and capacity and enhanced functionality. SAPS is one of these thematic services, an Earth observation application that employs Energy Balance algorithms to estimate evapotranspiration, a value that can be applied to analyze, among other aspects, the evolution of forest masses and crops. The output of this service is especially relevant for researchers in Agriculture Engineering and Environment, because it depicts the impact of human and environmental actions on vegetation, leading to better forest management and analysis of risks.

        Furthermore, thanks to the EGI-ACE project and its open call for use cases, SAPS is benefiting from the EOSC cloud infrastructure (involving both computational and storage resources) and several platform services, such as EGI Check-in to manage authentication and authorization of its users, and the EC3 tool to dynamically manage the underlying Kubernetes cluster where SAPS is deployed. This contribution provides a summary analysis of the adaptations made in the SAPS thematic service to take advantage of the EOSC ecosystem, including infrastructure, services and tools.

        Speaker: Dr Amanda Calatrava (UPVLC)
      • 17:45
        Low Barrier Sciencemesh User Access to EGI Services 8m

        We can say that EOSC has a vision of delivering a Seamlessly Accessible Cloud for Research. An obvious approach, then, is to start with existing systems and ensure that science users can use them as a joined-up offering, without problems of accessibility, interoperability or eligibility. Two of the best-established systems in the pan-European domain are EGI's cloud compute service and CS3MESH4EOSC's sync & share storage/collaboration service (the "ScienceMesh").

        Speaker: Milan Danecek
    • 18:30 22:35
      Conference Dinner - supported by EGI-ACE 4h 5m
    • 09:00 13:00
      GREAT KOM (invitation only) Ruby I,II

      Ruby I,II

    • 09:00 10:30
      Plenary: International collaboration for excellence in science: opportunities and challenges Plenary 1st floor (Vienna Andel Prague)

      Plenary 1st floor

      Vienna Andel Prague

      The session brings together representatives from institutions collaborating with EGI at the international level, to explore existing challenges and benefits for international collaboration and the way forward. A presentation about the ongoing EGI activities will provide the framework for the discussion.
      - Giuseppe La Rocca (EGI)
      - Eric Yen (ASGC Taiwan)
      - Jianhui Li (CNIC)

      Convener: Ilaria Fava
    • 10:30 11:00
      Coffee Break 30m
    • 11:00 13:00
      Global Open Science Cloud Workshop - International Infrastructure, Data and Cloud integration efforts Crystal Room (Vienna Andel Prague)

      Crystal Room

      Vienna Andel Prague

      This workshop will be an extension of the European Open Science Cloud (EOSC) discussions during the conference, and will provide a unique opportunity to review global open science infrastructure development, share experiences, and identify concrete collaborations.

      This is the 3rd annual Global Open Science Cloud (GOSC) workshop, a joint effort of the EGI Foundation, the Computer Network Information Center (CNIC) of the Chinese Academy of Sciences, and CODATA, the Committee on Data of the International Science Council.

      The first session will focus on CODATA GOSC discussions, starting with a keynote from Simon Hodson, the CODATA Executive Director, who will give an overview of GOSC activities and introduce the newly funded project, WorldFAIR. It is followed by two well-developed case studies: the Radar group, presented by Dr Ingemar Häggström from EISCAT, and the SDG-13 group, presented by Dr Lili Zhang from CNIC, who will present users' needs.

      To answer the question "How can GOSC help?", a second keynote will come from the GOSC Technical Infrastructure (TI) Group: Prof Jianhui Li will present the proposed technology solutions. We will then dig deeper into three technical areas (AAI, cloud federation and data interoperability), and we are honoured to have invited leading technical experts, including Dr Nicolas Liampotis, who leads the EOSC AAI discussion, Dr Enol Fernandez, who leads the EGI FedCloud task force, and Dr Milan Ojsteršek, who chairs the GOSC DI group.

      The second session focuses on concrete implementation cases. This time we bring in discussions with the global cloud industry: Dr Ye Huang from Alibaba (China) will give a keynote on advanced cloud technology at Alibaba and their collaboration use cases with CNIC.

      The rest of the session will celebrate the success of the cloud federation across regions and countries achieved via EGI-ACE. We will learn about the challenges of connecting the regional clouds of China and South Africa with EGI; we will also hear the success stories of connecting the national clouds of Hungary, Romania and Armenia with EGI.

      Workshop page: https://indico.egi.eu/event/5957/
      Target Audience:

      • Science communities facing the challenges of international collaborations that require cross-country/region e-Infrastructure service solutions
      • Technical Infrastructure providers who are interested in supporting global open science

      Format
      The workshop will be hybrid:

      • Face-to-Face (registration to the conference, code: science)
      • Remote zoom connection (link will be provided on the day, please follow the updated information on this page)

      Agenda
      First Session: Overview of Global Open Science Cloud and Technology Solutions
      Chair: Dr Yin Chen, EGI Foundation
      Link: https://indico.egi.eu/event/5882/sessions/4854/#20220922
      11:00-12:00 Overview of GOSC and Case Studies

      • Keynote talk 30’ Overview of CODATA Global Open Science activities and the WorldFAIR project, Dr Simon Hodson, CODATA executive director, Europe (remote)
      • 15’ What do users need? – GOSC Radar Case Study Working Group, Dr Ingemar Häggström, Head of Operations, EISCAT, Sweden (on site)
      • 15’ What do users need? – GOSC SDG-13 Case Study Working Group, Dr Lili Zhang, CNIC/CAS research scientist, China (remote)

      12:00-13:00 GOSC Technology Solutions

      • Keynote talk 20’ How can GOSC help? – GOSC Technical Infrastructure Framework, Prof Jianhui Li, Director, CNIC/CAS, China (remote)
      • 15’ How to implement? – EOSC AAI experiences for GOSC AAI, Dr Nicolas Liampotis, GRNET, Greece (on site)
      • 15’ How to implement? – International Cloud Federation, Enol Fernández, Cloud Manager EGI.eu, Europe (on site)
      • 10’ How to implement? – GOSC Data Interoperability Working Group, Dr Milan Ojsteršek, on the EOSC interoperability framework, FAIR Digital Objects and the work of the DataIO WG

      ** 13:00-14:00 Break **
      Second Session: Implementation of the Global Open Science Cloud
      Chair: Enol Fernández, EGI Foundation
      Link: https://indico.egi.eu/event/5882/sessions/4914/#20220922

      15:30- 16:00 Connection with Global Cloud Industries

      • Keynote talk 30’ Advanced Cloud technology in Alibaba and CNIC-Alibaba integration, Dr. Ye HUANG, Raymond Ma, Alibaba, Regional Manager, China (on site)

      16:00-16:30 Connection between regions

      • 15’ Cloud Federation between EGI-China, Prof Jianhui Li, Director CNIC, China (remote)
      • 15’ Cloud Federation between EGI-South Africa, Rob Simmonds, Associate Director of New Technologies at IDIA, South Africa (remote)

      16:30-17:15 Connection between countries

      • 15’ Cloud Federation between EGI and Hungary, Dr Robert Lovas, SZTAKI, Hungary (on site)
      • 15’ Black Sea Open Science Cloud for the Blue Growth in the Black Sea Region - Confirmed (Prof. Eden MAMUT, Romania)
      • 15’ Cloud Federation between EGI and RENAM (Moldova) / ASNET (Armenia)

      Speakers
      Dr Simon Hodson has been Executive Director of CODATA since August 2013. Simon is an expert on data policy issues and research data management. He has contributed to influential reports on Current Best Practice for Research Data Management Policies and to the Science International Accord on Open Data in a Big Data World. He chaired the European Commission’s Expert Group on FAIR Data which produced the report Turning FAIR into Reality. He is currently vice-chair of the UNESCO Open Science Advisory Committee, tasked with drafting the UNESCO Recommendation on Open Science, which was adopted in November 2021. As a significant part of his CODATA role, Simon is tasked with preparing a major ISC and CODATA Decadal Programme on ‘Making Data Work for Cross-Domain Grand Challenges’, which will improve the coordination of specifications for data integration and interoperability for interdisciplinary research. Simon also contributes to the coordination of the CODATA Data Policy Committee. Additionally, Simon leads or participates in numerous projects, Working Groups and Steering Groups. In recent years, Simon has been a co-chair (2015-2018) of the GEO Data Sharing Working Group, to which CODATA has made a long-term contribution; co-chair of the OECD Global Science Forum and CODATA Project on Sustainable Business Models for Research Data Repositories; a member of the Board of Directors of the Dryad Data Repository (2012-2018), a not-for-profit initiative to make the data underlying scientific publications discoverable, freely reusable, and citable; Project Director, African Open Science Platform Project (2016-19); and a member of the Scientific Advisory Board of CESSDA ERIC, the European data infrastructure for the social sciences. Simon has a strong research background, as well as considerable project and programme management experience: from 2009 to 2013, as Programme Manager, he led two successful phases of Jisc’s innovative Managing Research Data programme in the UK.

      Prof. Li Jianhui is the director of the Science and Technology Cloud Department at the Computer Network Information Center (CNIC) of the Chinese Academy of Sciences (CAS), and a Professor at the University of Chinese Academy of Sciences (UCAS). He obtained his PhD in computer science from the Institute of Computing Technology of CAS in 2007. He has spent over 15 years researching scientific data management, data-intensive computing and big data analysis. He led the design and development of the CAS scientific data infrastructure and open data cloud. In 2016, he founded “China Scientific Data”, the first open access data journal for scientific data publication in China. Currently, he is leading the design, development and operation of CSTCloud (China Science and Technology Cloud), the national-level open science platform. He also serves as CODATA Vice President and is actively engaged in open data and open science international cooperation.

      Dr. Ye Huang received his PhD in Grid Computing from the University of Fribourg, Switzerland. After very active publication and academic activities during his research work, Dr. Huang switched to industry in 2011 and worked in a variety of enterprises in Germany as a full-stack DevOps engineer and Cloud architect. Dr. Huang joined Alibaba Cloud in Europe as a solution architect and took on the role of head of solution architects for the Alibaba Cloud DACH region from 2018. From 2021, Dr. Huang has worked as director of industry solutions for Alibaba Cloud International Business, responsible for the roll-out of Alibaba Cloud's products and solutions to industrial scenarios globally, including but not limited to industries such as internet, finance, manufacturing and retail. From April 2022, Dr. Huang has been appointed country manager for the Netherlands market of Alibaba Cloud, responsible for the Alibaba Cloud products and services business offering and partner development in the Benelux region. Dr. Ye Huang is also a guest lecturer at the University of Zurich, Switzerland.

      Dr Ingemar Häggström is the Head of Operations of the EISCAT Scientific Association. EISCAT runs the only incoherent scatter radars in Europe for atmospheric and geospace research. A new system, the imaging radar EISCAT_3D, is now being built. For these, a data portal has been developed within a Competence Centre of EOSC-hub, utilising several services provided by the EGI Cloud Federation. EGI Check-in has been enhanced to identify EISCAT prime users, and the portal, built on DIRAC4EGI, supports controlled access to distributed storage with analysis applications in high-performance compute cloud environments. Here we present EISCAT and EISCAT_3D in the global scale of geospace research, and how scientists can interact with and access the resources of the infrastructure. This involves interchanges with similar radar facilities around the world and a system for setting up a common (meta)data federation, federated processing and data movements.

      Dr Zhang Lili is a research scientist at the Computer Network Information Center of the Chinese Academy of Sciences and a member of the CODATA International Data Policy Committee. Her research focuses on open data and open science policy and practice, and on information economics.

      Dr Nicolas Liampotis is an AAI Research Engineer, at GRNET - Greek Research and Technology Network. He has participated in numerous national (funded by various public organisations, as well as private companies) and international (EU-funded) research and development projects (Tequila, Daidalos II, PERSIST, SOCIETIES, ECONET, VI-SEEM, EGI-Engage, EUDAT2020, MAGIC, AENEAS, AARC2, SeaDataCloud, EOSC-hub, NI4OS-Europe). His role in these projects was that of an architecture designer, technical contributor, and software engineer, focusing on the design, development, evaluation, and optimization of solutions for trust management, privacy protection and federated access in distributed infrastructures. He is currently working at GRNET as a Trust & Identity engineer and is leading the development of the EGI Check-in Authentication & Authorisation Infrastructure service that enables access to research data and services provided across the infrastructures participating in EOSC-hub. He is also involved in the architecture working group within the AARC Engagement Group for Infrastructures (AEGIS), which brings together representatives from research and e-infrastructures, operators of AAI services for a more effective uptake of AAI recommendations in their federated access solutions.

      Dr Milan Ojsteršek is the Head of the Laboratory for Heterogeneous Computer Systems at FERI, University of Maribor. He received his PhD in computer science from the University of Maribor in 1994. Milan leads the Laboratory for Heterogeneous Systems at the Faculty of Electrical Engineering and Computer Science of the University of Maribor. His research and project work focuses on heterogeneous computing systems, digital libraries, natural language processing, web application development, web technologies, knowledge management, the semantic web and service-oriented architecture.

      Dr Enol Fernandez is Cloud Solutions Manager at the EGI Foundation, helping user communities to execute their applications on the federated EGI infrastructure. He leads the definition and innovation of the compute services of EGI and supports the co-design of custom implementations to meet the computing requirements of communities' research platforms and applications. Previously he worked as a middleware developer and provided support to user communities at UAB and CSIC in the context of distributed computing European projects. He holds a PhD in Computer Science from the Universitat Autònoma de Barcelona and a Computing Engineering Degree from the Universidad de La Laguna.

      Prof Eden Mamut holds a Professorship in Engineering Thermodynamics and Advanced Energy Systems and is Director of the Institute for Nanotechnologies & Alternative Energy Sources at “Ovidius” University of Constanta, Romania. He was elected Secretary General of the Black Sea Universities Network for the 2020-2024 mandate. Eden’s main fields of research include multi-scale thermo-fluid modelling, analysis and optimisation of complex energy systems, renewable energy sources, multi-criteria and multi-scale methods for sustainable development, sustainable transport systems and digitalisation in energy engineering. Eden has published 95 papers, authored 12 books and holds 3 registered patents.

      Convener: Yin Chen (EGI.eu)
      • 11:00
        Overview of CODATA Global Open Science activities and the WorldFAIR project 30m
        Speaker: Dr Simon Hodson (CODATA executive director)
      • 11:30
        What do users need? – GOSC Radar Case Study Working Group 15m
        Speaker: Ingemar Haggstrom (EISCAT)
      • 11:45
        What do users need? – GOSC SDG-13 Case Study Working Group 15m
        Speaker: Dr Lili Zhang (Research Scientist, CNIC, China)
      • 12:00
        How can GOSC help? – GOSC Technical Infrastructure Framework 20m
        Speaker: Prof. Jianhui Li (Director, CNIC/CAS, China)
      • 12:20
        How to implement? – EOSC AAI experiences for GOSC AAI 15m
        Speaker: Nicolas Liampotis (GRNET)
      • 12:35
        How to implement? – International Cloud Federation 15m
        Speaker: Enol Fernandez (EGI.eu)
      • 12:50
        How to implement? – GOSC Data Interoperability Working Group 10m
        Speaker: Dr Milan Ojsteršek
    • 11:00 11:45
      Lightning talks: Data Spaces & Data Lakes
      Convener: Marco Rorro (EGI.eu)
      • 11:00
        Data spaces for climate data analysis 8m

        Climate change, both natural and anthropogenic, is a pressing issue of today, for which data-based models and decision support techniques offer a more comprehensive understanding of its complexity. The understanding of climate change is critical for supporting the needs of an ever broadening spectrum of society's decision-makers, as they strive to deal with the influences of Earth’s climate at global to local scale. To this purpose, climate data analysis is facing new challenges as datasets grow in size and a growing gap between the technological sophistication of industry solutions and scientific software arises. Contributions to the increase in climate data volume include the systematic increase in model spatial and temporal resolution; the number of components in model output; the number of simulations needed to sample uncertainties; developments in the field of data-driven climate models that enable the creation of rapid and inexpensive, large-ensemble forecasts with thousands of ensemble members; and new sources of observational data. In order to provide new approaches to data analysis that accommodate this data volume, research is moving towards a notion of data space integrated systems, targeted at decision support, and towards the deployment of Climate Analytics-as-a-Service (CAaaS) based on cloud-native data repositories. The purpose of this work is to describe the current state of the art of climate data analysis and the challenges that the community is facing, and to provide a vision of data analysis solutions based on data spaces, in an attempt to find synergies between diverse disciplines and research ideas that must be explored to gain a comprehensive overview of the challenge.
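
        As a concrete illustration of the cloud-native, analysis-ready access pattern that Climate Analytics-as-a-Service builds on, the following sketch (our own illustrative example, not part of the contribution; the Zarr store URL and variable names are hypothetical) lazily computes a global annual-mean temperature series with xarray:

        # Minimal sketch of cloud-native climate data access (assumptions: a
        # hypothetical Zarr store URL and a CMIP-style "tas" variable).
        import xarray as xr

        store_url = "https://example.org/climate/cmip6/tas.zarr"   # hypothetical store
        ds = xr.open_dataset(store_url, engine="zarr", chunks={})  # lazy, chunked access

        tas = ds["tas"]  # near-surface air temperature (assumed variable name)
        # Unweighted global mean per time step, then averaged per year (for brevity).
        annual_global_mean = tas.mean(dim=("lat", "lon")).groupby("time.year").mean()
        print(annual_global_mean.to_pandas().head())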

        Speakers: Mr Ezequiel Cimadevilla (Instituto de Física de Cantabria (IFCA, CSIC-UC, Spain)), Dr Antonio S. Cofiño (Instituto de Física de Cantabria (IFCA, CSIC-UC, Spain))
      • 11:15
        Developing a distributed and fault tolerant Dataverse architecture. 8m

        Dataverse is an open source data repository solution with increased adoption by research organizations and user communities for data sharing and preservation. Datasets stored in Dataverse are catalogued, described with metadata, and can be easily shared and downloaded. However, despite all its features, Dataverse is still missing an architecture that ensures a distributed, fault tolerant, highly available and out-of-the-box service deployment.
        In this presentation we will report on the efforts of the Portuguese Distributed Computing Infrastructure (INCD) to address these limitations by creating a Dataverse deployment architecture that is easy to set up, portable, highly available and fault tolerant.
        We tackled this objective following a DevOps approach, resorting to a wide range of open-source tools such as Linux containers, source code repositories, CI/CD pipelines, keepalived in conjunction with Virtual IPs (VIPs), pg_auto_failover for database replication, and highly available object storage as a scalable data storage backend. The solution is implemented on top of the OpenStack cloud management framework, while authentication is performed through EGI Check-in.
        This architecture is therefore capable of providing a stable and fault-tolerant Dataverse installation, while keeping the set-up flexible enough to allow for the expansion of the storage and to facilitate upgrades to new versions.
        The deployment architecture is currently under testing and will be used to support a catch-all data repository for the Portuguese research and academic community. Furthermore, we expect that this solution can be deployed on EGI FedCloud resources to support FAIR data both for thematic services and for generic use.
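
        As a small illustration of the automation such an architecture relies on, the sketch below (our own example, with hypothetical hostnames) probes replicated Dataverse instances through the standard /api/info/version endpoint, the kind of check a keepalived/VIP failover set-up can be driven by:

        # Health probe for replicated Dataverse instances behind a virtual IP.
        # Hostnames are placeholders; /api/info/version is a standard Dataverse endpoint.
        import requests

        REPLICAS = [
            "https://dataverse-a.example.org",
            "https://dataverse-b.example.org",
        ]

        def healthy(base_url, timeout=5.0):
            """Return True if the instance answers the version endpoint with status OK."""
            try:
                r = requests.get(f"{base_url}/api/info/version", timeout=timeout)
                return r.status_code == 200 and r.json().get("status") == "OK"
            except requests.RequestException:
                return False

        for url in REPLICAS:
            print(url, "up" if healthy(url) else "down")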

        Speaker: Zacarias Benta
      • 11:30
        Blue-Cloud Data Federation 8m

        The Blue-Cloud project is making substantial progress in providing a collaborative cyber platform with smart federation of an unprecedented wealth of multidisciplinary data, analytical tools, and computing facilities to explore and demonstrate the potential of cloud-based Open Science and address ocean sustainability. Blue-Cloud is undertaken within the "Future of Seas and Oceans Flagship Initiative" of the EU HORIZON 2020 programme and is deploying the thematic EOSC for the marine domain.

        Federation of data resources has been achieved by developing and deploying the Blue-Cloud Data Discovery and Access service (DD&AS). It facilitates the sharing of datasets from blue data infrastructures (BDIs) through a common interface. The DD&AS uses web services and APIs as provided and maintained by the BDIs. Machine-to-machine interactions serve to harvest metadata, submit queries, and retrieve the resulting metadata, data sets and data products. The DD&AS has broker components for metadata and data and a common interface for the discovery and retrieval of data sets and data products from each of the federated BDIs. The query mechanism follows a two-step approach:
        • Firstly, interesting data are discovered at collection level in a common metadata format, with free-text search and geographic and temporal criteria;
        • Secondly, users drill down within the identified collections to get more specific data at granule level, by including additional search criteria;
        • Finally, users can retrieve the data sets using a shopping mechanism.

        Currently, the DD&AS gives access to more than 10 million data sets for physics, chemistry, geology, bathymetry, biology, biodiversity, and genomics from EMODnet, CMEMS, SeaDataNet, Argo, Euro-Argo, ICOS, SOCAT, EcoTaxa, ELIXIR-ENA, and EurOBIS.

        The DD&AS can be expanded by federating additional BDIs, thereby providing harmonised and easy discovery of, and access to, the European ocean and marine data space. Moreover, it is planned to expand the functionality with sub-setting at data level, which will require additional APIs at each of the BDIs.
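
        To make the two-step query mechanism above more tangible, the sketch below shows what a client-side workflow could look like; the endpoint paths, parameters and response shapes are purely illustrative assumptions, not the actual DD&AS API:

        # Hypothetical sketch of the discovery / drill-down / retrieval flow described
        # above; paths, parameters and response fields are illustrative only.
        import requests

        BASE = "https://blue-cloud.example.org/ddas"  # hypothetical service root

        # Step 1: discover collections with free text, geographic and temporal criteria.
        collections = requests.get(f"{BASE}/collections", params={
            "q": "temperature",
            "bbox": "-10,35,5,45",   # lon_min,lat_min,lon_max,lat_max
            "start": "2020-01-01",
            "end": "2020-12-31",
        }, timeout=30).json()

        # Step 2: drill down within one collection at granule level.
        first = collections[0]["id"]
        granules = requests.get(f"{BASE}/collections/{first}/granules",
                                params={"parameter": "TEMP", "depth_max": 100},
                                timeout=30).json()

        # Step 3: place the selected granules in the shopping basket for retrieval.
        order = requests.post(f"{BASE}/orders",
                              json={"granules": [g["id"] for g in granules]}, timeout=30)
        print(order.status_code)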

        Speaker: Dick Schaap (Mariene Informatie Service MARIS BV)
    • 11:00 13:00
      Services, Processes, Policies for federated delivery of Open Science Resources - EGI contributions to the EOSC-Core Opal, Topaz (Vienna Andel Prague)

      Opal, Topaz

      Vienna Andel Prague

      Members of the EGI Federation are among the key contributors to the EOSC-Core, the 'set of services providing the means to discover, share, access and re-use data and services for Open Science'.
      EGI federation members contribute to the EOSC-Core with the
      - EOSC Marketplace
      - Check-in AAI proxy
      - Usage Accounting system
      - Collaboration Systems
      - Service configuration management system
      - Helpdesk system
      - Service access order management system
      - Interoperability framework
      - Service management processes

      This session will be a forum to present and discuss the status and future of these contributions.

      EOSC Platform architecture (15') Diego
      EOSC Exploitation, Show Cases integration Stories (10') Matt
      EOSC AAI (10') Nicolas/Valeria
      EOSC Marketplace (10') Roksana
      EOSC Helpdesk (10') Renato - on behalf of Pavel
      EOSC Monitoring (10') Emir
      EOSC DIH (10') Marcin
      Panel Discussion where all speakers will be invited to participate (15')

      Conveners: Diego Scardaci (EGI.eu), Matthew Viljoen (EGI.eu)
    • 11:45 13:00
      Demonstrations Sapphire

      Sapphire

      • 11:45
        EOSC-Performance: compare EOSC sites for your needs 25m Sapphire

        Sapphire

        EOSC-Performance is a platform available on the EOSC Marketplace to search and compare multiple computing sites, including those available to the EOSC community. Users can visually compare benchmark results from a wide range of computing resources, covering cloud and HPC. Users and service providers can contribute to the platform by adding new benchmarks or uploading results for the computing resources of their interest. The upcoming autumn update includes an improved web GUI with a number of new features: enhanced visual comparison of benchmark results, e.g. by allowing data regression analysis, data export, and more.

        The EOSC-Performance service leverages OIDC and EGI Check-in for authentication and the Dynamic DNS service from the EGI Federated Cloud, and applies the SQAaaS best practices from the EOSC-Synergy project. The service follows an API-first approach to build the frontend; the API is provided to users following the OpenAPI version 3 specification.

        The service features, typical use cases, and highlights on the internal architecture will be demonstrated.
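
        Since the API follows the OpenAPI v3 specification, results can also be fetched programmatically. The sketch below is illustrative only; the base URL, path, query parameters and response shape are assumptions rather than the documented interface:

        # Illustrative query against an OpenAPI-described benchmark-results service.
        # Base URL, path, parameters and response shape are assumed, not documented.
        import requests

        API = "https://performance.services.fedcloud.eu/api/v1"  # assumed base URL
        resp = requests.get(f"{API}/results",
                            params={"benchmark": "hpcg", "site": "example-site"},
                            timeout=30)
        resp.raise_for_status()
        for result in resp.json().get("items", []):  # assumed paginated response shape
            print(result.get("id"), result.get("json"))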

        Speaker: Dr Valentin Kozlov (Karlsruhe Institute of Technology)
      • 12:35
        Unified access to multiple clouds and HPC clusters 25m Sapphire

        Sapphire

        The PROMINENCE platform, originally developed in the Fusion Science Demonstrator in EOSCpilot and extended in the Fusion Competence Centre in EOSC-Hub, was designed to allow users to transparently run batch workloads on clouds. All infrastructure provisioning and failure handling is fully automated and is totally invisible to users. Any number of clouds can be used simultaneously and opportunistic usage of idle resources is supported, allowing usage of clouds to be maximised and users to gain access to additional resources.

        In EGI-ACE PROMINENCE has been extended to support traditional HPC clusters as a backend in addition to clouds, enabling users to leverage an even wider range of resources. This is particularly important to some communities, such as the fusion energy research community, where access to HPC clusters is more prevalent than clouds.

        Here we will demonstrate running both HTC and true HPC jobs using PROMINENCE, in addition to running hybrid workflows which make use of both cloud and HPC resources.
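
        For readers unfamiliar with this style of service, submitting a job to a PROMINENCE-like REST interface essentially means posting a small JSON job description with an access token. The endpoint, token handling and JSON fields in the sketch below are illustrative assumptions, not the documented PROMINENCE API:

        # Hedged sketch of submitting a containerised job to a batch-in-the-cloud
        # REST API; endpoint, token source and job fields are placeholders.
        import os
        import requests

        API = "https://prominence.example.org/v1"     # hypothetical endpoint
        TOKEN = os.environ.get("ACCESS_TOKEN", "")    # e.g. an OIDC access token

        job = {
            "name": "lammps-test",
            "resources": {"cpus": 4, "memory": 8, "nodes": 1},
            "tasks": [{"image": "docker://lammps/lammps", "cmd": "lmp -in in.lj"}],
        }

        r = requests.post(f"{API}/jobs", json=job,
                          headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
        r.raise_for_status()
        print("submitted:", r.json())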

        Speaker: Andrew Lahiff (CCFE / UK Atomic Energy Authority)
    • 11:45 13:00
      Lightning Talks: EOSC Compute Platform 2 Quartz

      Quartz

      Convener: Marco Rorro (EGI.eu)
      • 11:45
        Towards an Interdisciplinary Citizen Science Interoperable Service in EOSC 8m Quartz

        Quartz

        Citizen Science data is split across many different portals, each operated by a small community, with different APIs for fetching the data programmatically and returning it in a specific structure. Several efforts have been made to integrate these disparate projects into useful open datasets. An example in biodiversity is the Global Biodiversity Information Facility dataset, which aggregates many data sources. Still, most of these aggregated resources are served by specific thematic APIs. This makes re-use of the data a real challenge, in particular when it comes to interdisciplinary knowledge building, where merging data from different themes is required.

        In Cos4Cloud we have developed an approach based on the Internet of Things. Our approach, called STAplus, is an extension of the existing Open Geospatial Consortium SensorThings API standard. STAplus aims to reinforce the FAIR aspects of Interoperability and Reusability. To add the necessary elements for recognising citizens and their contributions, we propose a generic data model that supports additional business logic. Because our extended data model is backwards compatible with the existing SensorThings API v1.1, it can be applied to already existing deployments, offering wide potential uptake. The approach has already been validated with Cos4Cloud implementations using use cases such as camera traps and Pl@ntNet data.
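
        To illustrate how such a standards-based interface can be consumed, the sketch below queries a hypothetical STAplus endpoint using the OData-style query options defined by the OGC SensorThings API v1.1 (the service URL is a placeholder):

        # Reading recent observations from a SensorThings API v1.1 / STAplus service.
        # The service URL is hypothetical; $filter, $orderby, $top and $expand are
        # standard SensorThings query options.
        import requests

        STA = "https://cos4cloud.example.org/sta/v1.1"  # hypothetical STAplus endpoint

        resp = requests.get(f"{STA}/Observations", params={
            "$filter": "year(phenomenonTime) eq 2022",
            "$orderby": "phenomenonTime desc",
            "$top": 5,
            "$expand": "Datastream",
        }, timeout=30)
        resp.raise_for_status()
        for obs in resp.json().get("value", []):
            print(obs["phenomenonTime"], obs.get("result"))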

        In addition, we evaluated this approach against existing biodiversity citizen observatories by conducting a feasibility study on the EGI infrastructure, where we operated a cloud-based service. In this service we hosted a meaningful snapshot of the Natusfera observations (Natusfera is a fork of iNaturalist) and made them available via STAplus.

        In our talk we will introduce the approach and present quantitative results of our feasibility study.

        (This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no 863463.)

        Speaker: Dr Andreas Matheus (Secure Dimensions)
      • 12:00
        Photon and Neutron (PaN) Community: Beyond ExPaNDS and PaNOSC 8m Quartz

        Quartz

        As the ExPaNDS and PaNOSC projects reach the end of their terms, the PaN facilities are adopting FAIR principles, enabling the PaN community and scientists in general to access and reuse a wealth of data for multidisciplinary use cases.

        The two projects have established firm foundations for the deployment and adoption of federated services that allow facilities and scientists to exploit PaN data beyond their original intended use. EOSC and horizontal e-infrastructure providers such as EGI have provided a strong basis to facilitate PaN services and data reaching the wider community via the EOSC Marketplace, OpenAIRE and the B2FIND data explorers.

        The pandemic experience has only raised the importance of the fundamental need for federated infrastructures that allow standardised remote access, so that scientists can execute beamline experiments remotely and run data post-processing workflows as well as data curation practices in a harmonised way across facilities. The emergence of the PaN Open Data Commons is an outstanding result of the projects, which will give European and national facilities across Europe access to FAIR PaN data.

        In this contribution, the ExPaNDS project summarises the work done on standardised analysis pipelines, common APIs, a reference metadata framework, cataloguing and PaN training services, via the implementation and deployment of real-life use cases.

        Speaker: Teodor Ivanoaica (IFIN-HH/ELI-NP)
      • 12:15
        EOSC-FUTURE – ENVRI-FAIR SP Environmental Indicators – Ocean 8m Quartz

        Quartz

        In the EOSC-Future project, ENVRI-FAIR partners are involved in developing two Science Projects (SPs), one about Invasive Species, and one about a Dashboard of the State of the Environment. The Dashboard should provide users with easy means to determine the state of the environment and follow trends in our Earth system for a selected number of parameters within the Earth components of Atmosphere, Ocean, and Biodiversity.

        MARIS leads the development of the Ocean component in cooperation with IFREMER, OGS and NOC-BODC. It consists of a Map Viewer that displays in-situ measurements of selected Essential Ocean Variables (EOVs), namely Temperature, Oxygen, Nutrients and pH. These measurements are retrieved from selected Blue Data Infrastructures (BDIs) such as Euro-Argo and SeaDataNet CDI using tailor-made APIs for fast sub-setting at data level. The user interface is designed for citizen scientists and allows them to interact with the large data collections retrieving parameter values from observation data by geographical area and using sliders for date, time and depth. The in-situ values are co-located with product layers from Copernicus Marine, based upon modelling and satellite data.

        Performance is a major challenge as users should not wait too long for on-the-fly retrieving and displaying the data in the Map Viewer following their selection criteria. This requires intense cooperation between marine researchers and EOSC computing experts.

        In-situ data sets are also used in algorithms to generate aggregated values as dynamic trend indicators for sea regions. These are displayed on the Environmental Indicators dashboard and provide ocean trend indicators for the selected EOVs for designated areas. Users can then click on such an indicator, which guides them to the Map Viewer to browse deeper into the data and the details underlying the trends. Selected EOSC services will also be used for this step.

        Speaker: Dick Schaap (Mariene Informatie Service MARIS BV)
      • 12:30
        At the heart of future computing centers: research on algorithms 8m

        Authors: Hermann Heßling, Michael Kramer, Stefan Wagner

        Future research facilities like the Square Kilometre Array Observatory (SKAO) are confronted with zetta-scale computing: only a tiny fraction of the data collected by the thousands of telescopes and antennas can be archived in the long term. Consequently, the relevant information must be extracted in real time out of huge data streams. While we have described the technical challenges of SKAO and the implications for a "Smart Green Computing" in our contributions to the last two EGI conferences, we now address the aspect of how data centers could be organized to address the challenges at hand. Data centers play a critical role in scaling existing community analysis tools, which have often been designed for rather small to medium data sets and computing systems. Further development of existing methods and algorithms requires cooperative interaction at the European level.

        Speakers: Hermann Heßling (University of Applied Sciences (HTW) Berlin), Michael Kramer (Max-Planck-Institut fuer Radioastronomie)
      • 12:45
        WeNMR under the hood: How to operate a complex collection of scientific web services. 8m Quartz

        Quartz

        Under the WeNMR banner various scientific web services are provided to the community, such as DisVis, HADDOCK, PDBTools-Web, Prodigy, Whiscy and proABC-2, all operating under the Utrecht University WeNMR portal - wenmr.science.uu.nl. Over the years, our aim has been, next to the now standard practice of simply providing the community with the source code for such tools, to also provide intuitive web graphical user interfaces and the means to access computational resources. The heterogeneous development processes and standards of each tool pose complex operational challenges, due to factors such as their own intricacies (extensive calculations, high I/O, etc.), their state (third-party dependencies, code interpreter versioning, etc.) or their usability curve (high number of parameters, input conditionalities, etc.). The WeNMR services are accessible via the main web portal, which acts as the hub for the interconnected apps. Each individual service has its own graphical frontend, tailored to the specific user input needs and software capabilities of each codebase. For accounting and reporting purposes, job executions coming from any service are added to a central SQL database that also hosts the user information for services in which registration is required (GDPR compliant). The backend of the portal is tightly coupled with an in-house middleware that orchestrates the submission and retrieval of jobs prepared via the web interface to the appropriate computing resources, either a local HPC cluster or distributed HTC EOSC resources (via the EGI Workload Manager). The development operations have been recently reviewed and are being optimised to match the high load of the WeNMR services, some of which have almost tripled in usage because of the COVID-19 pandemic. Over the past decade, we have served a worldwide community of over 28 500 users that have submitted over 434 000 jobs.

        Speaker: Rodrigo Vargas Honorato (Utrecht University)
    • 13:00 14:00
      Lunch 1h
    • 13:00 14:00
      Lunch 1h
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:45
      Closing plenary: Pathways for improved coordination and cooperation among e-infrastructures Plenary

      Plenary

      The realisation of coordination and collaboration of the European e-Infrastructures that are responsible for delivering advanced federated services for networking, computing, data processing, discovery, exploitation and sharing across European research infrastructures is paramount to the achievement of scientific excellence through our international scientific collaborations.

      For example, in the coming decade the increasing rate of data taking and scarcity of computing resources will require the coordinated provisioning of Cloud, HTC, HPC and scientific applications to deal with real-time data acquisition, efficient data transfer and access, data reduction and calibration of sensors with fast feedback loops from processed data, smart approaches to data processing, simulation and visualisation tools, sharing and access to massive capacity from heterogeneous computing facilities (HTC, HPC, Cloud) and CPU architectures. At the same time the global scientific and social challenges bring the urgency of scientific teams collaborating and sharing data and knowledge at the earliest possible stages of their research. Such scientific collaborations will drive e-Infrastructures to jointly tackle open science policy and service delivery challenges.

      The theme of coordination and collaboration is also at the heart of the e-IRG White Paper 2022. e-IRG identified three areas of cooperation (high-level strategy and coordination in Europe, service provisioning, and innovation) and sees ‘a clear need for a single e-Infrastructure umbrella forum for community building, high-level strategy setting and coordination for the entire e-Infrastructure. This umbrella forum is not a separate organisation, but a forum in which the user communities and the strategy and coordination bodies for the different parts of the European e-Infrastructure work on a concerted approach.’

      During the session we will explore pathways for e-Infrastructures to improve cross-infrastructure coordination in order to jointly deliver and support integrated access policies, service delivery and, overall, an integrated environment that better serves user needs, including aspects such as governance, sustainability and related policies. We will hear testimony from scientific communities on the need for e-Infrastructure coordination, and we will discuss e-Infrastructure governance structures that can help improve the way they currently engage with and serve users. The ultimate goal of bridging the gaps across e-Infrastructures is to provide integrated, user-friendly services that address users' complex digital needs and accelerate their science.

      A panel of experts from scientific communities and e-Infrastructures will lead the discussion and the audience will have the opportunity to suggest concrete actions.

      This session is supported by e-IRG.

      Panelists:

      Chiara Ferrari, Director of SKA-France and European SKA Forum Chair
      Tiziana Ferrari, Director, EGI Foundation
      Maria Girone, CTO, CERN OpenLAB
      Natalia Manola, CEO, OpenAIRE AMKE
      Antti Pursula, Head of Secretariat, EUDAT CDI
      Paul Rouse, Chief Community Relations Officer, GÉANT
      Matthias Schramm, Big Earth Observation Data for Environment, Vienna University of Technology
      Josephine Wood, Senior Programme Manager, EuroHPC JU

      Convener: Arjen van Rijn (NIKHEF)
    • 14:00 18:00
      GREAT KOM (invitation only) Ruby I,II

      Ruby I,II

    • 15:45 18:00
      Global Open Science Cloud Workshop - International Cloud Integration Crystal Room

      Crystal Room

      This afternoon session (15:45-18:00) continues the Global Open Science Cloud (GOSC) workshop. The full workshop description, target audience, format and agenda are given under the morning session (11:00-13:00, Crystal Room) above.

      Workshop page: https://indico.egi.eu/event/5957/

      Convener: Enol Fernandez (EGI.eu)
      • 15:45
        ELKH Cloud: milestones towards EOSC and ESFRI 8m

        The federated science cloud of the Eötvös Loránd Research Network, ELKH Cloud, is one of the award-winning research infrastructures in Hungary. Members of the scientific community are not only using but also developing and operating the cloud services: the Institute for Computer Science and Control (SZTAKI) and the Wigner Research Centre for Physics have provided the computing and data services to more than 200 research projects since 2016, when the cloud was inaugurated with the support of the Hungarian Academy of Sciences.
        Based on positive feedback received in recent years, as well as growing demand for artificial intelligence applications, the cloud capacity was significantly expanded by 2022 with support from ELKH. As a result, 5900 vCPU, max. 500+ vGPUs, 28 TB RAM, 338 TB SSD storage, 1.25 PB HDD storage and 100 Gbps network capacities have become available to the users.
        Research often requires complex, large-scale platforms based on the coordinated operation of multiple components. The ELKH Cloud therefore provides customisable, reliable and scalable reference architecture templates, among others with the help of cloud orchestration methods.
        In addition to operating systems and basic IaaS level cloud functionalities, the most popular artificial intelligence research and data ingestion frameworks are also available at PaaS level. The enhanced ELKH Cloud provides a competitive research infrastructure that also welcomes projects initiated by universities and national laboratories.
        ELKH aims to make the enhanced ELKH Cloud an integral part of the European Open Science Cloud (in the EGI-ACE project) and the SLICES ESFRI initiatives (in the SLICES-SC and SLICES-PP projects).

        Speaker: Dr Robert Lovas (SZTAKI)
    • 15:45 16:15
      break 30m
    • 16:25 18:25
      Artificial Intelligence and Machine Learning - jointly organised by EGI-ACE, AI4PublicPolicy, StairwAI, LETHE, iMagine Opal, Topaz (Vienna Andel Prague)

      Opal, Topaz

      Vienna Andel Prague

      This session will bring together a rich set of H2020 projects and initiatives, all working on Artificial Intelligence and Machine Learning frameworks and applications in partnership with EGI members. The session will present achievements and will discuss ways forward to expand the scope and applicability of the presented solutions.

      Conveners: Marco Rorro (EGI.eu), Ville Tenhunen
      • 16:25
        Introduction 10m Opal, Topaz

        Opal, Topaz

        Vienna Andel Prague

        Speaker: Marco Rorro (EGI.eu)
      • 16:35
        Next generation of the EOSC Portal - ML/AI enhanced user interface 20m Opal, Topaz

        Opal, Topaz

        EOSC is a pan-European initiative that offers access to resources and services that foster scientific research. It involves various stakeholders, from researchers, providers and facilitators up to commercial users. As a result, EOSC provides a variety of resources for an open science market in Europe. Currently, it provides access to diverse and large data sources, comprising research papers, access to specialised infrastructure, datasets, research projects, etc. To find relevant and interesting items, the user must understand what they need quite well, either by specifying expressive search queries or by navigating through ontologies of resources and services, which can be overwhelming. An AI/ML-enhanced recommender system will provide assistance by combining different sources of data, offering each user a personalised and customised view of the resources they could be interested in. These features will not only facilitate the discoverability of resources offered in the EOSC Portal, but also extract and exploit relevant relationships among them, to deliver concise, data-supported information to the user. This is particularly important both for large research institutes and for users representing the long tail of science, i.e. those not affiliated with large and well-funded organisations, conducting research in niche domains, or simply not aware of the existing European resources. Moreover, EOSC is a dynamic and evolving environment; any implemented feature has to take into account the possibilities of further co-creation and advances to address needs unforeseen at its inception. The demonstration will feature the overall user interface for EOSC, featuring a weighted search based on various criteria. We will introduce the use cases, concepts and classifications of information in the next-generation user interface; we will also demonstrate and explain the components and interactions of the recommender system, specifically with an outlook on the flexibility of the overall system and its possible evolution.

        Speaker: Krzysztof Martyn
      • 16:55
        Distributed Deep Learning by Horovod - a new EOSC service 20m Opal, Topaz

        Opal, Topaz

        The "Distributed Deep Learning by Horovod" EOSC service provides the infrastructure, resources and libraries to its users in order to perform effective distributed training of deep neural networks. Access to a ready-to-use Horovod cluster is provided through Jupyterlab. The talk will give a short introduction of the service developed and recently onboarded by the Neanias EU project and supported by the Hungarian ELKH cloud.

        Speaker: Dr Jozsef Kovacs (SZTAKI)
      • 17:15
        AI4EOSC 20m Opal, Topaz

        Opal, Topaz

        Vienna Andel Prague

        The AI4EOSC (Artificial Intelligence for the European Open Science Cloud) project delivers an enhanced set of advanced services for the development of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) models and applications in the European Open Science Cloud (EOSC). These services are bundled together into a comprehensive platform providing advanced features such as distributed, federated and split learning; novel provenance metadata for AI/ML/DL models; event-driven data processing services; and provisioning of AI/ML/DL services based on serverless computing. The project builds on top of the DEEP-Hybrid-DataCloud outcomes and the EOSC compute platform and services in order to provide this specialised compute platform. Moreover, AI4EOSC offers customisation components in order to provide tailor-made deployments of the platform, adapting to evolving user needs. The main outcomes of the AI4EOSC project will be a measurable increase in the number of advanced, high-level, customisable services available through the EOSC portal, serving as a catalyst for researchers, facilitating collaboration, easing access to high-end pan-European resources and reducing the time to results, paired with concrete contributions to the EOSC exploitation perspective, creating a new channel to support the build-up of the EOSC Artificial Intelligence and Machine Learning community of practice.

        Speaker: Alvaro Lopez Garcia (CSIC)
      • 17:35
        iMagine - Imaging data and services for aquatic science 20m Opal, Topaz

        Opal, Topaz

        Vienna Andel Prague

        iMagine is a new EU HORIZON project, coordinated by EGI, which aims to deploy, operate, validate and promote a dedicated iMagine AI framework and platform, connected to EOSC and AI4EU. It gives researchers in aquatic sciences open access to a diverse portfolio of AI-based image analysis services and image repositories from multiple RIs, all working on, or of relevance to, the overarching theme of healthy oceans, seas, and coastal and inland waters. The presentation will go deeper into the background, objectives and planned approach for this challenging new project.

        Speakers: Dick Schaap (Mariene Informatie Service MARIS BV), Alvaro Lopez Garcia (CSIC)
      • 17:55
        StAIrwAI, LETHE and AI4PublicPolicy 20m Opal, Topaz

        Opal, Topaz

        Vienna Andel Prague

        Speakers: Ville Tenhunen, Marco Rorro (EGI.eu), Andrea Cristofori
      • 18:15
        Closing discussion about future challenges 10m Opal, Topaz

        Opal, Topaz

        Vienna Andel Prague

    • 17:10 18:10
      Combining Copernicus data and EGI services for Earth Observation at scale: Internal Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      This session is organised by the C-SCALE project to bring together Earth Observation and distributed computing communities in Europe.
      Copernicus is a leading provider of Earth observation data, which is used by service providers, public authorities and other international organisations to improve the quality of life of European citizens. EOSC adds several federations of service providers, research initiatives and solution providers to this shared innovation space.
      C-SCALE combines relevant data and services from these two initiatives and provides 'Big Copernicus Data Analytics services' that streamline the integration of models, projects and observation programmes.
      The session will present recent achievements in the field, and will provide a forum to discuss exploitation paths for the combined services.

      Conveners: Diego Scardaci (EGI.eu), Enol Fernandez (EGI.eu)
    • 19:00 21:50
      Social activities 2h 50m
    • 19:55 21:30
      iMagine project welcome dinner (invitation only) 1h 35m
    • 09:00 15:00
      GREAT KOM (invitation only): GREAT KOM Ruby III

      Ruby III

    • 09:00 15:00
      iMagine project Kick-Off Meeting (Invitation only) Ruby I, II, III (Vienna Andel Prague)

      Ruby I, II, III

      Vienna Andel Prague

      09:00 → 09:05 Welcome and agenda overview.
      Convener: Gergely Sipos (EGI.eu)

      09:05 → 09:55 Partner self-introductions. 2’ each x 25
      Convener: Gergely Sipos (EGI.eu)

      09:55 → 10:15 Project overview and expected impact.
      Conveners: Dick Schaap (Mariene Informatie Service MARIS BV) , Gergely Sipos (EGI.eu)

      10:15 → 10:30 Break

      10:30 → 12:30 WP5 - AI services for image analysis (Status and Plans for M1-3). 8’talk +4 ‘Q&A for each WP5 service.
      Convener: Dick Schaap (Mariene Informatie Service MARIS BV)

      12:30 → 12:50 WP4 - DEEP AI & cloud infrastructure services overview (Status and Plans for M1-3) (20’ with Q&A).
      Conveners: Alvaro Lopez Garcia (CSIC), Gergely Sipos (EGI.eu)

      12:50 → 13:30 Lunch

      13:30 → 14:30 WP3 - Prototype services, integration support and feedback (Prototype status, integration plans for prototypes and production services for M1-3). Introduction of prototypes (8’ talk + 4’ Q&A each). Discussion of integration plans between WP3 and WP5.
      Convener: Dr Valentin Kozlov (Karlsruhe Institute of Technology)

      14:30 → 15:00 WP1 & WP2 Work practices
      Confluence, email lists, meetings (Mandy)
      Upcoming M&D (Gergely)
      External events (Ilaria)
      How to track events, effort, VA, etc (Ilaria, Mandy)
      Exploitation and Innovation Management (Smitesh)

      Conveners: Gergely Sipos (EGI.eu), Ilaria Fava, Mandy Yuju Lin (EGI.eu)
      • 09:00
        Lunch 1h
    • 09:30 11:00
      Training: Authentication and Authorisation Infrastructure Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      The world of research is mainly characterized by scientific communities, scientific applications, community services, and infrastructure resources joined together to achieve research objectives. Digital identity management is essential when it comes to regulating and filtering access to resources or services based on the characteristics of the user. The EGI Check-in service enables access to EGI services and resources using the federated identity mechanism.

      This tutorial will give an overview of digital identity management with the EGI AAI service components and will provide guidelines to support communities' needs for federated access through the EGI Check-in service.
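
      For non-browser clients, federated access typically boils down to obtaining an OIDC token. The sketch below walks through the standard OAuth 2.0 Device Authorization Grant (RFC 8628) against a generic OIDC provider; the endpoints and client_id are placeholders, and in practice tools such as oidc-agent wrap these steps for Check-in users:

      # Standard OAuth 2.0 device flow (RFC 8628); endpoints and client_id are
      # placeholders for whatever the OIDC provider publishes in its metadata.
      import time
      import requests

      DEVICE_ENDPOINT = "https://aai.example.org/oidc/devicecode"  # placeholder
      TOKEN_ENDPOINT = "https://aai.example.org/oidc/token"        # placeholder
      CLIENT_ID = "my-registered-client"                           # placeholder

      dev = requests.post(DEVICE_ENDPOINT,
                          data={"client_id": CLIENT_ID, "scope": "openid profile"},
                          timeout=30).json()
      print("Visit", dev["verification_uri"], "and enter code", dev["user_code"])

      while True:
          time.sleep(dev.get("interval", 5))
          tok = requests.post(TOKEN_ENDPOINT, data={
              "client_id": CLIENT_ID,
              "device_code": dev["device_code"],
              "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
          }, timeout=30)
          if tok.status_code == 200:
              print("got access token")
              break
          if tok.json().get("error") != "authorization_pending":
              raise RuntimeError(tok.text)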

      Target audience:
      This tutorial is designed for end users who want to know how to access the federated services and resources of the EGI Infrastructure and for Community Managers who want to know which approach to adopt for access management.

      AGENDA

      User Identity Management with EGI Check-in (15’)
      ( Sign up, Identity linking, Profile view, User profile in EOSC)

      Q&A (5’)

      Virtual Organization (VO) Management with COmanage Registry (30’)
      (VO Enrollment flow, VO group membership management, VO statistics)

      Q&A (5’)

      Virtual Organization (VO) Management with PERUN (30’)
      (Application form, VO group membership management, Join to a VO in Perun and login with Check-in)

      Q&A (5’)

      Conveners: Nicolas Liampotis (GRNET), Slavek Licehammer (CESNET), Valeria Ardizzone (EGI.eu)
      • 09:30
        Introduction to EGI Authentication and Authorization Infrastructure (AAI) 10m
        Speaker: Valeria Ardizzone (EGI.eu)
    • 09:30 11:00
      Training: Data management with FTS and RUCIO Sapphire (Vienna Andel Prague)

      Sapphire

      Vienna Andel Prague

      FTS allows scientists to move data files asynchronously from one storage to another. The service includes dedicated interfaces to display statistics of ongoing transfers and to manage storage resource parameters. FTS is ideal for moving large numbers of files as well as very large files, as the service has mechanisms to verify checksums and ensure automatic retries in case of failures.
      Rucio allows policy-based management of data with expressive statements. You say what you want done with the files, and Rucio figures out the details of how to do it: for example, keep three copies of a file on different continents with a backup on tape, or automatically remove copies of data after a set period or once their access popularity drops.
      FTS and Rucio power some of the largest scientific data-taking and analysis experiments in the world, such as the Large Hadron Collider.
      The EGI Federation, through its partners STFC and CERN, offers FTS and Rucio services to scientific communities.

      Target audience: This tutorial is designed for scientific communities who want to build scalable, reliable and sustainable data management infrastructures.

      Agenda

      Rucio (45’) - Timothy Noble
      - Intro + future plans
      - Admin commands
      - User commands
      - Demo
      - Instructions for users

      FTS (45’) - Rose Cooper
      - Intro + future plans
      - Transfer commands
      - Monitoring interface
      - Demo Transfer + monitoring
      - Instructions for users

      Conveners: Andrea Manzi (EGI.eu), Rose Cooper, Tim Noble (STFC)
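
      As a hedged illustration of Rucio's declarative model described above, the sketch below creates a replication rule with the Rucio Python client; the scope, dataset name, RSE expression and lifetime are illustrative placeholders, not values used in the training.

      # Minimal sketch, assuming a configured Rucio client environment
      # (rucio.cfg plus valid authentication). All names are illustrative.
      from rucio.client import Client

      client = Client()

      # Declare the desired outcome: three replicas of a dataset on sites
      # matching an RSE expression; Rucio (with FTS underneath) works out
      # which transfers are needed and retries failures.
      client.add_replication_rule(
          dids=[{"scope": "user.jdoe", "name": "my_dataset"}],  # hypothetical DID
          copies=3,
          rse_expression="tier=2",     # illustrative RSE expression
          lifetime=30 * 24 * 3600,     # drop the rule after 30 days (seconds)
      )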
    • 09:30 11:00
      Training: How to deploy ready-to-use BigData Platform on top of the EOSC Compute Platform - the DODAS solution Topaz (Vienna Andel Prague)

      Topaz

      Vienna Andel Prague

      DODAS enables the execution of user analysis code both in batch mode and interactively via the Jupyter interface. DODAS is highly customizable and offers several building blocks that can be combined to create the best service composition for a given use case. The currently available blocks allow combining Jupyter with HTCondor, Jupyter with Spark, or simply deploying a standalone Jupyter interface. In addition, they allow managing data via caches to optimize the processing of remote data, either via XCache or via MinIO S3 object storage. DODAS is based on Docker containers, with orchestration handled by Kubernetes, which makes it possible to compose the building blocks via a web-based user interface thanks to Kubeapps.

      In this tutorial the DODAS fundamentals will be presented, followed by a live, user-oriented demo.

      Target audience: This tutorial is designed for scientific communities, developers, and end users who want to set up interactive analysis platforms integrated with existing batch systems.

      Conveners: Daniele Spiga (INFN), Diego Ciangottini (INFN)
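
      As a small, hedged illustration of the MinIO S3 cache mentioned above, the sketch below lists objects in a bucket with boto3; the endpoint, credentials and bucket name are placeholders, not part of the DODAS tutorial.

      # Minimal sketch, assuming a MinIO S3 endpoint deployed via DODAS and
      # valid credentials; endpoint, keys and bucket name are illustrative.
      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="https://minio.example.org",   # hypothetical MinIO endpoint
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
      )

      # List the objects cached in a bucket, e.g. input data staged for analysis.
      for obj in s3.list_objects_v2(Bucket="analysis-cache").get("Contents", []):
          print(obj["Key"], obj["Size"])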
    • 09:30 13:00
      Workshop: Security Architecture and Risk Management Crystal Room (Vienna Andel Prague)

      Crystal Room

      Vienna Andel Prague

      Security architecture covers the overall system required to protect and defend your infrastructure, from people to hardware, from processes to policies, from auditing to risk management. The focal point of the security design is the security and privacy of sensitive data and minimising the attack surface.
      In this workshop we will tackle the main aspects of security architecture, the essentials of OS security, secure network design, virtualisation security and container security. We will discuss some of the security tools and learn the key steps of risk management, including how to identify risks, assess the damage they can cause and implement key security controls to prevent them. (A small illustrative risk-scoring sketch is included after the session details below.)

      This workshop is organized in collaboration with the EOSC Future project.

      Convener: Sven Gabriel (NIKHEF)
      • 09:30
        Introduction to Operational IT Security 1h

        In this introductory session we will discuss the necessary foundations for security operations and how to use the results of risk management in the security architecture.

        Speaker: Sven Gabriel (NIKHEF)
      • 10:30
        coffee break 30m
      • 11:00
        Security Architecture 1h
        Speaker: Daniel Kouril (CESNET)
      • 12:00
        SOC 1h
        Speaker: David Crooks (STFC)
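
      To make the risk-management steps concrete (identify, assess, prioritise), here is a tiny illustrative sketch; the risks and scores are invented for demonstration and are not workshop material.

      # Minimal risk-register sketch: score each risk as likelihood x impact
      # and rank the results to decide where to apply controls first.
      # All entries are illustrative.
      risks = [
          {"name": "Compromised user credentials",   "likelihood": 4, "impact": 4},
          {"name": "Unpatched VM image in the cloud", "likelihood": 3, "impact": 5},
          {"name": "Container escape on shared node", "likelihood": 2, "impact": 5},
      ]

      for risk in risks:
          risk["score"] = risk["likelihood"] * risk["impact"]

      # Highest-scoring risks are the first candidates for mitigating controls.
      for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
          print(f'{risk["score"]:>2}  {risk["name"]}')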
    • 11:00 11:30
      Coffee 30m
    • 11:30 13:00
      Training: Enabling international federated access in your service with Check-in Quartz (Vienna Andel Prague)

      Quartz

      Vienna Andel Prague

      The proposed tutorial focuses on how to enable user access to your service with EGI Check-in using federated identities, also covering some aspects of data protection and the service integration process with the EGI Federation Registry tool.

      Target audience:
      IT administrators both from Research Communities and e-Infrastructures

      Conveners: Nicolas Liampotis (GRNET), Valeria Ardizzone (EGI.eu)
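
      As a hedged sketch of what enabling federated access in a service can look like in practice, the snippet below wires a small Flask application to an OIDC provider such as Check-in using Authlib; the metadata URL, client ID and secret are placeholders obtained when registering the service (for example through the EGI Federation Registry), and this is not the tutorial's own material.

      # Minimal sketch of federated login in a service via OIDC (Flask + Authlib).
      # All identifiers below are illustrative placeholders.
      from flask import Flask, url_for
      from authlib.integrations.flask_client import OAuth

      app = Flask(__name__)
      app.secret_key = "change-me"  # session storage for the OAuth state

      oauth = OAuth(app)
      oauth.register(
          name="checkin",
          # assumed discovery URL; the real one depends on the deployment
          server_metadata_url="https://aai.egi.eu/auth/realms/egi/.well-known/openid-configuration",
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          client_kwargs={"scope": "openid profile email"},
      )

      @app.route("/login")
      def login():
          # Redirect the user to the identity provider for federated login.
          return oauth.checkin.authorize_redirect(url_for("callback", _external=True))

      @app.route("/callback")
      def callback():
          # Exchange the authorization code for tokens; recent Authlib versions
          # include parsed ID-token claims under "userinfo".
          token = oauth.checkin.authorize_access_token()
          return token.get("userinfo", {})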
    • 11:30 13:00
      Training: Federating and serving distributed data for computational users with EGI DataHub Sapphire (Vienna Andel Prague)

      Sapphire

      Vienna Andel Prague

      EGI runs the DataHub service, based on the Onedata technology from CYFRONET. DataHub is a high-performance data management solution that offers unified data access across globally distributed environments and multiple types of underlying storage. It allows researchers to share, collaborate and perform computations on the stored data easily.

      Users can bring data close to their community or to the compute facilities they use, in order to exploit it efficiently. This is as simple as selecting which (subset of the) data should be available at which supporting provider.
      This tutorial will show users and scientific communities how to publish, share, discover and reuse data with the EGI DataHub service. (A minimal API sketch is included after the agenda below.)

      The main features of DataHub are:
      - Discovery of data spaces via a central portal.
      - Policy-based data access.
      - Replication of data across providers for resiliency and availability purposes.
      - Integration with EGI Check-in allows access using community credentials, including from other EGI services and components.
      - File catalog to track replication of data and manage logical and physical files.

      With the EGI DataHub communities can implement various access policies for the data they share:
      - Unauthenticated, open access
      - Access after user registration or
      - Access restricted to members of a scientific community

      In this tutorial the EGI DataHub fundamentals will be presented, followed by a live, user-oriented demo.

      Target audience: This tutorial is designed for scientific communities and IT service providers who are interested in working with big datasets in hybrid cloud scenarios.

      Agenda:

      Intro
      - Basics - Onedata 101
      - Intro to EGI DataHub
      - DataHub current features

      Onedata roadmap for DataHub
      - Directory capacity
      - Data archive

      Hands On
      - Using the Web interface in EGI DataHub

      Conveners: Andrea Manzi (EGI.eu), Lukasz Dutka (CYFRONET)
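
      As a hedged illustration of programmatic access to data spaces, the sketch below lists a user's spaces through the Onezone REST API; the host, paths and token are assumptions that may differ between Onedata versions and deployments, and this is not part of the hands-on material.

      # Minimal sketch, assuming an EGI DataHub (Onezone) access token.
      # Host and REST paths are illustrative and may vary by Onedata version.
      import requests

      ONEZONE = "https://datahub.egi.eu"          # assumed Onezone host
      HEADERS = {"X-Auth-Token": "YOUR_ACCESS_TOKEN"}

      # List the data spaces visible to the authenticated user.
      space_ids = requests.get(f"{ONEZONE}/api/v3/onezone/user/spaces",
                               headers=HEADERS, timeout=10).json().get("spaces", [])

      for space_id in space_ids:
          details = requests.get(f"{ONEZONE}/api/v3/onezone/user/spaces/{space_id}",
                                 headers=HEADERS, timeout=10).json()
          print(space_id, details.get("name"))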
    • 11:30 13:00
      Training: Reproducible Open Science With Big Data - The EGI Notebooks and Binder services Topaz (Vienna Andel Prague)

      Topaz

      Vienna Andel Prague

      The EGI Notebooks service is an environment based on Jupyter and the EGI cloud service that offers a browser-based, scalable tool for interactive data analysis. The service provides users with notebooks in which they can combine text, mathematics, computations and rich media output.

      In this tutorial the EGI Notebooks fundamentals will be presented, and it will be explained how to use the service with Binder and other open-source solutions to implement Open Science.

      Target audience: This tutorial is designed for scientific communities, programmers and IT service providers who are interested in using Jupyter Notebooks as a tool for Open Science.

      Convener: Enol Fernandez (EGI.eu)
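
      As a small illustration of the kind of reproducible, rich-output cell one would run in EGI Notebooks or Binder (not taken from the tutorial itself):

      # Illustrative notebook cell: a self-contained computation with plot output.
      import numpy as np
      import matplotlib.pyplot as plt

      x = np.linspace(0, 2 * np.pi, 200)
      plt.plot(x, np.sin(x), label="sin(x)")
      plt.plot(x, np.cos(x), label="cos(x)")
      plt.legend()
      plt.title("Same code, same environment, same figure - anywhere")
      plt.show()

      For Binder, reproducibility also typically means committing the environment definition (e.g. a requirements.txt or environment.yml) alongside the notebook, so the same software stack can be rebuilt on demand.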
    • 13:00 14:00
      Lunch 1h