EGI Conference on Challenges and Solutions for Big Data Processing on Cloud

Europe/Amsterdam
CWI Conference Centre

CWI Conference Centre

CWI Building Science Park 123, 1098XG Amsterdam
Tiziana Ferrari (EGI.EU)
Description
This year's EGI Autumn Conference will focus on the open issues related to the efficient processing of big data, towards the realisation of the EGI vision of the Open Science Commons. It will feature user-oriented sessions as well as a rich cloud track, including:
  • the joint EGI-GÉANT two-day Symposium on Federated Community Cloud Services for e-Science, covering the requirements, opportunities and next steps for the implementation of a publicly funded community cloud for the European Research Area;
  • the CloudWATCH plugfest and workshop on cloud standards profiles, which will see implementations of a number of cloud-related standards tested against each other for interoperability, followed by a closing workshop where participants will elicit use cases, supporting implementations and, where needed, hosting standards development organisations for specific profiles on existing standards.

DOCUMENT for live comments: http://go.egi.eu/livedoc

On 24 September, after the EGI conference, participants are warmly invited to join the EUDAT networking cocktail from 17:30 until 19:30 in the foyer area of the RDA conference venue. It will be an excellent opportunity to network and share a drink together. EUDAT will host a poster session for organisations, initiatives and projects to showcase their data-related activities and results, and EGI participants are welcome to submit an application for a poster to be displayed during the networking cocktail. For more information see the EUDAT networking cocktail session details.

CALL FOR PARTICIPATION
A call for participation in the joint EGI-GÉANT Symposium is open until Tuesday 19 August. Cloud providers and integrators of community cloud offerings, cloud specialists, cloud users and other stakeholders from the research and education community are invited to participate by submitting proposals for presentations and workshops.

WHO SHOULD ATTEND THE CONFERENCE

  1. User communities with cloud service requirements: the conference will gather expert researchers, cloud technologists and cloud providers to discuss scientific use cases and requirements for big data management, analysis and re-use on the cloud;
  2. Community and commercial cloud providers: the conference will provide networking opportunities for cloud providers to discuss technology roadmaps and challenges related to operating a cloud federation;
  3. Cloud technologists: the conference will offer opportunities to increase user adoption of community cloud service solutions and to collect new requirements.
REGISTRATION
While the programme is being finalised, we encourage you to register online – no registration fee is required, but the capacity of the conference venue is limited.

Registration for the Cloud Plugfest is free of charge, but handled separately.

KEY DATES

  • deadline for submission of abstracts to the EGI-GÉANT Symposium: 19 August 2014
  • notification of acceptance of abstracts: 29 August
  • deadline for registration to the conference: 15 September

Participants
  • Adam Huffman
  • Afonso Duarte
  • Alessandro Costantini
  • Alexandre Bonvin
  • Alvaro Lopez Garcia
  • Alysson Bessani
  • Anatoli Danezi
  • Andre Schaaff
  • Andrea Manieri
  • Andrea Manzi
  • Andreas Drakos
  • Andres Steijaert
  • Andrii Zinchenko
  • Annabel Grant
  • Arjen van Rijn
  • Balazs Konya
  • Bernd Schuller
  • Bill Pulford
  • Björn Hagemeier
  • Boris Parak
  • Boro Jakimovski
  • Brook Schofield
  • Carmela Asero
  • Chris Morris
  • Christine Staiger
  • Christos Filippidis
  • Christos Kanellopoulos
  • Christos Loverdos
  • Christos Markou
  • Cyril Lorphelin
  • Daniel Kouril
  • David Britton
  • David Colling
  • David Groep
  • David Kelsey
  • David Meredith
  • Davide Salomoni
  • Dean Flanders
  • Diego Scardaci
  • Dobrisa Dobrenic
  • Doina Cristina Aiftimiei
  • Dr David Wallom
  • Emanouil Atanassov
  • Emir Imamagic
  • Enol Fernández
  • Enrico Vianello
  • Erik-Jan Bos
  • Erwin Goor
  • Eugene Krissinel
  • Farhat Naureen Memon
  • Fernando Aguilar
  • Feyza Eryol
  • Gabor Terstyanszky
  • Geneviève Moguilny
  • Geneviève Romier
  • Gergely Sipos
  • Geunchul Park
  • Giacinto Donvito
  • Giovanni Aloisio
  • Guido Aben
  • Han-Wei YEN
  • Haris Gavranovic
  • Helmut Heller
  • HENRY AZUKA ILOKA
  • Horst Schwichtenberg
  • Hrachya Astsatryan
  • Ian Collier
  • Irene Nooren
  • Isabel Campos Plasencia
  • Iván Díaz Álvarez
  • izhar ul hassan
  • Jan Bot
  • Jan Gruntorad
  • Jan Meizner
  • Javier Jimenez
  • Jeremy Coles
  • Jerome Pansanel
  • Jesus Marco
  • John Gordon
  • Jorge Gomez
  • João Pina
  • Juan Luis Font
  • Jure Kranjc
  • Kagkelidis Konstantinos
  • Karl Meyer
  • Kostas Koumantaros
  • Krzysztof Kurowski
  • Kurt Baumann
  • Licia Florio
  • Linda Cornwall
  • Lorène Béchard
  • Luciano Gaido
  • Ludek Matyska
  • Lukasz Dutka
  • Malgorzata Krakowian
  • Marcin Radecki
  • Marion MASSOL
  • Marios Chatziangelou
  • Mariusz Sterzel
  • Mark McAndrew
  • Mark Mitchell
  • Mark Santcroos
  • Marko Bonac
  • Michael Enrico
  • Michal Procházka
  • Michel Drescher
  • Michel Jouvin
  • Mihály Héder
  • Mikael Borg
  • Mikael Linden
  • Miroslav Ruda
  • Mohammad Nur
  • Myung-Seok Choi
  • Natalia Manola
  • Nuno Ferreira
  • Olav Kvittem
  • Onur Temizsoylu
  • Owen Appleton
  • Panos Louridas
  • Patricio Sandoval
  • Patrick Fuhrmann
  • Peter Baumann
  • Rafael C Jimenez
  • Ralph Müller-Pfefferkorn
  • Rimantas Kybartas
  • Rob Quick
  • Robert Lovas
  • Roberto Sabatino
  • Roger Jones
  • Ron Trompert
  • ruben Riestra
  • RUBEN VALLES
  • Sandro Fiore
  • Sang-Hwan Lee
  • Sara Coelho
  • Sara Ramezani
  • Sergio Andreozzi
  • Simon C. LIN
  • Simon Leinen
  • Stefan Paetow
  • Stuart Pullinger
  • Sven Gabriel
  • Sy Holsinger
  • Tadeusz Szymocha
  • Taesang Huh
  • Tamas Balogh
  • Tiziana Ferrari
  • Todor Gurov
  • Tomasz Szepieniec
  • Viet Tran
  • Vincenzo Spinoso
  • Wo Chang
  • Won-Kyung Sung
  • Yannick LEGRE
  • Yuri Demchenko
  • Zain-ul-Abdin Khuhro
  • Zdenek Sustr
  • Ziad El Bitar
    • 13:30 14:00
      Welcome and Open Science Commons: vision and strategy Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Dr Tiziana Ferrari (EGI.EU)
      slides
    • 14:00 15:30
      BOF on Let's keep the "single" in Single Sign On H.320 (NIKHEF)

      H.320

      NIKHEF

      European researchers use a large and increasing range of web facilities to support their work, including collections of publications and published data, collections of data with restricted access, and remote access to experimental equipment.

      Managing multiple user IDs is becoming onerous. There are several active Single Sign On initiatives which aim to help with this. This session will be an opportunity for people planning to provide or use Single Sign On services to meet each other and coordinate this work.

      Conveners: Chris Morris (STFC), Peter Solagna (EGI.EU)
      • 14:00
        Introduction 5m
        Speaker: Chris Morris (STFC)
      • 14:05
        FIM4R requirements 15m
        Speaker: David Kelsey (STFC)
        Slides
      • 14:20
        Moonshot 15m
        Speaker: Stefan Paetow (JISC)
        Slides
      • 14:35
        Diamond Light Source use cases 15m
        Speaker: Bill Pulford
        Slides
      • 14:50
        Enabling SSO to EGI Cloud services, a PoC 15m
        Speaker: Peter Solagna (EGI.EU)
        Slides
      • 15:05
        The EUDAT perspective for SSO capabilities 15m
        Speakers: Chris Morris (STFC), Dr Jens Jensen (STFC)
        Slides
      • 15:20
        ELIXIR requirements for SSO capabilities 10m
        Speaker: Mikael Linden (CSC)
        Slides
    • 14:00 15:30
      ELIXIR and KM3NeT storage case studies and EGI data management services Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Convener: Patrick Fuhrmann (DESY)
      slides
      • 14:00
        Introduction 10m
        Speaker: Patrick Fuhrmann (DESY)
        Slides
      • 14:10
        Computing Requirements for a several cubic kilometer sized Underwater Neutrino Telescope 20m
        Speakers: Christos Filippidis (NCSR Demokritos), Christos Markou (National Center for Scientific Research Demokritos)
        Slides
      • 14:30
        The ELIXIR storage case studies 20m
        Speakers: Mikael Borg (Bioinformatics Infrastructure for Life Sciences), Rafael Jimenez (ELIXIR Hub, EBI)
        Slides
      • 14:50
        Architecture of the EGI Data Management Software Stack 20m
        Speaker: Patrick Fuhrmann (DESY)
        Slides
      • 15:10
        An overview of EGI cloud storage services 20m
        Speaker: Dr Giacinto Donvito (INFN)
        Slides
    • 14:00 15:30
      Status, perspectives and security for the EGI Federated Cloud Turingzaal

      Turingzaal

      CWI Conference Centre

      Conveners: David Wallom (OXFORD), Michel Drescher (EGI.EU)
      • 14:00
        EGI Federated Cloud status 10m
        Speaker: David Wallom (OXFORD)
        Slides
      • 14:10
        Federated Cloud computing services 10m
        Speaker: Boris Parak (CESNET)
        Slides
      • 14:20
        Federated Cloud storage services 10m
        Speaker: Christos Loverdos (GRNET)
      • 14:30
        Appliance lifecycle services 10m
        Speaker: Marios Chatziangelou (IASA)
        Slides
      • 14:40
        VO services and identity propagation 10m
        Speaker: Mr Miroslav Ruda (Cesnet)
        Slides
      • 14:50
        Operational integration & federation 10m
        Speaker: Malgorzata Krakowian (EGI.EU)
        Slides
      • 15:00
        EGI Federated Cloud – Support process for use cases 10m
        The EGI Federated Cloud was launched in production in May 2014 during the EGI CF 2014. Since then, many European scientific communities have been interested in testing this new kind of resource offered by the EGI infrastructure, to understand whether their research could profit from it. In recent months the EGI.eu User Community Support Team (UCST) has therefore received many requests from communities asking for help to get started with the EGI Federated Cloud. To properly manage these requests and to offer an adequate support service to all interested communities, the EGI.eu UCST defined a support process for use cases, identifying all the steps needed to allow a community to fully exploit the EGI Federated Cloud (from testing to production). This support process, centrally coordinated by EGI.eu, also involves the NGIs, which are in charge of supporting national use cases. Relevant training material was also prepared to complement the direct user assistance. This presentation will describe the support process defined for EGI Federated Cloud use cases and introduce the training material now available. Furthermore, the main use cases currently supported will be briefly listed, showing their status and how they are profiting from the EGI Federated Cloud. Finally, a list of requirements collected from the user communities and not yet fully satisfied by the EGI Federated Cloud will be presented. This list will be used to drive the future evolution of the EGI Federated Cloud towards real user needs.
        Speaker: Diego Scardaci (INFN)
        Slides
      • 15:10
        Q&A 20m
        Speaker: Michel Drescher (EGI.EU)
    • 15:30 16:00
      Coffee Break 30m
    • 16:00 17:30
      BOF on Let's keep the "single" in Single Sign On H.320 (NIKHEF)

      H.320

      NIKHEF

      European researchers use a large and increasing range of web facilities to support their work, including collections of publications and published data, collections of data with restricted access, and remote access to experimental equipment.

      Managing multiple user IDs is becoming onerous. There are several active Single Sign On initiatives which aim to help with this. This session will be an opportunity for people planning to provide or use Single Sign On services to meet each other and coordinate this work.

      Conveners: Chris Morris (STFC), Peter Solagna (EGI.EU)
      • 16:00
        Instruct and Bio MedBridges approaches to SSO 15m
        Speaker: Chris Morris (STFC)
        Slides
      • 16:15
        Using federated AAI for collaborations: state of the art and future 15m
        Speaker: Licia Florio (TERENA)
        Slides
      • 16:30
        Discussion 1h
    • 16:00 17:30
      ELIXIR and KM3NeT storage case studies and EGI data management services Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Convener: Patrick Fuhrmann (DESY)
      slides
      • 16:00
        The GLOBUS data transfer service 15m
        Speaker: Helmut Heller (BADW)
        Slides
      • 16:15
        The FTS3 data transfer service 15m
        Speaker: Andrea Manzi (CERN)
        Slides
      • 16:30
        The CERN Data Management tool box 20m
        Speaker: Andrea Manzi (CERN)
        Slides
      • 16:50
        The dCache storage technology 20m
        Speaker: Patrick Fuhrmann (DESY)
        Slides
      • 17:10
        Discussion 20m
    • 16:00 17:30
      Status, perspectives and security for the EGI Federated Cloud Turingzaal

      Turingzaal

      CWI Conference Centre

      Conveners: David Wallom (OXFORD), Michel Drescher (EGI.EU)
      • 16:00
        Welcome and introduction 10m
        Speaker: David Wallom (OXFORD)
      • 16:10
        Hydrodynamic and Water Quality Modeling using EGI FedCloud 25m
        The framework for this development is the collaboration of our centre within the European LIFE+ project ROEM+, where a Spanish SME, Ecohydros, is addressing the problem of modelling water quality in a water reservoir (Cuerda del Pozo in Soria) that supplies drinking water to a small city in Spain. Understanding the water quality model requires a complete simulation of the processes that affect the water, and then validating this simulation with real data. Our group at IFCA has worked together with Ecohydros for the last five years, first to implement a near-real-time data acquisition system that feeds an offline data management system and, more recently, to model the physical, chemical and biological conditions of the water using a well-known software suite, Delft3D. The key process to be modelled is eutrophication, caused by an excess of nutrients in water reservoirs and leading to an increase in vegetation and other organisms and microorganisms, in particular algae, resulting in algal blooms when combined with certain conditions of solar radiation and water temperature. The ensuing depletion of oxygen in the water leads to the death of many microorganisms and in general to a large reduction of life in the reservoir; the impact is particularly important in reservoirs used for urban supply. The final aim of the project is to develop an early warning system for this reservoir that allows policy makers and authorities to know when an algal bloom is going to happen, in order to take action.

        The work presented here is centred on the deployment in a cloud infrastructure, in this case the FedCloud environment, of the different components required by the model, and on its execution with different parameters and conditions to obtain predictions that, when contrasted with real data, allow the validation of the model. Delft3D is an open source software suite that works over a mesh made from a map of the modelled water body, in our case the Cuerda del Pozo reservoir in Soria (Spain). The program runs with 2D and 3D meshes with a number of layers that can be edited by the user, so mesh resolution is an important factor for performance. With a low or medium resolution mesh (cells larger than 250x250 metres with few vertical layers), execution can be accomplished on standard PCs. However, when a detailed simulation is required, as is the case when modelling the complex conditions leading to eutrophication, the resolution has to be increased (e.g. 100x100 metre cells with more than 30 vertical layers) and more powerful computers are needed. Given our project requirements in CPU (>=2.5 GHz, a few cores), memory (>12 GB) and disk (up to a few terabytes), we need a cloud service provider like EGI FedCloud that allows us to manage the entire workflow of data processing and analysis. Given the size of the output, we also need a service able to store a few terabytes and let us transfer it easily and quickly.

        Eutrophication is a problem that directly impacts water quality and human health. This project will provide, as an overall result, an optimisation of eutrophication management, but it will also provide tools for the integrated management of the watershed, to assess in terms of ecological status the combined effects of different natural processes and pressures, including climate, land use, agricultural and forestry management, etc. As the LIFE+ project delivers its results, new platforms for new water reservoirs may be implemented (this is already happening in another water reservoir at Avila, with similar problems) and the same model can be applied. The key point is to understand, thanks to this model, the main reason for the eutrophication and, if it is due to human activity (such as farming near the water), to propose and adopt measures to reduce or eliminate it.

        From the EGI point of view, this project is a good example of the use of a cloud infrastructure in research following the initiative of an SME. For small and medium consultancy companies in the environmental field, a data analysis platform that can manage the workflow of a complex model and a large amount of data provides a competitive advantage. The cloud also opens the door to providing graphical results to researchers via web apps or streamed desktops, to storing a large amount of data in the cloud itself, and to giving companies resources that they do not otherwise have. For biologists and other researchers who often do not have a strong background in computing, a ready-to-launch image can be set up that contains every component needed to execute a model using the Delft3D software suite. Researchers would thereby be able to select the configuration of the virtual machine they need in terms of CPUs, memory and storage, select the image containing Delft3D ready to process, avoid the installation and configuration steps, run a model more quickly and more efficiently, and delete the virtual machine once their models are finished. For further information please read the attached file.
        Speaker: Fernando Aguilar (CSIC)
        Slides
      • 16:35
        Experimental Federated Geoprocessing Cloud Services for Earth Science Community 25m
        The Federated Geoprocessing Cloud Services for the Earth Science Community consist of a few heterogeneous components (such as an Interoperable Web Portal for Parallel Geoprocessing of Satellite Image Indices, cloud storage, etc.) with separate authentication and authorisation approaches. The unified service keeps the identity information of the community members in one centralised place and provides all these different components to the Earth Science community. The main aim of this article is to introduce the Federated Geoprocessing Cloud Services for the Earth Science Community, which consist of identity and service providers.
        Speaker: Dr Hrachya Astsatryan (Head of HPC Laboratory, Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia)
        Slides
      • 17:00
        Panel discussion 30m
        Speakers: David Wallom (OXFORD), Davide Salomoni (INFN), Fernando Aguilar (CSIC), Mr Ruben Riestra (inmark), Dr Sven Gabriel (FOM)
    • 17:30 19:30
      EUDAT Networking Cocktail RDA 4th Plenary Venue (Meervaart Congres & Event)

      RDA 4th Plenary Venue

      Meervaart Congres & Event

      EUDAT is holding its 3rd Conference in De Meervaart, at the RDA 4th Plenary venue. EGI Conference participants are warmly invited to join the networking cocktail starting at 17:30 in the foyer area of the conference centre. It will be an excellent opportunity to network and share a drink together.

      EUDAT will host a poster session for organisations, initiatives and projects to showcase their data-related activities and results, and EGI participants are welcome to submit an application for a poster to be displayed during the networking cocktail.

      Travel directions See the EGI Conference overview page

      Showcase your activities and results at the EUDAT 3rd Conference Networking Cocktail. The 3rd EUDAT Conference, 24-25 September 2014, De Meervaart, will address key themes in the area of data infrastructures and is of particular interest to data practitioners, infrastructure providers and the researchers who use them, as well as computer scientists and policy makers. Even if you are attending another event organised in conjunction with the EUDAT conference, you are welcome to attend the networking cocktail and display a poster, taking advantage of the broad and rich audience attending the meeting.

      The poster will be on display throughout the two-day event and will be a special focus during the networking cocktail on 24 September from 17:30 onwards (De Meervaart Foyer). For more details and the poster application visit the poster session.

    • 09:00 10:30
      CloudWATCH Cloud Plugfest and Standards Profile Workshop Ground floor meeting room (Matrix I Building)

      Ground floor meeting room

      Matrix I Building

      More and more consumers are expressing concerns about the lack of control, interoperability and portability. Why? Because these are central to avoiding vendor lock-in, whether at the technical, service delivery or business level, thus ensuring broader choice. As a user, you are protected from vendor lock-in by open standard interfaces, which spare you the significant migration costs you would face if open interfaces were not provided.

      For a European research provider, interoperability means more efficient resource utilisation. The EGI Federated Cloud is a prime example of this. Through a decade of collaborative peer work in Europe and beyond, the experts behind FedCloud have gained considerable expertise in standards development and implementation, laying the foundation for interoperability testing and fairer competition. This expertise is now also supporting interoperability between Europe and Brazil through EUBrazil Cloud Connect.

      The Cloud Interoperability Plugfest project, Cloud Plugfests, is an international co-operative community series designed to promote interoperability efforts on cloud-based software, frameworks, and standards among vendors, products, projects and implementations. The series supports ongoing and continuing interoperability efforts among and between the sponsoring organisations, and with the cloud community at large. These efforts include organised software demonstrations, in-person developer gatherings, and continuous access to professional-grade cloud testing frameworks and tools.

      The Plugfest will see Brazilian participation for the very first time. EUBrazil Cloud Connect will test the OCCI implementation of fogbow, middleware developed by the Federal University of Campina Grande. This new connecting bridge with Brazil focuses on a new implementation of the OCCI API (fOCCI) as well as an extension to accommodate requests for resources exploited in an opportunistic way. This workshop is also an opportunity to showcase the growing international interest in interoperability testing and to show why the Plugfest is important for certifying compliance of implementations with the specification.

      Reducing ambiguity through standards profiling

      A standard very often supports multiple use cases in its specification text, which can lead to ambiguity and a lack of real interoperability across different interfaces. Ambiguity is a fundamental challenge for interoperability standards, and removing it is key to maximising interoperability. A profile on a standard clarifies in an unambiguous way how the standard has to be interpreted, explaining how to implement it based on your specific use case.
      As a starting point for this, CloudWATCH has created a portfolio of European and international use cases on technical, policy and legal requirements. The common standards profiles derived from these use cases will be tested around the federation of cloud services.

      This challenging work is part of CloudWATCH's mission of making an active contribution to standards and certification, driving interoperability as a critical factor in boosting innovation in Europe.

      CloudWATCH use case portfolio - insights from standards groups
      The workshop will take a brief look at the standardisation landscape today. It will identify some of the most important requirements that existing standards can meet. Interactive discussions will then explore possible pathways to filling gaps, either through new standards or extensions to existing standards. The workshop will wrap up with an overview of the benefits of interoperability and how it is set to become a hot topic around best practices in the business community in 2015.

      Organisers: Fraunhofer FOKUS and EGI.eu
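      A minimal, hedged sketch of what one round of such interoperability testing can look like in practice: probing a cloud provider's OCCI query interface and listing the category definitions it advertises. The endpoint URL is a placeholder and authentication (e.g. the X.509/VOMS proxies used in the EGI Federated Cloud) is omitted; the official Plugfest test suites go far beyond this simple check.

      # Illustrative only: query an OCCI endpoint's discovery interface ("/-/")
      # and list the Category definitions it advertises. The endpoint is
      # hypothetical and authentication is deliberately omitted.
      import requests

      OCCI_ENDPOINT = "https://occi.example.org:11443"  # placeholder provider URL

      def list_categories(endpoint):
          """Fetch the OCCI discovery document and return the advertised categories."""
          resp = requests.get(endpoint + "/-/", headers={"Accept": "text/plain"}, timeout=30)
          resp.raise_for_status()
          # In the plain-text rendering, each advertised kind/mixin/action appears
          # on a line starting with "Category:".
          return [line for line in resp.text.splitlines() if line.startswith("Category:")]

      if __name__ == "__main__":
          for category in list_categories(OCCI_ENDPOINT):
              print(category)

      Comparing the category sets advertised by two providers gives a first, crude indication of whether the same client can be expected to work against both; the Plugfest then exercises actual resource creation and lifecycle operations.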

      • 09:00
        Welcome 10m
        Speaker: Michel Drescher (EGI.EU)
        Audio conferencing details
      • 09:10
        Setup & Networking 10m
      • 09:20
        Mapping of groups and standards 10m
      • 09:30
        Interoperability testing 1h
    • 09:00 10:30
      EGI Pay-for-Use Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Since January 2014, EGI has been running a dedicated Pay-for-Use Proof of Concept (PoC) to understand how new business models can augment the traditional free-at-the-point-of-use delivery model.
      Activities have picked up as of May with a dedicated task in EGI-InSPIRE PY5.
      Initial results of the first six months were summarised in a dedicated report and presented at the EGI-InSPIRE EC review in June (see material).
      Moving into the "2nd Phase" of activities, these two 90-minute sessions serve as a checkpoint to review, discuss and plan the remaining work before the final report is prepared at the end of the year, and to allow input from the wider community to help shape future activities.
      The sessions offer a mix of presentations and ample discussion opportunities around the following topics: PoC overview and results to date, business models and pricing schemes, technical tools and development, call for participation, pre-commercial procurement and public procurement of innovative solutions, and legal and policy aspects.

      Convener: Sy Holsinger (EGI.EU)
      Report
      • 09:00
        Pay-for-Use PoC: Overview and results to date 20m
        The purpose of this presentation is to provide a high-level overview of the Pay-for-Use Proof of Concept activities, objectives and main results to date to ensure that all participants are on the same page for contributing to discussions.
        Speaker: Sy Holsinger (EGI.EU)
        Slides
      • 09:20
        Business Models and Pricing Schemes 20m
        The main objective of this presentation is to articulate the discussions that have taken place over the last several months, building on Phase 1 achievements, in order to reach decisions on the following: 1) Do the business models represent the full landscape? If not, what is missing or what needs to change? 2) Selection of pricing schemes: what can be supported technically? Are they flexible and able to support a business strategy? What is missing in order to provide/sign a contract?
        Speaker: Sy Holsinger (EGI.EU)
        Slides
      • 09:40
        Discussion 15m
      • 09:55
        All things Technical: How it currently works and changes to be made (e.g. tools) 20m
        Speaker: Diego Scardaci (INFN)
        Slides
      • 10:15
        Discussion 15m
    • 09:00 10:30
      EGI-GEANT Symposium: setting the scene Turingzaal

      Turingzaal

      CWI Conference Centre

      Welcome & introduction, the EGI & GÉANT perspectives, the national view

      Conveners: Andres Steijaert (SurfSARA), Dr Tiziana Ferrari (EGI.EU)
      • 09:00
        Welcome and introduction 15m
        Slides
      • 09:15
        The EGI perspective 20m
        Speaker: Dr Tiziana Ferrari (EGI.EU)
        Slides
      • 09:35
        The GÉANT perspective 20m
        Speaker: Andres Steijaert (SURFnet)
        Slides
      • 09:55
        Cloud computing activities and contributions by CESNET 10m
        CESNET is the Czech NREN and NGI. Its computing branch, METACentrum, has a history of virtualization-based solutions (data center virtualization, on-demand virtual cluster services, etc.), providing mainly computing services for the Czech academic community. It operates its own national private academic IaaS cloud, which is free for academic use. Its establishment was motivated by the need to react flexibly to user communities (mostly provisioning highly specialised worker nodes for scientific computing). CESNET also operates an OpenNebula-based site in EGI's federated cloud platform. CESNET is active in development for clouds, contributing to major open-source projects such as OpenNebula, EMI or EGI FedCloud, and leading the development of others, such as the rOCCI framework (an OCCI library and compatibility service for various cloud management platforms) or Perun (an identity and attribute manager of choice for many VOs related to the EGI environment). CESNET also participates in the GN3plus project, focusing on cloud standardization and interoperability, and on networking solutions for clouds.
        Speaker: Zdenek Sustr (CESNET)
        Slides
      • 10:05
        GRNET Cloud Services and Collaborations 10m
        GRNET has been active in European e-infrastructure collaborations for more than 10 years to date. In the area of Grid computing, it has been active in all EGEE-series projects, and now in EGI-InSPIRE and beyond. In networking, it has long been a key partner in the GÉANT projects. In cloud computing it has developed and operates its own cloud infrastructure, offering Infrastructure as a Service to the Greek research and academic community. Recently these different strands have been coming together. In EGI-InSPIRE, GRNET is working towards bridging the worlds of grids and clouds; in cloud computing, it is expanding its collaborations in Europe; in GÉANT, it leads the Cloud Integration task of the Support to Clouds activity. Here we will give a short presentation of the cloud services and activities offered by GRNET in both EGI-InSPIRE and GN3plus, including our plans for the future.
        Speakers: Kostas Koumantaros (GRNET), Panos Louridas (GRNET)
        Slides
      • 10:15
        New storage and Cloud computing services proposed by France Grilles 10m
        Small and medium scientific communities acquire more and more data, and the availability of e-Infrastructures for managing and analysing these data is essential for providing new scientific results in a timely manner. To address these needs, the French NGI France Grilles, which operates the national production grid infrastructure, started two projects in 2013: FG-iRODS and FG-Cloud. They aim at federating distributed storage and computing resources. These initiatives have become a reality and France Grilles, in addition to the already existing grid and DIRAC services, now offers two new services to the French scientific community: a federated and distributed storage infrastructure based on iRODS, and a federated IaaS cloud. In this contribution, an overview of each project will be given, including the technical infrastructure, the roadmap and the upcoming developments. Use cases of these services will also be presented.
        Speaker: Jerome Pansanel (CNRS)
        Slides
    • 10:30 11:00
      Coffee break CWI Building

      CWI Building

      CWI Building Science Park, 1098XG Amsterdam
    • 11:00 12:30
      CloudWATCH Cloud Plugfest and Standards Profile Workshop Ground floor meeting room (Matrix I Building)

      Ground floor meeting room

      Matrix I Building

      More and more consumers are expressing concerns about the lack of control, interoperability and portability. Why? Because these are central to avoiding vendor lock-in, whether at the technical, service delivery or business level, thus ensuring broader choice. As a user, you are protected from vendor lock-in by open standard interfaces, which spare you the significant migration costs you would face if open interfaces were not provided.

      For a European research provider, interoperability means more efficient resource utilisation. The EGI Federated Cloud is a prime example of this. Through a decade of collaborative peer work in Europe and beyond, the experts behind FedCloud have gained considerable expertise in standards development and implementation, laying the foundation for interoperability testing and fairer competition. This expertise is now also supporting interoperability between Europe and Brazil through EUBrazil Cloud Connect.

      The Cloud Interoperability Plugfest project, Cloud Plugfests, is an international co-operative community series designed to promote interoperability efforts on cloud-based software, frameworks, and standards among vendors, products, projects and implementations. The series supports ongoing and continuing interoperability efforts among and between the sponsoring organisations, and with the cloud community at large. These efforts include organised software demonstrations, in-person developer gatherings, and continuous access to professional-grade cloud testing frameworks and tools.

      The Plugfest will see Brazilian participation for the very first time. EUBrazil Cloud Connect will test the OCCI implementation of fogbow, middleware developed by the Federal University of Campina Grande. This new connecting bridge with Brazil focuses on a new implementation of the OCCI API (fOCCI) as well as an extension to accommodate requests for resources exploited in an opportunistic way. This workshop is also an opportunity to showcase the growing international interest in interoperability testing and to show why the Plugfest is important for certifying compliance of implementations with the specification.

      Reducing ambiguity through standards profiling

      A standard very often supports multiple use cases in its specification text, which can lead to ambiguity and a lack of real interoperability across different interfaces. Ambiguity is a fundamental challenge for interoperability standards, and removing it is key to maximising interoperability. A profile on a standard clarifies in an unambiguous way how the standard has to be interpreted, explaining how to implement it based on your specific use case.
      As a starting point for this, CloudWATCH has created a portfolio of European and international use cases on technical, policy and legal requirements. The common standards profiles derived from these use cases will be tested around the federation of cloud services.

      This challenging work is part of CloudWATCH's mission of making an active contribution to standards and certification, driving interoperability as a critical factor in boosting innovation in Europe.

      CloudWATCH use case portfolio - insights from standards groups
      The workshop will take a brief look at the standardisation landscape today. It will identify some of the most important requirements that existing standards can meet. Interactive discussions will then explore possible pathways to filling gaps, either through new standards or extensions to existing standards. The workshop will wrap up with an overview of the benefits of interoperability and how it is set to become a hot topic around best practices in the business community in 2015.

      Organisers: Fraunhofer FOKUS and EGI.eu

      • 11:00
        Interop testing 1h 30m
    • 11:00 12:30
      EGI Pay-for-Use Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Since January 2014, EGI has been running a dedicated Pay-for-Use Proof of Concept (PoC) to understand how new business models can augment the traditional free-at-the-point-of-use delivery model.
      Activities have picked up as of May with a dedicated task in EGI-InSPIRE PY5.
      Initial results of the first six months were summarised in a dedicated report and presented at the EGI-InSPIRE EC review in June (see material).
      Moving into the "2nd Phase" of activities, these two 90-minute sessions serve as a checkpoint to review, discuss and plan the remaining work before the final report is prepared at the end of the year, and to allow input from the wider community to help shape future activities.
      The sessions offer a mix of presentations and ample discussion opportunities around the following topics: PoC overview and results to date, business models and pricing schemes, technical tools and development, call for participation, pre-commercial procurement and public procurement of innovative solutions, and legal and policy aspects.

      Convener: Sy Holsinger (EGI.EU)
      Report
      • 11:00
        PCP and PPI (Pre-commercial Procurement and public procurement of innovative solutions): Overview and opportunities 20m
        Pre-Commercial Procurement (PCP) and Public Procurement of Innovation (PPI) are promoted by the EC as instruments to enable the public sector to provide the market with a clear indication of its needs, boosting industry research towards solutions that are compliant with those needs. By being involved in the PCP process itself, the participating companies will get the opportunity to acquire a leadership position in a new European ICT market created by the PCP R&D cooperation. This presentation illustrates how PCP can be an opportunity for ICT resource suppliers to offer new solutions to the public sector.
        Speaker: Vincenzo Spinoso (INFN)
        Slides
      • 11:20
        Discussion 20m
      • 12:00
        Final Discussion and Wrap-up 30m
    • 11:00 12:30
      EGI-GEANT Symposium: Platforms Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Nuno Ferreira (EGI.EU)
      • 11:00
        Hadoop analytics provisioning based on a virtual infrastructure 25m
        More than a year ago CESGA started to provide a Big Data service based on Hadoop. The initial request for this type of service came from the LIA2 research group working on Gaia, an ambitious mission of the European Space Agency. The initial implementation of the service was based on physical resources that were allocated through advance reservations in one of our supercomputer clusters. Unfortunately this led to delays in the provisioning of the resources and the need for reconfigurations each time a Hadoop cluster was created. At that time, a dedicated cluster was not a viable alternative because the demand for such a service did not warrant continuous usage of the resources. After evaluating different alternatives, the service was moved to our private cloud infrastructure, where the provisioning could be done in a much more flexible way. The performance loss was very small for Gaia's jobs, so their executions were moved to this platform, where they used up to 100 nodes. The new platform allowed us to offer a Hadoop-on-demand service to all our users, who could become familiar with the Hadoop ecosystem and develop their own algorithms. Different alternatives to extend the virtual infrastructure have been evaluated, including an extensive study of the suitability of FedCloud to run federated Hadoop clusters and a comparison with Amazon EC2. The results of this study are promising, showing a small degradation of performance for small to medium jobs and making the FedCloud platform suitable for development and testing purposes.
        Speaker: Ivan Alvarez (FCTSG)
        Slides
      • 11:25
        Science SQL 25m
        SQL has been the lingua franca for any-size data services in business, and has been tremendously successful in delivering flexible, scalable technology. Not so, however, in scientific and engineering environments. The main reason is data structure support: while flat tables are suitable for accounting and product catalogs, science needs substantially more complex information categories, such as graphs and multi-dimensional grids ("arrays"). The consequence has been a historical divide between "data" (large, only for download, no search) and "metadata" (small, agile, searchable). This is changing now. In June 2014, ISO commenced work on an extension to SQL which embeds any-size multi-dimensional arrays in tables and extends the query language with declarative operators. This standard, which will go by the official title ISO 9075 Part 15: SQL/MDA (for "Multi-Dimensional Arrays"), can be expected to be a game changer in Big Science Data: with so-called Array Databases, users enjoy the well-known query flexibility on all spatio-temporal data, servers can transparently scale by utilizing parallelization, distribution and new hardware, and the information integration achieved abolishes the data/metadata distinction once and for all. SQL/MDA can be expected to become the lingua franca for data access in and across data centers worldwide. We present the SQL/MDA concepts and their scalable implementation based on real-life services with 130+ TB individual offerings, and introduce the state of the art in the research field of Array Databases, where in 2014 a single array query was successfully distributed automatically to 1000+ cloud nodes. The presenter is a member of the ISO SQL working group and co-editor of SQL/MDA. With his work on rasdaman he has pioneered the field of Array Databases.
        Speaker: Peter Baumann (Jacobs University)
        Slides
      • 11:50
        The Ophidia framework: toward cloud-based big data analytics for eScience 20m
        In many domains such as life sciences, climate, and astrophysics, scientific data is often n-dimensional and requires tools that support specialized data types and primitives if it is to be properly stored, accessed, analyzed and visualized. The n-dimensionality of scientific datasets, and their data cube abstraction, leads to a need for On-Line Analytical Processing (OLAP)-like primitives such as slicing, dicing, pivoting, drill-down, and roll-up. These primitives have long been supported in data warehouse systems and used to perform complex data analysis, mining and visualization tasks. Unfortunately, current OLAP systems fail at large scale: different storage models and data management strategies are needed to fully address scalability. Moreover, the analysis of scientific datasets has a higher computing demand than current OLAP systems can satisfy, which leads to the need for parallel/distributed solutions to meet (near) real-time requirements. Finally, OLAP systems are domain agnostic, so they do not provide the domain-based support, functions and primitives that are essential to fully address scientific analysis. Currently, scientific data analytics relies on domain-specific software and libraries providing a huge set of operators and functionalities. This approach will fail at large scale, because most of these tools: (i) are desktop-based, rely on local computing capabilities and need the data locally; (ii) cannot benefit from available multicore/parallel machines since they are based on sequential code; (iii) do not provide declarative languages to express scientific data analysis tasks; and (iv) do not provide newer or more scalable storage models to better support data multidimensionality. A related effort in this area is the Ophidia project, a research effort on big data analytics addressing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate (as a whole) the entire set of fragments associated with a data cube. Some relevant examples include: (i) data sub-setting (slicing and dicing), (ii) data aggregation, (iii) array-based primitives (the same operator applies to all the implemented UDF extensions), (iv) data cube duplication, (v) data cube pivoting, (vi) NetCDF import and export. Additionally, the Ophidia framework provides array-based primitives to perform data sub-setting, data aggregation (i.e. max, min, avg), array concatenation, algebraic expressions and predicate evaluation on large arrays of scientific data. Multiple primitives can be nested to implement a single more complex task (e.g., aggregating by sum a subset of the entire array). Bit-oriented plugins have also been implemented to manage binary data cubes; compression algorithms can be included as primitives too. The entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. A comprehensive benchmark and test cases are being defined with climate scientists to extensively test all of the features provided by the system. Preliminary experimental results are already available and have been published in scientific research papers.
The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), sea situational awareness (TESSA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), large scale data analytics on CMIP5 data in NetCDF format, Climate and Forecast (CF) convention compliant (ExArch). In particular, in the context of the EU FP7 EUBrazil Cloud Connect project (http://eubrazilcloudconnect.eu/), the Ophidia framework is being extended in order to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity.
        Speaker: Dr Sandro Fiore (CMCC)
        Slides
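        To make the OLAP primitives mentioned above concrete, the sketch below illustrates slicing, dicing and aggregation on a small in-memory data cube using NumPy. It is purely conceptual and is not the Ophidia API: Ophidia executes such operations server-side and in parallel over fragmented cubes, and the dimension names and sizes here are invented for illustration.

        # Conceptual illustration of OLAP-like primitives on an n-dimensional cube.
        # NOT the Ophidia API: NumPy merely stands in for the array semantics.
        import numpy as np

        # Hypothetical cube: temperature[time, lat, lon], 365 days on a 180x360 grid.
        rng = np.random.default_rng(0)
        cube = rng.normal(loc=15.0, scale=8.0, size=(365, 180, 360))

        # Slicing: fix one dimension (a single day) -> a 2-D lat/lon field.
        day_10 = cube[10, :, :]

        # Dicing: select a sub-range on several dimensions (one season, one region).
        subset = cube[0:90, 120:150, 160:220]

        # Roll-up / aggregation: reduce along a dimension (time mean per grid cell).
        annual_mean = cube.mean(axis=0)

        # Nesting: subset first, then aggregate, mirroring "aggregating a subset
        # of the entire array" as described in the abstract.
        subset_mean = subset.mean()

        print(day_10.shape, subset.shape, annual_mean.shape, round(float(subset_mean), 2))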
      • 12:10
        VERCE use case and requirements 20m
        The EU-funded project VERCE (http://verce.eu/) aims to address specific seismological use cases employing resources spanning the available e-Infrastructures, on the basis of requirements elicited from the seismology community. It provides a service-oriented infrastructure to deal with the challenges researchers face while carrying out the data-intensive and high-performance computations employed in modern-day seismology. In particular, the implementation of the project is driven by two major use cases. The first is the computationally intensive forward and inverse modelling of Earth system models, which is implemented with support for multiple waveform simulators running on HPC systems and x86 clusters. The second is a data-oriented seismic wave cross-correlation. VERCE offers a Scientific Gateway integrating access to workflows running on different infrastructures and data management, including procurement of experiment parameters, using a data infrastructure based on iRODS and supplemental web services. In this talk we will present where we see opportunities for VERCE to benefit from Cloud technology, and will further present a proof-of-concept service already running on the Cloud, which is designed to complement the services of the VERCE infrastructure. It collects trace series from seismological data centres over web services following the International Federation of Digital Seismograph Networks standards (FDSN-WS), distributes them on Cloud resources and executes data pre- and post-processing scripts specified in Python using the ObsPy framework/seismological toolbox. The processing is carried out in Docker containers running on the Cloud VMs to additionally sandbox the user code.
        Speaker: Andre Gemuend (FRAUNHOFER)
        Slides
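        As a rough illustration of the proof-of-concept workflow described above, the sketch below fetches a trace series over FDSN web services and applies a simple ObsPy processing chain. The data centre, station, time window and filter parameters are placeholders, a recent ObsPy installation is assumed, and the Docker/Cloud orchestration surrounding such a script in VERCE is not shown.

        # Illustrative only: download waveform data via FDSN-WS and run a basic
        # ObsPy pre-processing chain. Station, time window and filter bands are
        # placeholders; VERCE wraps comparable steps in Docker containers on
        # Cloud VMs, which is outside the scope of this sketch.
        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")                     # any FDSN-WS compliant data centre
        start = UTCDateTime("2014-09-01T00:00:00")

        # One hour of broadband vertical-component data from a sample station.
        stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                                      channel="BHZ", starttime=start, endtime=start + 3600)

        # Typical pre-processing: remove mean and trend, then band-pass filter.
        stream.detrend("demean")
        stream.detrend("linear")
        stream.filter("bandpass", freqmin=0.01, freqmax=1.0, corners=4, zerophase=True)

        # Persist the processed traces for later cross-correlation or visualisation.
        stream.write("processed_traces.mseed", format="MSEED")
        print(stream)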
    • 12:30 14:00
      Lunch CWI Building

      CWI Building

      CWI Building Science Park, 1098XG Amsterdam
    • 12:30 14:00
      Technology Coordination Board (by invitation) Meeting room (EGI.eu, Matrix 1)

      Meeting room

      EGI.eu, Matrix 1

      CWI Building Science Park, 1098XG Amsterdam
      Conveners: Michel Drescher (EGI.EU), Dr Tiziana Ferrari (EGI.EU)
    • 14:00 15:30
      CloudWATCH Cloud Plugfest and Standards Profile Workshop Ground floor meeting room (Matrix I Building)

      Ground floor meeting room

      Matrix I Building

      More and more consumers are expressing concerns about the lack of control, interoperability and portability. Why? Because these are central to avoiding vendor lock-in, whether at the technical, service delivery or business level, thus ensuring broader choice. As a user, you are protected from vendor lock-in by open standard interfaces, which spare you the significant migration costs you would face if open interfaces were not provided.

      For a European research provider, interoperability means more efficient resource utilisation. The EGI Federated Cloud is a prime example of this. Through a decade of collaborative peer work in Europe and beyond, the experts behind FedCloud have gained considerable expertise in standards development and implementation, laying the foundation for interoperability testing and fairer competition. This expertise is now also supporting interoperability between Europe and Brazil through EUBrazil Cloud Connect.

      The Cloud Interoperability Plugfest project, Cloud Plugfests, is an international co-operative community series designed to promote interoperability efforts on cloud-based software, frameworks, and standards among vendors, products, projects and implementations. The series supports ongoing and continuing interoperability efforts among and between the sponsoring organisations, and with the cloud community at large. These efforts include organised software demonstrations, in-person developer gatherings, and continuous access to professional-grade cloud testing frameworks and tools.

      The Plugfest will see Brazilian participation for the very first time. EUBrazil Cloud Connect will test the OCCI implementation of fogbow, middleware developed by the Federal University of Campina Grande. This new connecting bridge with Brazil focuses on a new implementation of the OCCI API (fOCCI) as well as an extension to accommodate requests for resources exploited in an opportunistic way. This workshop is also an opportunity to showcase the growing international interest in interoperability testing and to show why the Plugfest is important for certifying compliance of implementations with the specification.

      Reducing ambiguity through standards profiling

      A standard very often supports multiple use cases in its specification text, which can lead to ambiguity and a lack of real interoperability across different interfaces. Ambiguity is a fundamental challenge for interoperability standards, and removing it is key to maximising interoperability. A profile on a standard clarifies in an unambiguous way how the standard has to be interpreted, explaining how to implement it based on your specific use case.
      As a starting point for this, CloudWATCH has created a portfolio of European and international use cases on technical, policy and legal requirements. The common standards profiles derived from these use cases will be tested around the federation of cloud services.

      This challenging work is part of CloudWATCH's mission of making an active contribution to standards and certification, driving interoperability as a critical factor in boosting innovation in Europe.

      CloudWATCH use case portfolio - insights from standards groups
      The workshop will take a brief look at the standardisation landscape today. It will identify some of the most important requirements that existing standards can meet. Interactive discussions will then explore possible pathways to filling gaps, either through new standards or extensions to existing standards. The workshop will wrap up with an overview of the benefits of interoperability and how it is set to become a hot topic around best practices in the business community in 2015.

      Organisers: Fraunhofer FOKUS and EGI.eu

      • 14:00
        Interop testing 1h 30m
    • 14:00 15:30
      Developing the Open Science Commons Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Convener: Sergio Andreozzi (EGI.EU)
      • 14:00
        Open Science, Open Data, Open Access 30m
        Speaker: Sergio Andreozzi (EGI.EU)
        Slides
      • 14:30
        Tracking the Scientific Output enabled by EGI with OpenAIRE 30m
        Speaker: Paolo Manghi (Istituto di Scienza e Tecnologie dell'Informazione - CNR)
        Slides
      • 15:00
        Big Data Management for Science and beyond: problems and perspectives 30m
        Big Data, Data Deluge, "Data Torrent" or "Data Bonanza" are simple, evocative concepts that represent the overwhelming availability of digital data in several forms (the four/five Vs) from any discipline or context. However, this new digital scientific world is still in its infancy: not all the management issues have been fully uncovered, and the EU is still lagging behind the US and, in particular, the commercial environment. The boundaries between the research and business environments are disappearing. The same data, produced by production environments, are used to explore new ideas and services and can quickly give rise to commercial services or solutions. In the future there will be no distinction between the two, with several challenges for the owners of the data and the owners of the underlying infrastructures. Many communities are addressing the various problems of managing, preserving and searching such data, and of making sustainable the tools and instruments needed to deal with this data availability. In space, for instance, the ESA data coming from the Sentinel satellites, as part of the Copernicus programme, will represent a flux of 2 TB of data per day, potentially (but not yet demonstrated to be) useful to any environment, from smart cities to environmental studies. Local and mobile sensors made available by the smart revolution will provide megabytes per day in each of our smart homes, and several terabytes per day in smart cities. Data from billions of smartphones, together with personal preferences and behaviours over telco networks, will constitute data sets with as yet unexplored possibilities for services and business opportunities. Big data represents a challenge not only for core eScience (physics, astronomy, bioinformatics, chemistry, climate and environmental studies) but for all disciplines. A truly multidisciplinary data e-Infrastructure for eScience needs to expand its perspective and explore all these contexts. In the US, for example, EarthCube represents an attempt to collect and make available space and other scientific data to any scientist and researcher, as well as to commercial services. Such a marketplace, sometimes referred to as Information-as-a-Service, is an example of the new paths that should be explored to exploit this unprecedented availability of digital data. Starting from existing multidisciplinary experiences, such as D4Science (along with its specialisation for the fishery and marine communities), EUDAT and Helix Nebula, the workshop will promote the sharing of the state of the art, in terms of issues, challenges and solutions, from the space, smart cities and telco environments. Potential speakers range from space experts to smart city service providers, telco representatives, EU technical coordinators and EC policy makers. The objective is to identify common elements, needs and approaches on which to establish a wider partnership for the reinforcement of the European Research Area, enabling all scientific disciplines to benefit from this availability of data both for research and (eventually) for business opportunities.
        Speaker: Andrea Manieri (Engineering)
        Slides
    • 14:00 15:30
      EGI-GEANT Symposium: Cloud Security Turingzaal

      Turingzaal

      CWI Conference Centre

      This session will present the "Challenges" for operational security in federated cloud operations, in terms of the developments needed for security policy, procedures and monitoring, with a view to moving to "Solutions" over the coming year or two.

      Convener: David Kelsey (STFC)
      • 14:00
        Cloud security: introduction 10m
        Speaker: David Kelsey (STFC)
        Slides
      • 14:10
        The requirement for traceability 15m
        Speaker: David Groep (FOM)
        Slides
      • 14:25
        Vulnerability Handling 10m
        Speaker: Linda Cornwall (STFC)
        Slides
      • 14:35
        Security Monitoring 10m
        Speaker: Daniel Kouril (CESNET)
        Slides
      • 14:45
        Cloud Resource Providers 15m
        The EGI CSIRT Questionnaires - analysis of the responses
        Speaker: Dr Sven Gabriel (FOM)
        Slides
      • 15:00
        Future plans 15m
        Speaker: Ian Collier (STFC)
        Slides
      • 15:15
        Cloud security: discussion 15m
    • 15:30 16:00
      Coffee break CWI Building

      CWI Building

      CWI Conference Centre

      CWI Building Science Park 123, 1098XG Amsterdam
    • 16:00 17:30
      CloudWATCH Cloud Plugfest and Standards Profile Workshop Ground floor meeting room (Matrix I Building)

      Ground floor meeting room

      Matrix I Building

      More and more consumers are expressing concerns about the lack of control, interoperability and portability in cloud services. Why? Because these properties are central to avoiding vendor lock-in, whether at the technical, service-delivery or business level, and thus to ensuring broader choice. As a user, open standard interfaces protect you from vendor lock-in, so you avoid the significant migration costs you would otherwise face when open interfaces are not provided.

      For a European research provider, interoperability means more efficient resource utilisation. The EGI Federated Cloud is a prime example of this. Through a decade of collaborative work among peers in Europe and beyond, the experts behind FedCloud have gained considerable expertise in standards development and implementation, laying the foundation for interoperability testing and fairer competition. This expertise is now also supporting interoperability between Europe and Brazil through EUBrazil Cloud Connect.

      The Cloud Interoperability Plugfest project (Cloud Plugfests) is an international, cooperative community series designed to promote interoperability work on cloud-based software, frameworks and standards among vendors, products, projects and implementations. The series supports ongoing interoperability efforts among and between the sponsoring organisations and the cloud community at large. These efforts include organised software demonstrations, in-person developer gatherings, and continuous access to professional-grade cloud testing frameworks and tools.

      The Plugfest will see Brazilian participation for the very first time. EUBrazil Cloud Connect will test the OCCI implementation of fogbow, middleware developed by the Federal University of Campina Grande. This new bridge with Brazil focuses on a new implementation of the OCCI API (fOCCI), as well as an extension to accommodate requests for resources exploited opportunistically. The workshop is also an opportunity to showcase the growing international interest in interoperability testing and to explain why the Plugfest is important for certifying that implementations comply with the specification.
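
      For orientation, the sketch below shows, in Python, what a minimal OCCI 1.1 discovery request looks like over the standard plain-text HTTP rendering, which is the kind of call exercised during interoperability testing; the endpoint URL and the X-Auth-Token credential are placeholders and do not describe fogbow or any particular Plugfest participant.

        # Minimal OCCI 1.1 discovery sketch: list the categories (kinds, mixins,
        # actions) an OCCI endpoint advertises via its query interface "/-/".
        # ENDPOINT and TOKEN are placeholders; real deployments may instead use
        # X.509 proxies or other authentication schemes.
        import requests

        ENDPOINT = "https://occi.example.org:8787"   # hypothetical OCCI endpoint
        TOKEN = "changeme"                           # hypothetical auth token

        response = requests.get(
            ENDPOINT + "/-/",                        # OCCI query interface
            headers={"Accept": "text/plain",         # plain-text OCCI rendering
                     "X-Auth-Token": TOKEN},
        )
        response.raise_for_status()

        # Each advertised category arrives as a "Category: ..." line; during
        # interoperability testing, two endpoints can be compared by diffing
        # these lists.
        for line in response.text.splitlines():
            if line.startswith("Category:"):
                print(line)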

      Reducing ambiguity through standards profiling

      A standard very often supports multiple use cases in its specification text, which can lead to ambiguity and a lack of real interoperability across different interfaces. Reducing this ambiguity is a fundamental challenge for interoperability standards and key to maximising interoperability. A profile on a standard clarifies, in an unambiguous way, how the standard is to be interpreted, explaining how to implement it for a specific use case.
      As a starting point, CloudWATCH has created a portfolio of European and international use cases covering technical, policy and legal requirements. The common standards profiles derived from these use cases will be tested around the federation of cloud services.

      This challenging work is part of CloudWATCH's mission to make an active contribution to standards and certification, driving interoperability as a critical factor in boosting innovation in Europe.

      CloudWATCH use case portfolio - insights from standards groups
      The workshop will take a brief look at the standardisation landscape today. It will identify some of the most important requirements that existing standards can meet. Interactive discussions will then explore possible pathways to filling gaps, either through new standards or extensions to existing standards. The workshop will wrap up with an overview of the benefits of interoperability and how it is set to become a hot topic around best practices in the business community in 2015.

      Organisers: Fraunhofer FOKUS and EGI.eu

      • 16:00
        Summary of interop results 20m
      • 16:20
        Breakouts on Cloud standards 40m
        Participants (including remote) divide into breakout groups, each of which will:
        - discuss the impact of the gathered interop results
        - assess the need for specification revisions
        - discuss the need for extensions
        - identify related initiatives for profile definitions
      • 17:00
        Breakout reporting back 10m
      • 17:10
        Next steps & Cloud standards roadmaps 20m
    • 16:00 17:30
      Developing the Open Science Commons Eulerzaal

      Eulerzaal

      CWI Conference Centre

      • 16:00
        OpenAIRE: Future directions 30m
        Speaker: Natalia Manola (University of Athens, Greece)
        Slides
      • 16:30
        EGI Open Data Platform: Vision, conceptual framework, functionalities 30m
        Speaker: Lukasz Dutka (CYFRONET)
        Slides
      • 17:00
        Access to and Use of EGI by SMEs 30m
        Speaker: Javier Jimenez (EGI.EU)
        Slides
    • 16:00 17:30
      EGI-GEANT Symposium: AAI Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Mr Brook Schofield (TERENA)
      • 16:00
        Introduction 20m
        Speaker: Christos Kanellopoulos (GRNET)
        Slides
      • 16:20
        Extending eduGAIN with HEXAA 20m
        One of the key challenges for today's e-Infrastructures is to allow borderless collaboration between researchers, applications and research facilities. There has been huge progress in this area over the last decade: federated access is now a well-known concept and all major academic infrastructure projects include it on their agenda. However, this increasing adoption has made it clear that adjustments to the federated infrastructure are needed. In our presentation we introduce and demonstrate HEXAA, an External Attribute Provider (EAP). The EAP is a new role in the federated system: it provides attributes to SPs, just like an IdP, but is administratively separate from the home organisation, hence "external". EAPs can be operated by any community, such as research consortia or interest groups; they can be operated at NREN level and, most importantly, at inter-federation level in eduGAIN. In technical terms, HEXAA is a SAML Attribute Authority that can be queried by Shibboleth, simpleSAMLphp and other SAML 2.0 compatible software upon access to a resource. After a session has been established between the IdP and the SP, the SP software looks up additional attributes in HEXAA, so there is no need to force IdPs or local SP databases to store attributes that are only needed by specific services. Attributes are managed in the HEXAA GUI. Attributes can be authoritative, such as group or role membership, facilitated by HEXAA's Virtual Organisation manager feature; other attributes belong to the user profile, such as addresses, preferences or any other information not released by the IdP, even though they can be key to successful service provisioning. All HEXAA features can also be accessed through its REST API. An important aspect of the HEXAA software is that it implements privacy by design: from the very first phase of the project the developer team included a legal expert, so that the software can conform to the widest range of regulatory environments, with special attention to the current European Directive and the upcoming European Data Protection Regulation. In eduID.hu, the Hungarian federation, we operate a federation-level HEXAA service. We have been using an OpenNebula cloud with earlier and current versions of HEXAA, and we have integrated it with the WS-PGRADE/gUSE science gateway framework; HEXAA is therefore usable with any Liferay application as well. Drupal, Icinga, NfSen, Pydio (AjaXplorer), wikis and other applications are also integrated. HEXAA is free and open-source software, implemented in PHP using Symfony. The presentation will also discuss the HEXAA architecture and the major design decisions, and show how the system integrates with SAML SPs (see the illustrative sketch below).
        Speaker: Dr Mihaly Heder (MTA SZTAKI)
        Slides
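        The sketch below is a conceptual Python illustration of the External Attribute Provider idea described above: an SP-side component merges IdP-released attributes with attributes fetched from an external provider. The endpoint URL, query parameter and attribute names are hypothetical and are not taken from the real HEXAA REST API.

          # Conceptual sketch of SP-side attribute aggregation with an External
          # Attribute Provider (EAP). Everything below is hypothetical: the URL,
          # the query parameter and the attribute names do NOT describe the real
          # HEXAA REST API, only the general idea of the EAP role.
          import requests

          EAP_URL = "https://eap.example.org/attributes"   # hypothetical EAP endpoint

          def aggregate_attributes(idp_attributes: dict, subject_id: str) -> dict:
              """Merge IdP-released attributes with attributes held by the EAP."""
              merged = dict(idp_attributes)   # attributes released during SAML login
              # Ask the external provider for community-managed attributes
              # (e.g. VO membership, roles) keyed on a shared subject identifier.
              reply = requests.get(EAP_URL, params={"subject": subject_id}, timeout=10)
              reply.raise_for_status()
              for name, value in reply.json().items():
                  # IdP-asserted values win on conflict; the EAP only adds attributes
                  # the home organisation does not store (groups, profile data).
                  merged.setdefault(name, value)
              return merged

          if __name__ == "__main__":
              idp_attrs = {"eppn": "researcher@example.edu", "cn": "A. Researcher"}
              print(aggregate_attributes(idp_attrs, idp_attrs["eppn"]))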
      • 16:40
        Perun - identity and access management system 20m
        In this presentation we will introduce Perun, an identity and access management system. The system provides functionality that covers management of the whole user life cycle in today's e-Infrastructures, from enrolment into the e-Infrastructure to user expiration. Perun supports the management of virtual organisations, rights delegation, group management and enrolment management, enabling flexible user management. Compared to ordinary identity management systems, Perun also provides service and access management. Perun is a comprehensive tool that eases the management of research communities, and of users and services within organisations. The system is used in production at national level (e.g. the Czech NGI) and at international level (e.g. access to EGI Federated Cloud resources is managed by Perun).
        Speaker: Michal Prochazka (CESNET)
        Slides
      • 17:00
        Using OpenConext for service delivery platforms 20m
        The goal of OpenConext is to make collaboration easier in Research and Education, by letting users:
        - use their own favorite tools as much as possible;
        - re-use their credentials (username/password) for every tool they use;
        - create their own collaboration groups and re-use those for every tool as well;
        - have a 'collaboration home'.
        OpenConext is a middleware suite combining federated identity management, federated groups, portals and distributed applications to deliver services. SURFnet, the Dutch NREN, developed the OpenConext software and released it as open source in 2012, aiming to create a successful international open-source project. Since then, several international organisations have picked up OpenConext to deploy their own platforms. This presentation will detail how OpenConext is useful for the research community (features and use), the OpenConext roadmap, and how the community can participate in this initiative. OpenConext is open for collaboration! https://www.openconext.org/
        Speaker: Pieter van der Meulen (SURFnet)
        Slides
      • 17:20
        Discussion 10m
    • 09:00 10:30
      Developing the concept of a service catalogue and market place for EGI Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Convener: Diego Scardaci (INFN)
      • 09:00
        Developing the concept of a service catalogue and market place for EGI - Overview 20m
        - Current status
        - EGI-InSPIRE PY5 activities
        - Service catalogue and marketplace in EGI-Engage
        Speaker: Diego Scardaci (INFN)
        Slides
      • 09:20
        Marketplace & service catalogue concepts, first design analysis 30m
        - user stories
        - main features to develop
        Speaker: Mr Dean Flanders (SWITCH)
        Slides
      • 09:50
        Open discussion 40m
        What other functionalities are needed in the marketplace? What functionalities are needed in the service catalogue? How do they relate to each other? What is the relationship with the pay-for-use activity and business model?
        Speaker: Mr Dean Flanders (SWITCH)
    • 09:00 10:30
      EGI-GEANT Symposium: Technology Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Enol Fernandez (CSIC)
      • 09:00
        Open Cloud eXchange (OCX) as part of the High Performance Cloud Services Delivery Infrastructure for Big Data Applications 25m
        Big Data applications and access to large scientific datasets require advanced networking infrastructure. This presentation will introduce the concept of the Open Cloud eXchange (OCX), proposed by the GN3plus JRA1 activity to bridge the gap between two major components of the cloud services provisioning infrastructure: the Cloud Service Provider (CSP) infrastructure, and the cloud services delivery infrastructure, which for Big Data applications requires dedicated local infrastructure and a quality of service that the public Internet cannot deliver. In both cases there is a need to interconnect the CSP infrastructure and the local access network infrastructure, in particular to solve the "last mile" problem in delivering cloud services to customer locations and individual (end-)users. The OCX remains neutral to the actual cloud service provisioning and limits its services to Layers 0 through 2, so as to remain transparent to the current cloud services model. The proposed OCX concept will leverage existing Internet eXchange (IX) and GLIF Open Lightpath Exchange (GOLE) solutions and practices, adding specific functionality to simplify the integration of inter-CSP and customer infrastructure when supporting basic cloud service provisioning models. The presentation will discuss ongoing development and the recent and planned demos being developed by the OCX team.
        Speaker: Dr Yuri Demchenko (UvA)
        Slides
      • 09:25
        The Charon File System: A cloud-of-clouds infrastructure for Biobanks data storage and sharing 20m
        In recent years, several cloud-of-clouds (or multi-cloud) storage systems have been proposed with the objective of minimising trust in cloud providers, decreasing costs and improving performance [RACS, DepSky]. Such systems range from archival storage [RACS] and object stores [DepSky] to key-value stores [SPANStore] and even full-fledged file systems [SCFS]. In this talk we will present Charon, an experimental cloud-of-clouds file system designed to be a storage and sharing infrastructure for integrating European biobanks. Charon is the core component for building federations of biobanks using the BiobankCloud PaaS, a bioinformatics processing, storage and interconnection platform being developed in the BiobankCloud FP7 project (http://www.biobankcloud.com). Charon's main objective is to build a data-centric (or "serverless") infrastructure that enables federated biobanks to share large volumes of data and metadata (related to samples and studies [MIABIS]) while respecting security and performance constraints. Furthermore, we want to give authorised bioinformaticians a "Dropbox-like" experience when accessing biobank datasets. Besides the obvious scalability issues (related to the number and size of files kept in the system), there are other important requirements and design principles that guide the development of Charon:
        1. A truly serverless design: we want to minimise the operational effort required to maintain the shared infrastructure by implementing the whole system at the client side, relying only on widely available cloud services.
        2. Dependable metadata storage: all the file-system and biobank-specific metadata [MIABIS] must remain widely available to authorised users despite possible failures of communication links and cloud providers.
        3. Flexible data location: due to legal, performance and criticality constraints, shareable data must be storable either at the edges (file system clients), in a single cloud, or in a cloud-of-clouds.
        4. Efficient read/write and read/read sharing: the system must be as efficient as possible when reading data created by others, and at the same time consistency issues and write-write conflicts must be managed automatically by the infrastructure.
        Unlike our previous work on cloud-of-clouds (CoC) file systems [SCFS], which required coordination servers for controlling data sharing, Charon relies only on cloud services such as Amazon S3, without requiring any dedicated process other than those running on the file system clients (i.e., biobank servers and bioinformaticians' desktops). Furthermore, the system manages file metadata and data in different ways. Metadata is encapsulated in namespace containers (shared or private, depending on the visibility of the directory sub-tree) that are stored in the CoC, while file data can be kept at different locations: when a file is created, users can specify whether it will be maintained locally, in a single cloud provider or in multiple cloud providers (CoC). An improved version of DepSky [DepSky] ensures that data is stored in an efficient, secure and dependable way in the CoC (by combining erasure codes, secret sharing and Byzantine-quorum replication). An interesting novel aspect of Charon is its concurrency-control algorithm. Although we give up strong consistency to achieve low latency, we still need concurrency control to avoid write-write conflicts on shared files. This control is provided by a new design for a CoC lease based on existing cloud services. More specifically, we devised a compositional lease algorithm that uses fail-prone lease objects implemented with appropriate services offered by individual cloud providers; we currently have efficient lease objects for Windows Azure, Amazon Web Services, Rackspace and Google App Engine. Our approach significantly improves lease latency when compared with other data-centric coordination protocols (see the illustrative sketch below).
        References: [RACS] Hussam Abu-Libdeh, Lonnie Princehouse, and Hakim Weatherspoon. RACS: a case for cloud storage diversity. In Proc. of the 1st ACM Symposium on Cloud Computing, 2010. [DepSky] A. Bessani, M. Correia, B. Quaresma, F. Andre, and P. Sousa. DepSky: Dependable and secure storage in a cloud-of-clouds. ACM Transactions on Storage, 9(4), 2013. [SPANStore] Zhe Wu, Michael Butkiewicz, Dorian Perkins, Ethan Katz-Bassett, and Harsha V. Madhyastha. SPANStore: cost-effective geo-replicated storage spanning multiple cloud services. In Proc. of the 24th ACM Symposium on Operating Systems Principles, 2013. [SCFS] A. Bessani, R. Mendes, T. Oliveira, N. Neves, M. Correia, M. Pasin, and P. Verissimo. SCFS: a shared cloud-backed file system. In Proc. of the 2014 USENIX Annual Technical Conference, 2014. [MIABIS] Loreana Norlin, Martin N. Fransson, Mikael Eriksson, Roxana Merino-Martinez, Maria Anderberg, Sanela Kurtovic, and Jan-Eric Litton. A minimum data set for sharing biobank samples, information, and data: MIABIS. Biopreservation and Biobanking, 10(4), 2012.
        Speaker: Alysson Bessani (University of Lisbon, Faculty of Sciences)
        Slides
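        The sketch below illustrates, in Python, the compositional lease idea mentioned in the abstract: per-cloud lease objects are acquired independently and the composite lease is held only when a majority succeeds. The CloudLease class is a simulated stand-in, not the Charon/DepSky implementation.

          # Conceptual sketch of a compositional lease built from independent,
          # fail-prone per-cloud lease objects: the composite lease is considered
          # held only if a majority of the individual leases is acquired.
          # CloudLease is a simulated stand-in, not the Charon/DepSky code.
          import random
          import time

          class CloudLease:
              """Simulated lease object backed by a single cloud provider."""
              def __init__(self, name: str, failure_rate: float = 0.2):
                  self.name = name
                  self.failure_rate = failure_rate

              def try_acquire(self, owner: str, duration_s: int) -> bool:
                  # A real implementation would use a conditional-write or queue
                  # primitive of the provider; here we only simulate failures.
                  return random.random() > self.failure_rate

          def acquire_composite_lease(leases, owner: str, duration_s: int = 30) -> bool:
              """Try to hold the lease on a majority of clouds before writing."""
              acquired = [l for l in leases if l.try_acquire(owner, duration_s)]
              if len(acquired) > len(leases) // 2:
                  return True      # majority reached: safe to update the shared file
              time.sleep(1)        # back off so a concurrent writer can win
              return False

          if __name__ == "__main__":
              clouds = [CloudLease(n) for n in ("aws", "azure", "rackspace", "gae")]
              print("lease held:", acquire_composite_lease(clouds, owner="client-42"))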
      • 09:45
        User-driven networking in IaaS clouds 20m
        IaaS clouds are becoming a commodity service that attracts an increasing number of applications. New types of applications also bring new requirements on the cloud infrastructure, including networking requirements that cannot always be addressed with current technologies. Isolated network environments, arbitrary network topologies and ISO Layer 3 services, and full user control over network flows are a few examples of services that are hard to support in current cloud infrastructures. Our talk will present a framework for establishing an overlay network in IaaS clouds. Using common network technologies, the framework builds a network on top of a cloud, providing an ISO Layer 2 network that interconnects the cloud machines into user-driven network topologies. These isolated network environments, so-called sandboxes, give users full control over the networking at Layer 3. This approach allows users to establish sandboxes with arbitrary IPv4 and IPv6 addressing schemes, or even non-IP protocols. The framework manages the pseudo-wires used as L2 links between the nodes and makes it possible to handle them separately. Individual connections can therefore be monitored or even configured to emulate various network characteristics such as bandwidth limits, delays and packet loss; in this way it is possible to simulate various kinds of connectivity, including mobile networks, ADSL, etc. (see the illustrative sketch below). The framework has been designed to be used by end users and is mostly implemented using customised virtual machines deployed in the cloud, so its setup requires only minimal support from the cloud provider. The concept has been successfully demonstrated in the environment of the Cyber Proving Ground being developed to facilitate security research and training. The prototype implementation of that environment will be briefly introduced in the presentation.
        Speaker: Daniel Kouril (CESNET)
        Slides
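        As a concrete illustration of per-link emulation, the Python sketch below applies delay, packet loss and a bandwidth cap to a virtual interface with the standard Linux tc/netem and tbf tools, which is one plausible way to realise the behaviour described above; the interface name and values are examples only, and the presented framework may use different mechanisms.

          # Illustrative sketch: shaping one pseudo-wire endpoint with Linux
          # tc/netem (delay, loss) plus a tbf rate limit; requires root rights.
          # The interface name and numbers are examples only; the presented
          # framework may implement link emulation differently.
          import subprocess

          def emulate_link(iface: str, delay_ms: int, loss_pct: float, rate_kbit: int):
              """Apply netem delay/loss and a token-bucket rate cap to `iface`."""
              # Root qdisc: network emulation (delay + random packet loss).
              subprocess.run(
                  ["tc", "qdisc", "add", "dev", iface, "root", "handle", "1:",
                   "netem", "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
                  check=True,
              )
              # Child qdisc: token-bucket filter capping the bandwidth.
              subprocess.run(
                  ["tc", "qdisc", "add", "dev", iface, "parent", "1:1", "handle", "10:",
                   "tbf", "rate", f"{rate_kbit}kbit", "burst", "16kb", "latency", "400ms"],
                  check=True,
              )

          if __name__ == "__main__":
              # Emulate an ADSL-like link on a hypothetical overlay interface.
              emulate_link("tap0", delay_ms=40, loss_pct=0.5, rate_kbit=2048)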
      • 10:05
        The ARGO A/R Compute Engine 10m
        In the context of the ARGO framework (initially the EGI-funded A/R mini-project), our team at AUTH (part of a larger consortium formed by GRNET, SRCE and CNRS) has developed and deployed the core A/R compute engine, which uses Big Data tools to process the incoming monitoring data on top of virtualised resources. The current production instance is deployed on ~okeanos (GRNET's IaaS cloud). The compute engine relies heavily on Big Data tools (mostly Hive and Pig), and the results are served to a central portal through a distributed MongoDB and a REST API (written in Go), which also reside on virtualised resources. In this presentation we intend to share our experience so far in developing, deploying and running Big Data processing tools and services on top of IaaS cloud resources (a simplified sketch of the underlying A/R computation follows below).
        Speaker: Mr Kostas Kagkelidis (AUTH/GRNET)
        Slides
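        To give a flavour of what an availability/reliability computation involves, the toy Python sketch below aggregates hourly status samples into A/R figures, assuming the common convention that UNKNOWN periods are excluded from both metrics and scheduled downtime is additionally excluded from reliability; the real engine performs this at scale with Hive and Pig, and its exact rules may differ.

          # Toy availability/reliability aggregation over hourly status samples.
          # A single-machine illustration of the kind of computation the ARGO
          # engine performs with Hive/Pig over much larger inputs; the status
          # model and formulas follow a commonly used A/R convention and are
          # not taken verbatim from the ARGO code base.
          from collections import Counter

          def availability_reliability(statuses):
              """statuses: iterable of 'OK', 'CRITICAL', 'UNKNOWN' or 'DOWNTIME'."""
              counts = Counter(statuses)
              total = sum(counts.values())
              known = total - counts["UNKNOWN"]            # exclude UNKNOWN periods
              known_no_dt = known - counts["DOWNTIME"]     # also exclude sched. downtime
              availability = counts["OK"] / known if known else 0.0
              reliability = counts["OK"] / known_no_dt if known_no_dt else 0.0
              return availability, reliability

          if __name__ == "__main__":
              # One day of hourly samples for a single service endpoint (example data).
              day = ["OK"] * 20 + ["CRITICAL"] * 2 + ["UNKNOWN"] + ["DOWNTIME"]
              a, r = availability_reliability(day)
              print(f"availability={a:.3f} reliability={r:.3f}")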
    • 10:30 11:00
      Coffee break CWI Building

      CWI Building

      CWI Conference Centre

      CWI Building Science Park 123, 1098XG Amsterdam
    • 11:00 12:30
      Developing the concept of a service catalogue and market place for EGI Eulerzaal

      Eulerzaal

      CWI Conference Centre

      Convener: Diego Scardaci (INFN)
    • 11:00 11:30
      EGI-GEANT Symposium: Cloud standards Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Andres Steijaert (SURFnet)
      • 11:00
        CDMI server architecture & reference implementation over Pithos File/Object Storage Service 15m
        We report on our design and experience adopting CDMI at GRNET SA, the Greek NREN. We have designed and implemented a reference CDMI server as a layered architecture into which a storage backend can easily be plugged. We have leveraged this architecture to support GRNET's Pithos File/Object Storage Service and expose it through a CDMI REST API (a generic example of such a request is sketched below). Our implementation is use-case based and evolves using agile software development principles. To achieve our goals of agility in requirements and scalability of design, we selected the Scala programming language and, in particular, Twitter's Finagle. The project is Open Source, under a GPLv3 license.
        Speaker: Christos Loverdos (GRNET)
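        The sketch below shows, in Python, the kind of requests such a CDMI server accepts: creating a container and then storing a data object, following the CDMI specification. The base URL and credentials are placeholders and do not refer to the actual GRNET/Pithos deployment.

          # Minimal CDMI 1.0.x client sketch: create a container, then store a
          # data object inside it. The base URL and credentials are placeholders;
          # the requests follow the CDMI specification rather than any
          # GRNET/Pithos-specific API.
          import json
          import requests

          BASE = "https://cdmi.example.org"      # hypothetical CDMI endpoint
          AUTH = ("user", "secret")              # hypothetical credentials
          VERSION = {"X-CDMI-Specification-Version": "1.0.2"}

          # 1. Create (or update) a CDMI container.
          requests.put(
              BASE + "/mycontainer/",
              headers={**VERSION,
                       "Accept": "application/cdmi-container",
                       "Content-Type": "application/cdmi-container"},
              data=json.dumps({"metadata": {"project": "demo"}}),
              auth=AUTH,
          ).raise_for_status()

          # 2. Store a small data object in the container.
          requests.put(
              BASE + "/mycontainer/hello.txt",
              headers={**VERSION,
                       "Accept": "application/cdmi-object",
                       "Content-Type": "application/cdmi-object"},
              data=json.dumps({"mimetype": "text/plain", "value": "hello, CDMI"}),
              auth=AUTH,
          ).raise_for_status()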
      • 11:15
        The rOCCI framework - EC2 and support for public cloud services 15m
        The rOCCI framework [1][2][3][4] is designed to simplify the implementation of the OCCI 1.1 protocol in Ruby and provide the base for a working client and server implementation targeting multiple cloud management frameworks and commercial service providers via its back-ends. It was adopted by the EGI Federated Cloud [5] and chosen to act as one of the designated OCCI implementations. This led to further development and provided much needed practical experience. This talk aims to introduce new developments in the rOCCI framework, especially recent efforts to support Amazon EC2 as a backend for rOCCI-server. It will briefly describe challenges involved in developing such a backend, the implemented architecture and currently available features. It will also discuss future plans for developing similar backends for MS Azure, Google Compute or a not-yet-selected VMWare product. [1] https://github.com/EGI-FCTF/rOCCI-server [2] https://github.com/EGI-FCTF/rOCCI-core [3] https://github.com/EGI-FCTF/rOCCI-api [4] https://github.com/EGI-FCTF/rOCCI-cli [5] http://www.egi.eu/infrastructure/cloud/
        Speaker: Boris Parak (CESNET)
        Slides
    • 11:30 12:30
      EGI-GEANT Symposium: User experiences Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Mr Roberto Sabatino (DANTE)
      • 11:30
        CCP4 Cloud: requirements, plans and approaches 20m
        Collaborative Computational Project Number 4 (CCP4) is a community resource, hosted by STFC UK, which aims at the maintenance, distribution and development of software for macromolecular crystallography. The CCP4 user base is estimated at 20,000 individual researchers and includes both the academic and industrial sectors. In addition to traditional methods of software delivery, CCP4 actively looks for new modes and trends in the computational practices of the research community. Given clear user interest in remote, server-based setups capable of storing experimental and derived data, as well as whole projects and workflows, and of giving access to the computational resources needed for automatic structure solution, CCP4 is initiating a project to provide the CCP4 software in a cloud-type setup. These plans have been approved for funding by BBSRC, and the corresponding activities are starting now. In my talk I will outline our motivation and formulate the user and technical requirements for the CCP4 Cloud project. I will also present our preliminary plans and the approaches we are investigating.
        Speaker: Eugene Krissinel (STFC)
      • 11:50
        In-situ visualisation and application steering over the web in the cloud 20m
        With the constant growth of scientific data to be analysed and visualised in nearly real time, in-situ visualisation as a user-facing service is getting attention. The basic idea behind an in-situ visualisation service is to perform data analysis and advanced visualisation during the execution of a simulation, so that users can react or even steer the simulation at any stage, for instance to check whether application parameters were set correctly or have to be modified on the fly due to instability. Such a scenario is common in many scientific areas, especially CFD. Additionally, there is a strong need for different groups (e.g. scientists, engineers, government institutions) to use in-situ visualisation to improve collaboration, based on the same interactive view of the generated results. However, achieving such capabilities, where a high-speed and reliable network for heavy visualisation data transfer is available on demand, along with computational resources booked in advance (for both simulation and data analysis), is not trivial. In this work we propose a new approach to facilitate running in-situ visualisation in the cloud, using the high-speed GÉANT network in conjunction with the EGI e-infrastructure. The CFD application is managed on the EGI e-infrastructure using the QosCosGrid (QCG) middleware [1]. In a nutshell, QCG is an integrated system that offers advanced job and resource control features and provides a virtualised cross-infrastructure environment. By connecting many heterogeneous HPC local queuing systems, and being integrated via a portal layer with OpenStack, it can be considered a highly efficient management platform for a variety of demanding HTP/HPC applications, including parameter sweeps, workflows, hybrid MPI-OpenMP, distributed multi-scale and, recently, large-scale in-situ experiments [2,3]. In our approach, the following actions are required to set up all the needed services from the EGI and GÉANT e-infrastructures for in-situ visualisation. First, the QCG middleware is responsible for the remote submission and control of the CFD simulation, which in turn, prior to its execution, creates a virtual machine with rendering services and an appropriate visualisation environment. If required, it also creates a network connection of the requested quality between all components involved in the scenario, using the GÉANT "Bandwidth on Demand" services to ensure adequate bandwidth and efficient communication between remote and local sites. Then, the QCG-Coordinator service is used to synchronise execution and to exchange parameters between the remote services provided by EGI and GÉANT. In our case, the CFD code has to be recompiled with an external in-situ visualisation library to enable remote communication with the QCG-Coordinator. For the data analysis and visualisation we use the VAPOR tool [4], a visual data discovery environment tailored to the specialised needs of the geosciences CFD community. VAPOR is a desktop solution capable of handling terascale data sets, providing advanced interactive 3D visualisation with data analysis. With proven functionality in the NWP (Numerical Weather Prediction) environment, this tool is run in the cloud (i) to ensure the best underlying hardware is used for data analysis and visualisation, (ii) to avoid the need to install software on the machines of the users involved in the collaboration scenario, and (iii) to minimise network traffic in the collaboration scenario. In our approach, users interact with and share web sessions with VAPOR by means of an ordinary web browser. This is possible thanks to various improvements to our rendering service, Vitrall, which proxies all user interactions and returns a real-time display of the visualisation produced by VAPOR. Currently, Vitrall supports HTML5 capabilities such as WebSockets and built-in video streaming [5]. Additionally, Vitrall provides useful collaboration features and enables many users, from potentially distributed locations, to share the same in-situ visualisation session. The complete advanced visualisation service has been tested, in different configurations, on the high-speed and reliable network connections provided by GÉANT and Future Internet experiments [6, 7]. Consequently, it offers user-friendly and interactive visual interfaces to end users, hiding the complexity of the underlying cloud, data processing, communication and on-demand services.
        References: [1] B. Bosak, J. Komasa, P. Kopta, K. Kurowski, M. Mamoński, T. Piontek. New capabilities in QosCosGrid middleware for advanced job management, advance reservation and co-allocation of computing resources – quantum chemistry application use case. In Building a National Distributed e-Infrastructure – PL-Grid, Springer Berlin Heidelberg, 2012, pp. 40-55. [2] Borgdorff, J., Mamonski, M., Bosak, B., Kurowski, K., Ben Belgacem, M., Chopard, B., Groen, D., Coveney, P.V. & Hoekstra, A. (2014). Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment. Journal of Computational Science, 5(5): 719-731. http://dx.doi.org/10.1016/j.jocs.2014.04.004 [3] QosCosGrid middleware and tools - www.qoscosgrid.org [4] VAPOR Visualisation and Analysis Platform - www.vapor.ucar.edu [5] Śniegowski, P., Błazewicz, M., Grzelachowski, G., Kuczyński, T., Kurowski, K. & Ludwiczak, B. (2012). Vitrall: Web-Based Distributed Visualization System for Creation of Collaborative Working Environments. Lecture Notes in Computer Science, 7203, pp. 337-346. DOI: 10.1007/978-3-642-31464-3_34 [6] Vitrall in the TEFIS project: http://tv.pionier.net.pl/Default.aspx?id=1831 [7] Vitrall in the CoolEmAll project: http://youtu.be/PyBF8a0ej3M
        Speaker: Dr Kurowski Kurowski (Poznan Supercomputing and Networking Center)
      • 12:10
        Towards a Structural Biology Work Bench 20m
        Structural biologists now target larger, multi-component biological objects. In parallel, the focus is shifting from the macromolecules produced by simpler prokaryotic organisms to the macromolecules from higher organisms. Researchers now use multiple techniques and visit multiple experimental facilities/infrastructures to collect their data. Structural biologists are each expert in one or more techniques but they now often need to use complementary techniques in which they are less expert. The various individual experimental infrastructures have developed different solutions to their requirements. There are some technique-specific pipelines that are largely automated for data analysis and/or structure determination, but integrated management of structural biology data from different techniques is lacking. Repositories exist for the final structural data, but the provenance and integrity of such data are often an issue and metadata is often incomplete. The best way to acquire accurate metadata is to integrate data management infrastructure with data processing infrastructure. There are no common strategies to address or support the storage of structural biology raw data, after the end of the SB project/grant within which the data were generated. The presentation will discuss ways to address these challenges.
        Speaker: Chris Morris (STFC)
        Slides
    • 12:30 13:15
      Lunch break 45m
    • 13:15 14:00
      EGI-GÉANT Symposium: Cloud providers panel Turingzaal

      Turingzaal

      CWI Conference Centre

      PANELLISTS:
      CESNET: Miroslav Ruda,
      IBERGRID: Enol Fernández,
      GRNET: Panos Louridas,
      SWITCH: Simon Leinen

      Convener: Andres Steijaert (SURFnet)
    • 14:00 15:00
      EGI-GEANT Symposium: User support, engagement and next steps Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Mr Roberto Sabatino (DANTE)
      • 14:00
        User engagement in GÉANT 20m
        Speaker: Mr Roberto Sabatino (DANTE)
        Slides
      • 14:20
        The GÉANT Open Calls 20m
        Speaker: Annabel Grant (GÉANT)
      • 14:40
        User engagement in EGI 20m
        Speaker: Dr Gergely Sipos (EGI.EU)
        Slides
    • 15:00 15:45
      EGI-GEANT Symposium: Interactive panel - about the users Turingzaal

      Turingzaal

      CWI Conference Centre

      Panellists:
      Michel Jouvin, LAL Orsay, France
      Afonso Duarte, ITQB, Portugal
      Eugene Krissinel, STFC, UK

      Convener: Dr Gergely Sipos (EGI.EU)
      Slides
    • 15:45 16:00
      EGI-GEANT Symposium: conclusions Turingzaal

      Turingzaal

      CWI Conference Centre

      Convener: Dr Tiziana Ferrari (EGI.EU)