BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Panel / Discussion / Q&A
DTSTART;VALUE=DATE-TIME:20181011T143500Z
DTEND;VALUE=DATE-TIME:20181011T150000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-264@indico.egi.eu
DESCRIPTION:https://indico.egi.eu/event/3973/contributions/9151/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9151/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Growing the data science community by expanding the CODATA/RDA sch
 ool model
DTSTART;VALUE=DATE-TIME:20181010T155500Z
DTEND;VALUE=DATE-TIME:20181010T160000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-133@indico.egi.eu
DESCRIPTION:Speakers: Jones\, Sarah (UG)\nVarious reports have commented o
 n the shortage of individuals skilled in Research Data Science worldwide\,
  which limits the transformative effect of the data revolution. Given the 
 extent of the shortage\, models to rapidly increase the cohort of research
 ers equipped to do data science and empower them to be ambassadors for the
 ir fields in teaching others are required.  \n \nThe CODATA-RDA School for
  Research Data Science has established a successful two-week curriculum to
  provide a foundational level of data science skills to Early Career Resear
 chers from a wide range of disciplinary backgrounds. The course covers the
  principles and practice of Open Science\, research data management\, usin
 g data platforms and infrastructures\, data annotation\, analysis\, statis
 tics\, visualisation and modelling techniques. Students are taught in a co
 mputer lab setting with many hands-on exercises using open source tools\, 
 allowing them to learn new technologies and return home with access to the
  software they need.\n \nSince the inaugural school in Trieste\, Italy in 
 August 2016\, annual events have continued in Italy and other regional hub
 s have been established in Latin America and Africa. In collaboration with
  the International Centre for Theoretical Physics (ICTP) and its sister si
 tes\, we are primarily bringing data schools to researchers from Low- and
  Middle-Income Countries\, with the intention of reducing the digital divi
 de. There is\, however\, a big demand for these schools across Europe\, Nort
 h America and Australasia too\, provoking us to consider business models t
 o increase the delivery of schools and grow the community of data scientis
 ts worldwide.\n \nThe schools have helped many individuals to take their l
 earning further. We run a student helper programme where participants retu
 rn as classroom assistants to help the tutors facilitate hands-on exerc
 ises. This has offered new perspectives and increased the insights gained\
 , enhancing their learning. (See: https://researchdata.springernature.com/
 users/81866-sara-el-jadid/posts/29719-enriching-my-learning-by-helping-oth
 ers and https://researchdata.springernature.com/users/81847-marcela-alfaro
 -cordoba/posts/29656-my-journey-towards-open-science) Many have also gone 
 on to run their own schools locally\, increasing the reach of the schools 
 on a train-the-trainer basis. This year\, two previous students ran an Urb
 an Data Science school in India\, applying the lessons from the CODATA sch
 ool to their peers. (See: https://shailygandhi.github.io/Urban-Data-Scienc
 e-Curriculum-Development)   \n \nThis paper will report on the use of the 
 CODATA school model by others and our plans to expand this. A one-week sch
 ool was run in Australia this June\, taking the course materials as a base
 . In order to scale up this provision\, we are establishing a set of requi
 rements and a process for others to replicate the content (e.g. to retain 
 agreed core elements of the curriculum\, to adopt recommended teaching sty
 les\, to use open tools so participants retain access\, etc.). A fee structur
 e is also being proposed to endorse / badge these affiliated schools as fo
 llowing the CODATA/RDA model\, and to provide a sustainability model for t
 he core LMIC provision.\n\nhttps://indico.egi.eu/event/3973/contributions/
 9154/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9154/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ELIXIR AAI Advanced Features
DTSTART;VALUE=DATE-TIME:20181010T103000Z
DTEND;VALUE=DATE-TIME:20181010T104500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-55@indico.egi.eu
DESCRIPTION:Speakers: Prochazka\, Michal (Masaryk University)\nELIXIR AAI 
 has been in production since 2016\, and it is continuously extended with n
 ew features based on requirements of end users. In the talk\, we will pres
 ent some of the features that have been made available recently to the ELI
 XIR community. Permission API enables service providers and users to lever
 age advanced authorization for access to sensitive human data. Another fea
 ture is an implementation of the Bona Fide Researcher concept in ELIXIR AA
 I together with automatically or manually assigned user affiliations. S
 ervice providers can use this data to make authorization decisions based on the u
 ser affiliation. Allowing access to cloud machines via SSH or VNC has long
  been an issue when a federated identity is the only authentication option
  available. ELIXIR AAI provides a user-friendly mechanism based on the use
  of QR codes\, which can be used in non-web environments to provide auth
 entication and delegation for computational/cloud/storage services.\n\nhtt
 ps://indico.egi.eu/event/3973/contributions/9189/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9189/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Applications of the AARC Blueprint Architecture - Migration to an I
 dP-SP-proxy in the DARIAH AAI
DTSTART;VALUE=DATE-TIME:20181010T104500Z
DTEND;VALUE=DATE-TIME:20181010T110000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-132@indico.egi.eu
DESCRIPTION:Speakers: Hübner\, David (DAASI International GmbH)\nThe DARI
 AH Research Infrastructure (RI) provides\, among other things\, digital se
 rvices for researchers in arts and humanities. It offers an authentication
  and authorisation infrastructure (AAI)\, the DARIAH AAI\, to enable resea
 rchers to log in to these services with their own campus account\, provid
 ing a Single Sign-On experience. The DARIAH AAI supplies additional inform
 ation\, such as group memberships specific to the DARIAH community\, which
  can be used by services for authorisation decisions. Historically\, the D
 ARIAH AAI is composed of different components: \n\n - a self-service inter
 face which allows for the registration of DARIAH\n   accounts\; \n - the D
 ARIAH IdP\, which serves as both an Identity Provider (IdP) for\n   these 
 DARIAH accounts\, as well as an attribute authority (AA) that\n   releases
  DARIAH-specific attributes for users authenticating via\n   their home or
 ganisation’s IdP\; \n - a group membership management system for both th
 ese types of\n   accounts\; \n - and the DARIAH service providers (SP) tha
 t each need to query the AA\n   and check for DARIAH-specific attributes.\
 n\nThe Blueprint Architecture (BPA)\, which was developed in the EC-funded
  AARC (Authentication and Authorisation for Research and Collaboration) pr
 oject\, recommends an IdP-SP-proxy\, which serves as a gateway between ser
 vice providers of the research infrastructure and identity providers and a
 ttribute sources. This approach takes away a lot of the complexity service
 s would have to deal with in a traditional full mesh federation\, and allo
 ws for a central place for policy decisions. It thus offers a scalable sol
 ution to problems such as aggregation of attributes from different sources
 \, and account linking.\n\nIn order to allow services to connect to the DA
 RIAH AAI in a much simpler fashion\, and to allow for interoperability wit
 h other e- and research infrastructures\, and to create the foundation for
  new features\, such as account linking in the future\, the DARIAH AAI was
  recently extended by an AARC BPA-compliant IdP-SP-proxy component. Since 
 the DARIAH AAI is already largely based on Shibboleth products\, we decide
 d to implement this proxy solution based on Shibboleth\, as opposed to Sim
 pleSAMLphp or SATOSA\, which offer proxy functionality by default. While t
 his solution integrates nicely into the existing DARIAH AAI ecosystem\, it
  provided some technical challenges in actually turning the Shibboleth pro
 ducts into an IdP-SP-proxy.\n\nThis talk will illustrate the main advantag
 es of\, and experiences with the adoption of the AARC BPA from the point o
 f view of the DARIAH research community and showcase our technical solutio
 n based on Shibboleth (i.e. how Shibboleth IdP and SP can be used to build
  a proxy component). We can also give insight into how to migrate from an 
 existing AAI to a proxy-based infrastructure\, while ensuring backwards co
 mpatibility with legacy use cases.\n\nhttps://indico.egi.eu/event/3973/con
 tributions/9155/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9155/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Security Incident Management in the EOSC era Part-2
DTSTART;VALUE=DATE-TIME:20181011T133000Z
DTEND;VALUE=DATE-TIME:20181011T150000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-131@indico.egi.eu
DESCRIPTION:Speakers: Kouril\, Daniel (CESNET)\, Crooks\, David (UG)\, Gro
 ep\, David (NIKHEF)\, Gabriel\, Sven (NIKHEF)\, Kaila\, Urpo (CSC)\, Brill
 ault\, Vincent (CERN)\nThe security training proposed here would be split 
 into two sessions\, focusing on different areas of incident handling. An i
 mportant area that will be highlighted is the close collaboration of exper
 ts necessary for the successful resolution of a security incident in the E
 OSC era.\n\nThe first session targets the more technically oriented attende
 es. Here\, after an introduction to forensics\, the participants will have
  to analyse images provided by a security team of a FedCloud site. The res
 ults of the investigations will be used as input for the second session\, 
 where the case will be handled within a role-play involving the various se
 rvice providers active in the EOSC-Hub project\, including identity provid
 ers\, SIRTFI\, the service catalogue\, and the infrastructures coordinated
  by EGI and EUDAT.\n\nThe goals of this training are twofold. Firstly\, th
 e collaboration of project members with a managerial background and those 
 with a technical background will be explored. The second goal is to examin
 e the existing set of policies and procedures to challenge them and identi
 fy possible issues. It is hoped that this will help to prioritize the secu
 rity related activities within the EOSC-hub project.\n\nhttps://indico.egi
 .eu/event/3973/contributions/9156/
LOCATION:Lisbon Room C103
URL:https://indico.egi.eu/event/3973/contributions/9156/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Persistent Identifiers in use: Exchanging ideas about new developm
 ents in the field of PID services
DTSTART;VALUE=DATE-TIME:20181009T103000Z
DTEND;VALUE=DATE-TIME:20181009T104500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-135@indico.egi.eu
DESCRIPTION:Speakers: Fankhauser\, Eliane (DANS-KNAW)\, Ferguson\, Christi
 ne (EMBL-EBI)\nPersistent identifiers (PIDs) like DOIs for articles or ORCID
  iDs for researchers are a core component of open science as they improve d
 iscovery\, navigation\, retrieval\, and access of research resources. FREY
 A\, a 3-year EU-funded project\, aims to extend the PID infrastructure by 
 cross-linking PID services\, facilitating the development of new PID types
 \, and creating a community of practice. The engagement with the stakeholder
 s and the wider PID community is an important means with which to exchange
  knowledge and get feedback about the development of new PID types and ser
 vices. Currently\, FREYA is establishing the PID Forum consisting of a use
 r community whose members collectively oversee the development and deploym
 ent of new services. Anyone with an interest in PIDs is invited to join
 this session\, exchanging ideas and contributing to the discussions.\n\nAt
  this World Café Session\, the PID Forum will be introduced\; some of the
  work that has been done in the first few months of this project will be p
 resented and discussed with the audience in a workshop. The workshop will 
 focus on two current FREYA activities: (i) mapping the identifier landscap
 e and (ii) understanding how stakeholders operate within the landscape. We
  would like to discuss both of these activities with the user community an
 d to get feedback on them. FREYA has recently surveyed the current identif
 ier landscape and would like to share key findings with the user community
 . Moreover\, FREYA would like feedback from the community on user stories 
 that have already been collected. Questions like “Is there broader value to b
 e gained from addressing the user story?” or “What is needed to delive
 r the value identified in the user story?” will be addressed. Finally\, 
 FREYA is eager to connect with any stakeholders in the user community to l
 earn about their user stories and identify gaps where research resources c
 ould be better connected and services extended or built.\n\nhttps://indico
 .egi.eu/event/3973/contributions/9158/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9158/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Security Incident Management in the EOSC era Part-1
DTSTART;VALUE=DATE-TIME:20181011T103000Z
DTEND;VALUE=DATE-TIME:20181011T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-128@indico.egi.eu
DESCRIPTION:Speakers: Kouril\, Daniel (CESNET)\nThe security training prop
 osed here would be split into two sessions\, focusing\non different areas 
 of incident handling. An important area that will be\nhighlighted is the c
 lose collaboration of experts necessary for the successful\nresolution of 
 a security incident in the EOSC era.\n\nThe first session targets the more
  technically oriented attendees.\nHere\, after an introduction to forensics
 \, the participants will have to\nanalyse images provided by a security te
 am of a FedCloud site.\nThe results of the investigations will be used as 
 input for the second\nsession\, where the case will be handled within a ro
 le-play involving the\nvarious service providers active in the EOSC-Hub pr
 oject\, including identity\nproviders\, SIRTFI\, the service catalogue\, a
 nd the infrastructures coordinated by\nEGI and EUDAT.\n\nThe goals of
  this training are twofold. Firstly\, the collaboration of project\nmembers
  with a managerial background and those with a technical background will\n
 be explored. The second goal is to examine the existing set of policies an
 d\nprocedures to challenge them and identify possible issues. It is hoped 
 that this\nwill help to prioritize the security related activities within 
 the EOSC-hub\nproject.\n\nhttps://indico.egi.eu/event/3973/contributions/9
 185/
LOCATION:Lisbon Room C103
URL:https://indico.egi.eu/event/3973/contributions/9185/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Persistent identifiers and their services in digital infrastructur
 es
DTSTART;VALUE=DATE-TIME:20181009T104500Z
DTEND;VALUE=DATE-TIME:20181009T110000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-134@indico.egi.eu
DESCRIPTION:Speakers: Lambert\, Simon (STFC)\nThe reliable identification 
 and location of digital objects that play a role in research is a fundamen
 tal requirement for digital infrastructures in general and the European Op
 en Science Cloud (EOSC) in particular. And not only digital objects need i
 dentification: real-world entities such as people (researchers)\, funding 
 bodies and research equipment need to be identified and linked up with oth
 er entities to address the use cases that are common to so many discipline
 s and research environments: facilitating reuse of data and other research
  outputs\, assessing impact and contributing to long-term preservation.\n\
 nPersistent identifiers also play a vital role in implementing the FAIR pr
 inciples\, enabling stability of reference\, publication and dissemination
  of services and access to resources.\n\nAlready some persistent identifie
 rs and their supporting infrastructures are very well established. DOIs an
 d ORCIDs have gained enormous traction and are indispensable elements of t
 he research landscape. Work is proceeding in several forums to develop ide
 ntifiers for further entities. The question arises: how to bring all this 
 together into a coherent and sustainable foundation for the vast distribut
 ed ecosystem of data and services that is the EOSC.\n\nThe FREYA project\,
  funded under the EC’s Horizon 2020 programme (project number 777523\; s
 ee https://www.project-freya.eu)\, aims to develop the infrastructure for 
 persistent identifiers as a core component of open science\, in the EU and
  globally. FREYA will develop new PID services\, new PID types and validat
 e in a diverse set of applications.\n\nFREYA has a vision of three key con
 cepts needed to achieve its goals:\n\n•    The technical framework (PID 
 Graph). The PID Graph connects and integrates PID systems\, representing a
  map of the relationships across a network of PIDs and serving as a basis 
 for new services. It will need to address common formats and metadata\, in
 teroperability between PID providers\, interlinking and harvesting.\n\n•
     A community forum (PID Forum)\, a stakeholder community whose members c
 ollectively oversee the development and deployment of the PID Graph\; it w
 ill be strongly linked to the Research Data Alliance (RDA).\n\n•    A go
 vernance model (PID Commons)\, concerned with the sustainability and growt
 h of the PID infrastructure resulting from FREYA beyond the lifetime of th
 e project itself\, defining the roles\, responsibilities and structures fo
 r good self-governance based on consensual decision-making.\n\nThe present
 ation will introduce the concepts of FREYA\, providing an update on the cu
 rrent thinking\, illuminated by examples from the project’s pilot applic
 ations\, and consider how it will affect the further development of the EO
 SC.\n\nhttps://indico.egi.eu/event/3973/contributions/9159/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9159/
END:VEVENT
BEGIN:VEVENT
SUMMARY:eInfraCentral – Helping users focus on being users
DTSTART;VALUE=DATE-TIME:20181010T163500Z
DTEND;VALUE=DATE-TIME:20181010T164000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-26@indico.egi.eu
DESCRIPTION:Speakers: Angelis\, Jelena (European Future Innovation System 
 (EFIS) Centre)\neInfraCentral is a coordination and support action funded 
 under the EU’s Horizon 2020 framework programme. Its mission is to ensur
 e that by 2020 **a broader and more varied set of users benefits from Euro
 pean e-Infrastructures**. eInfraCentral is one of the key initiatives **dr
 iving implementation of the European Open Science Cloud.**\n\nThe talk wil
 l consist of three parts:\n\n**1. The challenge for research communities:*
 *  Due to a fragmented e-infrastructure landscape\, end-users\, such as re
 searchers\, innovators or industry actors\, are often unaware of the e-inf
 rastructure services available in Europe that could aid their work. Simila
 rly\, service providers and data producers have difficulty reaching out to
  potential users due to the lack of coordination and harmonisation across 
 various e-infrastructures. Even if users find out about the availability o
 f a certain e-service\, it is difficult to gather further information and 
 compare it with other existing services. Service providers also lack user 
 feedback on the ways they could improve their offerings. This leads to ine
 fficient funding patterns through the emergence of overlapping efforts and
  as such\, slower rates of open innovation due to the lack of competition 
 in the field.\n\n\n**2. eInfraCentral brings the solution:**\neInfraCentra
 l is one of the core initiatives in the implementation of the European Ope
 n Science Cloud\, actively contributing to the building of the EOSC servic
 e catalogue and portal. eInfraCentral creates a unified online service cat
 alogue where users can search\, browse\, compare and access e-services. Us
 ers can also rate services\, helping service providers to improve their of
 ferings\, which is also aided by the availability of usage statistics on t
 he service level. eInfraCentral’s standard Service Description Templ
 ate and catalogue were designed via an open and guided discussion with the
  e-Infrastructure community. This joint approach to defining and monitorin
 g e-infrastructures services helps increase their uptake and enhances unde
 rstanding of where improvements can be made in delivering and professional
 ising services. Moreover\, eInfraCentral also facilitates the development 
 of a shared language to describe services across the e-infrastructure comm
 unity\, fostering cooperation between infrastructure projects\, communitie
 s and initiatives. EInfraCentral helps initiate new service offerings and 
 engage with a broader set of users and needs\, thus speeding up the cre
 ation of innovation through Open Science. \n\n**3. Call to action:**\nThe 
 audience will be invited to engage with eInfraCentral in a number of ways\
 , such as i) exploring the updated version of the eInfra Portal and leavin
 g feedback that will help the project team improve it\; ii) learning about
  eInfraCentral through the poster and website\, and iii) following project
  developments by signing up to the newsletter and engaging with it through
  social media updates.\n\nThe eInfraCentral team believes that the audienc
 e could greatly benefit from learning about eInfraCentral as many of the c
 onference participants could utilise the project outcomes – the portal\,
  the service catalogue and standard Service Description Template that will
  be fed into the development of the EOSC – in their daily work\, both fr
 om the service provider and end-user/researcher side.\n\nhttps://indico.eg
 i.eu/event/3973/contributions/9163/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9163/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adaptive\, Trustworthy\, Manageable\, Orchestrated\, Secure\, Priv
 acy-assuring\, Hybrid Ecosystem for REsilient Cloud Computing - ATMOSPHERE
DTSTART;VALUE=DATE-TIME:20181010T103000Z
DTEND;VALUE=DATE-TIME:20181010T104500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-20@indico.egi.eu
DESCRIPTION:Speakers: Blanquer\, Ignacio (UPVLC)\nTrust is considered as a
 key challenge for applications dealing with data on cloud services\, which
  is built on the basis of guarantees\, previous successful experiences\, t
 ransparency and accountability. It is neither absolute nor constant. A ser
 vice will have some degree of trust\, which could be sufficient for its sp
 ecific usage within a particular context. Trust is an attribute which is h
 ard to build but easy to lose. It requires a priori certificati
 on and continuous verification and assurance. Evaluating trust involves ma
 ny metrics (e.g.\, scalability\, availability\, performance\, robustness\,
  security\, privacy assurance\, dependability\, etc.)\, but there is curre
 ntly a lack of technologies and frameworks to build trust on cloud and Big
  Data applications\, both from the self-evaluation and the dynamic adaptat
 ion perspectives.\n\nTo cover the above gap in this research area\, we pre
 sent the Adaptive\, Trustworthy\, Manageable\, Orchestrated\, Secure\, Pri
 vacy-assuring\, Hybrid Ecosystem for REsilient Cloud Computing (2017-2019)
  (hereinafter “ATMOSPHERE”)\, which is a 24-month Research and Innovat
 ion Action\, funded by the European Commission under the H2020 Programme a
 nd the Secretary of Politics of Informatics (SEPIN) of the Brazilian Minis
 try of Science\, Technology\, Innovation and Communication (MCTIC). ATMOSP
 HERE aims at designing and developing a framework and a platform to implem
 ent trustworthy cloud services on a federated intercontinental hybrid reso
 urce pool.\n\nTo deliver trustworthy cloud services\, ATMOSPHERE focu
 ses on providing four components:\n- A dynamically reconfigurable federate
 d infrastructure  providing isolation\, high-availability\, Quality of Ser
 vice and flexibility for hybrid resources\, including virtual machines and
  containers.\n- Trustworthy Distributed Data Management services that maxi
 mise privacy when accessing and processing sensitive data.\n- Trustworthy 
 Distributed Data Processing services to deploy adaptive applications for D
 ata Analytics\, providing high-level trustworthiness metrics for computing
  fairness and explainability properties.  \n- A trustworthiness evaluatio
 n and monitoring framework\, to compute trustworthiness measures from the 
 metrics provided by the different layers\, and able to trigger adaptation 
 measures when needed.\n\nThe different trustworthiness properties identifi
 ed need to be considered at different layers:\n- The federated cloud platf
 orm will provide isolation\, stability and Quality of Service guarantees. 
 The cloud platform will enable the dynamic reconfiguration of resource all
 ocation to applications running on federated networks on an intercontinent
 al shared pool. \n- The Trustworthy Distributed Data Management services w
 ill provide privacy risk analysis of the processing of sensitive data by p
 roprietary algorithms on enclaves\, guaranteeing that neither the applicat
 ion developer sees the data nor the data owner sees the processing code. \
 n- The Trustworthy Distributed Data Processing services will provide a Vir
 tual Research Environment to compute the fairness (i.e. the bias towards e
 thically affected data such as sex\, race\, education\, etc.) of the Data 
 Analytics models\, and the explainability of such models\, maximising tran
 sparency.\n- The Trustworthy evaluation and monitoring framework will prov
 ide quantitative scores of the trustworthiness of an application running o
 n the ATMOSPHERE platform. \n\nMore information about ATMOSPHERE can be fo
 und in the website (https://www.atmosphere-eubrazil.eu/)\, in Twitter (@At
 mosphereEUBR) and LinkedIn (https://www.linkedin.com/in/atmosphere/).\n\nh
 ttps://indico.egi.eu/event/3973/contributions/9165/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9165/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Session Intro
DTSTART;VALUE=DATE-TIME:20181011T133000Z
DTEND;VALUE=DATE-TIME:20181011T133500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-21@indico.egi.eu
DESCRIPTION:Speakers: Holsinger\, Sy (EGI.eu)\nThe Digital Innovation Hub is
  a concept developed by the European Commission under the Digital Single Ma
 rket as a mechanism for private companies to collaborate with public secto
 r institutions in order to access technical services\, research data\, and
  human capital. There is a network of Digital Innovation Hubs in place acr
 oss Europe\, already supporting sectors such as manufacturing\, internet o
 f things\, cybersecurity or cognitive computing. The EC aims to support a 
 Pan-European network of DIHs and has earmarked 500M€ from the Horizon 2020 b
 udget to support the development of DIHs\, of which 300M€ is for WP2018-202
 0.\n\nThe EOSC-hub Digital Industry Hub (DIH) will enrich the network by b
 ringing private companies into the European Open Science Cloud through pil
 oting concrete business cases.\nThe EOSC-hub DIH builds on individual publ
 ic e-Infrastructures business engagement programmes and outreach activitie
 s in place for several years. The added value brought through a joint effo
 rt is in packaging a wider variety of services and expertise into a more c
 oherent offer that would otherwise have to be accessed individually or com
 piled on their own. In addition to supporting individual companies\, one o
 f the key activities of the EOSC-hub DIH is to connect with regional and p
 an-European networks of Digital Innovation Hubs.\n\nTherefore\, this sessi
 on is designed to 1.) showcase the EOSC-hub Digital Industry Hub (DIH) str
 ucture and engagement model\, promote the availability of services for ind
 ustry and highlight the variety of business pilots that will be starting t
 o produce results and create new value 2.) gather existing European DIHs a
 nd initiatives to facilitate a closer collaboration with the European Open
  Science Cloud and further implement the EC objective of creating a pan-Eu
 ropean network of DIHs.\n\nhttps://indico.egi.eu/event/3973/contributions/
 9166/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9166/
END:VEVENT
BEGIN:VEVENT
SUMMARY:IBERGRID towards EOSC
DTSTART;VALUE=DATE-TIME:20181010T163000Z
DTEND;VALUE=DATE-TIME:20181010T164500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-29@indico.egi.eu
DESCRIPTION:Speakers: David\, Mario (LIP)\nIBERGRID was born out of the Ib
 erian Common Plan for distributed infrastructures released in 2007\, but t
 he origins can be traced back to the Portuguese and Spanish participation 
 in joint projects since 2002. Since then\, IBERGRID has been federating in
 frastructures from Iberian research & academic organisations mainly focuse
 d on grid\, cloud computing and data processing.\nThe IBERGRID infrastruct
 ure comprises 12 computing and data centers in Spain and Portugal. A numbe
 r of replicated services guarantee integrity and resilience. The infrastr
 ucture has provided 984 million processing hours since 2006 to support the
  HEP experiments and several user communities. This includes 19 million ho
 urs on biomedical applications and ~6 million hours on computational chemi
 stry. Strictly on cloud support\, more than 216\,000 Virtual Machines have
  been instantiated providing more than 2 million cloud processing hours to
  LifeWatch in the last year.\nOn the R&D side\, service integration activi
 ties are taking place in numerous areas. An example is OPENCoastS\, a serv
 ice to provide on-demand circulation forecast systems as a service for the
  Atlantic coasts. The service is deployed at the computing site NCG-INGRID
 -PT\, part of the EGI Federation\, but it is being integrated into EOSC-hu
 b as a Thematic Service in collaboration with LIP\, LNEC\, INCD\, UNICAN\,
  CNRS\, and CSIC.\nOn the software development side\, IBERGRID is contribu
 ting in many areas. CSIC has developed OpenStack support for VOMS authoriz
 ation and authentication\, cloud pre-emptible instances (OPIE) as well as 
 CPU Cloud accounting. The Technical University of Valencia developed and m
 aintains the Infrastructure Manager (IM)\, a key service to support the in
 stantiation of tailored clusters now part of the EOSC-hub service catalogu
 e.\nSupport to user-level container execution has been developed and is ma
 intained by the IBERGRID software teams at LIP. Udocker is an extremely su
 ccessful software product (more than 310 stars on GitHub)\, which is bein
 g recommended in many computing centers around the world as the best solut
 ion for users to execute containers\, without requiring the intervention o
 f system administrators.\nSoftware Quality Assurance has generated an enor
 mous amount of activity in the Iberian area. LIP\, CSIC\, CESGA and UPVLC 
 are in charge of ensuring the quality of the UMD software deployed by EGI.
  The Accounting Portal of EGI is maintained & developed by CESGA for the E
 GI community.\nOrganized since 2007\, the IBERGRID conference series is a 
 main opportunity to gather the community and share experiences from the us
 er\, infrastructure\, policy and research perspectives.\nIBERGRID looks in
 to the future EOSC with optimism. On the user support side\, the main
  assets are a well-consolidated user base and well-reputed user
  engineering a
 nd support teams. From the technical point of view\, IBERGRID counts on wo
 rldwide-recognised teams\, with expertise and technical background to addr
 ess the specific requirements from scientific communities in the EOSC era.
 \nIBERGRID is a key Operations Centre of the EGI Federation. The resources
  made available by IBERGRID sites have been instrumental in supporting the
  four largest scientific collaborations based at the Large Hadron Collider
  (ALICE\, ATLAS\, CMS\, LHCb).\n\nhttps://indico.egi.eu/event/3973/contrib
 utions/9170/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9170/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Federated Identity Management for Research
DTSTART;VALUE=DATE-TIME:20181010T160500Z
DTEND;VALUE=DATE-TIME:20181010T161000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-129@indico.egi.eu
DESCRIPTION:Speakers: Short\, Hannah (CERN)\nGranting researchers access t
 o our Digital Infrastructures is a fundamental step in Serving the User Ba
 se - this year’s conference theme. However\, providing a secure\, user f
 riendly\, reliable Authentication and Authorisation Infrastructure (AAI) i
 s not a walk in the park for Research Communities. Challenges range from a
 ttribute release\, to operational support\, to non-web access\, with
  many Communities looking outside to technology providers and generic
  e-Infrastr
 uctures to find a sustainable solution for their critical components.\n\n2
 018 saw over 20 Research Fields come together and expose their common requ
 irements for Federated Identity to the wider community. These requiremen
 ts\, and a related set of recommendations\, can be found at [https://fim4r
 .org/documents/][1] and are already being incorporated into the road maps 
 of future projects. \n\nWe present an overview of the insights collected b
 y the FIM4R Research Communities and look to the future. How will the reco
 mmendations help to shape the evolution of Federated Identity Management?\
 n\n\n  [1]: https://fim4r.org/documents/\n\nhttps://indico.egi.eu/event/39
 73/contributions/9186/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9186/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data2Paper: Giving Researchers Credit for their Data
DTSTART;VALUE=DATE-TIME:20181009T154500Z
DTEND;VALUE=DATE-TIME:20181009T160000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-58@indico.egi.eu
DESCRIPTION:Speakers: Jefferies\, Neil (Jemura Ltd)\nData papers cover met
 hodological detail that is not otherwise captured and published in traditi
 onal journal articles and/or dataset metadata. As such\, they can improve 
 the findability and reusability of the underlying dataset\, but they
  also address some deeper underlying concerns. A number of disciplines
  are experie
 ncing a “crisis of reproducibility” as a result of the inadequacy of i
 nformation provided by traditional papers and data publication alone\, lea
 ding to increased retractions and reduced credibility. At the same time\, 
 the lack of an avenue for publishing negative results from failed methodol
 ogical approaches leads to unnecessary repeated efforts at a time when fun
 ders are pressing for increased efficiency in the use of experimental reso
 urces.  \n\nArising from the Jisc Data Spring Initiative\,[1] a team of st
 akeholders (publishers\, data repository managers\, coders) has developed 
 a simple ‘one-click’ process for submitting data papers related to mat
 erial in a DataCite/ORCID compliant repository. DataCite and ORCID informa
 tion is transferred from a data repository to the cloud-based Data2Paper a
 pp based on the Fedora/Samvera platform. In the app\, the text of the data
  paper is combined with existing metadata drawn from DataCite and ORCID to
  generate a package suitable for automated transfer into a journal submiss
 ion platform without further user interaction. By reusing metadata that ha
 s already been entered/curated\, the process is both simplified
  and made less error prone.\n\nCurrently\, a small number of repositories 
 have developed specific connections to a small number of journals but the 
 cost of maintaining those links is not scalable in the longer term. Data2P
 aper aims to provide a single connection point for a partner journal or re
 pository and manage the process of metadata and paper submission. In addit
 ion\, Data2Paper supports submission to preprint archives either in conjun
 ction with a (possibly later) journal submission or as a publication route
  in its own right.  \n\nData2Paper represents a logical extension of the R
 DM workflow in EOSC services that currently ends with the deposit of data 
 in a suitable repository and the generation of a DataCite DOI with accompa
 nying metadata. It also integrates with the OpenAIRE SCHOLIX hub to detect
  completion of the publication process\, or to encourage authors to chase 
 publishers if necessary!\n\nThe presentation will discuss the history of t
 he project\, including the results of an initial feasibility study\, along
  with a demonstration of the current pilot implementation with targeted gr
 oups. We will outline the current work being done to transition to an oper
 ating service with a sustainable business model and consider how the servi
 ce might develop in the future in conjunction with various other activitie
 s in the area\, such as the Research Graph\, RDA areas of activity (Data J
 ournals Publishing Policy\, Credit and Attribution\, and Exposing Data Man
 agement Plans)\, issues of impact\, reproducibility\, FAIR Data\, persiste
 nt identifiers and new metrics by various national and international bodie
 s.\n\nhttps://indico.egi.eu/event/3973/contributions/9188/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9188/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Research infrastructures for climate prediction: Current data-cent
 ric challenges
DTSTART;VALUE=DATE-TIME:20181009T084500Z
DTEND;VALUE=DATE-TIME:20181009T093000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-8@indico.egi.eu
DESCRIPTION:Speakers: Gutierrez\, Jose Manuel (CSIC and IPCC author)\nWeat
 her and climate prediction and high-performance computing have gone hand i
 n hand in the last few decades. Current activities in this field rely on d
 ifferent digital infrastructures needed both for running computationally e
 xpensive global and regional climate models (HPC infrastructures)\, and fo
 r storing and making available the resulting scientific data and metadata 
 (GRID infrastructures). For instance\, the Earth System Grid Federation (E
 SGF) is a key international effort building on different national infrastr
 uctures (e.g. ENES in Europe\, http://portal.enes.org) to provide a distri
 buted data platform enabling free worldwide access to climate data (movin
 g from Peta- to Exa-scale). ESGF provides archiving and access services fo
 r the multi-model multi-scenario climate projections obtained in successiv
 e Climate Model Intercomparison Projects (CMIPs and CORDEX)\, which are th
 e basis for climate change studies (including the IPCC reports). These stu
 dies typically require accessing and post-processing huge amounts of data\
 , for instance to harmonize and postprocess climate change information for
  a particular region and\, therefore\, require new data-centric infrastruc
 tures facilitating postprocessing services (including machine learning). S
 ome ongoing initiatives are exploring the use of cloud services to deploy 
 efficient data processing services\, based on a data-as-a-service approach
 . An example is the Data and Information Access Services (DIAS) being deve
 loped by Copernicus in Europe. In this talk I will introduce the main inte
 rnational ongoing collaborations on climate prediction (focusing on the IP
 CC - Intergovernmental Panel on Climate Change) and describe the current c
 hallenges posed by the new data-centric approach on the existing digital i
 nfrastructures in this field.\n\nhttps://indico.egi.eu/event/3973/contribu
 tions/9174/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9174/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CloudiFacturing: the first wave of manufacturing SMEs supported by
  DIHs
DTSTART;VALUE=DATE-TIME:20181011T140500Z
DTEND;VALUE=DATE-TIME:20181011T142000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-263@indico.egi.eu
DESCRIPTION:Speakers: Lovas\, Robert (MTA SZTAKI)\nThe mission of H2020 Cl
 oudiFacturing consortium is to optimize production processes and producibi
 lity of manufacturing SMEs using Cloud/HPC-based modelling and simulation.
  The supported experiments are leveraging online factory data and advanced
  data analytics. In this way\, the CloudiFacturing project partners\,
  including Digital Innovation Hubs (DIHs)\, contribute to the
  competitiveness and resource efficiency of SMEs\, ultimately fostering
  the vision of Factories 4.
 0 and the circular economy.\n\nIn CloudiFacturing\, more than 20 cross-bor
 der application experiments will be conducted in three waves. Seven experi
 ments comprising the first wave have already been supported\, while
  participation in the second and third waves is organized via Open
  Calls. Th
 e experiments run across national borders. In order to increase the impact
  of the experiments\, the project relies on its DIH network\, and each exp
 eriment is accompanied by a dedicated DIH.\n\nThe presentation will summar
 ize the experiences with the first wave\, the current achievements of invo
 lved DIHs\, and the future plans/collaboration opportunities to maximize t
 he impact with the assistance of DIHs.\n\nhttps://indico.egi.eu/event/3973
 /contributions/9177/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9177/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Development of the new Research Infrastructure for Europe’s Natu
 ral Science Collections using novel building blocks in EOSC
DTSTART;VALUE=DATE-TIME:20181011T134500Z
DTEND;VALUE=DATE-TIME:20181011T140000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-121@indico.egi.eu
DESCRIPTION:Speakers: Addink\, Wouter (Naturalis Biodiversity Center)\n[Di
 SSCo][1]\, a Distributed System of Scientific Collections\, is a Research 
 Infrastructure (RI) included in the ESFRI 2018 Roadmap with over one hundred s
 elf-sustaining partners in Europe aiming at providing unified physical and
  digital (data) access to the approximately 1.5 billion biological and geo
 logical specimens in collections distributed across Europe. DiSSCo will tr
 ansform the currently scattered provision of collection data across the co
 ntinent into one set of services providing unified specimen data at the sc
 ale\, quality and FAIRness (Findable\, Accessible\, Interoperable\, Reusa
 ble) required for excellent research. It will repackage specimen data as D
 igital Specimen Digital Objects (DSDOs) to integrate and link these with d
 ata from other domains in the future Internet of FAIR Data and Services (I
 FDS) supporting the European Open Science Cloud (EOSC).\n \nIn the Europea
 n landscape of environmental Research Infrastructures\, the effectiveness 
 of services that aim at aggregating\, monitoring\, analysing and modelling
  geo-diversity information relies on the primary description of the bio- a
 nd geo-diversity. It also relies on the availability of this primary refer
 ence data that today is scattered and disconnected. Many RIs in environmen
 t and other fields have links to biodiversity\, and biodiversity loss
  is often mentioned as one of the biggest societal challenges. DiSSCo
  prov
 ides the required bio-geographical\, taxonomic and species trait data at t
 he level of precision and accuracy required to enable and speed up researc
 h towards achieving the Targets of the Sustainable Development Goals for L
 ife on Earth\, Life below Water and Climate Action.\n \nNovel building blo
 cks in EOSC are required for the development and successful operation of D
 iSSCo to deliver data at the economies of scale and scope needed. Example
 s of such building blocks are portable research data packaging formats\, a
  distributed file system like IPFS (InterPlanetary File System) that can s
 cale\, verification and audit mechanisms to control FAIRness and what need
 s to be stored\, plus novel index\, discovery and linkage mechanisms. RDA 
 (Research Data Alliance) and groups like C2Camp (a [Go-FAIR Implementation
  Network][2]) are already working on recommendations and guidelines and te
 st implementations in this area towards an infrastructure of Digital Objec
 ts\, but further development of TDWG standards\, practices developed in th
 e CETAF (Consortium of European Taxonomic Facilities) network and novel te
 chnological approaches for e.g. large scale digitisation are also needed t
 o deliver data at the economies of scale and scope needed.\nIn the present
 ation\, we:\n\n - discuss technical barriers for interoperability and poss
 ible action\n   lines to overcome these including practices and technologi
 es to\n   underpin the FAIR data principles\; \n - outline the unified DiS
 SCo API (Application Programming Interface) services to    \n   provide da
 ta suitable for thematic services in environmental Research   \n   Infrast
 ructures like LifeWatch\, eLTER (European Long-Term Ecosystem and socio- \
 n   ecological Research Infrastructure) as well as RIs in other domains su
 ch as\n   E-RIHS (European Research Infrastructure for Heritage Science) i
 n the\n   field of social sciences\;  \n - explain the DiSSCo strategy to 
 align project outcomes and standards development \n   towards a common uni
 fied research infrastructure.\n\n  [1]: http://www.dissco.eu\n  [2]: https
 ://www.go-fair.org/implementation-networks/\n\nhttps://indico.egi.eu/event
 /3973/contributions/9178/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9178/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC-hub Commercialisation support services
DTSTART;VALUE=DATE-TIME:20181011T135500Z
DTEND;VALUE=DATE-TIME:20181011T140500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-261@indico.egi.eu
DESCRIPTION:Speakers: Varandas\, Nuno (F6S)\nhttps://indico.egi.eu/event/3
 973/contributions/9179/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9179/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thematic Services integration in EOSC-Hub
DTSTART;VALUE=DATE-TIME:20181010T140500Z
DTEND;VALUE=DATE-TIME:20181010T150000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-260@indico.egi.eu
DESCRIPTION:Speakers: Bonvin\, Alexandre (eNMR/WeNMR (via Dutch NGI))\, Ol
 iveira\, Anabela (National Laboratory for Civil Engineers)\, Spiga\, Danie
 le (INFN)\, Fiore\, Sandro (CMCC Foundation)\nhttps://indico.egi.eu/event/
 3973/contributions/9180/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9180/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Delivering added value services for deep learning in the EOSC
DTSTART;VALUE=DATE-TIME:20181010T153000Z
DTEND;VALUE=DATE-TIME:20181010T154500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-124@indico.egi.eu
DESCRIPTION:Speakers: Lopez Garcia\, Alvaro (CSIC)\nMuch hope has been rec
 ently placed on deep learning as a machine learning technique that enables
  scientists to develop novel hypotheses and analyse large and complex data
 sets. Deep learning techniques have emerged from two major technological d
 evelopments. First\, the evolution of the internet has led to the creation
  and global availability of large datasets. Second\, through large-scale c
 omputing power\, in particular with readily available GPU resources\, opti
 mization of large-scale networks with several layers of highly interconnec
 ted nodes has become feasible.\n\nDeep learning in scientific practice offe
 rs opportunities and challenges. The following three use cases serve to i
 llustrate this: Model training and re-training\, model transfer\, and mode
 l use and sharing.\n\nModel training: For model training\, a scientist may wa
 nt to address a scientific problem or task by developing a new deep learni
 ng model on a complex scientific dataset. Apart from advanced machine lear
 ning expertise that is needed to design a suitable network architecture\, 
 the scientist is faced with a variety of nontrivial technological challeng
 es. First\, training of deep learning models is highly compute intensive\,
  thus\, the scientist needs access to adequate computing resources. Second
 \, training of effective deep learning models requires access to very larg
 e datasets that need to be transferred close to the computing resources.\n
 \nModel re-training: Deep convolutional neural networks (CNNs) are used to
  classify images into predefined taxonomic categories. CNNs decompose an i
 mage into a hierarchy of increasingly informative features. The features a
 t the lower levels represent colors\, contours\, etc.\, whereas the featur
 es on the higher levels represent domain entities such as plant leaves or 
 structures of biological cells or tissues. Parts of a CNN model that has l
 earned to classify plant structures may be re-trained to classify cell or 
 tissue structures. This is called transfer learning. The fundamental techn
 ological needs of re-training are similar to those needed for training a m
 odel from scratch\, with the additional task of model transfer\, including
  the transfer of relevant software libraries and transfer and integration 
 of data.\n\nModel use and sharing: A major scientific benefit of already e
 xisting deep learning models lies in sharing a model across the relevant s
 cientific communities. This facilitates the scientific debate about the kn
 owledge captured by the model and allows the community to use the model fo
 r relevant scientific tasks. Sharing and using a trained deep learning mod
 el with the scientific community may be realized as a web application. But
  in order to offer the model as a service\, the model typically has to be 
 transferred from a development environment towards a production environmen
 t\, which is capable of offering the service in a sustainable way.\n\nIn t
 his presentation we will showcase how the DEEP-Hybrid-DataCloud is develop
 ing services that will enable next generation e-Infrastructures to support
  machine learning and in particular deep learning applications covering th
 e three aforementioned cases\, and how these solutions can be used to brin
 g knowledge closer to the users and citizens in the framework of the Europ
 ean Open Science Cloud.\n\nhttps://indico.egi.eu/event/3973/contributions/
 9181/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9181/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Distributed Compute Protocol: Credit-based monetisation of idle co
 mpute
DTSTART;VALUE=DATE-TIME:20181011T142000Z
DTEND;VALUE=DATE-TIME:20181011T143500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-265@indico.egi.eu
DESCRIPTION:Speakers: Desjardins\, Daniel (Kings Distributed Systems Ltd.)
 \nModern day research requires extensive computing power. Researchers are 
 competing for resources limited in availability or cost. The Dist
 ributed Compute Protocol (DCP) connects existing compute resources to rese
 archer projects. Compute providers receive Distributed Compute Credits (DC
 C) in exchange for computing those projects. Credits can then be used to d
 eploy new compute projects\, or sold in DCP's global marketplace. By soaki
 ng up otherwise idle compute resources\, DCP aims to support researchers a
 nd industry at a fraction of the cost of current commercial cloud computin
 g services\, disrupting existing market powers and accelerating compute-en
 abled research\, innovation and discovery.\n\nhttps://indico.egi.eu/event/
 3973/contributions/9183/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9183/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Open Science training hub FOSTER Plus - new resources and cour
 ses
DTSTART;VALUE=DATE-TIME:20181009T151500Z
DTEND;VALUE=DATE-TIME:20181009T153000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-57@indico.egi.eu
DESCRIPTION:Speakers: Brinken\, Helene (Georg-August University Göttingen
 )\, Correia\, Maria Antónia (University of Minho)\nThe EU-funded project 
 FOSTER Plus (2017-2019) (www.fosteropenscience.eu) offers different traini
 ng opportunities to support researchers to move beyond simply being aware 
 of Open Science (OS) approaches to being able to apply them in their daily
  workflows. The existing FOSTER portal is becoming an OS training hub\, wh
 ere users can find training materials\, advanced-level and discipline-spec
 ific courses and resources that build capacity for the practical implement
 ation of OS and promote a change in culture.\n\nThe project developed a To
 olkit consisting of ten new OS online courses (https://www.fosteropenscien
 ce.eu/toolkit) addressing key OS topics to enable researchers to put OS i
 nto practice. The courses do not provide comprehensive coverage of all pos
 sible issues that may fall under a given topic but rather provide focused\
 , practical and\, where relevant\, discipline-specific examples to try and
  answer some of the burning questions researchers may have about practicin
 g OS. Courses include interactive content to ensure the training is engagi
 ng and that capability can be assessed for the issue of a badge upon completio
 n.\n\nThe courses developed include: What is OS?\; Best practices\; Ethics
  & data protection\; Open access publishing\; Open peer review\; Managing 
 & sharing research data\; Open source software & workflows\; OS & innovati
 on\, Sharing preprints and Licensing. In addition to these stand-alone cou
 rses\, there are learning pathways (www.fosteropenscience.eu/badges) throu
 gh the content to help researchers to hone their skills in specific areas\
 , such as the reproducible research practitioner\, the responsible data sh
 arer\, the open peer reviewer\, the open access author and the open innova
 tor. Furthermore\, the project provides a learning management system to fa
 cilitate moderated OS courses.\n\nWe are reusing and reshaping training co
 ntent deposited within the FOSTER portal during the first phase of the pro
 ject (2014-2016) and working with our discipline-specific partners represe
 nting the arts and humanities\, social sciences\, and life sciences to pro
 vide relevant examples. All content is openly licensed and easy to downloa
 d. \n\nApart from creating new courses\, FOSTER follows a train-the-traine
 r approach to multiply training forces. The project provides trainings\, i
 nfrastructure and materials to support people seeking to organize OS train
 ing in their own institutions. We initiated an OS trainer bootcamp and th
 e writing of an OS training handbook to equip future trainers with methods
 \, instructions\, exemplary training outlines and inspiration for their ow
 n OS trainings. Additionally\, the project gives recommendations for OS tr
 aining\, provides the infrastructure to conduct moderated courses and to u
 pload or download materials for re-use. Users can also promote their train
 ing events in a calendar and maintain their trainer profiles. These profil
 es are discoverable via a trainers directory and enable users looking for 
 a speaker or advice to contact OS trainers from their region or with a spe
 cific expertise directly. The FOSTER portal is a hub for people who want t
 o learn about OS as well as for people delivering OS training. \n\nThis pr
 oject has received funding from the European Union’s Horizon 2020 resear
 ch and innovation programme under grant agreement No.741839.\n\nhttps://in
 dico.egi.eu/event/3973/contributions/9191/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9191/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frictionless Data Exchange Across Research Data\, Software and Sci
 entific Paper Repositories
DTSTART;VALUE=DATE-TIME:20181009T110000Z
DTEND;VALUE=DATE-TIME:20181009T111500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-51@indico.egi.eu
DESCRIPTION:Speakers: Knoth\, Petr (KMi\, The Open University)\nA single s
 cientific repository is\, if considered by itself\, of limited value. Real
  benefits come from the ability to exchange information effectively and in
  an interoperable way\, enabling the development of a wide range of global
  cross-repository services. However\, exchanging metadata and content acro
 ss scientific repositories is mostly based on a 15-year-old technology\, s
 ymbolized by the OAI-PMH protocol.\n\nThis protocol:\n\n 1. is unsuita
 ble when there is a need to exchange large quantities of metadata\,\n 2. s
 uffers from inconsistent implementations across providers and \n 3. was on
 ly designed for metadata transfer\, omitting the much needed support for c
 ontent exchange.\n\nIn light of these issues\, the COAR Next Generation Re
 positories Working Group recommends the adoption of ResourceSync across re
 pository platforms. As a result\, it is important that we fully understand
  how ResourceSync performs against OAI-PMH. This work is being conducted u
 nder the umbrella of the European Open Science Cloud Pilot project from wh
 ich we received funding to run an experimental pilot to provide a fast and hi
 ghly scalable exchange of data across repositories. \n\nThe work will asse
 ss how scholarly communication resources\, i.e. research datasets\, scient
 ific manuscripts (research papers\, theses\, monographs\, etc.) and scient
 ific software\, can be effectively\, regularly and reliably exchanged acro
 ss systems using the ResourceSync protocol. \n\nThe underlying aim of this
 work is to provide an argument and evidence for modernising the legacy
  communication mechanisms routinely used by thousands of research re
 positories. This will be achieved by running a set of experiments/benchmar
 ks comparing OAI-PMH with ResourceSync along a set of dimensions\, scenari
 os and implementation setups\, including:\n\n**Architectural**\n - 1-to-1 
 synchronization\n - 1-to-many synchronization (master copy or mirror) expe
 riment\n - many-to-1 synchronization (aggregator)\n\n**Conceptual**\n - Ba
 seline synchronization\n - Metadata\n - Metadata and content\n - Increment
 al synchronization\n - Selective synchronization (PMH Sets\, RS capability
  lists)\n\nWe will also compare/evaluate the efficacy of ResourceSync agai
 nst OAI-PMH in terms of:\n  - speed (time)\n  - complexity (steps required
  to complete)\n  - reliability (recall)\n  - freshness (e.g. average time 
 gap between syncs)\n\nThe evaluation will also consider different implemen
 tation setups\, such as sequential vs parallelized implementations of a Re
 sourceSync client. The proposed talk will concentrate on presenting the fi
 rst set of results from the evaluation.\n\nhttps://indico.egi.eu/event/397
 3/contributions/9192/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9192/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Building interoperable systems for SeaDataNet community
DTSTART;VALUE=DATE-TIME:20181009T161500Z
DTEND;VALUE=DATE-TIME:20181009T163000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-52@indico.egi.eu
DESCRIPTION:Speakers: Ariyo\, Chris (CSC)\nThe SeaDataNet project (https:/
 /www.seadatanet.org/) offers a robust and state-of-the-art Pan-European in
 frastructure to harmonise metadata and data from marine data centers in Eu
 rope\, and offers the technology to make these data accessible. In the exi
 sting SeaDataNet common data index service (http://seadatanet.maris2.nl/v_
 cdi_v3/search.asp)\, data is residing at the data centers and is offered o
 n demand in response to user requests. However\, the process is quite
  slow and it is not easy to evaluate the quality of the data\, as data
  sets are d
 irectly uploaded by the data centers. To overcome this problem\, the SeaDa
 taNet community has partnered with the EUDAT CDI in the SeaDataCloud proje
 ct to move its data to the EUDAT cloud storage and offer data directly fro
 m the cloud. Moreover\, the community wants to perform quality checks on t
 he data residing in the cloud before making it available for users. \n\nTo
  implement the new upgraded system the existing SeaDataNet systems and EUD
 AT services have to be interoperable. This abstract discusses the solution
 s chosen for making different existing systems interoperable and the new i
 nfrastructure developed for the SeaDataNet common data index service. REST
  APIs are chosen to enable interaction between EUDAT services and communit
 y’s existing systems. Defining REST interfaces facilitated the understan
 ding of different systems and helped in realizing seamless communicati
 on between different systems. B2STAGE REST APIs (https://www.eudat.eu/b2st
 age) are used for all the interactions between the systems\, such as uploa
 ding data to EUDAT cloud storage\, downloading data from EUDAT cloud stora
 ge\, performing transformations on the data in the cloud\, etc. Moreover\,
  in order to perform additional actions on the data in the cloud\, such as
  checking the quality of data\, performing transformations on data and for
  analyzing the data sets the existing EUDAT services are extended with new
  components. \n\nThe EUDAT B2HOST (https://www.eudat.eu/services/userdoc/b
 2host) is extended to provide a container cluster that supports automatic 
 data management in the cloud. The container cluster is managed using the E
 UDAT B2STAGE service\, which allows systems to automatically run different
  tools on top of the data by interacting with its API. The technologies us
 ed for realizing data management in the cloud are Docker containers\, RANC
 HER container platform\, RabbitMQ and Elasticsearch\, Logstash\, and Kiban
 a (ELK stack). The ability to offer automated data management in the cloud cou
 ld be valuable for other research communities as well. Moreover\, the tech
 nical solutions chosen for developing the SeaDataNet common data index sys
 tem could be used as reference solutions for building interoperable system
 s across different communities.\n\nhttps://indico.egi.eu/event/3973/contri
 butions/9195/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9195/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC Digital Innovation Hub (DIH): Digitizing Industry through EOS
 C-hub
DTSTART;VALUE=DATE-TIME:20181011T134500Z
DTEND;VALUE=DATE-TIME:20181011T135500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-259@indico.egi.eu
DESCRIPTION:Speakers: Holsinger\, Sy (EGI.eu)\nhttps://indico.egi.eu/event
 /3973/contributions/9196/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9196/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Building the EOSC together: The role of eInfraCentral\, EOSC-hub a
 nd OpenAIRE-Advance
DTSTART;VALUE=DATE-TIME:20181010T153000Z
DTEND;VALUE=DATE-TIME:20181010T154500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-101@indico.egi.eu
DESCRIPTION:Speakers: Angelis\, Jelena (European Future Innovation System 
 (EFIS) Centre)\nThe concept behind promoting a joint presentation of the t
 hree projects – eInfraCentral\, EOSC-hub\, OpenAIRE-Advance – is a fit
 ting approach to help drive forward and implement the EOSC and interlink *
 *People\, Data\, Services and Training\, Publications\, Projects and Organ
 isations.** \n\n**1. The challenge for research communities**\n**People & 
 Training:** Due to a fragmented e-infrastructure landscape\, end-users\, s
 uch as researchers\, innovators or industry actors\, are often unaware of
  the e-infrastructure services available in Europe that could aid their wor
 k. Similarly\, service providers and data producers have difficulty reachi
 ng out to potential users due to the lack of coordination and harmonisatio
 n across various e-infrastructures in order to support them in core EOSC-r
 elated activities such as open science. Even if users find out about the a
 vailability of a certain e-service\, it is difficult to gather further inf
 ormation and compare it with other existing services. Service providers al
 so lack user feedback on the ways they could improve their offerings. This
  leads to inefficient funding patterns through the emergence of overlappin
 g efforts and as such\, slower rates of innovation due to the lack of comp
 etition in the field.\n\n**2. Bringing the solution to EOSC**\n**Projects\
 , Publications & Services:** eInfraCentral\, EOSC-hub and OpenAIRE-Advance
  are the core initiatives in the implementation of the EOSC\, actively con
 tributing to the building of the EOSC service catalogue and portal. eInfra
 Central creates a unified online service catalogue where users can search\
 , browse\, compare and access e-services. The **eInfraCentral standard Ser
 vice Description Template (SDT) and catalogue** will provide the foundatio
 n for the catalogue of services to be accessed via the EOSC Portal to be l
 aunched in November. The SDT contains the prerequisites and attributes tha
 t are essential for the creation of customer-centric service descriptions.
  In addition\, eInfraCentral facilitates the development of a shared langu
 age to describe services\, fostering cooperation between infrastructure pr
 ojects\, communities and initiatives as well as sharing and reusing schola
 rly communication outputs (e.g. publications\, research data\, software) t
 o support reproducible and transparently assessable science. \n\n**3. Join
 tly implementing the EOSC Portal**\n**Data\, Services & Training:** The th
 ree projects will outline their collaboration around the building of the E
 OSC Portal. It is important to clarify that each project brings different 
 elements and will use their previous outputs to further the implementation
  of EOSC. The presentation will highlight how eInfraCentral’s already ex
 isting service description template\, service catalogue\, and portal will 
 help build the EOSC Portal\, along with the EOSC-hub marketplace. The pres
 enters will also distinguish EOSC and the eInfraCentral catalogue\, as the
 re is a difference in their scope. \n\n**People & Organisations:** The tea
 ms of eInfraCentral\, EOSC-hub and OpenAIRE-Advance believe that the DI4R 
 audience could greatly benefit from learning about the collaboration betwe
 en these projects by clarifying any confusion around the development of t
 he catalogue(s) of services and portal(s). Our session will conclude with 
 an open discussion with the audience to understand what their **value-prop
 osition** is and what they can bring to the table in helping build the EOS
 C together.\n\nhttps://indico.egi.eu/event/3973/contributions/9247/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9247/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ellip: a collaborative workplace for EO Open Science
DTSTART;VALUE=DATE-TIME:20181009T160000Z
DTEND;VALUE=DATE-TIME:20181009T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-106@indico.egi.eu
DESCRIPTION:Speakers: Rossi\, Cesare (Terradue)\, Brito\, Fabrice (Terradu
 e)\, Caumont\, Herve (Terradue)\nEarth observations from satellites produc
 e vast amounts of data. In particular\, the new Copernicus Sentinel missio
 ns are playing an increasingly important role as a reliable\, high-quality
  and free open data source for scientific\, public sector and commercial a
 ctivities. Latest developments in Information and Communication Technology
  (ICT) facilitate the handling of such large volumes of data\, and Europea
 n initiatives (e.g. EOSC\, DIAS) are flourishing to deliver on it. In this
  context\, Terradue is advancing an approach that resolutely promotes an
  Open Cloud model of operations\, along with Cloud Services (the new ‘Ell
 ip’ solutions) for cross-domain cooperation and applied innovation\, sup
 porting users with a collaborative work environment on the Platform.\n\nWi
 th solutions to transfer EO processing algorithms to Cloud infrastructures
 \, Terradue Cloud Platform is optimising the connectivity of data centres
  with integrated discovery and processing methods. This is for example the
  case with NextGEOSS\, the European Data Hub and Platform\, a EC contribut
 ion in support of the Group on Earth Observations initiatives and communit
 ies\, or the Geohazards Exploitation Platform\, an R&D activity funded by 
 ESA. Implementing a Hybrid Cloud model\, and using Cloud APIs based on int
 ernational standards\, the Terradue Platform fulfils its growing user need
 s by leveraging capabilities of several Public Cloud providers. Operated a
 ccording to an “Open Cloud” strategy\, it involves partnerships comply
 ing with a set of best practices and guideline:\n\n - Open APIs. Embrace C
 loud bursting APIs that can be easily plugged into the Platform’s codeba
 se\, expanding the Platform offering with Providers that bring complement
 ary strategic advantages for different user communities.\n - Developer com
 munity. Support and nurture Cloud communities that collaborate on evolving
  open source technologies\, including at the level of the Platform enginee
 ring team\, when it comes to delivering modular extensions.\n - Self-serv
 ice provisioning and management of resources. The Platform’s end-users a
 ble to self-provision their required ICT resources and to work autonomousl
 y.\n - Users’ right to move data as needed. By supporting distributed inst
 ances of its EO Data management layer\, the Platform delivers the required
  level of data locality to ensure high performance processing with optimiz
 ed costs\, and guarantees that value added chains can be built on top of i
 ntermediate results.\n - Federated Cloud operations. The Platform’s coll
 aborative environment and business processes support users to seamlessly d
 eploy apps and data from a shared marketplace and across multiple cloud en
 vironments.\n\nMoreover\, Terradue has learned from past activities (2012-
 2017) how to manage user communities in many scientific domains\, and to supp
 ort their collaborative work in accessing Open Data\, using Open source so
 ftware\, and contributing research products as part of the Open Science pr
 inciples. Ellip is the new Terradue Cloud Platform\, a development stemmi
 ng from this learning that incorporates open notebook science (based on t
 he Jupyter Notebook open-source application) for the design\, integration\, t
 esting\, deployment and monitoring of scalable EO data processing chains.\
 n\nhttps://indico.egi.eu/event/3973/contributions/9248/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9248/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interdisciplinary research data management service for the whole u
 niversities and research institutions in Japan that emphasizes research in
 tegrity
DTSTART;VALUE=DATE-TIME:20181010T161000Z
DTEND;VALUE=DATE-TIME:20181010T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-164@indico.egi.eu
DESCRIPTION:Speakers: Komiyama\, Yusuke (National Institute of Informatics
 )\nThis research describes the development progress as of 2018 of GakuNin 
 RDM (1)\, a nationwide research data management (RDM) service promoted by 
 the Cabinet Office of the Japanese government. GakuNin (2) is the academi
 c access management federation in Japan\, and RDM stands for research dat
 a management.\nFirst\, in Japan\, the results of publicly funded research
  are in principle to be made public. In addition\, researchers are oblige
 d to submit a data management plan and to preserve research data for te
 n years in the interest of research integrity. Although infrastructure a
 nd guidelines for unified research data management do not yet exist\, th
 e government requests that academic institutions develop them as soon as
  possible.\n\n  Second\, the National Institute of Informatics (NII) of R
 esearch Organization of Information and Systems (ROIS) in Japan provides r
 esearch data infrastructure to 850 domestic academic institutions. For exa
 mple\, SINET (3) is a 100 Gbps high-speed network for science connected t
 o GÉANT and Internet2. GakuNin is an authentication federation that inte
 roperates with eduGAIN\; GakuNin Cloud is a cloud consulting service fo
 r universities\; CiNii (4) Research is a discovery service for academic
  information and research data\; JAIRO Cloud is an institutional reposi
 tory SaaS based on the repository software WEKO (5). OpenAIRE harvests
  all research article metadata from the institutional repository databa
 se maintained by NII.\n\n  Third\, the Research Center for Open Science
  and Data Pla
 tform (RCOS) of NII began developing GakuNin RDM in 2016 in response to
  requests for research data management support from academic institutio
 ns' IT centers\, libraries\, university research administrators\, and l
 egal and intellectual property departments and boards. We have adopted
  the Open Science Framework (OSF) (6) as the core system of GakuNin RD
 M. Open science promotion in Japan focuses on preventing scientific mi
 sconduct\; in particular\, we extended the system to apply commercial
  timestamps to file operations and to provide operation logs for the a
 ggregated research data. We have also developed functions for administ
 rators to customize the user interface for each university and researc
 h institution. These include functions to control the use of GakuNin R
 DM add-ons\, to create usage statistics reports for administrators\, a
 nd to let institution managers send announcements to end users. Furthe
 rmore\, we developed several add-ons not found in OSF that strengthen
  GakuNin RDM as research data infrastructure: for example\, an add-on
  for WEKO\, which is used by more than 500 organizations in Japan\, an
 d a cooperation add-on linking JupyterHub\, a data analysis platform\,
  with the workflow tool Galaxy. In this research\, we introduce the Ja
 panese research data management service and discuss whether we can col
 laborate with European research data infrastructure.\n\n\n**Refere
 nces**\n\n  (1): https://doi.org/10.1109/IIAI-AAI.2017.144\n\n  (2): https
 ://doi.org/10.1109/SAINT.2010.14\n\n  (3): https://doi.org/10.1109/ICUFN.2
 016.7536928\n\n  (4): http://dl.acm.org/citation.cfm?id=1670638.1670658\n\
 n  (5): https://doi.org/10.1007/978-3-319-23207-2_40\n\n  (6): https://doi
 .org/10.5195/jmla.2017.88\n\nhttps://indico.egi.eu/event/3973/contribution
 s/9198/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9198/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DARE as a platform to support Climate Data Analytics using Cloud I
 nfrastructures
DTSTART;VALUE=DATE-TIME:20181011T140000Z
DTEND;VALUE=DATE-TIME:20181011T141500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-147@indico.egi.eu
DESCRIPTION:Speakers: Page\, Christian (CERFACS)\nSupporting data analytic
 s in climate research with respect to data access is a challenge due to in
 creasing data volumes\, especially for end users\, as the whole climate da
 ta archive is expected to reach a volume of 30 PB in 2018 and up to 2000 P
 B in 2022. Several international and European initiatives have emerged and
  provide standalone solutions that offer potential for interoperability. T
 he DARE e-science platform (http://project-dare.eu) is designed for effici
 ent and traceable development of complex experiments and domain-specific s
 ervices on the Cloud.\n\nIn Europe\, the IS-ENES (https://is.enes.org) con
 sortium has developed a platform that is a component of the ENES CDI (Cl
 imate Data Infrastructure)\, to ease access to climate data for the climat
 e impact community (C4I: https://climate4impact.eu). One of the important 
 aspects of the C4I platform is that it enables users to perform on-demand d
 ata analysis calculations through its backbone based on a collection of OG
 C WPS (Web Processing Service). These\, coupled with authorization mechani
 sms based on access tokens\, enable the delegation of the calculations ont
 o distributed infrastructures and the controlled management of the results
 . \n\nThese characteristics have been further extended with provenance int
 egration\, especially to obtain the traceable calculation of climate impac
 t indicators\, in the context of the FP7-CLIPC project. This solution is
  based on a standard representation (W3C-PROV) and a set of lineage mana
 gement and workflow tools that will scale to other computational use cas
 es and will be interoperable with ongoing European initiatives. In the D
 ARE project\, the provenance system will also be built on top of W3C-PRO
 V\, ensu
 ring interoperability.\n\nDARE will also integrate services from the EUDAT
  CDI\, enabling generic access and cross-domain interoperability\, as well
  as providing compliance and integration with the future EOSC platform. As
  DARE will use containerization technologies\, it will be easily deployed 
 on heterogeneous architectures.\n\nA scientific pilot has been designed wi
 thin the DARE project for the ENES community (climate domain). The objecti
 ves are to enable delegation of on-demand computational-intensive calculat
 ions to the DARE platform. In the presented Use Case\, on-demand data anal
 ytics will be initiated on the IS-ENES C4I platform by end users of climat
 e data\, in a seamless fashion. A schematic of the architecture and Use C
 ase will be presented\, along with initial development status.\n\nhttps://
 indico.egi.eu/event/3973/contributions/9202/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9202/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Skills for dealing with research software as an element of open sc
 ience
DTSTART;VALUE=DATE-TIME:20181009T153000Z
DTEND;VALUE=DATE-TIME:20181009T154500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-67@indico.egi.eu
DESCRIPTION:Speakers: Scheliga\, Kaja (Helmholtz Association)\nSoftware pl
 ays a crucial role in the research lifecycle. Moreover\, software is\, alo
 ngside text and data\, an essential element of open science. In this sense
 \, the FAIR (findable\, accessible\, interoperable\, and reusable) princip
 les apply not only to data but also to research software. In conjunction w
 ith text and data\, making research software FAIR contributes to making re
 search output comprehensible\, verifiable\, reproducible\, and reusable.  
 \nThe process of making research software FAIR involves many aspects rangi
 ng from software development issues and documentation to legal aspects lik
 e licensing. The skills of all those involved play a major role here.\nDev
 eloping\, expanding and contextualising skills for dealing with research s
 oftware is an important contribution to increase awareness for the importa
 nce of software in the research process and to establish research software
  as an element of open science.\n\n**Developing** IT skills in higher educ
 ation is important across disciplines. For qualification works (Bachelor\,
  Master\, PhD thesis) where research software development plays a role\, i
 ntegrating expertise from scientific disciplines and the field of computer
  science needs to go hand in hand. Moreover\, cooperation between higher ed
 ucation bodies\, such as between universities and research institutions\, 
 makes education paths more interoperable. Especially for researchers who d
 evelop software as part of their research activity but have no IT backgrou
 nd\, providing introductory courses to software development and dealing wi
 th research software throughout the research life cycle is an important st
 arting point. Formats include\, for instance\, seminars\, workshops\, PhD
  schools and online courses.\n\n\n**Expanding** skills concerns bot
 h expert scientists in order to deepen their knowledge about dealing with 
 research software and IT specialists in order to master the state-of-the-ar
 t techniques and tools as well as to gain a better understanding of discip
 line specific knowledge. Formats include workshops\, hacky hours\, hackath
 ons and software carpentry. \n\n**Contextualising** the skills by means of
  a technical and human infrastructure is vital. By providing a technical i
 nfrastructure to (collaboratively) develop\, test\, review\, publish and a
 rchive research software\, research institutions and universities can crea
 te an environment that encourages researchers to apply FAIR principles to 
 research software. By fostering professional networks and communities of p
 ractice the formal acquisition of skills is complemented by a practice-bas
 ed exchange of knowledge and experiences. Finally\, providing career oppor
 tunities that take the multifaceted skills needed for dealing with researc
 h software into account is a means to increase incentives and rewards for
  efforts put into dealing with research software. \n\nIn this presentation
  we want to discuss approaches that can foster open science with a focus o
 n skills for dealing with research software. We want to provide general ar
 guments and give specific examples from initiatives in the Helmholtz Assoc
 iation in Germany.\n\nhttps://indico.egi.eu/event/3973/contributions/9206/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9206/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Scientific Scavenger Hunt: Improve your discovery skills
DTSTART;VALUE=DATE-TIME:20181009T103000Z
DTEND;VALUE=DATE-TIME:20181009T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-33@indico.egi.eu
DESCRIPTION:Speakers: Schramm\, Maxi (Open Knowledge Maps)\, Kraker\, Pete
 r (Open Knowledge Maps)\nThe open science revolution has dramatically incr
 eased the accessibility of scientific knowledge. But what about discoverab
 ility? Discovery is in many ways the departure point of research\; whether
  you are starting out in your PhD\, initiating a research project or ventu
 ring into a different discipline: in many cases\, you want to get an overv
 iew of an unknown field of research and the most relevant projects therein
 . The quality of this overview often decides whether research gets reused 
 or duplicated\, whether collaborations are formed or such opportunities ar
 e missed.  \n\nHowever\, with 2.5 million papers published every year\, an
 d thousands of research projects launched every day\, discovery becomes in
 creasingly difficult. Traditional approaches involving search engines prov
 iding long\, unstructured lists of scientific outputs are not sufficient. 
 We can also see this reflected in the numbers: the vast majority of datase
 ts are not reused\, and even in application-oriented disciplines such as m
 edicine\, only a minority of results ever gets transferred to practice.\n\
 nBut not to worry\, open science is here to help: new and innovative tools
  for exploring scientific knowledge are bridging the gap between accessibi
 lity and discoverability.\n\nIn this workshop\, you will learn to improve 
 your discovery skills with two open science tools enabling visual discover
 y: Open Knowledge Maps (https://openknowledgemaps.org/search)\, which prov
 ides knowledge maps of research topics in any discipline\, and VIPER (http
 s://openknowledgemaps.org/viper)\, which builds on the EOSC via OpenAIRE t
 o enable visual discovery of research projects. You will learn how to get 
 an overview of a scientific field\, to identify relevant concepts and to s
 eparate relevant from irrelevant content with respect to your information 
 need. \n\nThis training will be given in the form of an innovative\, hands
 -on format: the Scientific Scavenger Hunt. The Scientific Scavenger Hunt i
 s a fun and fast-paced mix between a pub quiz and a virtual scavenger hunt
 . In groups\, participants try to complete tasks on knowledge maps withi
 n a given time limit. They follow hints on knowledge maps that lead the
 m to the correct answer. On the way\, they learn what makes a guerilla
  archivis
 t and why the city of Athens is almost synonymous with insomnia in some co
 mmunities. And they may even win a prize in the end!\n\nWe have already co
 nducted this workshop around the world. More than 1\,000 people have tak
 en part in this fun\, hands-on activity at events such as the Open S
 cience Fair and OpenCon\, and we would love to bring it to DI4R.\n\n*More 
 information on Open Knowledge Maps:\nOpen Knowledge Maps is based on the p
 rinciples of open science: we share our source code\, content and data und
 er an open license. As a community-driven initiative\, we are developing o
 ur services together with our advisors\, collaboration partners and users.
  Currently\, more than 30\,000 users from all around the world leverage o
 ur openly accessible discovery tool each month for their research\, writi
 ng and studies. For more information\, please visit https://openknowledge
 maps
 .org*\n\nhttps://indico.egi.eu/event/3973/contributions/9254/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9254/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Scientific Panel "E-infrastructure: what is it and how does it hel
 p you?"
DTSTART;VALUE=DATE-TIME:20181010T084500Z
DTEND;VALUE=DATE-TIME:20181010T100000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-192@indico.egi.eu
DESCRIPTION:Speakers: Ryan\, Sinead (Trinity College Dublin)\nPanellists:
  Alexandre Bovin\, Sorina Camarasu-Pop\, Wolfgang zu Castell\, Matthew Dove
 y\, Andy Goetz\, Erik Huizer\, Kristel Michielsen\n\nhttps://indico.egi.eu
 /event/3973/contributions/9210/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9210/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EUXDAT e-Infrastructure for Sustainable Development
DTSTART;VALUE=DATE-TIME:20181010T140000Z
DTEND;VALUE=DATE-TIME:20181010T141500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-114@indico.egi.eu
DESCRIPTION:Speakers: Nieto De Santos\, Francisco Javier (Atos Research & 
 Innovation)\, Michalakopoulos\, Spiros (Atos)\nEUXDAT proposes an e-Infras
 tructure for sustainable development. The project partners form a cross-do
 main group of agricultural experts together with software engineers and te
 chnology experts. Agriculture\, land monitoring and energy efficiency are 
 addressed\, to support planning policies\, as opposed to simply increasing
  current productivity.\n\nOne of the major challenges to achieve our goals
  is the management and processing of huge amounts of heterogeneous data\, 
 with the added requirement of data and computational scalability\, given t
 hat the amounts of data will only increase\, and so will the complexity of
  processing it. \n\nThe EUXDAT e-Infrastructure builds on existing compone
 nts\, and provides an advanced frontend for users to develop applications.
  The frontend provides monitoring information\, visualization\, various pa
 rallelized data analytic tools\, and data and processes catalogues\, enabl
 ing Large Data Analytics-as-a-Service. A large set of data connectors will
  be supported\, including unmanned aerial vehicles (drones)\, Copernicus d
 ata\, and field sensors\, for scalable analytics.\n\nThe infrastructure re
 sources are based on HPC and Cloud\; however\, the choice and usage of physi
 cal resources are transparent to the user. EUXDAT aims at optimizing data 
 and resources usage\, by on the one hand supporting data management linked
  to data quality evaluation\, and on the other proposing a hybrid orchestr
 ation of task execution\, by identifying whether the best target is an HPC
  center or a Cloud provider. The latter will be achieved by using monitori
 ng and profile information and deciding based on trade-offs related to cos
 t\, data constraints\, efficiency and availability of resources.\n\nThroug
 hout the development of the 3-year project\, EUXDAT will be in contact wit
 h scientific communities\, in order to identify new trends and datasets\, 
 which will help guide the evolution of the e-Infrastructure. The project a
 ims to result in an integrated e-Infrastructure which will encourage and f
 acilitate end users to create new applications for sustainable development
 .\n\nhttps://indico.egi.eu/event/3973/contributions/9212/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9212/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Enabling Reproducible Computing on the EPOS ICS-D
DTSTART;VALUE=DATE-TIME:20181011T141500Z
DTEND;VALUE=DATE-TIME:20181011T143000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-117@indico.egi.eu
DESCRIPTION:Speakers: Spinuso\, Alessandro (KNMI)\nThe EPOS-IP project is 
 implementing solutions to enable user-driven reproducible computations exp
 loiting the large wealth of data\, data products and software discoverable
  through its Centralised Integrated Core Services (ICS)-C catalogue. The a
 ctual data is accessible through the web services that are managed by geog
 raphically distributed and interdisciplinary RIs organised in Thematic Com
 munities called “Thematic Core Services”. The variety of methodologies
  and interoperability requirements between data and software suggests the 
 need for identifying and implementing general use cases supported by flexible
  and scalable e-Science solutions. These must be integrated in the EPOS ar
 chitecture with the preliminary objective of assisting the users in basic 
 tasks\, such as allocation of computational and storage resources and data
 -staging\, incrementally accommodating more complex computational scenario
 s and reusable workflows.\n\nWe will present the approach envisaged for th
 e integration of processing functionalities within the EPOS ICS portal. It
  will allow users to develop and execute new data-intensive methods and wo
 rkflows within dedicated processing environments that are implemented as J
 upyter notebooks and that are associated with contextual workspaces. Users
  of the EPOS ICS portal will select the data to be staged from one of thei
 r workspaces\, after having populated it with search results of interest ob
 tained from the ICS catalogue.\nSuch a service requires the data to be st
 aged to remote computational facilities that adopt software containerisation
  and infrastructure orchestration technologies (Docker Swarm\, Kubernetes)
  to dynamically allocate and prepare the needed resources. These will be h
 eterogeneous and managed by national and European e-Infrastructures that w
 ill constitute the EPOS Distributed Integrated Core Services (ICS-D). We e
 nvisage that\, beyond staging\, many common operations could be encoded as
  configurable scientific workflows that will automatically preprocess the 
 data before returning it to the researcher for further analysis\, sugges
 ting the need of a workflow as a service (WaaS) interface. Once data is st
 aged and preprocessed\, users can then define and evaluate their own metho
 ds via traditional scripting or by adopting advanced workflow technolog
 ies.\n\nThanks to containerisation\, special attention is dedicated to por
 tability and reproducibility of the processing environments\, thereby allo
 wing users to explicitly save\, trace and access the different stages of th
 eir progress. Moreover\, we will illustrate the approach for the adoption 
 and integration of scientific workflow tools (CWL\, dispel4py)\, that incl
 ude validation and monitoring services. These are implemented on top of a 
 provenance model and management system (S-ProvFlow)\, that allows the expl
 oration of large lineage collections describing the obtained results. The 
 system offers access to multi-layered\, context-rich provenance informatio
 n through interactive tools.\n\nWe will discuss the importance of the comm
 unication of such service with the EPOS ICS-C catalog and how it will cont
 ribute to producing and ultimately delivering research data that comply wi
 th the FAIR principles (Findable\, Accessible\, Interoperable and Reusable
 ). The activities will also be presented in the scope of the cooperation
  with ongoing H2020 initiatives such as the newly funded project DARE (De
 livering Agile Research Excellence on European e-Infrastructures).\n\nhtt
 ps://indico.egi
 .eu/event/3973/contributions/9213/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9213/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Driving data analysis through the Jupyter Notebook at European XFE
 L
DTSTART;VALUE=DATE-TIME:20181010T163000Z
DTEND;VALUE=DATE-TIME:20181010T164500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-89@indico.egi.eu
DESCRIPTION:Speakers: Beg\, Marijan (European XFEL GmbH)\nComputational sc
 ience based on simulation or experimental data typically requires data ana
 lysis to extract insight from potentially large data sets. In this project
 \, we explore the suitability of the Jupyter Notebook to drive the process
 ing chain from raw data to figures used in publications and reports.\n\nCo
 mputational Science is emerging as a key tool in academia and industry. Fo
 r example\, in the field of magnetism\, simulations of nanostructures hav
 e become well established and are used widely. In photon science\, the ana
 lysis of experimental data is essential and central to understanding corre
 lations\, adjusting experiment parameters\, and exploiting an instrument's
  full potential.\n\nAlongside this increasing importance in science\, the
 re is growing concern about the reproducibility of scientific resu
 lts obtained mainly from computational data analysis [1]. Ideally\, any sc
 ientist should be able to recreate\, for example\, central figures in publ
 ications. This requires keeping track of and publishing all steps taken du
 ring the analysis\, including tracking of all experiment and simulation ru
 ns\, data and simulation results used from each\, and all metadata\, param
 eters and processing steps.\n\nWe study the utility of the Jupyter Noteboo
 k as a virtual research environment for this common scenario through both 
 data analysis in computational modelling of magnetic devices and Photon Sc
 ience. For the former\, we have taken a well-established micromagnetic sim
 ulation package [2] based on C++ and added a Python interface [3] to allow
  convenient control of the package through the Jupyter Notebook [4].\n\nOf
  particular interest for both application domains is that within the Jupyt
 er Notebook\, we can carry out simulation\, data analysis\, and specialise
 d post-processing within a single document\, making the work more easily r
 eproducible and distributable. A special case is the creation of figures i
 n publications: by creating each central figure in a publication within a 
 Jupyter Notebook\, we can publish the notebooks together with the manuscri
 pt\, and thus make the key data elements of the publication reproducible.\
 n\nEmerging developments such as the European Open Science Cloud (EOSC) d
 emand that the whole computational analysis process can be driven remotel
 y. The method of driving computational science through the Jupyter Notebo
 ok provi
 des the remote execution elegantly: by hosting the Jupyter Notebook server
  where the data and simulation capability is\, and connecting the user's w
 eb browser with the Jupyter Notebook server via HTTPS\, we avoid common pr
 oblems experienced with remote desktops or X forwarding. Driving computati
 onal analysis through the Jupyter Notebook can provide a flexible cloud-en
 abled data analysis infrastructure.\n\nThis project is part of the Jupyter
 -OOMMF activity in the OpenDreamKit [5] project and we acknowledge the fin
 ancial support from Horizon 2020 European Research Infrastructures project
  (676541). The work is also supported by the EPSRC CDT in Next Generation 
 Computational Modelling EP/L015382/1.\n\n[1] M. Baker\, Nature 533\, 452 (
 2016).  \n[2] M. J. Donahue and D. G. Porter\, OOMMF User’s Guide\, Vers
 ion 1.0\, Interag. Rep. NISTIR 6376\, NIST Gaithersburg (1999).  \n[3] M. 
 Beg et al. AIP Advances 7\, 056025 (2017).  \n[4] https://github.com/joomm
 f  \n[5] https://opendreamkit.org\n\nhttps://indico.egi.eu/event/3973/cont
 ributions/9214/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9214/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sustainable Research Software – Managing a Common Problem of SSH
  Infrastructures
DTSTART;VALUE=DATE-TIME:20181010T160000Z
DTEND;VALUE=DATE-TIME:20181010T160500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-111@indico.egi.eu
DESCRIPTION:Speakers: Thiel\, Carsten (CESSDA ERIC)\, Kalman\, Tibor (GWDG
 )\nResearch software enabling digital scholarly tools and services is a m
 ajor building block of open science in today’s research environment. Thi
 s is also apparent for Research Infrastructures (RI) in the Social Sciences
  and Humanities (SSH) community.\nThese RIs\, such as *CESSDA*\, *CLARIN* 
 and *DARIAH*\, have been set up to support scholars in their research and 
 have a long tradition of supporting open science\, particularly through th
 eir FAIR data management solutions. One of the major challenges emerging f
 rom the operation of these digital RIs is the sustainable management of th
 e research software used to build the components. While general consensus 
 reigns about the need to apply state-of-the-art software engineering princ
 iples and industry standards to the development and maintenance of softwar
 e and services\, the implementation proves hard.\nContinuing from a joint 
 workshop in 2017 we are currently undertaking measures to align existing e
 fforts towards a common understanding of technical requirements and recomm
 endations. This includes the Software Maturity Modelling developed by CESS
 DA\, the Software Quality Guidelines developed by CLARIAH and the Technica
 l Reference originating from DARIAH.\nBuilding upon these technical founda
 tions\, we also want to help promote software best practices in teaching
  and education\, ideally as part of curricula\, to widen awareness of soft
 ware quality requirements throughout the research community and their soft
 ware engineers. While adding further requirements to software projects inv
 ariably leads to increased development cost and time\, re-usability of sof
 tware and thus reproducibility of the results must become an everyday rese
 arch practice. Just as classic publications and increasingly research data
 sets are subject to quality assurance\, the software used to create them
  must be as well in order to fully support the research process to advan
 ce scholarly and scientific knowledge through open science.\nThis cooperation
  is being streamlined under the umbrella of the EURISE Network\, where research i
 nfrastructures meet research software engineers\, to strengthen the combin
 ed foundations for future collaborations of e-infrastructures and the emer
 ging EOSC. We present the current state of this initiative and explain ong
 oing efforts towards a common set of guidelines and evaluation criteria. W
 e explain why and how our emphasis on improving software quality will ulti
 mately benefit openness and re-usability of science and research data.\n\n
 https://indico.egi.eu/event/3973/contributions/9215/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9215/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OpenAIRE service for Research Communities: Open Science as-a-Servi
 ce
DTSTART;VALUE=DATE-TIME:20181011T133000Z
DTEND;VALUE=DATE-TIME:20181011T135000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-112@indico.egi.eu
DESCRIPTION:Speakers: Manghi\, Paolo (Istituto di Scienza e Tecnologie del
 l'Informazione - CNR)\, Principe\, Pedro (University of Minho)\nOpenAIRE-C
 onnect fosters transparent evaluation of results and facilitates reproduci
 bility of science for research communities by enabling a scientific commun
 ication ecosystem supporting exchange of artefacts\, software\, packages o
 f artefacts\, and links between them across communities and across content
  providers. To this aim\, OpenAIRE-Connect is introducing and implementing
  the concept of Open Science as a Service (OSaaS) on top of the existing O
 penAIRE infrastructure (www.openaire.eu)\, by delivering out-of-the-box\, 
 on-demand deployable tools in support of Open Science. \nOpenAIRE-Connect 
 is realizing and promoting the uptake of two new services that build on a
 nd extend the existing OpenAIRE technical and networking infrastructure\, 
 to stimulate a technical and cultural shift towards a scholarly communicat
 ion ecosystem supporting more effective/transparent evaluation and reprodu
 cibility of research results. The first service serves research communitie
 s to (i) publish research artefacts (packages and links)\, and (ii) monito
 r their research impact. The second service engages and mobilizes content 
 providers\, and serves them with facilities enabling notification-based ex
 change of research artefacts\, to leverage their transition towards Open S
 cience paradigms. Both services will be served on-demand according to the 
 OSaaS approach\, hence be re-usable by different disciplines and providers
 \, each with different practices and maturity levels.\nThis World Café se
 ssion will present the new OpenAIRE service for Research Communities\, sho
 wcasing real use cases from five pilot communities (i: Neuroinformatics fr
 om France Life Imaging national infrastructure\; ii: European Marine Scien
 ce from Pangaea and Atlas community\; iii: Cultural Heritage and Digital H
 umanities from the PARTHENOS research infrastructure\; iv: Fisheries and a
 quaculture management from the BlueBridge and MARBEC infrastructures\; and
  v: Environment & Economy from the national/EU node of the United Nations 
 Sustainable Development Solutions Network)\, addressing community based so
 lutions\, and will also demonstrate the features available for research in
 itiatives and infrastructures. The session will discuss the future challen
 ges and the next steps to extend the OSaaS tools for research communities.
 \nThe OpenAIRE Research Community Dashboard\, to be presented and discusse
 d during the World Café session\, is the service that offers access to a 
 virtual space (a graph) including metadata descriptions of all products re
 levant to the community as well as links between such products\; the graph
  is built by (i) scientists depositing their products (via Zenodo) or claim
 ing products and links (associating a DOI to the community\, specifying a 
 link between products) or (ii) by services collecting product metadata and
  links from a number of content providers\, ranging from publications repo
 sitories to data repositories and repositories of other kinds of products.
 \n\nhttps://indico.egi.eu/event/3973/contributions/9218/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9218/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Urban TEP – Analysis of Multi-Source Data for Innovative Urb
 an Monitoring
DTSTART;VALUE=DATE-TIME:20181011T133000Z
DTEND;VALUE=DATE-TIME:20181011T134500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-82@indico.egi.eu
DESCRIPTION:Speakers: Bachofer\, Felix (German Aerospace Center (DLR))\nSe
 ttlements and urban areas represent the cores of human activity and develo
 pment. Besides climate change\, urbanization represents one of the most re
 levant developments related to the human presence on the planet. Both glob
 al trends challenge our environmental\, societal and economic development.
  In this context\, the availability of and access to accurate\, detailed a
 nd up-to-date information will impact decision making processes all over t
 he world. The suite of Sentinel Earth Observation (EO) satellites in combi
 nation with their free and open access data policy contributes to a spatia
 lly and temporally detailed monitoring of the Earth’s surface. At the sa
 me time a multitude of additional sources of open geo-data is available 
 – e.g. from national or international statistics or land surveying offic
 es\, volunteered geographic information or social media. However\, the cap
 ability to effectively and efficiently access\, process\, and jointly anal
 yze the mass data collections poses a key technical challenge. \n\nThe Urb
 an Thematic Exploitation Platform (U-TEP)\, funded by the European Space A
 gency (ESA)\, is developed to provide end-to-end and ready-to-use solution
 s for a broad spectrum of users (experts and non-experts) to extract uniqu
 e information/indicators required for urban management and sustainability
 . The key components of the system are an open\, web-based portal\, which 
 is connected to distributed high-level computing infrastructures and prov
 ides key functionalities for:\n\ni) high-performance data access and proc
 essing\,\n\nii) modular and generic state-of-the-art pre-processing\, ana
 lysis\, and visualization\,\n\niii) customized development and sharing of
  algorithms\, products and services\, and\n\niv) networking an
 d communication. \n\nU-TEP aims at opening up new opportunities to facilit
 ate effective and efficient urban management and the safeguarding of livab
 le cities by systematically exploring the unique EO capabilities in Europe
  in combination with the big data perspective arising from the constantly 
 growing sources of geo-data. The capabilities of participation and sharing
  of knowledge by using new media and ways of communication will help to bo
 ost interdisciplinary applications with an urban background. The services 
 and functionalities are supposed to enable any interested user to easily e
 xploit and generate thematic information on the status and development of 
 the environment based on EO data and technologies.\n\nThe innovative chara
 cter of the U-TEP platform in terms of available data and processing an
 d analysis functionalities has already attracted a large user community
  (>300 institutions from >40 countries) of diverse users (e.g. from sci
 ence\, public ins
 titutions\, NGOs\, industry).\n\nhttps://indico.egi.eu/event/3973/contribu
 tions/9219/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9219/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Research with Sensitive Personal Data in the EOSC
DTSTART;VALUE=DATE-TIME:20181010T140000Z
DTEND;VALUE=DATE-TIME:20181010T150000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-83@indico.egi.eu
DESCRIPTION:Speakers: Azab\, Abdulrahman (UIO)\, Aryio\, Chris ()\, Iozzi\
 , Maria Francesca (SIGMA)\, Baxter\, Rob (University of Edinburgh)\, Varma
 \, Susheel (EMBL EBI)\nIn the present era of digital data explosion\, the 
 Open Science paradigm\, together with FAIR principles\, offer scientists a
 nd technology providers a new vision and methods for enhancing research th
 rough the fostering of cross-disciplinary access and (re)use of data and d
 ata technologies. Data sharing and data cross-linking is the basis for inn
 ovative research in many fields of knowledge including health and medical 
 science\, but this research often involves personal or sensitive data that
  must be handled with due consideration of the legitimate right to persona
 l privacy. Whilst several solutions have been developed to facilitate rese
 arch involving sensitive data in compliance with privacy regulations\, the
 re is still the need to implement platforms and protocols that effectively
  allow cross border\, inter-disciplinary research on personal sensitive da
 ta. Mechanisms for authentication and vetting\, authorization\, register d
 ata sharing\, and analysis of aggregated datasets (possibly from different
  sources) are all still open questions\, both at technological and policy 
 level. Solutions require coordinated efforts\, transversally involving ser
 vice providers\, scientists and communities.\nThe session will bring togeth
 er service providers and communities dealing with sensitive data. We will 
 explore state-of-the-art solutions currently in use in regional or commun
 ity-specific settings and now also offered in the EOSC. We will investigate
  solutions developed and/or adopted by large\, advanced communities to wo
 rk on and share personal sensitive data transversally across research inf
 rastruc
 tures. We will identify gaps between science communities’ needs and the cur
 rent offering of e-infrastructures (regional-based or community-based). We
  will discuss the requirements for an effective research infrastructure fo
 r sensitive data that allows data use and re-use within the EOSC.  We will
  explore possible strategies to enhance interoperability between the exis
 ting e-infrastructures\, with the eventual goal of enabling cross-border\,
  Europe-wide user scenarios in the EOSC framework. The roles of funders an
 d policy makers in this enabling process will also be discussed.\n\nhttps:
 //indico.egi.eu/event/3973/contributions/9220/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9220/
END:VEVENT
BEGIN:VEVENT
SUMMARY:De-provisioning in context of AAI
DTSTART;VALUE=DATE-TIME:20181010T162500Z
DTEND;VALUE=DATE-TIME:20181010T163000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-81@indico.egi.eu
DESCRIPTION:Speakers: Licehammer\, Slavek (CESNET)\nDe-provisioning of use
 rs’ data from the end service is an important yet often neglected aspect
  of the whole AAI lifecycle. Services need to be notified when a user lea
 ves the organization/project so that they can initiate the clean-up proce
 sses. De-provisioning becomes a big issue\, especially in the context of
  the management of private data (incl. GDPR) and services which hold use
 rs’ persis
 tent data. Moreover\, the de-provisioning mechanisms can be used as a part
  of security incidents mitigation process to disable or suspend compromise
 d accounts. In the lightning talk\, we will emphasize the critical aspects
  of de-provisioning processes and demonstrate the requirements on particul
 ar use-cases.\n\nhttps://indico.egi.eu/event/3973/contributions/9222/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9222/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Enlighten Your Research – travels around the world
DTSTART;VALUE=DATE-TIME:20181011T110000Z
DTEND;VALUE=DATE-TIME:20181011T111500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-119@indico.egi.eu
DESCRIPTION:Speakers: Schäfer\, Leonie (DFN e.V.)\, Hester\, Mary (SURFne
 t BV)\nEnlighten Your Research​ (EYR) is a program designed to increase 
 the use and awareness of e-infrastructure resources in various fields of r
 esearch. The goal of this EYR is to provide access and support for network
 \, compute\, and storage resources to meet the growing data needs of resea
 rch\, in addition to inspiring new and understanding existing collaboratio
 ns between Europe and other major regions of the world\, such as India wit
 h the NKN Network or the Eastern European Partnership (EaP) countries. \n 
 \nThe first EYR programme was started by SURFnet\, the Dutch research and 
 education network\, to disseminate the adoption of point-to-point network 
 connections for research collaborations. Over a couple of iterations of th
 e Dutch EYR programme\, and trying to further meet the needs of researcher
 s\, resources from other e-infrastructures (such as high performance compu
 ting hours\, or programming expertise to process researchers’ data)\,  w
 ere also included in the programme ‘awards’. The idea of the EYR progr
 ammes has now been taken up by GEANT to foster international research coll
 aborations and to promote the use of GÉANT’s global links connecting Eu
 ropean e-infrastructure resources.\n\nThis Lightning Talk will feature the
  challenges of running the international programme Enlighten Your Research
  (EYR) as regional editions with the objective of initiating challenging int
 ernational research collaborations in Networking and Data-Intensive Resear
 ch and to foster cooperation between the pan-European e-infrastructure GÉ
 ANT and NREN Associations from other regions of the world.\n\nhttps://indi
 co.egi.eu/event/3973/contributions/9223/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9223/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Composition and Deployment of Complex Container-Based Application 
 Architectures on Multi-Clouds
DTSTART;VALUE=DATE-TIME:20181010T111500Z
DTEND;VALUE=DATE-TIME:20181010T113000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-84@indico.egi.eu
DESCRIPTION:Speakers: Alic\, Andy S (UPVLC)\nCloud computing has been esta
 blished in recent years as the key technology to offer on-demand access to
  computing and storage resources. This has been exemplified by both public
  Cloud providers and on-premises Cloud Management Platforms such as OpenNe
 bula and OpenStack\, out of which federated large-scale Cloud infrastructu
 res to support scientific computing have been established\, such as the EG
 I Federated Cloud. Indeed\, the European Open Science Cloud (EOSC) is fore
 seen to consist of a federating core which provides seamless access to a w
 ide range of publicly funded services supplied at national\, regional\, an
 d institutional levels for science and innovation. These last years have w
 itnessed the rise of the OASIS TOSCA (Topology and Orchestration Specific
 ation for Cloud Applications) standard\, adopted by several European pro
 jects. This standard allows one to specify the components that underpin an
  application architecture using a high-level YAML-based language which can
  be extended to include additional components to satisfy the requirements 
 of a wide variety of applications. However\, the recent advances in comput
 ing have revealed two major trends that can greatly benefit the applicatio
 n delivery and the computational performance:  Linux containers and GPU co
 mputing.\n\nTo this aim\, the Horizon 2020 DEEP-Hybrid DataCloud project i
 s developing innovative services to facilitate the composition and deploym
 ent of complex cloud application architectures across multiple Clouds (bot
 h private and public ones). Therefore\, we describe in this presentation t
 he adoption of a visual composition approach of TOSCA templates (based on 
 Alien4Cloud)\, in order to facilitate the widespread adoption of the stand
 ard\, and its integration with the INDIGO-DataCloud Orchestrator\, which i
 s already part of the EOSC-HUB service catalogue. With this approach\, the
  user can visually compose complex applications that involve\, for example
 \, the dynamic deployment of a container orchestration platform on an IaaS
  Cloud site that executes a highly-available Docker-based application to f
 acilitate application delivery. The users can also deploy an Apache Mesos 
 cluster with GPU support that contains a deep learning application for the
  recognition of certain plant species\, offered as a service to a communit
 y of users. This introduces unprecedented flexibility\, from visual compos
 ition\, to the automated application delivery\, using a graphical interfac
 e that is already integrated with an Orchestrator layer that performs reso
 urce provision from multiple Clouds and application configuration.\nThe in
 tegration of easy-to-use graphical interfaces builds a bridge between the 
 users and the orchestration services. It also represents a step forwar
 d in fostering the adoption of innovative computing services that are h
 idden from the user\, as they can focus on the high-level description o
 f the service requirements and definition\, instead of working on their
  technical implementation.\n\nhttps://indico.egi.eu/event/3973/contributions/9225/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9225/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OPENCoastS On-demand Operational Coastal Circulation Forecast Serv
 ice
DTSTART;VALUE=DATE-TIME:20181011T143000Z
DTEND;VALUE=DATE-TIME:20181011T144500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-85@indico.egi.eu
DESCRIPTION:Speakers: Oliveira\, Anabela (National Laboratory for Civil En
 gineering)\, Teixeira\, Joana (LNEC - Laboratório Nacional de Engenharia Ci
 vil)\, Rogeiro\, Joao (LNEC - Laboratório Nacional de Engenharia Civil)\n
 Seas and oceans are important drivers for the European economy and they ne
 ed to be preserved and developed in a sustainable way. OPENCoastS provides
  on-demand coastal circulation forecast systems that are useful in resear
 ch and in many other areas of human activity. The forecast systems can b
 e set up by end-users for a given region of interest on the European Atl
 antic coast. They run daily and predict water levels\, 2D velocities an
 d wave parameters for periods between 48 and 72 hours. The service was develop
 ed by the Portuguese National Civil Engineering Laboratory (LNEC) in 2010 
 as WIFF (Water Information Forecast Framework) and uses the SCHISM modelin
 g system.\nThe deployment of forecast systems requires strong knowledge of
  coastal processes and IT\, along with access to significant computational
  and storage resources. The OPENCoastS service offers a user-friendly
  web interface and a back-end that takes care of all the complexity. Thi
 s approach reduces the barriers to the adoption and use of coastal circu
 lation forecast systems\, making them available to a much broader audien
 ce.\nThe system
  has been producing 48-hour forecasts on a daily basis for the Portuguese 
 coast and is running on High-Throughput Compute and storage resources prov
 ided by the Portuguese National Distributed Computing Infrastructure (INCD
 ). In the context of EOSC-hub\, LNEC is working with LIP\, INCD\, Universi
 ty of Cantabria and University of La Rochelle to open the service to users
  from other European countries so they can also benefit from this innovati
 ve service.\nTo cope with internationalisation\, the system is being en
 hanced to include federated AAI\, resilient scheduling to distributed comp
 uting resources\, accounting\, data management and long-term data storage.
  The system is now integrated with EGI Check-in for federated AAI\, enab
 ling simpler user authentication. The front-end has been split into comp
 onents that can be instantiated in IaaS cloud systems such as the EGI Fe
 derated Cloud\; the use of INDIGO orchestration for cloud service deploy
 ment is planned. For increased compute capacity and resilience\, the sim
 ulations can be s
 cheduled to the EGI High Throughput Computing service via a DIRAC scheduli
 ng system also provided by EGI within EOSC-hub. To provide independence a
 nd encapsulation\, the application components are packaged in Linux conta
 iners. The use of EUDAT services for long-term storage and/or data preserv
 ation is also being considered.\nIn this presentation\, we will describe
  the details and challenges of adapting and deploying a complex application
  service that is both compute and data intensive\, and exploits multiple c
 omputing paradigms such as cloud computing and high throughput computing a
 cross multiple locations\, taking advantage of pan-European services made av
 ailable by several infrastructures and technology providers.\n\nhttps://in
 dico.egi.eu/event/3973/contributions/9226/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9226/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Challenges in building Virtual Research Environments
DTSTART;VALUE=DATE-TIME:20181010T163000Z
DTEND;VALUE=DATE-TIME:20181010T163500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-160@indico.egi.eu
DESCRIPTION:Speakers: Buurman\, Merret (German Climate Computing Centre (D
 KRZ))\nVirtual Research Environments (VRE) are trending. As data and proce
 ssing become bigger\, more distributed and more collaborative\, more and m
 ore research communities call for a VRE to execute data-driven science o
 n the cloud.\n\nThe advantages are obvious: Processing is no longer bound by
  the user's laptop's computing power and memory. Large datasets do not hav
 e to be downloaded to local disk before they can be processed. This is an 
 advantage especially for researchers from institutions or locations where 
 access to good hardware\, large network bandwidth or performant computing 
 facilities is difficult to obtain.\n\nTo be attractive and useful to user
 s\, VREs need to provide efficient access to interesting datasets. In conj
 unction with Open Data\, accessed efficiently through a VRE\, they can be 
 a catalyst for Open Science. If designed with this intention\, processing
  results can easily be shared and openly published in their turn. Similarl
 y\, the processing workflows can often be made available to and reproducib
 le by others. This encourages the FAIRness of not only the data\, but als
 o the processing services.\n\nDeveloping such a VRE holds some challenges
 \, main
 ly because of the multitude of actors and tools. Within this lightning talk
 \, examples from the geosciences will be used to highlight specific challe
 nges and solutions.\n\nOn a desktop\, every researcher puts together their
  own collection of resources\, tools and applications. A VRE generally tri
 es to replace that desktop environment with an online environment. As such i
 t is usually aimed at a larger group of different users and thus has to ca
 ter to varied needs.\n\nThe tools that researchers use for their work exis
 t already and need to be incorporated into the VRE. They may be quite di
 verse\, written in different programming languages\, frameworks\, etc. A
 s re-implementing the tools is of course not an option\, a way must be f
 ound to efficiently integrate diverse existing applications into a commo
 n VRE and to keep the VRE extensible so that future services can be incl
 uded.\n\nAnot
 her challenge is the development mode. Often\, VREs are not commercial sof
 tware products developed by commercial software companies\, but they are d
 eveloped in research communities or in consortia between research institut
 ions and research infrastructure providers. This leads to distributed\, he
 terogeneous development teams\, with additional effort required to manag
 e the teams and maintain effective communication.\n\nClosely related to t
 his is the funding sche
 me. Not being commercial products\, the development and the operation of a
  VRE need to be funded through other mechanisms. Typical ones are those of
  H2020\, national or even cross-institutional funding. Such programs usua
 lly fund development efforts\, but often operations and hardware acquisiti
 on are not sufficiently funded.\n\nhttps://indico.egi.eu/event/3973/contri
 butions/9304/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9304/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Challenges at the Square Kilometre Array (SKA)
DTSTART;VALUE=DATE-TIME:20181009T111500Z
DTEND;VALUE=DATE-TIME:20181009T113000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-161@indico.egi.eu
DESCRIPTION:Speakers: Hessling\, Hermann (Univ. of Applied Sciences (HTW) 
 Berlin)\nThe Square Kilometre Array (SKA) will be a radio telescope distri
 buted over two continents: In South Africa approx. 190 parabolic antennas 
 will be built\, in Australia more than 100\,000 dipole antennas. The compu
 ting at SKA has to cope with next-generation big data analytics challenges
 : So much data will be taken that only a tiny fraction can be stored in l
 ong-term archives. Extracting the relevant astronomical information fro
 m huge data streams has to be done nearly in real time. Due to the complexit
 y of the workflows\, enormous computing power is needed. Moreover\, the f
 antastic resolution of the antennas ultimately results in 3D images of the uni
 verse\, which may become as large as one petabyte: Traditional computing a
 rchitectures are not designed for analyzing objects of such size.\n\nThe s
 ignals from the antennas are "interfered" in local stations and sent to tw
 o computing centers in South Africa and Australia\, respectively. The ante
 nnas generate a 24/7-stream of "raw data" of the order of 2 Pb/s\, which i
 s more than the global internet traffic (~360 Tb/s\, Cisco 2016). In both 
 computing centers the incoming data are analyzed iteratively by complicate
 d workflows to strongly reduce the data volumes. The outputs of the centr
 al data centers are called *science data products* and will be transported
  to a few “Regional Centres”. In Europe there will be one virtual Regi
 onal Centre that is physically distributed over the European SKA member st
 ates. The community of astronomers can access SKA data only via the Region
 al Centres.\n\nThe project AENEAS (Advanced European Network of E-infras
 tructures for Astronomy with the SKA) is developing a design for the Europ
 ean Regional Centre. The talk will give an overview of the current statu
 s of AENEAS.\n\nHuge data objects (~1 PB / object) can only be analyzed suf
 ficiently fast if they are stored "in-memory". This needs a radical change
  in the design of computing infrastructures\, away from "processor-centr
 ic computing" towards "memory-driven computing". The talk will give an overview
  of the results of two recent workshops\, where the need of a paradigm shi
 ft was discussed by big data analytics experts:\n\n- *Exascale Data Center
 *\, Berlin\, Jan. 30\, 2018\n- *Memory-driven Computing for Big Data Analy
 tics*\, Berlin\, May 30\, 2018\n\nsee http://bigdata.htw-berlin.de. \n\nFi
 nally\, it will be argued that the big data analytics challenges at SKA ar
 e not just a "do it bigger and do it faster" business (G. Longo). Almost al
 l data (more than 99.999 % of the raw data) are already rejected before a h
 uman researcher has had the chance to start an analysis. This requires in p
 articular the development of highly parallelizable machine learning techniq
 ues\, which are currently not available. Suitable statistical procedures ar
 e needed for evaluating the quality of the remaining data\, as done\, for e
 xample\, in high-energy physics at the Large Hadron Collider (LHC). M
 oreover\, developing a scalable distributed memory-driven computing infras
 tructure is an interdisciplinary challenge in which scientists from differe
 nt disciplines and industry have to cooperate.\n\nhttps://indico.egi.eu/even
 t/3973/contributions/9305/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9305/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Metadata in Astronomy - LOFAR
DTSTART;VALUE=DATE-TIME:20181011T112500Z
DTEND;VALUE=DATE-TIME:20181011T114000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-254@indico.egi.eu
DESCRIPTION:Speakers: Oonk (SURFsara BV)\nhttps://indico.egi.eu/eve
 nt/3973/contributions/9232/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9232/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Opening
DTSTART;VALUE=DATE-TIME:20181009T080000Z
DTEND;VALUE=DATE-TIME:20181009T084500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-7@indico.egi.eu
DESCRIPTION:Speakers: Heitor\, Manuel (Minister of Science and Higher Educ
 ation of Portugal)\nIncluding a "first-comers" introduction to the organis
 ing RIs and DI4R\n\nhttps://indico.egi.eu/event/3973/contributions/9233/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9233/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Documenting Heritage Science: A CIDOC CRM-based System for Modelli
 ng Scientific Data
DTSTART;VALUE=DATE-TIME:20181009T160000Z
DTEND;VALUE=DATE-TIME:20181009T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-92@indico.egi.eu
DESCRIPTION:Speakers: Castelli\, Lisa (INFN)\nThe paper presents a complet
 e system for documenting scientific data produced in heritage sciences\, b
 ased on a data model intended to generate valuable information and suitabl
 e metadata so that the data can be stored\, accessed\, queried\, shared and reused 
 in various contexts and research scenarios.\n \nThe system is built around
  the concept of a general meta-model\, flexible enough to provide descript
 ions\, in a formal language\, of all the entities and issues encountered i
 n documenting heritage science results. It is inspired by CIDOC CRM principl
 es for data modelling and maintains a full compatibility with CIDOC CRM an
 d its extensions\, especially CRMsci\, CRMdig and CRMpe. The use of a wide
  set of thesauri and vocabularies for the standard and unambiguous descrip
 tion of all the entities will guarantee internal coherence at data level. 
 Thus\, our system is capable of identifying and modelling physical and dig
 ital objects\, events\, activities and actors\, i.e. people and teams invo
 lved in the various research events and of providing straightforward conne
 ction and interoperability with the general documentation of cultural heri
 tage\, tightly linking scientific analyses to their heritage context.\n\nT
 he metadata model supported by our system will also stand at the very foun
 dations of DIGILAB\, a digital infrastructure developed by the E-RIHS Euro
 pean initiative to facilitate virtual access to tools\, services and data 
 for heritage research. The DIGILAB infrastructure will rely on a network of fe
 derated repositories and will enable finding and accessing data through an
  advanced semantic search system operating on a registry containing metada
 ta describing individual datasets. Thus\, our data model is designed to ma
 ke DIGILAB compliant with the EU policies and strategies concerning scient
 ific data\, including the FAIR data principles\, the Open Research Data po
 licy\, and the EOSC strategy. It will guarantee data interoperability and 
 will foster re-use of information and services to process the data accordi
 ng to specific research questions and use requirements.\n\nFirst tests hav
 e been carried out on datasets resulting from various scientific analyses 
 carried out by different research institutions\, including the Italian Nat
 ional Research Council (CNR)\, the Istituto Superiore per la Conserva
 zione ed il Restauro (ISCR)\, the Opificio delle Pietre Dure (OPD) and the
  National Institute of Nuclear Physics (INFN). A specific subset of inform
 ation derived from the activity of the INFN-CHNet network (Cultural Herita
 ge Network of the Italian National Institute for Nuclear Physics) is repor
 rted in this paper. The heterogeneity of the network analytical techniques
  examined and encoded has provided a good test bench for the developed mo
 del\, proving its effectiveness and delineating a solid path for its futur
 e developments.\n\nThe encoding of scientific information by means of our 
 system demonstrates the validity of this approach to different cases and o
 ffers an overview of the whole model and of how information encoded by mean
 s of its classes and properties will benefit the implementation of the DIG
 ILAB infrastructure.\n\nhttps://indico.egi.eu/event/3973/contributions/923
 4/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9234/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Introduction
DTSTART;VALUE=DATE-TIME:20181009T131500Z
DTEND;VALUE=DATE-TIME:20181009T132500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-245@indico.egi.eu
DESCRIPTION:Speakers: Manola\, Natalia (University of Athens\, Greece)\nht
 tps://indico.egi.eu/event/3973/contributions/9237/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9237/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panel session and Questions
DTSTART;VALUE=DATE-TIME:20181010T111500Z
DTEND;VALUE=DATE-TIME:20181010T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-244@indico.egi.eu
DESCRIPTION:Speakers: Van Nieuwerburgh\, Inge (University of Gent)\n1.    
  Funder perspective:  Joao Nuno Ferreira\, FCT\, Portugal. The speaker wil
 l outline their situation and need for OS\, including the need to monitor 
 content output. Why are synchronized information systems important for fun
 ders?\n\n2.     National perspective: Mojca Kotar\, University of Ljubljan
 a. The speaker will outline their situation and need for OS. The reality o
 f implementing OS at the national level. What are the needs and what infra
 structures do we need to have in place and to converge at the national lev
 el? Where does the national setting fit into EOSC?\n\n3.     RI or resea
 rch manager perspective:  João Dias\, ISCTE-IUL\, Portugal.  The speaker 
 will outline their situation and need for OS and how to implement it withi
 n research groups\, at a local level\, and what the barriers are.\n\nhttps
 ://indico.egi.eu/event/3973/contributions/9238/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9238/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The experience from the EOSC HLEG and the EOSCpilot open consultat
 ion
DTSTART;VALUE=DATE-TIME:20181009T151500Z
DTEND;VALUE=DATE-TIME:20181009T153500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-247@indico.egi.eu
DESCRIPTION:Speakers: Campos\, Isabel (CSIC)\nhttps://indico.egi.eu/event/
 3973/contributions/9239/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9239/
END:VEVENT
BEGIN:VEVENT
SUMMARY:INFRAEOSC-02-2019: Prototyping new innovative services for EOSC
DTSTART;VALUE=DATE-TIME:20181010T081500Z
DTEND;VALUE=DATE-TIME:20181010T084500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-241@indico.egi.eu
DESCRIPTION:Speakers: Burgueño Arjona\, Augusto (Head of Unit "eInfrastru
 cture & Science Cloud"\, Directorate General for Communications Networks\,
  Content and Technology\, European Commission)\, Tzenou\, Georgia (Program
 me Officer\, Unit "eInfrastructure & Science Cloud"\, Directorate General 
 for Communications Networks\, Content and Technology\, European Commission
 )\nThe forthcoming call for proposals INFRAEOSC-02-2019 aims at designing 
 and prototyping novel innovative digital services that cover diverse asp
 ects of the research data cycle\, and will be accessible through the EOSC 
 portal. The services will address current gaps in the offering\, foster in
 terdisciplinary research and serve the evolving needs not only of research
 ers but also of industry and the public sector. Consortia should consider 
 innovative models of collaboration and incentive mechanisms for a user ori
 ented open science approach. The participation of SMEs in the consortia is
  encouraged. The call will open on 16 October 2018\, with a deadline of 29
  January 2019 and a total budget of €28.5 million.\n\nMore information:\n\n
 https://ec.europa.eu/research/participants/portal/desktop/en/opportunities
 /h2020/topics/infraeosc-02-2019.html\n\nhttps://indico.egi.eu/event/3973/c
 ontributions/9241/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9241/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The HOW
DTSTART;VALUE=DATE-TIME:20181010T105500Z
DTEND;VALUE=DATE-TIME:20181010T111500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-243@indico.egi.eu
DESCRIPTION:Speakers: Manghi\, Paolo (Istituto di Scienza e Tecnologie del
 l'Informazione - CNR)\nDifferent stakeholders have different needs in moni
 toring and supporting Open Science. There is a need for monitoring systems
 \, which brings many challenges. Paolo will also present the content a
 cquisition policy of OpenAIRE.\n\nhttps://indico.egi.eu/event/3973/contrib
 utions/9243/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9243/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The WHO
DTSTART;VALUE=DATE-TIME:20181010T103000Z
DTEND;VALUE=DATE-TIME:20181010T105500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-242@indico.egi.eu
DESCRIPTION:Speakers: Mendez\, Eva (University of Madrid)\nThis talk will 
 touch on scholarly communication workflows and how they interplay with OS.
  It will present why OS is important in an institution\, who OS workflows 
 apply to\, and what the realities are for institutions integrating OS into 
 their day-to-day workflows. Institutions also need an understanding of the
 ir own scientific output. This can on
 ly be done with open infrastructures and standards. The talk will also hig
 hlight the importance of managing the long-tail of research.\n\nhttps://in
 dico.egi.eu/event/3973/contributions/9244/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9244/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A single data ecosystem as framework for a joint infrastructure da
 ta world
DTSTART;VALUE=DATE-TIME:20181009T151500Z
DTEND;VALUE=DATE-TIME:20181009T153000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-102@indico.egi.eu
DESCRIPTION:Speakers: Kuchinke\, Wolfgang (Heinrich-Heine University Duess
 eldorf)\nThere exists a need for better solutions for cross-domain data sh
 aring and research collaboration\, especially the need to process a multit
 ude of different data types created in different contexts. To address this
 challenge\, the Research Data Alliance (RDA) together with GEDE (Group of E
 uropean Data Experts) have recently suggested a Data Object (DO) architect
 ure to create a network of DOs linked to persistent identifiers (PID) poin
 ting to associated metadata descriptions connecting multitudes of data rep
 ositories. This DO architecture will enable a global data environment that
  may cause a fundamental change in data practices for more efficient data
  processing\, data sharing and simpler data process automation\, and will 
 certainly affect the area of infrastructure service provision.\n\nInspired
  by biology\, the concept of the ecosystem provides a framework to help co
 mprehend the intertwined and highly interdependent nature of increasing
 ly complex data infrastructures involved in cross-domain research and Open
  Science. To maximize the impact of DO architecture on the data environmen
 t\, we need to overcome the fragmentation in data ecosystem concepts to wo
 rk with only a single data ecosystem for the emerging global data infrastr
 ucture. Although different kinds of data ecosystems have already been desc
 ribed\, like ecosystems for big data\, climate data\, biomedical data\, op
 en data and personal data\, we propose to apply the concept of a single da
 ta ecosystem for the whole data world covering all different data types us
 ed in research with the DO architecture as its main structural component.\
 n\nThe advantage is that such a concept of a single\, global data ecosyste
 m may allow unique access to novel ideas and analysis methods that may be 
 useful to further the development and evolution of European research infra
 structures and e-infrastructures and improve their operations in the globa
 l context. Because research questions have become bigger and more complex\, 
 the need for cross-domain data sharing has to be addressed. Seeing each in
 frastructure as a separate ecosystem is too restrictive\; one should use a
  single\, extended data ecosystem that covers open and protected\, big dat
 a and micro data\, as well as data from all different research domains.  E
 cosystems are not fixed structures\, but change and evolve\, going 
 through cycles of growth and reorganization. In this way\, they can repres
 ent a framework to generate ideas to be applied to data ecosystems and the
 ir infrastructure components to study their functioning\, further developm
 ent and sustainability. To support the analysis of the data ecosystem we a
 dapt concepts from ecological analysis\, like using as basic components da
 ta generators\, data consumers\, data flows\, data re-users\, and feedback
  loops.\n\nIn summary\, the consistent exploitation of the concept of a s
 ingle data ecosystem may produce novel solutions for the described limitati
 ons in existing data sharing processes\, because the interoperation of res
 earch infrastructures and e-infrastructures is a prerequisite for streamli
 ning cross-disciplinary processes and global cooperation that will become 
 easier and more transparent by the proper application of the DO architectu
 re as part of a global data infrastructure.\n\nhttps://indico.egi.eu/event
 /3973/contributions/9245/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9245/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Federated engine for information exchange (Fenix)
DTSTART;VALUE=DATE-TIME:20181010T143000Z
DTEND;VALUE=DATE-TIME:20181010T144500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-100@indico.egi.eu
DESCRIPTION:Speakers: Fiameni\, Giuseppe (CINECA - Consorzio Interuniversi
 tario)\nThe neuroscience community has to cope with various data sources e
 ach with their specific formats\, modalities\, spatial and temporal scales
  (i.e. from multi-electrode array measurements to brain simulations) and w
 ith no fixed relationship between them. Thus\, the scientific approaches a
 nd workflows of this community are typically a moving target\, which is mu
 ch less the case in other disciplines\, e.g.\, high-energy physics. Furthe
 rmore\, the community is experiencing an increasing demand for computing re
 sources to process data. However\, at present\, solutions to federate diff
 erent data sources and couple them with high-end computing capabilities do
  not exist\, or are very limited.\n\nFenix (https://fenix-ri.eu/) is based
  on a consortium of five European supercomputing and data centres (BSC\, C
 EA\, CINECA\, CSCS\, and JSC)\, which agreed to deploy a set of infrastruc
 ture services (IaaS) and integrated platform services (iPaaS) to allow the
  creation of a federated infrastructure and to facilitate access to scalab
 le compute resources\, data services\, and interactive compute services. T
 he implementation of the Fenix infrastructure is guided by the following c
 onsiderations:\n\n - It is based on a co-design approach with a set of div
 erse domain specific use cases which guides both the design of the archite
 cture and its validation.\n - Data need to be brought in close proximity t
 o the processing resources at different infrastructure service providers t
 o take advantage of high bandwidth with data repositories and services. \n
  - Federating multiple data resources shall enable easy replication of dat
 a at multiple sites to improve resilience\, availability as well as access
  performance of data.\n - Services are being implemented in a cloud-like m
 anner that is compatible with the work cultures in scientific computing an
 d data science. Specifically\, this entails developing interactive computi
 ng capabilities next to extreme-scale computing and data platforms of the 
 participating data centres.\n - The level of integration should be kept as
  low as possible to reduce operational dependencies between the sites (to 
 avoid\, e.g.\, the need for coordinated maintenance and upgrades) and to a
 llow for the local infrastructures to evolve following different technolog
 y roadmaps.\n\nBased on the above principles\, the Fenix federated infrast
 ructure includes these main components:\n\n - Scalable Compute Services\;\
 n - Interactive Compute Services\;\n - Active Data Repositories based on f
 ast memory and active storage tiers\;\n - Archival Data Repositories for l
 ong term preservation\; and     \n - Information/catalogue services to all
 ow findability and recovery of data.\n\nThe major advantages of the Fenix 
 federated architecture are: the use case driven design\, the scalability o
 f the services\, and the easy extensibility\, which will in the future all
 ow a move to new state-of-the-art solutions or enable workflows for other sc
 ientific communities.\n\nThe first steps towards realisation of the Fenix 
 infrastructure will be done within the Interactive Computing E-Infrastruct
 ure (ICEI) project\, funded by the EC within the Human Brain Project (HBP\
 , https://www.humanbrainproject.eu/). The users of the HBP will be the pri
 me consumers of the resources provided through the infrastructure. Additio
 nal resources will be provided to European researchers at large via PRACE 
 (http://www.prace-ri.eu/).\n\nhttps://indico.egi.eu/event/3973/contributio
 ns/9246/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9246/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Addressing Energy Wall for Exascale Computing: Whole System Design
  implementation at CINES for Energy Efficient HPC
DTSTART;VALUE=DATE-TIME:20181010T161500Z
DTEND;VALUE=DATE-TIME:20181010T163000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-107@indico.egi.eu
DESCRIPTION:Speakers: BOYER\, Eric (CINES (Centre Informatique National de
  l'Enseignement Supérieur)\, FRANCE)\nCINES has initiated the deployment 
 of the “Whole System Design for Energy Efficient HPC” solution on its 
 3.5 Pflops production system Tier1 (OCCIGEN). This solution\, developed wi
 thin the PRACE-3IP PCP (joint Pre-Commercial Procurement involving CINECA\,
  CSC\, EPCC\, GENCI and JUELICH)\, is the result of R&D services for impr
 ovement of the energy efficiency of HPC systems\, to address the energy wa
 ll of Exascale Computing. As such PRACE PCP combines elements of conventio
 nal hardware procurement with the provision of funding for research and pr
 oduct development. It was setup to procure and develop highly energy effic
 ient HPC systems available for general use\, i.e. able to run real applica
 tions\, and to be operated within a conventional HPC computing centre but 
 nevertheless achieve very high total-system energy efficiency. In addition
  to the technical goals the PCP intended to develop the HPC vendor eco-sys
 tem within the European Economic Area (EEA) and as such it is expected to 
 result in commercially viable products. As a result\, ATOS integrated into
  its roadmap an energy-optimization suite developed during the PCP (BEO\, 
 BDPO\, HDEEVIZ\, SLURM energy-saving plugins)\, which is part of the Atos-
 Bull Supercomputer Suite (SCS5 R2) available since Q1 2018.\n\nWhile hosting one 
 of the PRACE-3IP PCP prototypes\, CINES has collaborated with EoCoE (Energ
 y Oriented Center of Excellence) and PRACE 4IP WP7 (application enabling a
 nd optimization) to assess and provide guidance to the PCP R&D development 
 from ATOS.\n\nCINES has setup a monitoring architecture and tools to compl
 ement fine grain monitoring by coarse grain datacenter data collection and
 analysis.\n\nThe implementation in a production environment of a “Whole S
 ystem Design for Energy Efficient HPC” is a key element in building\, in 
 collaboration with GENCI\, a new paradigm for application and HPC efficien
 cy\, changing from time-to-solution towards energy-to-solution optimisatio
 n. The global collection of energy and resource consumption is a key repos
 itory of application behaviour and profiles for data analysis. It provides
  guidance for upcoming procurements\, such as PPI4HPC (2019/2020) and CINE
 S' next Tier1 (2020)\, and provides input for EuroHPC platforms (2022/2
 023).\n\nhttps://indico.egi.eu/event/3973/contributions/9249/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9249/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Open Science for the Neuroinformatics community
DTSTART;VALUE=DATE-TIME:20181009T134500Z
DTEND;VALUE=DATE-TIME:20181009T140000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-38@indico.egi.eu
DESCRIPTION:Speakers: POP\, Sorina (CNRS)\nOpenAIRE-Connect is a European 
 project which aims at providing services enabling uniform exchange of rese
 arch artefacts (literature\, data\, and methods)\, with semantic links bet
 ween them\, across research communities and content providers in scientifi
 c communication.\n\nThe Neuroinformatics community in OpenAire-Connect is 
 represented by members of the France Life Imaging (FLI) collaboration. Som
 e of the FLI members are also connected to INCF\, the International Neuroi
 nformatics Coordinating Facility\, to integrate solutions at a global leve
 l. In this context\, we aim at leveraging OpenAIRE-Connect services and gi
 ving our community members the possibility to easily publish and exchange re
 search artefacts from FLI platforms\, such as VIP (for processing) and Sha
 noir (for data management). This will enable open and reproducible science
 \, since literature\, data\, and methods can be linked\, retrieved\, and r
 eplayed by all the members of the community. \n\nVIP (Virtual Imaging Plat
 form) is a web portal (https://vip.creatis.insa-lyon.fr) for the simulatio
 n and processing of massive data in medical imaging. By effectively levera
 ging the computing and storage resources of the EGI e-infrastructure\, VIP
  offers its users high-level services enabling them to easily execute medi
 cal imaging applications. As of June 2018\, VIP has more than 1000 register
 ed users and about 20 applications open to all its users.\n\nShanoir is an
  open source neuroinformatics platform designed to share\, archive\, searc
 h and visualize neuroimaging data. It provides a user-friendly secure web 
 access and offers an intuitive workflow to facilitate the collecting and r
 etrieving of neuroimaging data from multiple sources. Shanoir comes with m
 any features\, such as anonymization of data and support for multi-center 
 clinical studies on subjects or groups of subjects.\n\nBy leveraging OpenAIRE-
 Connect services and integrating them into VIP and Shanoir\, we aim at pro
 viding the neuroinformatics community with Open Science tools to enhance t
 he impact of science and research.\n\nhttps://indico.egi.eu/event/3973/con
 tributions/9253/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9253/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Have a CoP of T in our café!
DTSTART;VALUE=DATE-TIME:20181009T151500Z
DTEND;VALUE=DATE-TIME:20181009T164500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-35@indico.egi.eu
DESCRIPTION:Speakers: Whyte\, Angus (UE)\, Leenarts\, Ellen (DANS)\, La Ro
 cca\, Giuseppe (EGI.eu)\, Kuchma\, Iryna (EIFL)\nThe session is organised 
 by a group of people who coordinate training programmes of research and e-
 infrastructures and who took the initiative of starting a Community of Pra
 ctice (CoP) for training coordinators and training managers. Through the C
 oP we aim to map out the training activities of various pan-European\, EOS
 C-related initiatives and strengthen their training capacity by improved a
 lignment\, sharing experiences and good practices\, initiating cross-infra
 structure training activities. ARDC\, CESSDA\, DARIAH\, EGI\, ELIXIR\, EOS
 Cpilot\, EOSC-hub\, EUDAT\, FOSTER\, FREYA\, GÉANT\, OpenAIRE and PRACE a
 lready expressed interest in participating in the CoP.\n \nThe workshop is
  a follow-up to a training workshop that was organised by the EUDAT training
  team in January in Porto and a presentation at the RDA in March. The Caf
 é is an ideal format to discuss offline some of the questions circulating
  in the group\, share experiences of what has worked and what has not worked\, 
 share ideas and strategy to help the multi-domain knowledge transfer acros
 s borders. Over the coming years there will be new challenges to capture c
 oming from the needs of cross-domain data-driven science. The unprecedente
 d access to data and the computational ability to process it will produce 
 new accelerated breakthroughs.\n \nDuring this session\, we’ll focus on 
 exciting new developments\, we will address urgent gaps and\, in the end\,
  we will try to highlight common strategies that can be adopted for improv
 ing the knowledge transfer within the group. \n\nAgenda:\n\n1. Introductio
 n of the Community of Practice and to the session - Iryna Kuchma\, OpenAIR
 E\n2. Open badges: What are they and how are they used? - Giuseppe La Rocc
 a\, EGI Foundation \n3. Skills and competences frameworks\, including the 
 EOSCpilot consultation on its Skills and Capability Framework – Angus Wh
 yte\, DCC\n4. How to make training materials discoverable? - Ellen Leenart
 s\, DANS \n5. Making an impact that matters – Irina Mikhailava\, GÉANT
  \n6. Organising Summer Schools and reflecting on the approach \n\nWhat
 ’s in it for you? You should join this session when you want to meet you
 r fellow training coordinators and become part of the Community – let’
 s have a “CoP of T” together!\n\nhttps://indico.egi.eu/event/3973/cont
 ributions/9259/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9259/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Applications Database: New features for user communities
DTSTART;VALUE=DATE-TIME:20181010T113000Z
DTEND;VALUE=DATE-TIME:20181010T114500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-34@indico.egi.eu
DESCRIPTION:Speakers: Chatziangelou\, Marios (IASA)\nThe EGI Applications 
 Database (AppDB) is a central service that stores and provides information
  about software solutions in the form of native software products and virt
 ual appliances\, about the scientists involved\, and about publications de
 rived from the aforementioned solutions. Furthermore\, through its VMOps D
 ashboard\, it enables users to deploy and manage Virtual Machines on the E
 GI Cloud infrastructure. \n\n**Persistent Identifiers and OpenAIRE integra
 tion**\n\nThe AppDB’s development process has always been focused on pro
 viding a solid user experience\, by adding new and improving on existing f
 eatures. In this light\, AppDB has recently been extended with *support fo
 r persistent identifiers* (PIDs)\, via GRNet’s HANDLE.NET service\, for 
 each registered solution. This makes sharing\, documenting\, and referenci
 ng solutions easier and more consistent\, both for end users\, as well as 
 between services. As an example of the latter\, this new feature allows fo
 r tighter\, two-way integration with OpenAIRE. AppDB has been working on i
 mproving its existing integration\, offering OpenAIRE data about projects
  and organizations within its portal and on exporting data about software 
 solutions and virtual appliances back to OpenAIRE through its new OAI-PMH 
 service.\n\n**Improvements to VA management and VM operations**\n\nAs clou
 d-related services are rapidly proliferating\, a versatile\, friendly user
  experience is crucial to their success. Up until now\, the AppDB portal r
 equired that users maintain the information for each release of a virtual 
 appliance manually. This may become cumbersome for VA authors who use au
 tomated services or continuous integration processes to develop and build 
 new VAs. In order to be able to integrate with such automated release flow
 s\, a continuous delivery policy has been introduced. When this policy is 
 enabled and configured for a VA\, it allows the AppDB backend to monitor f
 or new virtual appliance releases and automatically publish them in the Ap
 pDB registry\, without requiring any user interaction through the portal. 
 Moreover\, with respect to VM operations through the VMOps dashboard\, som
 e of AppDB latest developments have been focused on giving users *more con
 trol over the resources acquired by their deployed VMs*. Among other featu
 res\, users can now request and release public IP addresses and attach new
  block storage at any point of the VM lifecycle. Finally\, with the upcoming 
 use of OpenID Connect\, users may authenticate to AppDB’s backend servic
 es and access their deployed VMs without the need for an intermediate proxy ce
 rtificate.\n\n**Consolidation of backend services**\n\nStable and well-tun
 ed backends are crucial to a satisfactory end result for frontend services 
 such as the AppDB portal and its VMOps dashboard. To this end\, a new info
 rmation system service has been developed to harvest and correlate infrast
 ructure information from resource providers and other external services. I
 ts goal is to unify and sanitize infrastructure information and provide si
 mple query interfaces from a single access point. Furthermore\, as OCCI is
  becoming obsolete and difficult to maintain for the resource providers\, 
 efforts are being made to populate VM image access information through eac
 h available Cloud Management Framework (CMF) native API\, instead of relyi
 ng on OCCI semantics.\n\nhttps://indico.egi.eu/event/3973/contributions/92
 60/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9260/
END:VEVENT
BEGIN:VEVENT
SUMMARY:dCache: storage for XFEL scientific use-cases and beyond
DTSTART;VALUE=DATE-TIME:20181010T133000Z
DTEND;VALUE=DATE-TIME:20181010T134500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-94@indico.egi.eu
DESCRIPTION:Speakers: Schuh\, Michael (DESY)\, Fuhrmann\, Patrick (DESY)\n
 The dCache project provides open-source storage software deployed\ninterna
 tionally to satisfy ever more demanding scientific storage\nrequirements. 
  Its multifaceted approach provides an integrated way of\nsupporting diffe
 rent use-cases with the same storage\, from high\nthroughput data ingest\,
  through wide access and easy integration with\nexisting systems.  In supp
 orting new communities\, such as medical\nresearch\, photon science/XFEL a
 nd microbiology\, dCache is evolving to\nprovide new features and access t
 o new technologies.\n\nWhatever the use case\, for federated storage to wo
 rk well some knowledge\nfrom each storage system must exist outside that s
 ystem. This is needed\nto allow coordinated activity. To support such scen
 arios dCache provides\na stream of internally generated events. In this ap
 proach the storage\nsystems (rather than the clients) become the coordinat
 ing service\,\nnotifying interested parties of key events.\n\nStorage even
 ts are also useful in other contexts: catalogues are\nnotified whenever da
 ta is uploaded or deleted\, tape becomes more\nefficient because analysis c
 an start immediately after the data is on\ndisk\, caches can be "smart" fe
 tching new datasets pre-emptively and\nremoving cached content when the so
 urce is deleted.\n\nIn this paper we will present work done at DESY in bui
 lding a\nlow-latency\, compute cloud facility for various XFEL workflows. 
  This\nwas achieved by combining dCache storage events with various Open S
 ource\nprojects\, such as Apache Kafka\, Apache OpenWhisk and Kubernetes. 
  The\nresulting "serverless" cloud service is similar to AWS Lambda or Goo
 gle\nCloud Functions.  It allows the infrastructure to deploy additional\n
 resources automatically\, seamlessly scaling to match the demand.\n\nhttps
 ://indico.egi.eu/event/3973/contributions/9261/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9261/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EUDAT B2FIND
DTSTART;VALUE=DATE-TIME:20181011T105000Z
DTEND;VALUE=DATE-TIME:20181011T111000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-252@indico.egi.eu
DESCRIPTION:Speakers: Martens\, Claudia (Deutsches Klimarechenzentrum / Ge
 rman Climate Computing Center)\nhttps://indico.egi.eu/event/3973/contribut
 ions/9262/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9262/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Project “UNEKE”: composing storage infrastructures for researc
 h data: a roadmap for higher education institutions
DTSTART;VALUE=DATE-TIME:20181010T141500Z
DTEND;VALUE=DATE-TIME:20181010T143000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-61@indico.egi.eu
DESCRIPTION:Speakers: Brenger\, Bela (RWTH Aachen University)\nWhile vario
 us scientific communities started to develop and establish mature infrastr
 uctures\, (e.g. repositories) to support researchers’ data management\, 
 other research areas are still facing the challenge of establishing suitab
 le infrastructures. Thus\, researchers in these disciplines rely on techni
 cal opportunities offered by their local research institution (Becker et a
 l.\, 2012). In order to support researchers’ adequate research data mana
 gement\, research institutions carried out surveys to investigate research
 er’s requirements. While most of these investigations are restricted to
  individual institutes or have small sample sizes (Rudolph et al.\, 2015)
 \, the literature shows that research institutions face two types of barr
 iers that must be taken into account when establishing suita
 ble infrastructures. Those are technical barriers (e.g.  infrastructure\, 
 security) as well as non-technical barriers (e.g. ethics\, management) (Wi
 lms et al.\, 2018). \nTherefore\, we present the results of the research p
 roject UNEKE\, which aims to find out more about the technical and non-tec
 hnical requirements of several research areas. In this work\, we present t
 he results of a qualitative research investigation including focus group i
 nterviews of 91 researchers from different research areas. For this explor
 atory\, qualitative approach\, 12 focus group workshops with 91 employees 
 from University Duisburg-Essen and RWTH Aachen University were conducted i
 n late 2018. \nThis allowed us to gain insights into attitudes\, thoughts 
 and experiences that researchers hold about RDM and how this affects daily
  conduct with RDM tools and infrastructures. We expected that research dat
 a itself and its handling might be highly specific for different research 
 areas. In order to monitor disciplinary differences\, the participants wer
 e divided into groups of researchers from natural sciences\, engineering\,
  life sciences\, humanities and social sciences. These focus groups were c
 onducted at both universities and structured into an introduction followe
 d by a distribution into smaller subgroups of 2-4 researchers. In these s
 ubg
 roups the question: “What needs should be considered when developing and
  introducing an infrastructure for research data management?” was discus
 sed. Results of these discussions were then compiled\, presented to\, and 
 discussed by the entire group. After the discussion\, the group structured
  the topics to create a thematic mapping. These mappings form the basis f
 or the categorical system being developed within UNEKE.\nFirst resu
 lts show that requirements are field specific and that the set of categori
 es resulting from the analysis is similar at both participating universiti
 es\, thus indicating its validity. While the field specific requirements a
 re often technical\, non-technical ones such as governance guidelines and 
 training show significant overlap. \n\nBecker\, J.\, Knackstedt\, R.\, Lis
 \, L.\, Stein\, A. and Steinhorst\, M. (2012) ‘Research Portals: Status 
 Quo and Improvement Perspectives’\, International Journal of Knowledge M
 anagement\, 8(3)\, pp. 27–46.\nRudolph\, D.\, Thoring\, A. and Vogl\, R.
  (2015) ‘Research Data Management: Wishful Thinking or Reality?’\, PIK
  - Praxis der Informationsverarbeitung und Kommunikation\, 38(3–4)\, pp.
  113–120.\nWilms\, K.\, Stieglitz\, S.\, Buchholz\, A.\, Vogl\, R. and R
 udolph\, D. (2018) ‘Do Researchers Dream of Research Data Management?’
 \, Proceedings of the 51st Hawaii International Conference on System Scien
 ces\, pp. 4411–4420.\n\nhttps://indico.egi.eu/event/3973/contributions/9
 264/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9264/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC-hub and eInfraCentral cooperation framework
DTSTART;VALUE=DATE-TIME:20181010T140000Z
DTEND;VALUE=DATE-TIME:20181010T140500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-258@indico.egi.eu
DESCRIPTION:Speakers: Sanchez\, Jorge (JNP)\nhttps://indico.egi.eu/event/3
 973/contributions/9265/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9265/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Analysis of National Nodes as foundation for the European Open Sci
 ence Cloud
DTSTART;VALUE=DATE-TIME:20181010T161500Z
DTEND;VALUE=DATE-TIME:20181010T163000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-179@indico.egi.eu
DESCRIPTION:Speakers: Holmgren\, Sverker (VR-SNIC)\nBoth the European Open
  Science Cloud (EOSC) and the European Data\nInfrastructure (EDI) are envi
 saged as federated initiatives that will be\nbuilt on top of country-level
  counterparts in order to succeed. The\ne-Infrastructure Reflection Group 
 (e-IRG) addressed this point already\nin its 2016 Roadmap and recommended 
 that national governments and\nfunding agencies should reinforce their eff
 orts to:\n\n1) embrace e-Infrastructure coordination at the national level
  and build\nstrong national e-Infrastructure building blocks\, enabling co
 herent and\nefficient participation in European efforts\;\n\n2) together a
 nalyse and evaluate their national e-Infrastructure funding\nand governanc
 e mechanisms\, identify best practices\, and provide input to\nthe develop
 ment of the European e-Infrastructure landscape.\n\nAlso in the Competitiv
 eness Council conclusions (28/29 May 2018) the\nMember States are encourag
 ed to “invite their relevant communities\, such\nas e-infrastructures\, 
 research infrastructures\, Research Funding\nOrganisations (RFO’s) and R
 esearch Performing Organisations (RPO’s)\, to\nget organised so as to pr
 epare them for connection to the EOSC.”\n\nHowever\, the current situati
 on across several Member States (MS) and\nAssociated Countries (AC) is tha
 t there are different speeds and levels\nof access and integration to the 
 European initiatives.\n\nTo proceed\, it is imperative that these differen
 ces are identified early\non and specific actions are taken at national an
 d European levels. e-IRG\nis working to address this challenge\; the first
  step has been to collect\ninformation from each MS/AC about the current s
 tatus of their\ne-Infrastructure\, based on a survey addressed to the nati
 onal\nministries. The second step is conducting an analysis which will be 
 the\ncore of e-IRG’s next policy document.\n\nIn the survey the word e-I
 nfrastructure is assumed to cover various\n'layers' or components\, in par
 ticular: networking\, computing\, data and\ntools & services. The question
 s focus on acquiring information about the\norganizations responsible for 
 providing e-infrastructure services\, their\ngovernance model\, their fund
 ing methods\, and their access policies. We\nalso collected information on
  national domain-specific e-Infrastructures\nor other domain areas of part
 icular interest in each country and whether\nthey use the horizontal e-Inf
 rastructure services.\n\nThe scope of the presentation is thus to present 
 the preliminary\nanalysis of the survey results\, along with a first set o
 f\nrecommendations for the different stakeholders\, namely e-Infrastructur
 e\nproviders\, funders\, policy makers and users and get some initial feed
 back.\n\nWe have clustered the countries we have received replies from bas
 ed on\nthe existence of few\, several or many providers at a national leve
 l. The\nresults show that there is fragmentation in the national providers
  in\nseveral countries. It can also be seen that fragmentation of service\
 naccess and provision exists even in countries with advanced\ne-infrastruc
 ture services. Also\, since in some cases we identified\ndifferences in t
 he number of providers in each domain\n(network\, computing\, data or oth
 er)\, for every cluster we proceeded to a\nfurther categorization based o
 n the number of organizations with similar\nservice domains.\n\nhttps://i
 nd
 ico.egi.eu/event/3973/contributions/9268/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9268/
END:VEVENT
BEGIN:VEVENT
SUMMARY:HNSciCloud – Large-scale data processing and HPC for science wit
 h T-Systems hybrid cloud
DTSTART;VALUE=DATE-TIME:20181010T104500Z
DTEND;VALUE=DATE-TIME:20181010T110000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-178@indico.egi.eu
DESCRIPTION:Speakers: Mar\, de la\, Jurry (T-Systems International GmbH)\n
 As the result of joint R&D work with 10 of Europe’s leading public resea
 rch organisations\, led by CERN and funded by the EU\, T-Systems provides 
 a hybrid cloud solution\, enabling science users to seamlessly extend thei
 r existing e-Infrastructures with one of the leading European public cloud
  services based on OpenStack – the Open Telekom Cloud.\nWith this new ap
 proach large-scale data-intensive and HPC-type scientific use cases can no
 w be run more dynamically\, reaping the benefits of the on-demand availabi
 lity of commercial cloud services at attractive costs. \nOver the course o
 f the last year\, the prototyping and piloting has confirmed that scienc
 e users can get seamless\, performant\, secure and fully automated access 
 to cloud resources over the GÉANT network\, simplified by the identity fe
 deration with eduGAIN and Elixir AAI. Users can work in a cloud-native way
 \, maintaining existing toolsets\, or choose other OpenStack- and S3-comp
 atible tools from a large and fast-growing community\, e.g. Ansible and T
 erraform\, to run and manage applications. Users remain in full control a
 nd have 
 access to all native functions of the cloud resources\, either through web
  browser\, APIs or CLI. Cloud Management Platforms or Broker solutions are
  not needed\, but may be added if further abstraction is required.\nThe ex
 tensive service menu of Open Telekom Cloud – based on OpenStack – is o
 pening up new functionality and performance for scientific use cases with 
 built-in support for e.g. Docker\, Kubernetes\, MapReduce\, Data Managemen
 t\, Data Warehouse and Data Ingestion services. The services can be combin
 ed with a wide range of compute and storage options. Compute can consist o
 f any combination of containers\, virtual\, dedicated or bare metal server
 s. Server-types can be optimized for disk-intensive\, large-memory\, HPC o
 r GPU applications. The extensive network and security functions enable us
 ers to maintain a private and secure environment\, whereby access to servi
 ces can make full use of 10G networking. This is extended with the new Hybr
 id service\, providing the user with a dedicated fully managed on-premise 
 cloud as complement to the public cloud service.\nThe presentation will gi
 ve an overview of the performance and scale of use cases that have been su
 ccessfully deployed. It will address how large-scale data can be processed
  at new performance levels with hundreds of containers and how data can be
  processed in an intelligent way by pre-fetching the data or leaving the d
 ata remote at the existing infrastructure\, making use of the state-of-the
 -art Onedata Data Management solution from Cyfronet. Furthermore\, the res
 ults of the new high level of transparency and budget control developed wi
 ll be demonstrated. \nTen of Europe’s leading public research organisati
 ons led by CERN launched the Helix Nebula Science Cloud (HNSciCloud) Pre-C
 ommercial Procurement to establish a European hybrid cloud platform that w
 ill support the high-performance\, data-intensive scientific use-cases of 
 this “Buyers Group” and of the research sector at large.\n\nhttps://in
 dico.egi.eu/event/3973/contributions/9269/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9269/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Open\, Effective and Innovative tools to support researchers in Wo
 rldwide Infrastructures
DTSTART;VALUE=DATE-TIME:20181010T153000Z
DTEND;VALUE=DATE-TIME:20181010T165500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-68@indico.egi.eu
DESCRIPTION:Speakers: Lopez Garcia\, Alvaro (CSIC)\nModern science is incr
 easingly becoming computational. Therefore\, for the future advance of sci
 ence it will be indispensable to provide scientists with the proper comput
 ational tools\, breaking down the technological barrier they have been fac
 ing so far.\n\nDue to the advent of the cloud computing model and orchestr
 ation tools\, the resources once identified as sites in the e-Infrastructu
 res have become “liquid” and highly dynamic. Sites can be created\, de
 stroyed\, attached and detached from the infrastructure with a few mouse
  clicks\, at a rate inconceivable only a few years ago. Nowadays\, use case
 s requiring sites with a customized configuration that need to interact wi
 th the rest of the infrastructure are becoming more and more frequent. Rel
 evant examples are the use of resources temporarily available in HPC cente
 rs\, or the creation of diskless sites to cope with peak user activity. To
  address these computational needs\, new functionalities in the field of t
 he data management and new paradigms involving hybrid computational resour
 ces have to be developed and implemented.\n\nAs an example\, the vision of
  bringing hybrid cloud solutions into applications is further pushed by ad
 ditional use case scenarios\, such as moving data from closely shielded HP
 C systems towards more open cloud systems\, or applying advanced machine l
 earning algorithms on top of large data streams (e.g. in intrusion detecti
 on systems). \n\nThese recent advances offer potential solutions to the te
 chnological challenges represented by intensive computing use cases. Conta
 iner technology allows moving entire computer applications over the intern
 et so that they can be executed on various hardware platforms. Appropriate
  orchestrator solutions able to run applications on a hybrid cloud envir
 onment (i.e. different infrastructures and environments\, including GPUs
 ) are now available. The development and adoption of new solutions for d
 ata lifecycle management\, the federation of storage resources with stan
 dard protocols\, and smart caching technologies will significantly reduc
 e data movements and improve access latency.\nMoreov
 er\, new storage models based on policy driven data management and Quality
  of Service\, the metadata handling and manipulation and the data processi
 ng during ingestion will enable the data distribution depending on specifi
 c and complex policies aimed to speedup the analysis exploiting various st
 orage types.\n\nThis World Cafe session will cover the mentioned issues\, 
 showing technological advances in operation from a user perspective. In pa
 rticular we aim to show how a user community could benefit from the servic
 es that are released from the DEEP-Hybrid DataCloud and the eXtreme DataCl
 oud EU funded projects to better implement their user stories\, with a m
 ore powerful and easy-to-exploit approach.\n\nIn order to make the Europ
 ean Op
 en Science Cloud (EOSC) a viable vision\, those services are expec
 ted to become a reliable part of the final solutions available in the EOSC
  Service Catalogue and made available to researchers.\n\nTentative agenda
 :\n\n - Understanding modern research requirements\n - Advanced services o
 n Hybrid DataClouds\n - Advanced services on data management  for distribu
 ted e-infrastructures\n - Common use cases scenarios\n - Solutions adopted
   by external communities\n - Discussion\n\nhttps://indico.egi.eu/event/39
 73/contributions/9270/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9270/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Metadata in ICOS
DTSTART;VALUE=DATE-TIME:20181011T111000Z
DTEND;VALUE=DATE-TIME:20181011T112500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-253@indico.egi.eu
DESCRIPTION:Speakers: Vermeulen\, Alex (ICOS ERIC)\nhttps://indico.egi.eu/
 event/3973/contributions/9271/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9271/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OpenAIRE Research Community Dashboard
DTSTART;VALUE=DATE-TIME:20181011T103000Z
DTEND;VALUE=DATE-TIME:20181011T105000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-251@indico.egi.eu
DESCRIPTION:Speakers: Manghi\, Paolo (Istituto di Scienza e Tecnologie del
 l'Informazione - CNR)\nhttps://indico.egi.eu/event/3973/contributions/9273
 /
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9273/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC-hub Service Catalogue and the marketplace
DTSTART;VALUE=DATE-TIME:20181010T133000Z
DTEND;VALUE=DATE-TIME:20181010T134500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-256@indico.egi.eu
DESCRIPTION:Speakers: Andreozzi\, Sergio (EGI.eu)\nhttps://indico.egi.eu/e
 vent/3973/contributions/9274/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9274/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC-Hub Engagement opportunities
DTSTART;VALUE=DATE-TIME:20181010T134500Z
DTEND;VALUE=DATE-TIME:20181010T140000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-257@indico.egi.eu
DESCRIPTION:Speakers: Ferrari\, Tiziana (EGI.eu)\nhttps://indico.egi.eu/ev
 ent/3973/contributions/9275/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9275/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panel discussion
DTSTART;VALUE=DATE-TIME:20181011T114000Z
DTEND;VALUE=DATE-TIME:20181011T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-255@indico.egi.eu
DESCRIPTION:https://indico.egi.eu/event/3973/contributions/9277/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9277/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Addressing sustainable long-term preservation wall for scientific 
 data: the European Trusted Digital Repository (ETDR) service
DTSTART;VALUE=DATE-TIME:20181009T154500Z
DTEND;VALUE=DATE-TIME:20181009T160000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-162@indico.egi.eu
DESCRIPTION:Speakers: Massol\, Marion (CINES)\nDigital data preservation s
 hould be a key feature of all research projects. Some research data are un
 ique and cannot be replaced if lost or destroyed\; scientific results can 
 be considered trustworthy only if they refer to verifiable data.\n\nIn add
 ition to a bit-stream preservation service that ensures data integrity tec
 hnically\, Trusted Digital Repositories (TDRs) provide a quality of serv
 ice that preserves information over a long period of time. This requir
 es extra and certified capabilities in the area of curation\, metadata\, f
 ile formats\, long-term preservation\, diverse data access levels\, data q
 uality assessment based on the FAIR principles\, etc.\n\nDuring EUDAT and 
 EUDAT2020\, TDRs already used to preserve research data have been assess
 ed and a generic\, innovative and high added-value European service has
  been developed: the European Trusted Digital Repository (ETDR). This co
 nstellation of TDRs and other service providers can offer scientific com
 munities important guarantees enabling data reuse. The three main guarante
 es taken by the ETDR are on:\n - data integrity (i.e. bit-stream preservat
 ion)\,\n - hardware and software readability (i.e. file formats\, emulatio
 n…)\,\n - and understandability of the information over time (i.e. metad
 ata\, information classification …).\n\nThe ETDR front-office will offer
  access to EUDAT\, EGI and OpenAIRE distributed data storage services. Dat
 a that needs to be preserved for the long-term will be automatically inges
 ted into the distributed ETDR back-office infrastructure. In addition\, fr
 ont-ends can be featured in discipline-specific research infrastructures o
 r researcher deposit platforms that do not yet have access to a certifie
 d TDR service. The ETDR also provides customer support on data managemen
 t incl
 uding data management planning and requirements for long-term preservation
 .\n\nWithin EUDAT2020\, Herbadrop and ICEDIG\, the ETDR has already been used
  by about ten national institutions that belong to DiSSCo\, the e-infrastr
 ucture for natural sciences. During the EOSC-Pilot and EUDAT2020 projects\
 , a three-partner association (CERN\, CINECA\, CINES) has demonstrated t
 he genericity\, scalability and accessibility of the ETDR architecture.\n\nThe 
 next years will see both an increase in the number of research communiti
 es served and the expansion of the ETDR network.\n\nhttps://indico.egi.e
 u/even
 t/3973/contributions/9278/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9278/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rules of Participation: from theory to practice
DTSTART;VALUE=DATE-TIME:20181009T155500Z
DTEND;VALUE=DATE-TIME:20181009T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-249@indico.egi.eu
DESCRIPTION:Speakers: Sanden\, Mark (SURFsara BV)\nhttps://indico.egi.eu/e
 vent/3973/contributions/9279/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9279/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Minimal set of Rules of Participation for Service Providers and Us
 ers in EOSC
DTSTART;VALUE=DATE-TIME:20181009T153500Z
DTEND;VALUE=DATE-TIME:20181009T155500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-248@indico.egi.eu
DESCRIPTION:Speakers: Kahlem\, Pascal (ELIXIR)\nhttps://indico.egi.eu/even
 t/3973/contributions/9281/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9281/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Next Generation Data Management Services: the eXtreme DataCloud pr
 oject
DTSTART;VALUE=DATE-TIME:20181010T134500Z
DTEND;VALUE=DATE-TIME:20181010T140000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-99@indico.egi.eu
DESCRIPTION:Speakers: Costantini\, Alessandro (INFN)\, Cesini\, Daniele (I
 NFN)\nThe development of new scalable technologies for federating storage 
 resources and managing data in the current and next generation e-Infrastru
 ctures deployed in Europe\, such as the European Open Science Cloud (EOSC)
 \, the European Grid Infrastructure (EGI) and the Worldwide LHC Computin
 g Grid (WLCG)\, is the aim of the eXtreme-DataCloud (XDC) H2020 funded p
 roject.\
 nThe high-level objective of the project is the semi or fully automated pl
 acement of scientific data in the Exabyte region exploiting the resources 
 made available by the modern\, cloud based\, e-Infrastructures.\nXDC is fo
 cused on providing enriched high-level data management services to access 
 heterogeneous storage resources and services. It enables scalable data pro
 cessing on distributed infrastructures using established interfaces and al
 lowing the use of legacy applications without the need for rewriting them 
 from scratch.\nThe project will address high-level topics that include: i)
  federation of storage resources with standard protocols\, ii) smart cachi
 ng solutions to access transparently data stored in remote locations\, iii
 ) policy driven data management based on Quality of Service\, iv) data lif
 ecycle management\, v) metadata handling and manipulation\, vi) data prepr
 ocessing during ingestion\, vii) optimized data management based on access
  patterns.\nThe solutions implemented by the XDC project are targeted to t
 he real life use cases provided by different scientific communities repres
 ented within the project\, such as:  astrophysics (CTA)\, Photon Science (
 European X-FEL)\, High Energy Physics (WLCG)\, Life Science (LifeWatch) an
 d Medical Science (ECRIN).\nThe XDC solutions are based on already well es
 tablished data management components like dCache\, FTS\, EOS\, the INDIGO 
 PaaS Orchestrator and ONEDATA\, just to mention some of them. These servic
 es will be enriched with new functionalities and organized in a coherent a
 rchitecture to address the user requirements. For a better understanding o
 f the nature and the scope of the project\, the high level architecture ov
 erview and related interfaces specification will be presented and describe
 d. Moreover\, implementation examples on specific use cases will be presen
 ted.\n\nhttps://indico.egi.eu/event/3973/contributions/9294/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9294/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The EOSCpilot Science Demonstrators as a demonstration of the EOSC
  in practice
DTSTART;VALUE=DATE-TIME:20181010T154500Z
DTEND;VALUE=DATE-TIME:20181010T160000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-98@indico.egi.eu
DESCRIPTION:Speakers: Fava\, Ilaria (Göttingen State and University Libra
 ry)\nThe EOSCpilot project (2017-2018) has the purpose of supporting the f
 irst phase of development of the European Open Science Cloud (EOSC). Among
  its objectives is the development of a number of demonstrators function
 ing as high-profile pilots that integrate services and infrastruct
 ures to show interoperability and its benefits in selected scientific doma
 ins.\nTo meet this objective\, the project selected and funded 15 Science 
 Demonstrators in different disciplines (Life Sciences\, Environmental and 
 Earth Sciences\, Energy\, Physics\, Social Sciences to name a few of them)
  to demonstrate the effectiveness of the EOSC approach: a digital environm
 ent where researchers could use federated services to perform their resear
 ch projects.\nThis presentation will showcase the experience of the Scienc
 e Demonstrators within the EOSCpilot\; in particular\, it will present the
 recommendations they provided during the project regarding service inter
 operability and the use of standard protocols\, user-friendly
  interfaces\, open source components\, and much more. Being pilots in the 
 pilot\, in fact\, the Science Demonstrators show the relevance and usefuln
 ess of the EOSC Services and their role in enabling data reuse\, to drive 
 the EOSC development. Responding to the Consultation Platform on the Rules
  of Participation\, the responses from the SDs will also serve to provid
 e the discussion
  topics during the event and will be the main takeaways of the session. Pr
 esenting these results at DI4R would be an exciting opportunity to reach o
 ut on one side to researchers\, who are the primary users of the EOSC\, a
 nd on the other to potential service providers or research infrastructures
  not yet involved\, thus enlarging the services' offer.\n\nhttps://indico.
 egi.eu/event/3973/contributions/9295/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9295/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Towards the EOSC AAI service for research communities
DTSTART;VALUE=DATE-TIME:20181010T110000Z
DTEND;VALUE=DATE-TIME:20181010T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-169@indico.egi.eu
DESCRIPTION:Speakers: Atherton\, Chris (GÉANT)\, Kanellopoulos\, Christos
  (GÉANT)\, Liampotis\, Nicolas (GRNET)\nThe European Open Science Cloud (
 EOSC) will provide an Authentication and Authorisation Infrastructure (AAI
 ) through which communities can gain seamless access to services and resou
 rces across disciplinary\, social and geographical borders. To this end\, 
 the EOSC-hub and the GÉANT (GN4-2) project AAIs build on existing AAI ser
 vices and provide a consistent\, interoperable system with which communiti
 es can integrate. This session will introduce the main concepts for meetin
 g research community needs for AAI access to EOSC.\n\nIt will outline how 
 the AARC Blueprint Architecture model (i) leverages eduGAIN to enable user
 s to use their own home organisation credentials to access services and\, 
 (ii) underpins community AAI services in EOSC-Hub and complementary projec
 ts. By implementing policies that are harmonised and compliant with global
  frameworks such as the REFEDS Research and Scholarship entity category an
 d Sirtfi\, communities are supported in receiving and releasing consistent
  attributes\, as well as in following good practices in operational securi
 ty\, incident response\, and traceability. Complementary to this\, users w
 ithout an account on a federated institutional Identity Provider are still
  able to use social media or other external authentication providers for a
 ccessing services. Thus\, access can be expanded outside the traditional u
 ser base\, opening services to all user groups including researchers\, peo
 ple in higher education\, members of business organisations\, and citize
 n scientists. \n\nResearch communities can use the Community AAI services 
 in EOSC-hub for managing their users and their respective roles and other 
 authorisation-related information. At the same time\, the adoption of stan
 dards and open technologies\, including SAML 2.0\, OpenID Connect\, OAuth 
 2.0 and X.509v3\, facilitates interoperability and integration with the ex
 isting AAIs of other e-Infrastructures and research communities. \n\nDevel
 opment of these technologies has been and continues to be shaped by the re
 quirements defined by the users of the AAI services. With the recent publi
 cation of FIM4R version 2 and further requirements gathering work per
 formed through the AARC2 and EOSC-hub AAI surveys\, the question of how re
 search infrastructures respond to these requirements has become a topic of
  significant interest for many research communities.     \n\nThis will be 
 an interactive session where researchers\, research infrastructures and e-
 infrastructures present their use cases and\, more generally\, describe the r
 esponse to the obstacles researchers face when accessing resources used in
  their daily work. You shouldn’t miss this if you are a researcher or re
 presentative of a scientific community interested in gaining access to EOS
 C federated services and resources in a secure and user-friendly way.\n\nD
 raft agenda:\n\n - How the EOSC AAI services help communities to access re
 sources\n - Introduction of the evolved view of the AARC Blueprint Archite
 cture\n - Common requirements for Federated Identity Management for Resear
 ch\n   (including findings from FIM4R version 2.0 and requirements gatheri
 ng\n   activities performed through the AARC2 and EOSC-hub AAI surveys)\n 
 - Community AAI deployments and experiences \n   - Life Science AAI\n\nhtt
 ps://indico.egi.eu/event/3973/contributions/9297/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9297/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Planning early\, following through: Data Management Planning in th
 e EOSC
DTSTART;VALUE=DATE-TIME:20181009T131500Z
DTEND;VALUE=DATE-TIME:20181009T144500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-91@indico.egi.eu
DESCRIPTION:Speakers: Hasan\, Adil (SIGMA)\, Kakaletris\, George (Athena R
 esearch & Innovation Center)\, Iozzi\, Maria Francesca (SIGMA)\, de Witt\,
  Shaun (UKAEA)\n**Background**\n\nThe Open Science paradigm strongly contr
 ibutes towards lifting the barriers that restrict access and re-use of res
 earch data. Aligned with the paradigm\, funders and agencies\, at European
  and national level\, increasingly promote the adoption of strategies of F
 AIR and open research data. Such strategies\, covering all datasets utilis
 ed or generated in the course of a research project\, are embodied in the 
 Data Management Plan (DMP). \n\n**Why are DMPs important?**\n\nData Manage
 ment Plans are important for individual researchers\, community or institu
 tional data managers and fundholders. Most H2020 proposals now require a D
 MP as specified in Article 29.3 of the Grant Agreement\, and many countrie
 s also require this for nationally funded research. The aim of a DMP is to
 :\n\n - engage researchers to plan sustainable\, result-oriented and\n   c
 ost-effective research strategies during and beyond the project\n   lifeti
 me\,\n - enable research communities to discover and utilise\n   invaluabl
 e\, trustworthy data\, and\n - allow funders to assess their strategy\n   an
 d actions in a multitude of directions. When applicable\, open\n   access 
  to the data\, complemented with effective citation mechanisms\,\n   guarant
 ees visibility of the scientific results\, for the benefit of\n   the rese
 archer\, of the scientific community\, and of society as a\n   whole.\
 n\n**What will you learn?**\n\nThe training is aimed at people who support
  research projects or research infrastructures. Specifically\, you will be g
 uided through real use case scenarios and the use of two emerging DMP tool
 s to learn:\n\n - Essential background information on the data lifecycle a
 nd the rationale of a DMP\;\n - Procedures and policies to ensure high ava
 ilability and discoverability of data used/generated (e.g.\, FAIR\, GDPR)\
 ;\n - How to effectively implement the FAIR principles to ensure open and 
 reproducible science\;\n - How to comply with the H2020 grant requirements and co
 mmunity best practices concerning research data management\;\n - How to re
 late to existing infrastructures\, to ultimately ensure interoperability\,
  reproducibility and long-term preservation of all artefacts\, be it data\
 , publications or software.\n\nMoreover you will gain an overview of EOSC 
 services engaged in the open research data management lifecycle (storage\,
  access\, retrieval\, archival etc) and learn how to interoperate with the
 m at an early stage. \n \n**Who is it for?**\n\nThe training is aimed at s
 upporters of research projects and research infrastructures\, and other st
 akeholders that are managing research data. The training material is devel
 oped by the EOSC-hub and OpenAIRE-Advance projects.\n\nhttps://indico.egi.
 eu/event/3973/contributions/9300/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9300/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Science Gateways Community Institute: Developing Strategies for Su
 stainability of Projects via Bootcamps
DTSTART;VALUE=DATE-TIME:20181010T164500Z
DTEND;VALUE=DATE-TIME:20181010T170000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-165@indico.egi.eu
DESCRIPTION:Speakers: Zentner\, Michael (Purdue University)\, Gesing\, San
 dra (University of Notre Dame)\nSustainability of academic software in gen
 eral and of virtual research environments (VREs) and of science gateways p
 articularly is a major concern for many academic projects. Solicitations f
 or funding mostly support novel developments and novel research to acceler
 ate science but offer little to sustain existing computational solutions. The im
 portance of software for science\, and of its sustainability\, has been re
 cognized over the last decade and is reflected in the founding of the [UK S
 oftware Sustainability Institute (SSI)][1] in 2010 and in the [US Science 
 Gateways Community Institute (SGCI)][2] in 2016 to support academic softwa
 re and science gateways beyond traditional funding cycles. SGCI serves use
 r communities and science gateway creators to support the growth and succe
 ss of science gateways in multiple ways\; one example is the [Science Ga
 teway Bootcamp][3] organized by the SGCI Incubator service area. \nThe boo
 tcamp is a week-long\, intensive workshop for leaders and creators of gate
 ways who want to further develop and scale their work. It addresses sustai
 nability strategies from diverse angles:\n1.    Core business strategy ski
 lls as they apply to leading an online digital presence\, such as understa
 nding stakeholder and user needs\; business\, operations\, finance\, and r
 esource planning\; and project management\;\n2.    Technology best practic
 es\, including the principles of cybersecurity\; software architecture\, d
 evelopment practices\, and tools that ensure implementation of strong soft
 ware engineering methods\; usability\; and\n3.    Long-term sustainability s
 trategies\, such as alternative funding models\; case studies of successfu
 l gateway efforts\; licensing choices and their impact on sustainability.\
 nParticipants engage in hands-on activities to help them articulate the va
 lue of their work to key stakeholders\, to create a strong sustainability 
 plan and work closely with one another. The concept is to define actionabl
 e items for three to six months\, form cohorts who keep in contact with ea
 ch other and support each other in the continuous process of achieving sus
 tainability.\nSGCI offers two bootcamps per year in the US with a maximum 
 number of ten teams accepted for each event. To date\, three such bootcamp
 s have taken place\, with one planned for August 2018. Based on the succes
 s of the three events and lessons learned from them\,
  SGCI's Incubator service area organized a mini-bootcamp of two days in Ju
 ne 2018 in Edinburgh\, UK\, in collaboration with SSI. Here too\, feedbac
 k was mostly positive\, while recognizing that two days can only cover a w
 ell-chosen selection of topics in appropriate depth.\nA future go
 al is to develop further shorter bootcamps on specific topics and closely 
 collaborate at the international level to spread the concept furthe
 r and train the trainers to scale the support of sustainability via bootca
 mps. International observers can attend the bootcamps in the US and discus
 sions are underway with European projects to offer such bootcamps. \n\n\n 
  [1]: https://software.ac.uk/\n  [2]: https://sciencegateways.org/\n  [3]:
  https://ieeexplore.ieee.org/document/8109182/\n\nhttps://indico.egi.eu/ev
 ent/3973/contributions/9301/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9301/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Towards a common approach on KPIs from e-Infrastructures
DTSTART;VALUE=DATE-TIME:20181011T104500Z
DTEND;VALUE=DATE-TIME:20181011T110000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-163@indico.egi.eu
DESCRIPTION:Speakers: Karagiannis\, Fotis (Independent)\nKey Performance I
 ndicators (KPIs) will play an important role in monitoring the development
  of projects and services of research infrastructures and e-Infrastructure
 s\, and their commitment to the principles of Open Science. They help meas
 ure the effectiveness of investments in infrastructure\, and can provide c
 onvincing arguments for sustainable support to funders.\n\nThe joint prese
 ntation of the EU-funded projects e-IRGSP5 and eInfraCentral will explain 
 how the projects work together on developing methodologies to collect and 
 aggregate Key Performance Indicators (KPIs) and other performance-related 
 information from European e-Infrastructures and several key projects. e-IR
 GSP5 places a strong emphasis on financial and policy indicators and is in
 terested in developing a broad overview of metrics used by e-infrastructur
 es and related projects\, whereas eInfraCentral is focused on operational 
 KPIs.\n\nFor objective criteria to exist and for any comparison to be mean
 ingful\, there needs to be an agreement among the e-infrastructures commun
 ity on a lightweight and easy-to-use framework based on reliable data and 
 meaningful metrics. Currently\, some e-Infrastructures have methodologie
 s to assess their own performance. However\, due to a lack of con
 sensus on how KPIs should be categorized and calculated\, this information
  is difficult to interpret and compare. Thus\, a process to obtain\, catego
 rize and present them to the public has been defined and is being implemen
 ted\, fostering collaboration across the e-infrastructure community.\n\nAl
 ong with the metrics gathered and their initial analysis\, some state of t
 he art KPI examples will be presented. These were obtained from specific p
 rojects tasked to suggest financial and policy related KPIs\, such as e-Fi
 scal and LEARN\, as well as the GEANT and EGI Compendia. From our analysis
  of these state of the art KPIs\, we suggest a basic minimum set of genera
 l financial and policy KPIs for projects to adhere to\, and to possibly ex
 pand upon with more project-specific KPIs. In addition\, we will suggest a
  common vocabulary to define KPIs and related metrics\, in an effort to st
 andardize terminology across performance monitoring efforts in e-infrastru
 ctures.\n\nFinally\, we will present the preliminary results of an act
 ive discussion with e-Infrastructures with the goal of developing a common
  approach that can be used to collect data and analyse KPIs. We believe th
 at European e-Infrastructure projects and the EOSC can benefit from a stru
 ctured exchange of know-how and practices on KPIs and related metrics in p
 roviding robust information to funders and policy makers on their success 
 and in developing and improving the services offered.\n\nhttps://indico.eg
 i.eu/event/3973/contributions/9307/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9307/
END:VEVENT
BEGIN:VEVENT
SUMMARY:HPC Simulation of and Simulation on Quantum Computers and Quantum 
 Annealers
DTSTART;VALUE=DATE-TIME:20181011T093000Z
DTEND;VALUE=DATE-TIME:20181011T100000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-12@indico.egi.eu
DESCRIPTION:Speakers: Michielsen\, Kristel (FZ Juelich)\nA quantum compute
 r (QC) is a device that performs operations according to the rules of quan
 tum theory. There are various types of QCs\; the two currently most import
 ant for practical realization are the gate-based QC and the quantum anneal
 er (QA). Practical realizations of gate-based QCs consist of fewer than 10
 0 qubits\, while QAs with more than 2000 qubits are com
 mercially available.\n\nWe present results of simulating the IBM Quantum E
 xperience devices with 5 and 16 qubits\, the CAS-Alibaba device with 11 qu
 bits\, and the D-Wave 2X QA with more than 1000 qubits. Simulations
  of both types of QCs are performed by first modeling them as quantum syst
 ems of interacting spin-1/2 particles and then emulating their dynamics by
  solving the time-dependent Schrödinger equation. Our software allows for
  the simulation of a 48-qubit gate-based universal QC on the Sunway TaihuL
 ight and K supercomputers.\n\nReferences:\nK. Michielsen\, M. Nocon\, D. W
 illsch\, F. Jin\, T. Lippert\, H. De Raedt\, Benchmarking gate-based quant
 um computers\, Comp. Phys. Comm. 220\, 44 (2017)\nD. Willsch\, M. Nocon\, 
 F. Jin\, H. De Raedt\, K. Michielsen\, Gate error analysis in simulations 
 of quantum computers with transmon qubits\, Phys. Rev. A 96\, 062302 (2017
 )\nH. De Raedt\, F. Jin\, D. Willsch\, M. Nocon\, N. Yoshioka\, N. Ito\, S
 . Yuan\, K. Michielsen\, Massively parallel quantum computer simulator\, e
 leven years later\, arXiv:1805.04708\nD. Willsch\, M. Nocon\, F. Jin\, H. 
 De Raedt\, K. Michielsen\, Testing quantum fault tolerance on small system
 s\, arXiv:1805.05227\nK. Michielsen\, F. Jin\, and H. De Raedt\, Solving 2
 -satisfiability problems on a quantum annealer (in preparation)\n\nhttps:/
 /indico.egi.eu/event/3973/contributions/9311/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9311/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Wise - SKA (Keynote 2)
DTSTART;VALUE=DATE-TIME:20181011T081500Z
DTEND;VALUE=DATE-TIME:20181011T090000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-15@indico.egi.eu
DESCRIPTION:https://indico.egi.eu/event/3973/contributions/9312/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9312/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pl@ntNet: towards the recognition of the world's flora
DTSTART;VALUE=DATE-TIME:20181010T074500Z
DTEND;VALUE=DATE-TIME:20181010T081500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-14@indico.egi.eu
DESCRIPTION:Speakers: Joly\, Alexis (Pl@ntNet)\nAutomated identification o
 f plants and animals has improved considerably in the last few years\, in
  particular thanks to the recent advances in deep learning. In 2017\, a ch
 allenge on 10\,000 plant species (PlantCLEF) resulted in impressive perfor
 mances with accuracy values reaching 90%. One of the most popular plant id
 entification applications\, Pl@ntNet\, now covers 18K plant species. It ha
 s millions of users all over the world and already has a str
 ong societal impact in several domains including education\, landscape man
 agement and agriculture. The big challenge\, now\, is to train such system
 s at the scale of the world’s biodiversity. To this end\, we built a train
 ing set of about 12M images illustrating 275K species. Training a convolut
 ional neural network on such a large dataset can take up to several months
  on a single node equipped with four recent GPUs. Moreover\, to select the
  best performing architecture and optimize the hyper-parameters\, it is of
 ten necessary to train several such networks. Overall\, this becomes a hig
 hly intensive computational task that has to be distributed on large HP
 C infrastructures. In order to address this problem\, we used the deep lea
 rning framework Intel CAFFE coupled with Intel MLSL library. This experime
 nt was carried out on two French national supercomputers\, access to whic
 h was provided by GENCI. The first experiment used Occigen@CINES\, a 3.5 P
 flop/s Tier-1 cluster based on Broadwell 14-core 2.6 GHz nodes. The secon
 d used the Tier-0 «Joliot-Curie»@TGCC\, a BULL-Sequana-X1000 cluster int
 egrating 1656 Intel Skylake 8168 24-core 2.7 GHz nodes. We will repor
 t our experience using these two platforms.\n\nhttps://indico.egi.eu/event
 /3973/contributions/9313/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9313/
END:VEVENT
BEGIN:VEVENT
SUMMARY:WeNMR activities in the EOSC-Hub
DTSTART;VALUE=DATE-TIME:20181010T154000Z
DTEND;VALUE=DATE-TIME:20181010T154500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-19@indico.egi.eu
DESCRIPTION:Speakers: Rosato\, Antonio (CIRMMP)\nStructural biology deals 
 with the characterization of the structural (atomic coordinates) and dynam
 ic (fluctuation of atomic coordinates over time) properties of biological 
 macromolecules and adducts thereof. Since 2010\, the WeNMR project has imp
 lemented numerous web-based services to facilitate the use of advanced com
 putational tools by researchers in the field\, using the grid computationa
 l infrastructure provided by EGI [1]. These services have been further dev
 eloped in subsequent initiatives\, such as the West-Life VRC (www.west-lif
 e.eu). In particular\, the latter project implemented a cl
 oud storage solution\, called VirtualFolder [2]\, which allows the user to
  connect to their account on B2DROP or on public clouds. This solution h
 as been implemented in several thematic portals in order to allow input da
 ta to be downloaded from and calculation results to be uploaded to the use
 r's cloud storage. Regarding AAI\, the thematic portals are transitioning\,
  also in response to the GDPR\, to the EGI SSO or other systems that are c
 ompatible with it. Finally\, all the thematic portals that send calculatio
 ns to the grid infrastructure are now making use of DIRAC [3]. \n\n[1] Was
 senaar TA\, et al.  WeNMR: Structural biology on the Grid. J. Grid. Comput
 ing 10:743-767\, 2012\n\n[2] https://portal.west-life.eu/virtualfolder/\n\
 n[3] https://github.com/DIRACGrid/DIRAC\n\nhttps://indico.egi.eu/event/397
 3/contributions/9316/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9316/
END:VEVENT
BEGIN:VEVENT
SUMMARY:UBORA: A digital infrastructure for collaborative research and dev
 elopment of open-source medical devices
DTSTART;VALUE=DATE-TIME:20181009T141500Z
DTEND;VALUE=DATE-TIME:20181009T143000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-18@indico.egi.eu
DESCRIPTION:Speakers: Diaz Lantada\, Andres (Universidad Politecnica de Ma
 drid)\nDigital infrastructures are already making a real impact in the way
  we develop innovative products. Platforms for sharing computer-aided desi
 gns have emerged in parallel to the maker movement with the advent of rapi
 d prototyping by 3D printing. In addition\, manufacturers of industrial compon
 ents are also keen to share the CAD files of their products\, so as to sup
 port designers with engineering design. However\, in the biomedical field 
 and in bioengineering research\, information sharing is not so common\, in s
 ome cases due to patient privacy protection\, but in most cases due to ind
 ustrial growth strategies\, in spite of the benefits that collaborative ap
 proaches and the related promotion of open-innovation could bring to patie
 nts and society.\n\nThe UBORA digital infrastructure\, presented in this s
 tudy\, has been developed to promote collaborative research and developmen
 ts in biomedical engineering\, especially regarding the collaborative engi
 neering design of biomedical devices. This infrastructure includes: a) A s
 ection for promoting open-innovation\, in which healthcare professionals a
 nd patients can propose needs for novel medical devices. b) A section for 
 project development\, through which designers can showcase their proposals
  or select those from healthcare professionals and develop them\, in a gui
 ded way\, as projects in collaboration with members of the UBORA community
 . c) A library in the form of a "wiki" for sharing all the information on the de
 veloped biomedical device projects\, hence fostering open-source strategie
 s. d) A section providing resources for supporting project development and
  bioengineering design education for all.\n\nUBORA has already enabled the
  creation of a community of more than 200 developers of biomedical devices
  and showcased around 10 complete projects and 40 concepts of innovative b
 iodevices\; the community and wiki are continuously growing. Main concep
 tual decisions\, taken during the design of this digital infrastructure an
 d key decisions during implementation\, together with current challenges\,
  are presented. Potential synergies and collaborative activities with EOSC
  and EDI are also analyzed.\n\nhttps://indico.egi.eu/event/3973/contributi
 ons/9317/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9317/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panel discussion
DTSTART;VALUE=DATE-TIME:20181009T162500Z
DTEND;VALUE=DATE-TIME:20181009T164500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-250@indico.egi.eu
DESCRIPTION:https://indico.egi.eu/event/3973/contributions/9320/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9320/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mike Payne - EPSRC (Topical 4)
DTSTART;VALUE=DATE-TIME:20181011T090000Z
DTEND;VALUE=DATE-TIME:20181011T093000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-17@indico.egi.eu
DESCRIPTION:https://indico.egi.eu/event/3973/contributions/9322/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9322/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rootless containers with udocker
DTSTART;VALUE=DATE-TIME:20181010T110000Z
DTEND;VALUE=DATE-TIME:20181010T111500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-31@indico.egi.eu
DESCRIPTION:Speakers: Gomes\, Jorge (LIP)\nTechnologies based on Linux con
 tainers have become very popular among software developers and system admi
 nistrators. The main reason behind this success is the flexibility and eff
 iciency that containers offer when it comes to pack\, deploy and run softw
 are. A containerized version of a given software can be created including 
 all its dependencies\, so that can be executed seamlessly regardless of th
 e Linux distribution in the target hosts. Linux containers are also very w
 ell suited to the heterogeneous run-time environments that researchers fac
 e today when running complex applications across computing resources such 
 as laptops\, desktops\, Linux interactive clusters\, cloud providers\, thr
 oughput computing and high performance computing infrastructures.\n\nudock
 er is a tool developed by LIP in the context of the INDIGO-DataCloud proje
 ct that addresses the problem of executing Docker containers in user s
 pace\, i.e. without installing additional system software\, without requir
 ing any administrative privileges and in a way that respects resource usag
 e policies\, accounting and process controls. udocker aims to empower use
 rs to execute applications encapsulated in Docker containers easily in any
  Linux system including computing clusters regardless of Docker or Linux n
 amespaces being locally available.\n\nudocker provides a command line inte
 rface similar to Docker and implements a subset of its commands aimed at s
 earching\, pulling\, importing\, loading and executing containers in a Doc
 ker-like manner\, respecting much of the container metadata. The self-inst
 allation allows a user to transfer the udocker Python script\, execute it a
 nd automatically pull the required tools and libraries which are then stor
 ed in the user directory. This allows udocker to be easily deployed and up
 graded by the user without system administrator intervention. All
  required binary tools and libraries are provided with udocker and compila
 tion is not required. \n\nudocker is an integration tool that incorporates
  several execution methods\, giving the user the best possible options to ex
 ecute their containers according to the target host capabilities.  Several
  interchangeable execution modes are available\, that exploit different te
 chnologies and tools\, which are integrated by udocker to enable execution
  both in older and newer Linux distributions. Currently four execution mod
 es are available\, which can be selected dynamically\, namely:\n* system ca
 ll interception and pathname rewriting via ptrace using a modified PRoot\n
 * dynamic library call interception and pathname rewriting via LD_PRELO
 AD using a modified fakechroot\n* Linux unprivileged namespaces using runC\n*
  Linux namespaces using Singularity where available\n\nEach approach has i
 ts own advantages and limitations\, and therefore an integration tool offe
 rs flexibility and freedom of choice to adapt to the application and host 
 characteristics. udocker has been successfully used to support execution of
  high throughput computing\, high performance computing (MPI) and GPGPU ba
 sed applications in many datacenters and infrastructures including EGI.\n\
 nudocker has more than 300 stars on GitHub (https://github.com/indigo-
 dc/udocker). This presentation will provide further information about udoc
 ker and will highlight several use cases.\n\nhttps://indico.egi.eu/event/
 3973/contributions/9323/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9323/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Policies in the EOSC Through the Lens of Research Infrastructures:
  The EOSCpilot Policy Recommendations
DTSTART;VALUE=DATE-TIME:20181009T132500Z
DTEND;VALUE=DATE-TIME:20181009T144500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-153@indico.egi.eu
DESCRIPTION:Speakers: Vermeulen\, Alex (ICOS ERIC)\, Jones\, Bob (CERN)\, 
 Robertson\, Dale (Jisc)\, Kuchma\, Iryna (EIFL)\, Manola\, Natalia (Univer
 sity of Athens\, Greece)\, Kahlem\, Pascal (ELIXIR)\nBackground: The EOSCp
 ilot project supports the first phase in the development of the European O
 pen Science Cloud (EOSC) governance. Its objectives include establishing t
 he policy environment required for the effective operation\, access and us
 e of the EOSC to foster research and Open Science.  \n\nBuilding on a high
 -level landscape review of European policies of relevance to the EOSC\, th
 e EOSCpilot project has developed draft policy recommendations aimed prima
 rily at funders/ministries\, research infrastructures and research produci
 ng organisations. The policies have been formulated using information from
  a range of sources\, including the EOSCpilot Science Demonstrator pilots\
 , and cover the areas of Open Science and Open Scholarship\, Data Protecti
 on\, Procurement and Ethics. These draft recommendations will be the subje
 ct of consultation with stakeholders from July onwards in order to validat
 e them and produce a final set of policy recommendations by the end of 201
 8. \n\nThis session will:\n\n - present an overview of the proposed polic
 y recommendations\, focusing on those of most relevance to RIs\, and inclu
 ding feedback already received\;\n - provide an opportunity for RIs to dis
 cuss the draft policy recommendations\, including their suitability for su
 pporting the implementation of the EOSC and considerations relating to the
 ir implementation by research infrastructures and other stakeholders\, exa
 mining also aspects such as timescales\, costs and collaboration requireme
 nts\;\n - focus on key issues and barriers to implementing the EOSC and ga
 ther further suggestions for policy actions which would help deliver the E
 OSC.\n\nWho is i
 t for: This session will target digital data infrastructures\, i.e. resear
 ch infrastructures and e-Infrastructures\, who envisage being involved in 
 the EOSC and need to align with the emerging EOSC developments.   \n\nWhy 
 is it important: The policy environment will support and complement the EO
 SC Rules of Participation and inform the EOSC governance activities and
  developments. Policy formation for EOSC is key to establishing the EOSC a
 nd achieving its aims. The adaptation and adoption of appropriate policies
  by key stakeholders is of key importance and a very timely activity with 
 the EOSC November 2018 launch around the corner.\n\nObjective: The session
  is an important step in the process of validating the draft policy recomm
 endations to produce a set of final policy recommendations for the EOSC co
 vering Open Science\, Data Protection\, Procurement and Ethics. It aims to
  build on prior consultation input\, discuss the most important issues for
  research infrastructures with respect to the EOSC and agree those policy 
 actions which would most appropriately address them.\n\nFormat: This will 
 be an interactive session involving a small panel including representative
 s of the EOSCpilot\, research infrastructures and e-Infrastructures.  It w
 ill be facilitated to encourage contributions from the audience to a share
 d collaborative document capturing their views and suggestions for the “
 EOSC of the future”\, helping to support a constructive and focussed dis
 cussion of the desired environment and the steps needed to achieve it.\n\n
 https://indico.egi.eu/event/3973/contributions/9326/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9326/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shaping the EOSC service roadmap: what users need
DTSTART;VALUE=DATE-TIME:20181010T103000Z
DTEND;VALUE=DATE-TIME:20181010T120000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-155@indico.egi.eu
DESCRIPTION:Speakers: Angelis\, Jelena (European Future Innovation System 
 (EFIS) Centre)\, Ferreira\, Nuno (SURFsara BV)\, Andreozzi\, Sergio (EGI.e
 u)\nThe EOSC is an ambitious initiative aiming at the federation of existi
 ng and planned digital infrastructures for research. It seeks to remove ba
 rriers among disciplines and countries and make it easier for researchers 
 to share and access the digital resources they need. It will mobilise serv
 ice providers from both public and private sectors\, funders\, research co
 mmunities and other relevant stakeholders. In order to be successful\, it 
 needs to meet current and emerging needs of researchers and to rely on sou
 nd business models that stimulate service providers to join the ecosystem\
 , users to utilise the services\, and funders to support them.\n\n\nIn thi
 s area\, the EOSCpilot project explores possible services to be part of th
 e EOSC service catalogue by collaborating with a number of science demonst
 rators that explore\, use and evaluate services and service concepts that 
 are already available at resource providers. The experiences of the demons
 trators are fed back to the project. The EOSC-hub project\, started in Jan
 uary 2018\, has launched an initial catalogue of services pre-selected via
  an open call mechanism and publicly accessible to researchers.\n\n\nLooki
 ng forward to the evolution of the EOSC\, it is essential to understand an
 d prioritise which services are most needed and should be added to the fut
 ure EOSC service portfolio\, and\, importantly\, which criteria are to b
 e used for uptake into the service portfolio. \n\n\nThe goal of this sessio
 n is
  to take advantage of the collective knowledge of the audience to extract 
 high-level needs for services and identify priorities for the coming years
  to develop a service roadmap. The approach is to use the world cafe style
  where\, after setting the context\, small discussion groups are created (
 e.g. homogeneous by stakeholder category) and are asked to answer specific
  questions (see more about world cafe session format: http://www.theworldc
 afe.com/wp-content/uploads/2015/07/Cafe-To-Go-Revised.pdf). At the end of 
 the discussion phase\, outputs per group will be collected and summarised 
 and will be used later on as inputs to the service roadmapping activities 
 of the EOSCpilot and EOSC-hub projects. This session is a natural follow-u
 p of the session “EOSC Service Architecture: how the services could supp
 ort the user communities”\, which presents the state of the art.\n\nhttp
 s://in
 dico.egi.eu/event/3973/contributions/9328/
LOCATION:Lisbon Auditorium B203
URL:https://indico.egi.eu/event/3973/contributions/9328/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using Onedata for data caching in hybrid-cloud environments
DTSTART;VALUE=DATE-TIME:20181009T104500Z
DTEND;VALUE=DATE-TIME:20181009T110000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-154@indico.egi.eu
DESCRIPTION:Speakers: Dutka\, Lukasz (CYFRONET)\, Orzechowski\, Michal (CY
 FRONET)\nOnedata [1] is a global high-performance data management system th
 at provides easy and unified access to globally distributed storage res
 ources and supports a wide range of use cases from personal data managemen
 t to data-intensive scientific computations. Onedata enables the creation 
 of complex hybrid-cloud deployments\, using private and commercial cloud r
 esources. It allows users to share\, collaborate and publish data as well 
 as perform high-performance computations on distributed data. Onedata syst
 em consists of zones (Onezone) which enable the creation of federations of
  data centres and users\, storage providers (Oneprovider) which expose sto
 rage resources\, and clients (Oneclient)\, through which users can acce
 ss their data via a
  virtual POSIX file system. Onedata introduces the concept of space\, a vi
 rtual volume\, owned by one or more users\, where the data is stored. Each
  space can be supported by a dedicated amount of storage supplied by one o
 r multiple storage providers. Storage providers deploy Oneprovider insta
 nces near their storage resources\, register them in a selected Onezone ser
 vice to become part of a federation\, and expose those resources to user
 s. Multiple types of storage backend are supported\, including POSIX\, S
 3\, Ceph\, OpenStack Swift and GlusterFS. \n\nIn large-scale hybrid clou
 d deployments
 \, it is often the case that data maintained in the private cloud has to b
 e processed on-demand in the public cloud. While deploying remote jobs is 
 today fairly straightforward\, and can be automated using several orchestr
 ation platforms\, making the data available for processing in the remote c
 loud is a significant challenge. Onedata makes this easy\, by enabling aut
 omatic\, on-demand\, block-based data prefetching based on the POSIX reque
 sts from user applications and automatically caching the files based on an
 alysis of file popularity. In most cases\, prestaging is not necessary at 
 all\, as the data blocks are fetched on the fly when requested for reading
 . However\, Onedata also provides a REST API for controlling data replica
 tion manually or for integrating it with third-party services.\n\nCurren
 tly\, Onedata is used in Helix Nebula Science Cloud [2]\, eXtreme DataClo
 ud [3]\, PL-Grid [4]\, European Open Science Cloud Hub [5]\, and Europea
 n Open Science Cloud Pilot [6]\, where it provides a data transparency la
 yer for computations deployed on hybrid clouds. In EOSC-hub [5] it serve
 s as the basis of the EGI Open Data Platform\, supporting open science us
 e cases such as open data curation (metad
 ata editing)\, publishing (DOI registration) and discovery (OAI-PMH protoc
 ol).\n\n 1. Onedata project website. http://onedata.org.\n 2. Helix Nebula
  Science Cloud (Europe’s Leading Public-Private Partnership for Cloud). 
 http://www.helix-nebula.eu.\n 3. eXtreme DataCloud (Developing scalable te
 chnologies for federating storage resources). http://www.extreme-datacloud
 .eu.\n 4. PL-Grid (Polish Infrastructure for Supporting Computational Scie
 nce in the European Research Space). http://projekt.plgrid.pl/en.\n 5. Eur
 opean Open Science Cloud Hub (Bringing together service providers to creat
 e a contact point for European researchers and innovators.). https://www.e
 osc-hub.eu.\n 6. European Open Science Cloud Pilot (Development of the EOS
 C-hub). https://eoscpilot.eu.\n\nhttps://indico.egi.eu/event/3973/contribu
 tions/9329/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9329/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC-hub market research and business model analysis: call to acti
 on
DTSTART;VALUE=DATE-TIME:20181010T164500Z
DTEND;VALUE=DATE-TIME:20181010T165000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-157@indico.egi.eu
DESCRIPTION:Speakers: Andreozzi\, Sergio (EGI.eu)\nThe EOSC-hub project is
  conducting a market analysis to increase the understanding of the demand 
 for digital services and resources for research over the coming years. It 
 also seeks to understand which business and procurement models would red
 uce time\, effort and risk while increasing cost-effectiveness\, especia
 lly for organisations that lack procurement experience. The goal of thi
 s lightning talk is to advertise the current activities and to encourag
 e the community to contribute by taking part in face-to-face interview
 s conducted during the event or by filling in online surveys.\n\nhttps:/
 /indico.egi.eu/event/3973/contributi
 ons/9330/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9330/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EOSC Service Architecture: how the services could support the us
 er communities
DTSTART;VALUE=DATE-TIME:20181009T131500Z
DTEND;VALUE=DATE-TIME:20181009T144500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-87@indico.egi.eu
DESCRIPTION:Speakers: Donvito\, Giacinto (INFN)\nThe EOSC-Hub project i
 s actively working on a new Service Architecture\, starting from the ser
 vices already available at proposal preparation and adding the service
 s provided by the user communities working in EOSC-Hub as well as thos
 e implemented by external projects. \nThe final goal of this activity i
 s to support the end-user communities with powerful and easy-to-exploi
 t services. Within EOSC-Hub we are already working together with the use
 r communities to gather their requirements and propose a coherent and ef
 fective service architecture. We will report on this activity with the ai
 m of helping other communities exploit the services in different context
 s. \n\nThe session will present an updated view of the EOSC-Hub service ar
 chitecture\, as released in the deliverable planned for the end of Septem
 ber. \nA technical talk will show the service architecture from the poin
 t of view of the end-user communities: how the services can be compose
 d and used to build their own services. \nWe will provide information abou
 t the interaction between the services in the EOSC service catalogue an
 d how they can be used together even when they come from different envi
 ronments. \nWe will put the EOSC effort to build the service architectur
 e in the context of parallel projects (EOSCpilot\, EINFRA-21\, GEANT4-
 2\, OpenAIRE-Advance\, e-InfraCentral\, etc.)\, in order to present to th
 e user community a coherent view of the available possibilities and ho
 w they will evolve. \nThe agenda will include: \none talk describing th
 e EOSC-Hub effort on the service architecture\; \none talk from externa
 l projects that are providing\, or willing to provide\, new services fo
 r the EOSC service catalogue\; \none talk from EOSCpilot about the wor
 k done on the service catalogue\; \none talk representative of other ef
 forts in the same context\; \nand one example of user communities exploi
 ting services in the catalogue to build brand-new services usable by en
 d users. \nIn the World Cafe session we will also dedicate a short slo
 t to updating the audience on the status of the service roadmap and th
 e planned updates.\n\nhttps://indico.egi.eu/event/3973/contributions/9353/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9353/
END:VEVENT
BEGIN:VEVENT
SUMMARY:RDM: A library perspective of versioning\, curating and archiving 
 research data from diverse domains
DTSTART;VALUE=DATE-TIME:20181009T153000Z
DTEND;VALUE=DATE-TIME:20181009T154500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-156@indico.egi.eu
DESCRIPTION:Speakers: Ayer\, Vidya (Bielefeld University)\nLibraries are t
 he vanguards for RDM and digital curation. However\, beyond archival prese
 rvation\, versioning and digital curation of research data add value t
 o knowledge assets insofar as they can be extended across domains to cre
 ate services that are useful to the research community. At Bielefeld Un
 ivers
 ity\, the DFG-funded Conquaire project\, a collaboration between CITEC and
  the Bielefeld University library\, has created a generic RDM framework th
 at ensures research data quality using continuous integration (CI) princip
 les in order to ease the process of publishing research data to PUB\, our 
 institutional repository which is based on the free and open-source LibreC
 at software.\n\nThe Conquaire RDM system (RDMS) automates the analytical r
 eproducibility process by unobtrusively monitoring researchers' data st
 ored within a GitLab repository and validating the quality of uploaded C
 SV files.
  Researchers receive automated quality assessments via email whenever they
  upload research data into their repository that is automatically monitore
 d using the inbuilt GitLab CI.\n\nFurthermore\, the continuous integration
  principle standardizes technology (platforms and tools) which enhances th
 e cross-domain data interoperability in an RDM service. A curated digital 
 dataset that validates standardized formats will mitigate digital obsolesc
 ence\, thereby making the research data accessible\, reusable\, and archiv
 able for users indefinitely.  \n\nAmong research artifacts\, the software 
 source code used for the analysis is an integral part of a research pro
 ject and can itself be considered a form of data: research publications wi
 th
 out the code used to process and visualise the research data cannot be ana
 lytically reproduced. The source code also needs to be properly versioned\
 , curated and archived in order to fulfill the FAIR (Findable\, Accessible
 \, Interoperable and Reusable) data principles. Currently\, in addition to
  the data quality framework\, we are in the process of implementing a gene
 ric CI system that automates and aids the data validation system based on 
 the technical stack used by the partner groups.\n\nIn order to understand 
 the nine research partner groups' software toolkits and data analysis proc
 ess\, we undertook independent reproducibility experiments (ReX) that enta
 iled analytically reproducing one result from a paper already published by
  these groups. \n\nOur research experience during the ongoing collaboratio
 n with the case study partners has highlighted the technical challenges th
 at diverse research projects throw up during the process of creating a gen
 eric data quality framework. These range from finding common document form
 ats to the analysis tools used by the various research groups partnerin
 g in the Conquaire project. Accommodating this diversity (both technica
 l and data-wise) without disturbing the existing workflow of each resear
 ch group has thrown up cross-domain challenges that need to be addresse
 d.\n\nhttps://indico.egi.eu/event/3973/contributions/9331/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9331/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Business or Research Project?  A case study of the evolving busine
 ss model of HUBzero
DTSTART;VALUE=DATE-TIME:20181011T103000Z
DTEND;VALUE=DATE-TIME:20181011T104500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-159@indico.egi.eu
DESCRIPTION:Speakers: Zentner\, Michael (Purdue University\, HUBzero)\, Ge
 sing\, Sandra (University of Notre Dame)\, Olabarriaga\, Silvia (Universit
 y of Amsterdam)\nSustainability is a state that many science gateway effor
 ts strive toward\; however\, it remains elusive.  The HUBzero® platfor
 m has seen several phases of evolution on its sustainability path since it
  was first founded in 2007\, and earlier existed as the infrastructure run
 ning the nanoHUB.org science gateway since 2002.  A key learning is that t
 here are several turning points that gradually move an effort away fro
 m being a project and toward operating as a business as it becomes self-
 sustaining.\n\n
 The science gateway nanoHUB.org was created under the vision of Professor 
 Mark Lundstrom at Purdue University in 1998 as a focused functionality sit
 e for submitting simulation jobs to high performance computing resources a
 nd downloading results.  In 2002\, nanoHUB.org became the online delivery 
 vehicle for the newly funded Network for Computational Nanotechnology.  As
  users desired more functionality\, a larger software team was built with 
 expertise in middleware\, web front end\, database\, and operations.  All 
 of these functions one would begin to recognize as development and operati
 ons within a commercial software enterprise.  Coupled with the growth of t
 he software team\, Professor Lundstrom added Professor Gerhard Klimeck to 
 the team as technical director\, and later as director.  Professor Klimeck
  took nanoHUB.org beyond the visionary founding\, and worked within the na
 no communities to scale up the user base.\n\nAs the nanoHUB.org team grew\
 , an annual National Science Foundation review panel suggested that the in
 frastructure could be used to run many science gateways\, not just nanoHUB
 .org.  At the same time\, the software team was large enough that a career
  path beyond one project was desirable.  In 2007\, the team was therefore 
 relocated from the research project to the Research Computing group in Inf
 ormation Technology\, under the leadership of Dr. Michael McLennan and bec
 ame the HUBzero group.  The unit became responsible for its own revenue an
 d began scaling out across many communities.  Additional personnel were ad
 ded to develop and run a reliable infrastructure with high uptime\, to pro
 vide front line customer service\, and to handle additional development ta
 sks.  The Purdue University model for operating such a group is a “rec
 harge center\,” where the group is allowed to run in a non-profit manner
 .\n\nIn 2015\, Dr. McLennan left\, and Dr. Michael Zentner became director
 .  By this time\, the HUBzero group had operated more than 30 science gate
 ways\, and many others were using the open source HUBzero platform to run 
 gateways.  A key learning was that the recharge center model hindered plat
 form innovation.  The original costs of operation did not include several 
 essential functions: internal research and development to continue innovat
 ing the platform and to replace aging functionality\, sales and marketing 
 to continue to grow the community\, and helping HUBzero clients sustain th
 eir science gateways beyond their initial funding period.  Today the HUBze
 ro team comprises 25 full-time professionals\, has operated cash-flow p
 ositive for three consecutive years\, and is addressing these needs by alte
 ring the team composition and adapting its platform and business offerings
 \, including OneSciencePlace.org to sustain gateways.\n\nhttps://indico.eg
 i.eu/event/3973/contributions/9332/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9332/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deep Learning for Predicting the Popularity of Datasets
DTSTART;VALUE=DATE-TIME:20181010T154500Z
DTEND;VALUE=DATE-TIME:20181010T160000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-158@indico.egi.eu
DESCRIPTION:Speakers: Zimmermann\, Nina (Univ. of Applied Sciences (HTW) B
 erlin)\nAccessing datasets stored on tape drives is comparatively time-con
 suming. Therefore\, a certain fraction of all datasets is usually provided
  on a cache storage built of hard disks. Caching algorithms are used to id
 entify popular datasets and to move them in advance from tape drives to th
 e cache storage. In general\, there is a considerable gap between the effe
 ctiveness of traditional caching algorithms and the optimal (or Belady) ca
 ching algorithm. It seems unlikely that the gap can be reduced signific
 antly by optimizing traditional caching algorithms. The aim of our proje
 ct is to explore whether popular datasets can be identified more effect
 ively by applying deep learning methods. \n\nTraining a neural network i
 s tim
 e-consuming. This is true\, in particular\, if the training sets are large
 . The Atlas experiment at the Large Hadron Collider (LHC) stores every acc
 ess to datasets in log files (many parameters are saved such as the name o
 f the file\, name of the dataset the file belongs to\, the tool used for a
 ccessing the file\, and the access time). In total\, log data on the orde
 r of 0.5 TB is stored per month. Applying deep learning techniques to larg
 e datasets requires a scalable infrastructure. To speed up the training o
 f neur
 al networks\, several approaches have been proposed\, for example the us
 e of specialized processors such as GPUs or TPUs. We designed a cluster o
 f containers for running neural networks in parallel. The cluster makes i
 t possible to investigate different distributed deep learning strategie
 s\, e.g. data parallelism and model parallelism. To distribute files acros
 s the nodes of the cluster
  and to train neural networks in parallel\, the big data analytics framewo
 rks Apache Flink and Apache Spark are used. \n\nThe talk gives an overview
  of the current status of our project. The machine learning workflow runni
 ng on the cluster system is presented. First results obtained by applying 
 a Convolutional Neural Network to a small subset of Atlas log data are sho
 wn. The speedup of different parallelization strategies is evaluated. An o
 utlook on ongoing work will be given.\n\nhttps://indico.egi.eu/event/3973/
 contributions/9333/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9333/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ELIXIR Cloud Analysis Platform for EOSC
DTSTART;VALUE=DATE-TIME:20181010T160000Z
DTEND;VALUE=DATE-TIME:20181010T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-62@indico.egi.eu
DESCRIPTION:Speakers: Varma\, Susheel (EMBL EBI)\nThe aim of the ELIXIR Cl
 oud Analysis Platform is to co-develop and implement an integrated cloud p
 latform that is compliant with relevant global standards/specifications\, 
 such as those coming out of the Global Alliance for Genomics and Health (G
 A4GH). To date\, six national nodes (EMBL-EBI\, ELIXIR-FI\, ELIXIR-DE\, EL
 IXIR-CH\, ELIXIR-UK and ELIXIR-IT) have committed resources to develop an
 d implement a standards-compliant cloud federation service. The project wil
 l leverage multiple emerging specifications from GA4GH’s different work s
 t
 ream areas\, namely Cloud (TRS\, DOS\, WES and TES)\, Discovery (Search\, 
 Service Registry)\, DURI (DUO\, BonaFide)\, LSG (htsget\, RefSeq) and Data
  Security (AAI).\n\n![ELIXIR Cloud Analysis Platform][1]\n\nThis presentat
 ion will showcase the technical AAI integration between EOSC and ELIXIR AA
 I to deploy a federated GA4GH compliant workflow analysis service. The ser
 vice once provisioned using EOSC credentials can subsequently be used by l
 ife science researchers to submit standardised workflow descriptions (CWL)
  to be executed by a Workflow Execution Service which can further leverage
  Europe-wide distributed task execution services stationed in a number of 
 ELIXIR national nodes. The presentation will also showcase the distributed
  Reference Dataset Distribution Service developed within the EOSC-Hub proj
 ect to allow bulk site-to-site transfer for large reference datasets for a
 nalysis by computational pipelines. This prototype integration between the
 se two key technical infrastructures is expected to provide dynamic\, dat
 a-locality-based optimisation of workflow task distribution in a federate
 d environment like ELIXIR and EOSC.\n\nThe specific scientific drivers fo
 r this ELIXIR EOSC collaboration are to address the large-scale challenge
 s in analysing Marine Metagenomics/Transcriptomics\, distributed computa
 tional servi
 ces to address workflow access to sensitive data (EGA\, Local EGA) and sup
 port for large scale on-demand industry-driven research workflow execution
  for Protein homology/analogy recognition As A Service.\n\n\n  [1]: https:
 //github.com/EMBL-EBI-TSI/TESK/raw/master/documentation/img/project-archit
 ecture.png\n\nhttps://indico.egi.eu/event/3973/contributions/9334/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9334/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The OpenAIRE ScholeXplorer Service: aggregation and resolution of 
 literature-dataset links
DTSTART;VALUE=DATE-TIME:20181009T133000Z
DTEND;VALUE=DATE-TIME:20181009T134500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-48@indico.egi.eu
DESCRIPTION:Speakers: Manghi\, Paolo (Istituto di Scienza e Tecnologie del
 l'Informazione - CNR)\n**OpenAIRE** OpenAIRE is the European infrastructur
 e in support of Open Science. It fosters and monitors the adoption of Open
  Science across Europe and beyond\, at the National and international leve
 l and at the research community level. It  advocates the importance and th
 e uptake of Open Science-oriented research life-cycles and publishing work
 flows\, in support of reproducible science\, transparent assessment\, and 
 omni-comprehensive scientific reward. To this aim OpenAIRE leverages the r
 equired cultural shift via a pervasive network of people in Europe (NOADs 
 - National Open Access Desks) and beyond (“global alignment” via CORE)
 \, and facilitates the technological shift by providing technical services
  and interoperability guidelines. \n\n**Scholix** Under the international 
 forum of the Research Data Alliance and in collaboration with relevant sta
 keholders in the field\, such as DataCite\, CrossRef\, World Data System\,
  and Elsevier\, OpenAIRE has participated in a [Working Group][1] for th
 e definition of the [Scholix framework][2] (Scholarly Link eXchange). Th
 e goal of Scholix is to establish a high-level interoperability framewor
 k for e
 xchanging information about the links between scholarly literature and dat
 a. It aims to enable an open information ecosystem to understand systemati
 cally what data underpins literature and what literature references data. 
 Scholix maintains an evolving set of Guidelines consisting of: (i) an info
 rmation model (conceptual definition of what is a Scholix scholarly link)\
 , (ii) a link metadata schema (set of metadata fields representing a Schol
 ix link)\, and (iii) a corresponding XML and JSON schema. Scholix is curre
 ntly adopted as export format for links by DataCite and CrossRef via the [
 CrossRef EventData][3] service\, by EuropePMC\, and by OpenAIRE via the Sc
 holeXplorer service.\n\n**ScholeXplorer** Scholexplorer is an OpenAIRE pro
 duction service that since 2017 offers access to a unique collection of li
 nks between publications and datasets collected from publishers (EventData
 )\, data centres (DataCite)\, and institutional and thematic repositories 
 (OpenAIRE). The collection is constantly populated and features 31 milli
 on bi-directional links between 880\,000 articles and 5\,840\,000 datase
 ts from around 13\,000 providers. The resulting graph of links can be acc
 essed via t
 he [ScholeXplorer portal or via the APIs][4]\, which support third-party s
 ervices in resolving publication/dataset PIDs to obtain related dataset
 s or publications. Content is also made available as a [JSON dump via Zen
 odo.org][5]. Since the beginning of 2018 the service has counted around 70
 0 million requests for PID resolution by third-party services (mainly Else
 v
 ier ScienceDirect) which have integrated ScholeXplorer in their workflows 
 to show the list of datasets (publications) linked to their publications (
 datasets).\n\nIn this presentation we shall discuss the benefits of Scholi
 x and the technical challenges underlying ScholeXplorer as a production se
 rvice (i.e. aggregation\, resolution\, deduplication of link metadata) and
  the solutions adopted to achieve the quality of service as agreed on with
  data centers and publishers\, which are today using the service as their 
 main link-exchange channel. \n\n\n  [1]: https://goo.gl/F6WEzS\n  [2]: htt
 p://www.scholix.org\n  [3]: https://goo.gl/BQ377S\n  [4]: http://scholexpl
 orer.openaire.eu/\n  [5]: https://goo.gl/1oN2Vm\n\nhttps://indico.egi.eu/e
 vent/3973/contributions/9345/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9345/
END:VEVENT
BEGIN:VEVENT
SUMMARY:NeIC Dellingr project: long-term cross-border resource sharing
DTSTART;VALUE=DATE-TIME:20181010T154500Z
DTEND;VALUE=DATE-TIME:20181010T155000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-47@indico.egi.eu
DESCRIPTION:Speakers: White\, John (NeIC Nordic e-Infrastructure Collabora
 tion)\nThe NeIC Dellingr project is investigating how a lightweight framew
 ork for sharing High Performance Computing (HPC) resources can be implemen
 ted between participating countries. These resources will be open to eligi
 ble researchers from the participating countries who wish to access resour
 ces in other participating countries. A feature of this resource sharing i
 ncludes the case where the computing project is performed in an HPC centre
  outside the home country of the researcher.\n\nNational computing centres
  for academic research are generally funded by ministries responsible for 
 scientific research and higher education – the same ministries that fund
  universities. The roles of computing centres and universities are distinc
 t: universities conduct scientific research and provide education\, whil
 e computing centres help them reach their goals in these functions. Mone
 y allocated to computing centres should provide better\, or at least comp
 arable\, results than the same amount allocated to universities. Resourc
 e exchange can advanc
 e scientific research and education in three ways.\n\nFirst\, it can open 
 new research opportunities. Users may have certain technical requirements 
 regarding CPU performance and efficiency\, memory size and bus bandwidth\,
  disk storage size and speed\, as well as type and speed of interconnects.
  Other factors users may consider are the type and version of compilers\, 
 software and system administration support\, and also social and political
  factors such as available certifications of the system\, the source of th
 e electricity and terms of services of a particular resource provider.  Us
 ers may also prefer one system to another simply based on the perceived ea
 se-of-use\, level of user support and other intangibles or subjective meas
 ures of an attribute of a particular system. If one centre does not have c
 ertain hardware or software that a research group needs\, they can ideally
  use suitable resources made available from other countries.\n\nSecondly\,
  resource exchange can balance temporary resource shortages\, for example 
 during computer procurements. In the time between when a cluster is decomm
 issioned and the new cluster is available\, it is good if the users do not
  need to wait for the newly commissioned system but instead can “borrow
 ” resources from another provider. This long-term pool of shared resourc
 es can be used as temporary resources for users.\n\nThirdly\, another shar
 ing scenario to consider is when the HPC resources in one country are cons
 tantly “overbooked” and queuing times become unacceptably long for the
  users\, while in some other countries there might be an excess of free re
 sources.\n\nThis lightning talk will present the resource sharing model
 s and the related legal and policy issues. The results of a first resourc
 e sharing pilot\, and plans for a proposed second pilot\, will also be giv
 en.\n\nhttps://indico.egi.eu/event/3973/contributions/9348/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9348/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OpenAIRE Open Science publishing for Research Infrastructures: the
  EPOS use-case
DTSTART;VALUE=DATE-TIME:20181009T140000Z
DTEND;VALUE=DATE-TIME:20181009T141500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-45@indico.egi.eu
DESCRIPTION:Speakers: Manghi\, Paolo (Istituto di Scienza e Tecnologie del
 l'Informazione - CNR)\nOpenAIRE is the European infrastructure in support 
 of Open Science. It fosters and monitors the adoption of Open Science acro
 ss Europe and beyond\, at the National and international level and at the 
 research community level. It  advocates the importance and the uptake of O
 pen Science-oriented research life-cycles and publishing workflows\, in su
 pport of reproducible science\, transparent assessment\, and omni-comprehe
 nsive scientific reward. To this aim OpenAIRE leverages the required cultu
 ral shift via a pervasive network of people in Europe (NOADs) and beyond (
 “global alignment” via CORE)\, and facilitates the technological shift
  by providing technical services and interoperability guidelines. Among it
 s technical services OpenAIRE provides the Research Community Dashboard (R
 CD)\, which offers research communities the functionality to publish\, agg
 regate\, and discover their research outputs via a set of underlying OpenA
 IRE services that interlink publications\, datasets\, software\, experimen
 ts and other products to produce a fully-fledged view of a specific schola
 rly discipline. \n\nThe [European Plate Observing System][1] (EPOS) is a p
 an-European distributed Research Infrastructure for solid Earth science to
  support a safe and sustainable society. Through the integration of Nation
 al research infrastructures and data\, EPOS will allow scientists to make 
 a step change in developing new geo-hazards and geo-resources concepts and
  Earth science applications to help address key societal challenges. CNR-I
 REA is an Italian service provider of EPOS whose portfolio includes satell
 ite Earth Observation services aimed at generating value-added products fo
 r Solid Earth applications & natural disaster analysis\, prevention and mi
 tigation. \n\nIn collaboration with OpenAIRE\, CNR-IREA will integrate its
  EPOS services with the RCD service in order to ensure publishing of resea
 rch products and experiments in a way that supports their use\, reuse and 
 reproducibility. This presentation will describe the use-case selected to 
 drive the integration: an EPOS user interested in Solid Earth analyses thr
 ough satellite applications. Such a user can benefit from the on-demand EP
 OSAR service\, which implements an advanced Synthetic Aperture Radar interf
 erometric technique to retrieve Earth surface displacements. In particular
 \, EPOSAR allows the user to select from the Copernicus Programme reposito
 ries a dataset of Sentinel-1 satellite images in order to generate ground 
 displacement time series and velocity maps suitable to investigate both na
 tural (earthquakes\, volcanic unrest\, landslides) and man-made (tunnelli
 ng excavations\, aquifer exploitation\, oil and gas storage and extractio
 n\, infrastructure monitoring) hazards. The EPOSAR workflow will interoper
 ate with an EPOS RCD to allow the users to publish in Zenodo.org: the list
  of processed satellite images as Input Dataset\; the output results as Da
 tasets\; and the configuration of EPOSAR service\, with links to input and
  output Datasets\, as Experiment. Each of these products will have its own
  DOI\, citation metadata\, semantic links to other products if needed\, an
 d be discoverable through the EPOS RCD. It is of course up to the users t
 o decide when their experiment is mature enough to be published in OpenAI
 RE as a citable and preserved Experiment object\, and ultimately to cite t
 he object from any articles they produce.\n\n\n  [1]: https://www.epos-ip
 .org/\n\
 nhttps://indico.egi.eu/event/3973/contributions/9349/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9349/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Preventing security incidents in the EOSC-hub era - by evolving So
 ftware Vulnerability Handling
DTSTART;VALUE=DATE-TIME:20181010T155000Z
DTEND;VALUE=DATE-TIME:20181010T155500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-43@indico.egi.eu
DESCRIPTION:Speakers: Cornwall\, Linda (STFC)\nThe EGI Software Vulnerabil
 ity Group (SVG) has been handling software vulnerabilities in order to hel
 p prevent security incidents in the EGI infrastructure and its predecessor
 \, the EGEE series of projects\, for more than a decade. While the proced
 ure has evolved somewhat\, it has remained focused on the fairly well-defin
 ed Grid and later cloud technologies having a fairly standard configuratio
 n\, with vulnerabilities mostly investigated and risk assessed by the SVG 
 'Risk Assessment Team' or 'RAT'. \n\nDuring the last year it has become cl
 ear that major changes are needed to the way the SVG handles software vuln
 erabilities due to the proliferation of software and technology\, other co
 llaborating infrastructures\, lack of homogeneity and above all the servic
 es in the EOSC-hub service catalogue.  The current 'RAT' cannot be experts
  in all the various types of software and services\, and how software is c
 onfigured and deployed.  Those selecting software or deploying services wi
 ll need to take responsibility for investigating vulnerabilities in softwa
 re used to enable their services and the risk to those services. \n\nThis 
 talk will describe how we plan to evolve the SVG issue handling procedure 
 so that those who select and deploy software and services have a greater r
 ole in vulnerability handling\, while aiming for a consistent risk assessm
 ent so that the most serious vulnerabilities get priority in their resolut
 ion.  This will include plans for smooth communication with relevant parti
 es such as experts in specific software\, infrastructure providers\, as we
 ll as those providing specific services in the EOSC-hub catalogue which de
 pend on specific pieces of software. It will also inform service provide
 rs about what they should do to help SVG help them ensure that their ser
 vices are as free from vulnerabilities as possible\, minimizing the risk o
 f security incidents caused by software vulnerabilities in their service
 s.\n\nhttps://indico.egi.eu/event/3973/contributions/9351/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9351/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mapping Species Knowledge – the value of data and digital infrast
 ructure to address the extinction crisis
DTSTART;VALUE=DATE-TIME:20181009T093000Z
DTEND;VALUE=DATE-TIME:20181009T100000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-9@indico.egi.eu
DESCRIPTION:Speakers: Conde\, Dalia A. (Species360 CSA & CPop University o
 f Southern Denmark)\nThe wealth of species on our planet is critical for o
 ur survival. However\, we are losing species at a rate equivalent to a mas
 s extinction event. During the last century\, vertebrate extinction has be
 en about 100 times higher than what would be expected during stable geolog
 ical periods. Design of species conservation strategies and policies direc
 tly rely on accessible information\, such as species demography\, genetics
 \, habitat\, threats\, and legislation. Biodiversity databases are expandi
 ng\, and new ones are emerging\, but despite the technological advances in
  digital infrastructure we are still struggling to map the available infor
 mation for each of the described species. However\, such an endeavour will
  allow managers and policy makers to improve their decision process. The S
 pecies-Index is an initiative driven by both the Species360 Conservation S
 cience Alliance and the CPop at the University of Southern Denmark\, with 
 the aim to develop partnerships to map information\, generate development 
 platforms\, workflow systems\, and storage between open biodiversity repos
 itories. We have developed the concept based on the Species-Index on Demog
 raphy from 22 data repositories and the Zoological Information Management 
 System (ZIMS). We started by indexing demographic information because ad
 dressing species recovery strategies depends heavily on birth and death d
 ata to gain a deeper understanding of population dynamics\, e.g.\, to ass
 ess species extinction risk or to establish sustainable harvesting quota
 s for species trade. We found that ZIMS can pr
 ovide a unique source of demographic knowledge to fill data gaps. As a res
 ult\, we are developing routines on SDU ABACUS (https://escience.sdu.dk/) 
 2.0 Supercomputer to estimate demographic measures that can be used by con
 servation scientists and policymakers\, with a significant focus on fightin
 g the illegal wildlife trade.\n\nhttps://indico.egi.eu/event/3973/contribu
 tions/9355/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9355/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DIRAC Services for EGI users
DTSTART;VALUE=DATE-TIME:20181010T164500Z
DTEND;VALUE=DATE-TIME:20181010T170000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-149@indico.egi.eu
DESCRIPTION:Speakers: Tsaregorodtsev\, Andrei (CNRS)\nThe DIRAC services h
 ave been available to EGI users since 2014 and have been part of the EOS
 C-Hub service portfolio since 2018. The services provide a versatile Wor
 kload Management System which can replace the gLite WMS service. It give
 s access to all the EGI grid and cloud resources used for intensive comp
 utations. Users can also specify their own computing and storage resourc
 es which are not part of the EGI infrastructure. Higher-level functional
 ity can also be made available on demand for particular communities\, fo
 r example\, services for managing complex workflows involving massive jo
 b submissions. DIRAC also provides basic tools for managing user data\, w
 ith easy access to configurable storage elements and a powerful file cat
 alog. The catalog allows not only storing file replica information but a
 lso defining complex access control customizable for a given community. I
 t is also possible to define user metadata for easy searches of the nece
 ssary datasets. The DIRAC functionality is available via several interfa
 ces\, including a command line\, a RESTful interface and a comprehensive W
 eb Portal. The basic services are available to all the EGI communities. M
 ore advanced features can be offered as part of the support provided in t
 he framework of particular Competence Centers. In this presentation we w
 ill describe the functionalities offered by the DIRAC services to EGI us
 ers as well as the experience of running the services for various EGI co
 mmunities. An outlook for the further evolution of the service will also b
 e presented.\n\nhttps://indico.egi.eu/event/3973/contributions/9367/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9367/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A campus wide ePosters management system: KAUST Library initiative
  to build Digital Infrastructure to promote Open Access and Digital Preser
 vation
DTSTART;VALUE=DATE-TIME:20181010T161500Z
DTEND;VALUE=DATE-TIME:20181010T162000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-73@indico.egi.eu
DESCRIPTION:Speakers: Hall\, Garry (King Abdullah University of Science an
 d Technology)\nKing Abdullah University of Science and Technology (KAUST)\
 , established in 2009 as an international research University in Saudi Ara
 bia\, has adopted the first Open Access mandate for scientific publication
 s in the region and leads with a well-established research repository mana
 ged and promoted by the University Library. Although the campus hosts se
 veral scientific poster events annually\, with hosting supported by the L
 ibrary\, printed posters have remained static\, highly localized and shor
 t-lived. These characteristics are at odds with what is often the first f
 ormal communication of scientific research and\, as such\, would be of gr
 eat interest to other researchers. Addressing these limitations was a maj
 or motivator b
 ehind the trialing of an ePoster alternative at KAUST. This project was co
 nceived\, piloted and will be implemented and managed by the University Li
 brary\, in collaboration with IT Services. In addition to digitally captur
 ing research content for display and preservation\, ePoster functionality 
 changes the engagement dynamics whilst helping to bridge the gap between a
 cademia and professional practice. ePosters have been extensively embraced
  by international professional organizations\, however\, academic institut
 ions remain bound to printed posters. \n\nThis project identified a short-
 list of possible companies that responded to criteria identified by KAUST 
 as requirements for its campus wide ePoster management system. The evaluat
 ion process included student and researcher participation\, as well as web
 inars and demonstrations and culminated with site visits to the company he
 adquarters of the two finalists. The preferred supplier was then involved 
 with several pilot conferences at KAUST to demonstrate their system’s ca
 pabilities and\, as importantly\, to expose academic staff to ePosters in 
 operational settings. Surveys were conducted of conference participants\, 
 academic staff\, students and conference organizers to obtain feedback and
  reaction to this approach. Advantages were both obvious and embraced by r
 espondents\; they appreciated the functionality\, which included ongoing e
 diting and/or updating of content by authors\, the ability of organizers t
 o monitor the progress of submissions and control content display\, and st
 atistics available via a dashboard.\n\nePoster presentations engage the aud
 ience better\; they are more interactive\, dynamic and informative as a re
 sult of incorporating high resolution images and videos (with associated z
 oom capabilities) and audio. In addition\, the elimination of print and po
 ster mounting aligns with KAUST’s commitment to environmental stewardship an
 d open access to scientific output through a direct upload of content to t
 he Research Repository. Interest in ePosters is expanding\; this has seen 
 the Library involved in associated skills training and outreach.\n\nAcadem
 ia is notably behind this practitioner-driven trend. KAUST Library belie
 ves that\, by rolling out an ePoster system to the University\, KAUST is t
 he first campus in the world to offer this as a campus-wide solution\, tr
 uly reflecting KAUST’s digital smart campus vision.\n\nhttps://indico.egi.eu/
 event/3973/contributions/9370/
LOCATION:Lisbon Main Auditorium
URL:https://indico.egi.eu/event/3973/contributions/9370/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Repackaging OpenAIRE Services
DTSTART;VALUE=DATE-TIME:20181009T131500Z
DTEND;VALUE=DATE-TIME:20181009T133000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-96@indico.egi.eu
DESCRIPTION:Speakers: Principe\, Pedro (University of Minho)\nOne of the g
 oals of the new phase of OpenAIRE is to repackage the OpenAIRE services\, p
 roviding them as complete products to end users.\nOpenAIRE is working to b
 undle the current services into pr
 oducts to address specific stakeholders’ needs and Product Management pr
 ocesses. Each product has an assigned product manager that foresees the de
 velopment and implementation of the product and also communicates the visi
 on and the functionalities to the stakeholders.\nThe products will have th
 e form of dashboards\, allowing different stakeholders to benefit from tai
 lor-made solutions addressing specific needs.\nThese dashboards are:\n- Da
 ta Provider Manager Dashboard. Already piloted in OpenAIRE2020\, it gather
 s all functionalities that data providers (repository managers\, OA journa
 ls\, CRIS’s\, aggregators) use to interact with OpenAIRE: registration a
 nd validation\, aggregation process and status\, registration and visualiz
 ation of usage statistics and other metrics\, interaction with the OpenAIR
 E Broker (subscription and notification of metadata enrichment).\n- Resear
 ch community Dashboard. Being developed in the framework of the OpenAIRE-C
 onnect project\, this Dashboard includes functionalities used by research 
 communities to configure and deploy on-demand services: restrict the searc
 h\, browse\, and navigation related outputs\, authoritatively tune-up back
 end text-mining algorithms\, reliably monitor and report the research impa
 ct of their scientific production\, authoritatively provide links between 
 artefacts.\nIn addition to the two dashboards mentioned above\, three addi
 tional monitoring dashboards will be available to enable the aggregation o
 f results based on the type of stakeholder and the compliance checking.\n-
  Funder Dashboard: it allows monitoring research results through aggrega
 ted and summarised statistics\, drilling down queries by specific facets (
 timelines\, subjects\, countries)\, compliance with OA mandates (with a br
 eakdown for gold/green)\, correlations with other funding streams\, and re
 search tren
 ds.\n- Project Dashboard: all project research results are gathered in one
  place displaying the compliance with open access mandates\, timelines\, r
 elated data providers\, and correlations with other projects.\n- Instituti
 onal Dashboard: it monitors all of an institution’s related research ou
 tcomes\, compliance with funders’ OA mandates and much more.\nThe dashboa
 rds described above are also part of the OpenAIRE service catalogue deploy
 ed in collaboration with the eInfraCentral and EOSCpilot projects.\n\nhttp
 s://indico.egi.eu/event/3973/contributions/9374/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9374/
END:VEVENT
BEGIN:VEVENT
SUMMARY:West-Life Virtual Folder - connecting data and computation for str
 uctural biology
DTSTART;VALUE=DATE-TIME:20181009T113000Z
DTEND;VALUE=DATE-TIME:20181009T114500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-2@indico.egi.eu
DESCRIPTION:Speakers: Kulhanek\, Tomas (STFC)\n\n\nWest-Life is an H2020 p
 roject aiming to deliver a virtual research environment to support integ
 rative research in structural biology. Structural biology involves multi
 ple techniques: X-ray crystallography\, cryo-EM\, NMR\, mass spectroscopy a
 nd others. The project aims to address two main challenges: \n\n - Allow d
 iscovery and delivery of multiple software tools and techniques to users
 \, lowering the effort for installation\, configuration and integration. \
 n - Aggregate scattered data into a virtual folder view and allow proces
 sing them using a uniform interface.\n\nThe West-Life Virtual Folder allo
 ws users to register data storage providers in one place and aggregates t
 hem when the data are needed. It supports registering Dropbox\, EUDAT’s B
 2DROP service or any data storage service exposing a WebDAV interface. I
 t relies heavily on the WebDAV interface and protocol to deliver a stand
 ard method for other services and web sites to download and upload data
 . It is also possible to integrate a proprietary solution provided by ea
 ch data storage provider to deliver better performance.\n\nWest-Life virt
 ual machine templates bring a uniform configuration for launching softwa
 re and processing data. They leverage CernVM-FS technology for distribut
 ing software suites: the suites are not included in the VM template itse
 lf but are downloaded to the VM cache and executed on demand\, keeping th
 e initial VM image very small (18 MB). The Virtual Folder inside the vir
 tual machine integrates the user's data with the software deployed on t
 he computation node\, ready for processing.\n\nCurrently the software sui
 tes of CCP4\, CCPEM\, SCIPION and others are available for users of the V
 M.\n\nhttps://indico.egi.eu/event/3973/contributions/9377/
LOCATION:Lisbon Auditorium JJLaginha
URL:https://indico.egi.eu/event/3973/contributions/9377/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EC DIH Initiatives: the wider context
DTSTART;VALUE=DATE-TIME:20181011T133500Z
DTEND;VALUE=DATE-TIME:20181011T134500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-262@indico.egi.eu
DESCRIPTION:Speakers: Piscitelli\, Roberta (EGI.eu)\nhttps://indico.egi.eu
 /event/3973/contributions/9379/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9379/
END:VEVENT
BEGIN:VEVENT
SUMMARY:eInfraCentral Service Description Template
DTSTART;VALUE=DATE-TIME:20181009T161500Z
DTEND;VALUE=DATE-TIME:20181009T162500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-267@indico.egi.eu
DESCRIPTION:Speakers: Sanchez\, Jorge (JNP)\nhttps://indico.egi.eu/event/3
 973/contributions/9380/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9380/
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Implementation of MPI Cluster as a Service on Supercomputer
  System
DTSTART;VALUE=DATE-TIME:20181010T160000Z
DTEND;VALUE=DATE-TIME:20181010T161500Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-126@indico.egi.eu
DESCRIPTION:Speakers: Simchev\, Teodor (IICT-BAS)\nThe vast majority of HP
 C users are heavily leveraging MPI middleware for their applications. Hist
 orically\, MPI was mainly configured on Supercomputer Systems and the appl
 ications were living in the boundaries set by the system administrators. T
 his led to different issues\, including but not limited to problems with a
 pplication distribution\, environment configuration\, resource allocation 
 and filesystem permissions. Recently\, the expansion of Cloud Computing br
 ought the attention of many HPC users to offerings like Infrastructure as 
 a Service\, Platform as a Service\, Software as a Service and HPC as a Ser
 vice. These Services give the power users much more granular control over 
 the provided resources.\nFor the last year we have been researching a va
 riety of Linux operating system-level virtualization technologies\, aimin
 g to mimic the flexibility\, isolation and resource management provided b
 y Cloud Computing in the world of Supercomputer Systems without compromi
 sing performance.\nOur research converged on Linux containers\, which hav
 e gained popularity and adoption due to their small footprint\, distribut
 ion format\, runtime isolation and relatively negligible performance over
 head. These make them a very good candidate for implementing virtual Sup
 ercomputer Systems.\nIn this talk we present the approach that we used t
 o provide our users with an easy way to deploy virtualized MPI clusters a
 nd the power to control their configurations through the associated life
 cycle operations. We framed all of this in an MPI cluster as a Service s
 olution.\nFor platform impl
 ementation we used Supercomputer System Avitohol at IICT-BAS which is the 
 core of the scientific computing infrastructure in Bulgaria and currently 
 the most powerful supercomputer in the region with its 150 computational s
 ervers each equipped with two Intel Xeon Phi coprocessors and theoretical 
 peak performance of 412.3 TFlop/s in double precision.\nOur experiments sh
 owed that the performance overhead of executing MPI applications inside MP
 I Linux container-based cluster is close to zero. Hardware capacity is use
 d more effectively by many concurrent users. MPI programs can be develop
 ed and sanity-tested on a local computer and easily transferred to the S
 upercomputer Systems. By using Linux containers\, we have improved the ov
 erall Quality of Service for Avitohol users of scientific computing.\nTh
 e application domain of this design is not limited to HPC but extends to I
 oT\, meteorology\, traffic control and trading systems\; in other words\, a
 lmost any MPI application available today.\n\nhttps://indico.egi.eu/even
 t/3973/contributions/9381/
LOCATION:Lisbon
URL:https://indico.egi.eu/event/3973/contributions/9381/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sensitive data activities in EOSC-hub
DTSTART;VALUE=DATE-TIME:20181010T133000Z
DTEND;VALUE=DATE-TIME:20181010T140000Z
DTSTAMP;VALUE=DATE-TIME:20200325T102852Z
UID:indico-contribution-3973-266@indico.egi.eu
DESCRIPTION:Speakers: Azab\, Abdulrahman (UIO)\nhttps://indico.egi.eu/even
 t/3973/contributions/9382/
LOCATION:Lisbon Auditorium B104
URL:https://indico.egi.eu/event/3973/contributions/9382/
END:VEVENT
END:VCALENDAR
