26–30 Mar 2012
Leibniz Supercomputing Centre (LRZ)
CET timezone

Lessons learned from UNICORE EMI-ES Adoption towards Improved Open Standards

29 Mar 2012, 14:30
30m
FMI Hall 2 (100) (Leibniz Supercomputing Centre (LRZ))

Middleware services EGI: Standards

Speaker

Mr Mohammad Shahbaz Memon (Juelich Supercomputing Centre)

Overview (For the conference guide)

The EMI project unites a set of production Grid middleware technologies that provide scientific communities with secure access to distributed, heterogeneous compute and data resources. Within the compute area, job management and monitoring are considered the most significant areas of work. The EMI compute team embarked upon an effort to review the existing standards and their adoption in the domain of job management, and to explore the advanced execution service concepts captured in the EMI-ES specification. The goal of this paper is to present the concepts of the EMI-ES interface and its information model, which are required to manage, monitor, and model activities in production Grids. In this paper, we delineate the architectural details of EMI-ES and one of its ‘proof of concept’ realizations, in UNICORE. While we intend to contribute this effort to open standards, we also shed light on the existing Grid standards by comparing them with EMI-ES.

Description of the Work

EMI products have the mandate to support diverse scientific communities with non-trivial job management requirements. Initially, Grid middleware stacks followed an approach based on proprietary interfaces and information models, but this proved unproductive as scientific communities increasingly use resources in multiple infrastructures. Over the years, standards bodies such as the OGF produced a set of common standards, including OGSA-BES, JSDL, and GLUE2. Our experience has shown that these standards work well in production, but also that some of the basic standards can be improved with advanced execution service concepts. Several of these concepts bear the potential to improve the efficiency of Grid applications on today’s distributed computing infrastructures. In order to suggest such improvements to the open standards, EMI set out to define a set of concepts to be supported by ARC, gLite, and UNICORE, resulting in the EMI-ES specification. It consists of job management and monitoring interfaces, and also encapsulates a state model, an activity description, and resource and activity information. The activity services are divided into the following five interfaces: the Creation interface specifies a message model to create an activity; ResourceInfo is an interface to search and project high-level resource information; the ActivityManagement interface exposes mainstream functions to hold, resume, and cancel activities; ActivityInfo mainly defines the operations to monitor a vector of activities; and the Delegation interface allows the execution service to issue trust delegation credentials on behalf of users while performing activity data staging. These interfaces are easily adopted within UNICORE as it is based on Web services technology. In the UNICORE architecture, clients interact with services such as OGSA-BES or EMI-ES, which in turn direct requests to a robust execution backend.
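To make this division of responsibilities concrete, the sketch below outlines the five EMI-ES port types as plain Java interfaces. This is a minimal illustration only, not actual UNICORE code or the normative WSDL: all type and method names (ActivityRef, ActivityStatus, DelegationToken, createActivities, and so on) are hypothetical simplifications of the message model described above.

    import java.util.List;

    // Illustrative sketch of the five EMI-ES port types as Java interfaces.
    // All names are hypothetical simplifications; the real specification
    // defines these operations as SOAP/WSDL message exchanges.

    /** Accepts activity descriptions and creates the corresponding activities. */
    interface Creation {
        List<ActivityRef> createActivities(List<String> activityDescriptions);
    }

    /** Searches and projects high-level resource information. */
    interface ResourceInfo {
        String queryResourceInfo(String query);
    }

    /** Mainstream management functions, applied to vectors of activities. */
    interface ActivityManagement {
        void hold(List<ActivityRef> activities);
        void resume(List<ActivityRef> activities);
        void cancel(List<ActivityRef> activities);
    }

    /** Operations to monitor a vector of activities. */
    interface ActivityInfo {
        List<ActivityStatus> getStatuses(List<ActivityRef> activities);
    }

    /** Issues trust delegation credentials used during activity data staging. */
    interface Delegation {
        DelegationToken delegate(String userCredential);
    }

    /** Hypothetical value types standing in for the EMI-ES message model. */
    record ActivityRef(String id) {}
    record ActivityStatus(String id, String state) {}
    record DelegationToken(String id) {}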

Impact

The EMI-ES specification per se, and the lessons learned from the proof of concept implementations in EMI (here UNICORE), serve a broad set of interests of scientific communities, middleware providers, and the portfolio of open Grid standards. The specification provides a suite of well-defined interfaces, an activity state model, support for vector operations, and an integrated job description model. The design and architectural elements have been discussed by representatives of three major middleware technologies (ARC, gLite, UNICORE) within EMI and represent an excellent proof-of-concept specification for further standardization activities. EMI-ES captures the experiences of production middleware technologies serving HPC and HTC communities with diverse requirements. Consequently, we foresee its impact in fostering the interoperability required by scientific communities engaged in executing complex scientific use cases such as HEP, VPH, Drug Discovery, and Neuroscience. Through the ‘proof of concept’ implementations in EMI, we were able to realize a set of desired improvements aimed at increasing the efficiency of production Grid applications. Pragmatically, it realizes the set of concepts that could be improved in the existing standards space. With this effort we see EMI-ES becoming a potential source for the next generation of the existing open Grid job management and description standards such as OGSA-BES, JSDL, and GLUE2, making it a major contribution to the overall Grid standards community.
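As an example of why the vector operations matter in practice, the hypothetical client snippet below monitors a whole batch of activities with a single call rather than issuing one request per job, the pattern that reduces per-message overhead on production infrastructures. It reuses the illustrative ActivityInfo, ActivityRef, and ActivityStatus types sketched above and is not actual UNICORE or EMI-ES client code.

    import java.util.List;

    // Hypothetical client code illustrating an EMI-ES vector operation:
    // one getStatuses() call covers every activity in the batch.
    class VectorMonitoringExample {
        static void pollOnce(ActivityInfo info, List<ActivityRef> jobs) {
            // A single round trip returns the state of all activities.
            for (ActivityStatus s : info.getStatuses(jobs)) {
                System.out.printf("activity %s is in state %s%n", s.id(), s.state());
            }
        }
    }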

Conclusions

In this contribution, we present the EMI-ES adoption in UNICORE: an execution service aimed at realizing the Grid activity management and modelling requirements derived from the experiences of EMI’s ARC, gLite, and UNICORE solutions. One of the ‘proof of concept’ implementations is in UNICORE, while ARC and gLite are also adopting it. Once all the EMI-ES adoptions within EMI are stable, we will endeavour to exercise interoperability with real application use cases, providing insights into how future standard specifications can draw on production experience from this practical evaluation. By following this approach, we anticipate an impact on the scientific applications that need to access resources in multiple Grids. We have also submitted the EMI-ES specification to the open standards working groups active in the area of Grid job management and modelling (e.g. next-generation BES, JSDL, etc.), whose current focus is to take the existing standards to their next versions.

Primary authors

Mr Mohammad Shahbaz Memon (Juelich Supercomputing Centre)
Mr Morris Riedel (Juelich Supercomputing Centre)

Co-authors

Mr Ahmed Shiraz Memon (Juelich Supercomputing Centre)
Dr Bernd Schuller (Juelich Supercomputing Centre)
Mr Björn Hagemeier (Juelich Supercomputing Centre)
Mr Michele Carpene (CINECA)

Presentation materials