26–30 Mar 2012
Leibniz Supercomputing Centre (LRZ)
CET timezone

Added value of the new features of the ATLAS Computing Model and shared Tier2 & Tier3 facilities from the Community point of view

28 Mar 2012, 14:20
20m
FMI Hall 2 (100) (Leibniz Supercomputing Centre (LRZ))

Users and communities: Community-tailored Services

Speakers

Mr Fernandez Alvaro (CSIC), Dr GONZALEZ DE LA HOZ Santiago (CSIC), Dr Salt Jose (CSIC)

Impact

The model of shared ATLAS Tier2 and Tier3 facilities in the EGI/gLite flavour at IFIC-Valencia is explained. This is a good occasion to check whether we have developed all the Grid tools necessary for ATLAS Distributed Analysis, which accounts for an important fraction of the activity performed by the ATLAS community (users), and, where we have not, to fix them in order to be ready for the foreseen increase in ATLAS activity in the coming years.

Description of the Work

The ATLAS computing and data models have moved, and are still moving, away from the strict hierarchical MONARC model towards a mesh model. This evolution of the computing models also requires an evolution of the network infrastructure, so that any Tier2 or Tier3 can easily connect to any Tier1 or Tier2. Several changes to the data model follow:
a) Any site can replicate data from any other site.
b) Dynamic data caching. Analysis sites receive datasets from any other site "on demand" based on usage patterns, possibly combined with centrally managed replication of whole datasets for dynamic placement. Unused data is removed.
c) Remote data access. Local jobs can access data stored at remote sites, using local caching at the file or sub-file level.
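The on-demand caching policy in (b) can be illustrated with a minimal sketch. All names here are hypothetical and the logic is a toy LRU cache; the real ATLAS data placement services are centrally managed and far more elaborate:

```python
from collections import OrderedDict

class DatasetCache:
    """Toy model of on-demand dataset caching at an analysis site:
    a dataset is replicated from a remote site on first use, and the
    least recently used datasets are evicted when space runs out."""

    def __init__(self, capacity):
        self.capacity = capacity      # max number of cached datasets
        self.cache = OrderedDict()    # dataset name -> source site

    def request(self, dataset, source_site):
        if dataset in self.cache:
            self.cache.move_to_end(dataset)  # mark as recently used
            return "local hit"
        if len(self.cache) >= self.capacity:
            # "Unused data is removed": evict the least recently used
            self.cache.popitem(last=False)
        self.cache[dataset] = source_site    # replicate "on demand"
        return "replicated from " + source_site

cache = DatasetCache(capacity=2)
print(cache.request("data11.AOD.r1", "Tier1-ES"))  # first use: replicated
print(cache.request("data11.AOD.r1", "Tier1-ES"))  # second use: local hit
print(cache.request("mc11.DPD.x3", "Tier2-DE"))
print(cache.request("data11.DPD.y7", "Tier2-UK"))  # evicts the oldest entry
```

The point of the sketch is only that, in the mesh model, the decision of what a Tier2 holds on disk is driven by actual usage rather than by a fixed hierarchical assignment.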
ATLAS has now been taking data for more than a year. We will present the Tier2 and Tier3 facility setup: how we obtain the data, how we enable Grid and local data access at the same time, how Tier2 and Tier3 activities affect the cluster differently, and how hundreds of millions of events are processed.
The EGI/gLite middleware flavour that is used will also be described from the point of view of the user community, which is supported at the national and international level, including Virtual Organizations other than ATLAS.
Finally, an example of how a real physics analysis is working at these sites will be shown.

Conclusions

A number of concepts and challenges are raised in these proposals, and in this contribution we show how these changes affect an ATLAS Tier2 and its co-located Tier3, which use the EGI infrastructure.

We will present the T2&T3 facility setup, how we obtain the data, and the arrangements proposed to fulfil the requirements of the new model, such as the ability of any site to replicate data from any other site.

We also present how we enable Grid and local data access for our users at the same time, using the EGI infrastructure and procedures and the gLite middleware flavour provided by the EMI releases. In this direction, an example of a real physics analysis, and of how users work with it, will be presented in order to check the readiness of the tools and how they perform both currently and under the changes being adopted as the model evolves.
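The simultaneous Grid and local access described above can be sketched as a simple path-resolution rule: prefer a locally mounted copy (Tier3-style POSIX access), otherwise fall back to Grid access through a remote URL. The prefixes and hostname below are hypothetical placeholders, not a real site configuration:

```python
import os

# Hypothetical site configuration for this sketch only:
LOCAL_PREFIX = "/lustre/example-site/grid"       # POSIX mount for local jobs
REMOTE_PREFIX = "root://se.example-site.org/"    # Grid storage endpoint

def resolve(lfn):
    """Return an access path for a logical file name: prefer the
    locally mounted copy, fall back to remote Grid access."""
    local_path = os.path.join(LOCAL_PREFIX, lfn.lstrip("/"))
    if os.path.exists(local_path):
        return local_path                # local (Tier3-style) access
    return REMOTE_PREFIX + lfn           # remote (Grid) access

print(resolve("/atlas/data11/AOD/somefile.root"))
```

In practice this decision is taken by the experiment's data management and job submission tools rather than by user code; the sketch only shows why the same file can be reached by both local and Grid users of a shared Tier2/Tier3 cluster.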

The new computing model has improved the efficiency with which analysis results are obtained from real and Monte Carlo data.

Overview (For the conference guide)

Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds would collectively keep on disk at least one copy of all "active" AOD and DPD datasets. The evolution of the ATLAS computing and data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access.
In this way Tier1s and Tier2s are becoming more equivalent from the network point of view, and the Tier1/Tier2 hierarchy is no longer so important. This talk will present the usage of Tier2 resources in different Grid activities, the caching of data at Tier2s, and their role in analysis in the new ATLAS computing model. Tier3s in the US and Tier3s in Europe are rather different, because in Europe we have facilities that are Tier2s with a Tier3 component (a Tier3 with a co-located Tier2). Our infrastructure adopts this model of shared ATLAS Tier2 and co-located Tier3 facilities, which is a different approach from, for example, the Tier3 model in the US.

Primary authors

Dr Amoros Gabriel (CSIC), Mr Fernandez Alvaro (CSIC), Dr GONZALEZ DE LA HOZ Santiago (CSIC), Dr Kaci Mohammed (CSIC), Mr Oliver Elena (CSIC), Dr Salt Jose (CSIC), Mr Sanchez Javier (CSIC), Ms Sanchez Victoria (CSIC), Mr Villaplana Miguel (CSIC)

Presentation materials

There are no materials yet.