Carlo Vandoni IT/DI
The twenty-first CERN School of Computing will be held at the Hotel Savoy in Funchal, Madeira, Portugal, from Sunday 6 September to Saturday 19 September 1998.
The School is open to postgraduate students and research workers with a few years' experience in elementary particle physics, in computing, or in related fields.
All other logistical details, e.g. concerning accommodation, language and travel, can be found on the Web at URL:
The School is based on the presentation of approximately 42 lectures and on related practical exercises on PCs or workstations, and the programme of the School is organised around four themes:
This series of lectures will be complemented by exercises.
This track explores the general field of distributed computing, with a special emphasis on high performance on the one hand and on the use of agent technologies on the other.
The track is composed of two distinct parts. During the first week, background knowledge on high-performance distributed computing is provided, ranging from the statement of the problems to be resolved to the description of underlying technologies, including the use of toolkits such as DCE/CORBA. Case-studies will conclude this first part.
During the second week, the focus is on the use of agents written in Java. This method is applied to the specific field of distributed physics analysis. After an introduction to agent technology, the class will discuss the application of the approach to physics, including the use of submission and compute servers, and algorithm agents. Exercises, in which students write physics analysis algorithms in Java and agent-based job submission systems, and finally merge their work into a global system, will complement the lectures.
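As an illustration of the agent idea described above - not the actual course material - the following minimal Java sketch shows a hypothetical "algorithm agent" being handed to a compute server, which executes it against its local event data. All class and method names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: an "algorithm agent" carries an analysis
// algorithm to a compute server, which runs it on local event data.
interface AnalysisAgent {
    double run(List<Double> events);  // returns an analysis result
}

class ComputeServer {
    private final List<Double> events;  // event data held by this server

    ComputeServer(List<Double> events) { this.events = events; }

    // The server executes the visiting agent on its local data.
    double accept(AnalysisAgent agent) { return agent.run(events); }
}

public class AgentDemo {
    public static void main(String[] args) {
        List<Double> events = new ArrayList<>();
        events.add(1.0); events.add(2.0); events.add(3.0);
        ComputeServer server = new ComputeServer(events);

        // A simple agent: compute the mean "energy" of the events.
        AnalysisAgent meanAgent = evs -> {
            double sum = 0.0;
            for (double e : evs) sum += e;
            return evs.isEmpty() ? 0.0 : sum / evs.size();
        };

        System.out.println("mean = " + server.accept(meanAgent));  // prints "mean = 2.0"
    }
}
```

In a real system the agent would be serialised and shipped to a remote server rather than invoked locally; the point of the sketch is only the separation between the algorithm (the agent) and the data-holding service (the compute server).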
This track discusses the application of artificial intelligence techniques to the monitoring and control of large experimental systems in High Energy Physics. The complexity and the expected long lifetime of the next generation of HEP experiments, in particular those that will take place at CERN's Large Hadron Collider, will pose major challenges for the reliable operation of the detectors. Sophisticated data acquisition, monitoring and detector control systems will be developed to perform these tasks. Artificial intelligence techniques will play an important role in this context. Diagnosis, alarm handling and user assistance tools extracting information from a common knowledge base are examples of the facilities that can help to solve problems when the human experts are not available.
Although High Energy Physics experiments have long generated data volumes far in excess of the usual norm, a number of experiments due to start around 2000, and those planned to take data at CERN's Large Hadron Collider, will push these limits even further. The LHC experiments are expected to take some 5 petabytes (PB, 1 PB = 10^15 bytes) of data per year, giving rise to a total data sample, integrated over 15-20 years' running, of perhaps as much as 100 PB. In addition to the sheer volumes of data involved, the extremely long time-scales - by the standards of the IT industry or the career of an individual physicist - mean that great care must be taken to protect against change. Thus, the use of standard and, if possible, commodity solutions is necessary.
This track discusses the basic components that are being investigated in order to provide data storage and data management solutions to future HEP experiments. Not only will the standards that are involved be discussed, but so too will concrete solutions that have already been used in production for storing and managing physics data. The main components that are currently being studied include the High Performance Storage System (HPSS), a mass storage system built according to the IEEE Computer Society's reference model for such systems, and Objectivity/DB, an ODMG-compliant Object Database Management System (ODBMS). The track will cover these two components, plus the relevant standards and architectural choices, as well as the CERN RD45 project which is investigating these components as a solution to handling LHC event data.
Smooth software evolution requires a well-established development process. Starting from the Capability Maturity Model (CMM) of the Software Engineering Institute, the lecturers will explain the activities required to start improving the process, and will expand further on configuration management and on quality through software metrics. For the exercises, students will be requested to work on their own projects, applying software metrics to the source code and organising all the development documents with a version control system (CVS, the Concurrent Versions System).
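The software-metrics exercise can be pictured with a toy example. The metric below - a simple count of non-blank, non-comment source lines - is an invented illustration, not one of the metrics taught at the School.

```java
public class LineMetric {
    // Toy size metric: count lines that are neither blank nor
    // single-line (//) comments. Real metrics suites are far richer
    // (cyclomatic complexity, coupling, etc.).
    public static int effectiveLines(String source) {
        int count = 0;
        for (String line : source.split("\n")) {
            String trimmed = line.trim();
            if (!trimmed.isEmpty() && !trimmed.startsWith("//")) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String snippet = "// header comment\n"
                       + "int x = 1;\n"
                       + "\n"
                       + "x++;\n";
        System.out.println(effectiveLines(snippet));  // prints 2
    }
}
```

Applied over a whole project, even such a crude measure lets students track growth between versions checked into their CVS repository.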
If you have a WWW viewer that supports forms, then you may fill out and submit the CSC '98 WWW application form. If you decide to use the Web to send in your application form, do not forget that you also need to send a formal letter of reference.
If you choose not to use the Web, then you must fill out a printed copy of the plain text version of the application form.
When applying to attend the School, each student is requested to provide a summary in English, of about 100 words, describing his/her current work.
Applicants are requested to forward their summary to Miss J. Turner by e-mail.
Candidates should forward the completed application form and formal letter of reference to Miss J. Turner as specified in the section "Enquiries and Correspondence".
The deadline for receipt of applications is 15 May 1998. More details concerning the application cost, financial support and selection are given on the Web at URL:
All enquiries and correspondence related to the School should be addressed to:
Miss J. Turner
CERN School of Computing
CH-1211 GENEVA 23