Data Services Group Activities and Plans

Harry Renshall, IT/DS


A review of the current activities and plans of the DS (Data Services) group in the areas of AFS, backup, managed storage, tapes, tape drives and tape robots. The group was recently formed out of the Data Management section of the PDP group.

The Data Services group is currently responsible for four major areas: the home and project directory infrastructure based on the AFS product; the backup services for computer centre and departmental servers based on the TSM and Legato products; the managed storage services of the CERN Advanced Storage Manager (CASTOR), which includes backwards compatibility with the SHIFT tape access software, and the HPSS product (now being phased out); and finally the magnetic tape drive, robotics and disk server infrastructure that underpins all of these. Changes are planned or have recently happened in all of these areas, so it is timely to report on them.

AFS Services

Over the summer months there was a clear degradation in the quality and stability of the AFS services, with many clients, mostly running Linux, experiencing access problems whereby directories or files apparently disappeared or clients froze completely. There was certainly a large increase in load at this time as many new clients appeared, but neither we nor the communications and Linux systems staff who investigated could find an obvious single cause. We scheduled an upgrade of the fileserver software, which contained fixes for some of the locking problems; this was performed on 14 August and involved a total stop of AFS services for one hour. There was a noticeable improvement in some of the cases, but the real change came on 17 August, when a network switch in the computer centre was replaced. Since then these services have returned to an acceptable level. The next changes here will be to introduce larger-scale scratch space servers as part of replacing old non-Linux systems and deploying next year's capacity. In the longer term we will look at using OpenAFS fileservers instead of IBM AFS (we already have many Linux clients using OpenAFS).

Backup Services

IT does not back up individual desktops: local disk is expected to be used for scratch purposes only, with permanent files kept on the Windows servers, in AFS or under managed storage, and either used from there or copied locally. We back up critical computer centre machines using Legato, and group or experiment local servers using the IBM TSM product. TSM is also used for explicit user-driven archiving (the pubarch command). This separation is historical, and we believe we can now standardise on just one of these products. Much of the service is assured by outsourcing and we no longer have the time or in-house experience to decide how to proceed, so we have appointed a consultancy to advise us. They are due to report before the end of this year.

Managed Storage

The CASTOR system went into full production at the beginning of this year, being successfully used by the second Alice data challenge and replacing HPSS totally for this year's central data recording. There are now some 160 TB of data in 2,500,000 files in CASTOR, as shown in the plot below. This usage has built up faster than we expected, and there have been more teething troubles as a result. In addition, new functionality was urgently requested by running experiments, who were given priority, so some bugs or unwanted features have taken longer than we would like to fix in the production services. Nevertheless, the large amount of data successfully stored strongly demonstrates the overall success of the CASTOR system. We have recruited a new staff member into the team and have made robustness and reliability the highest priority until we are satisfied, followed by accounting, control and new functionality. Work on interfacing CASTOR to the European Data Grid has also started.
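As a quick sanity check on the figures quoted above (160 TB in 2,500,000 files), the implied average file size works out to about 64 MB; the short sketch below assumes decimal units, which is an assumption rather than something stated in this article:

```python
TB = 10**12  # decimal terabyte -- an assumption about the units above
MB = 10**6

total_bytes = 160 * TB
n_files = 2_500_000

avg_mb = total_bytes / n_files / MB
print(f"average CASTOR file size ~ {avg_mb:.0f} MB")  # prints "average CASTOR file size ~ 64 MB"
```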

We have now put into production a requested wide area ftp interface to CASTOR. Users can make ftp transfers from outside CERN to the wacdr machine using their CERN AFS login and password, and can transfer directly to and from /castor and /afs. Files written into CASTOR will automatically be migrated to tape within the next 30 minutes. Files to be transferred out of CASTOR will first be staged to the local disks on wacdr, so there will be a delay while this happens. For a 1 GB file this could be 15 minutes, but the staging will complete even if the ftp session is closed; a subsequent ftp request will then find the file already on disk and start the transfer.
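Because a staged-out file may not yet be on disk when first requested, a client can simply close the session, wait and retry, as described above. A minimal sketch in Python's standard ftplib; the full host name `wacdr.cern.ch`, the retry interval and the idea of retrying a failed RETR are illustrative assumptions, not details from this article:

```python
import time
from ftplib import FTP, error_perm

def retry(action, attempts=5, delay=60):
    """Call action() until it succeeds; on a permanent-style FTP error
    (e.g. the file is still being staged from tape), wait and retry."""
    for i in range(attempts):
        try:
            return action()
        except error_perm:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def fetch_from_castor(user, password, remote_path, local_path):
    """Sketch of a staged retrieval from CASTOR over ftp."""
    ftp = FTP("wacdr.cern.ch")  # assumed host name for the wacdr machine
    ftp.login(user, password)   # CERN AFS login and password
    def attempt():
        # Reopen the local file on each attempt so a partial
        # transfer is not left in front of a successful one.
        with open(local_path, "wb") as f:
            ftp.retrbinary("RETR " + remote_path, f.write)
    retry(attempt)
    ftp.quit()
```

The retry helper is deliberately generic, so it can be exercised without a network connection.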

CERN has decided to phase out HPSS in favour of CASTOR. Now that CASTOR phase 1 is in production there is no longer a need for two systems, and CASTOR brings us far more flexibility in satisfying the special requirements of HEP, at the price of manpower of course. Owing to workload and vacations we had not systematically started migrating data from HPSS to CASTOR, but we are now starting this process. HPSS users should move their files into CASTOR themselves by running the H2C command. When it has successfully completed (which can take many hours for a large volume) users should delete their HPSS files. H2C changes the default file path prefix from HPSS to CASTOR, so to delete files use the full path, e.g. 'hsm delete hpsssrv1:/hpss/'.
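The key point in the advice above is ordering: delete the HPSS copy only after H2C has completed successfully. A small Python sketch of that discipline; the exact arguments to H2C and hsm are not shown here and the wrapper itself is purely illustrative:

```python
import subprocess

def migrate_then_delete(migrate_cmd, delete_cmd, run=subprocess.run):
    """Run the migration command (e.g. an H2C invocation) and only
    issue the delete (e.g. an hsm delete with the full HPSS path) if
    the migration succeeded. Commands are argv lists; `run` is
    injectable so the logic can be tested without the real tools."""
    result = run(migrate_cmd)
    if result.returncode != 0:
        raise RuntimeError("migration failed; HPSS files left untouched")
    run(delete_cmd)
```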

Tapes and Tape Robots

Two years ago we decided on a disaster-avoidance scenario of having two physically separate tape robot installations. As a first step we built a second STK four-silo robot complex in the basement of building 513, the first one being on the ground floor. A new building, 613, about 200 metres away, is now ready. During October and November we will be moving silos and data from the basement of 513 into the new 613. At the same time we have bought two new silos, each with 6000 cartridge slots, for next year's raw data, so as to have sufficient equipment for two complexes of five silos each. We will use the new silos to minimise service downtime during this transition by installing them as the first equipment in the new building. This should be finished by 22 October, when we start moving cartridges into them. As soon as an existing silo is empty we will dismantle it and move it to its new site. The whole process should be finished by the end of November and, apart from some mount delays, should be transparent, except that there will probably be two periods of up to twelve hours when a whole complex is unavailable while new silos are connected up. We will schedule these in quiet times (e.g. accelerator stoppages) with plenty of warning. The CASTOR software itself will be put into a pause mode during these times, and batch jobs will be delayed but should not fail.

At the end of 2000 we agreed a project with STK to replace the unreliable helical-scan Redwood tape drives with the linear 9940 technology over two years. We now have 28 of the 9940 drives and they have proved very successful. We have already reduced from 32 to 20 Redwood drives and will go down to 12 as part of the move to the new building. By the end of 2002 we plan to have only 4 Redwood drives, and for them to be only lightly used. About half of the total number of Redwood cartridges have so far been copied to 9940, mostly as CASTOR files.

An important objective of this timing is to clear the vault so that it can be prepared for the LHC Computing Grid project. Many archive tapes are stored there, plus some DLT tapes, and experiments have already been asked to tell us what we can scrap or where the rest should be moved. There will be a limited amount of space in IT for 'active' DLTs that can be cycled into the small 600-slot DLT robot. There will be no more manual DLT mounts, and we anticipate only small numbers of manual mounts for 4mm and 8mm media.

Also in the area of tape drives, we are starting an acquisition process for high-quality, high-reliability linear technology tape drives to satisfy the requirements of the first phase of the LHC Computing Grid project. This requires a sustained data recording rate to tape of 200 MB/sec in 2002, rising to 500 MB/sec in 2003. We believe it is too early to rely on the cheaper mid-range LTO technology for this, but we will be acquiring a small robot with tape drives from several LTO vendors in order to test this new technology intensively and also to support import/export.
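To give a feeling for what such aggregate rates mean in drive terms, here is a simple sizing sketch. The per-drive rate of 10 MB/sec and the 70% duty cycle are purely illustrative assumptions (to allow for mount, positioning and rewind time), not figures from this article:

```python
import math

def drives_needed(target_mb_s, per_drive_mb_s, duty_cycle=0.7):
    """Number of drives needed to sustain an aggregate rate, allowing
    for drives spending part of their time mounting and positioning."""
    return math.ceil(target_mb_s / (per_drive_mb_s * duty_cycle))

# Illustrative only: assumed 10 MB/sec per drive at 70% duty cycle.
print(drives_needed(200, 10))  # 2002 target of 200 MB/sec
print(drives_needed(500, 10))  # 2003 target of 500 MB/sec
```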

For matters related to this article please contact the author.

Vol. XXXVI, issue no 3

Last Updated on Fri Dec 07 14:18:27 CET 2001.
Copyright © CERN 2001 -- European Organization for Nuclear Research