Jean-Michel Jouanigot, IT/CS
The structured cabling project as defined in 1994 was completed by the end of 1998. Its main goal was to provide reliable 10 megabit per second (Mbps) network connections to the desks in all of the major buildings. Its companion, the routing project, whose goal was to restructure and stabilize the overall network, is also complete. Together, structured cabling and routing have provided reliable and stable network services to almost all the desks in the major buildings, and to the Computer Centre. The backbone itself, defined at the beginning of this complex project, was based upon switched 100 Mbps FDDI rings interconnected by a series of Gigaswitches. To each ring a series of routers is connected, providing shared 10 Mbps Ethernet to most of the desks, or FDDI to some work groups.
In some places structured cabling could not be introduced: there, the remaining coaxial infrastructure is being restructured and then routed, creating what can be called "structured coax". In particular, the experimental halls have all been restructured, so that each experiment now gets its own dedicated Ethernet segment(s). This restructuring and routing is expected to be completed by the beginning of next year, at which time the old network (in which all TCP/IP network numbers are of the form 128.141.x.y) will be completely removed.
Since the definition of the project, some new technologies have arrived on the market. The IT/CS group has been following these emerging technologies closely and has evaluated some of them, in particular Fast Ethernet and ATM. Since 1997 it has been clear that Fast Ethernet would become the de facto standard for 100 Mbps connections. Fast Ethernet is exactly like standard Ethernet, but running 10 times faster. It can also run in full duplex between two systems, with the advantage that there are no more collisions on the medium. It is therefore possible to get more than 11 megabytes per second (MBps) between two systems in each direction at the same time (a value which is easily attainable with modern systems). Fast Ethernet can only run on fibre or on the structured cabling wiring and plugs: it cannot run on the old coaxial cables. The first Fast Ethernet switch was installed in August 1997 in the Computer Centre, and the first experiment to use this technology was NA57, in its Data Acquisition System (DAQ), at the end of 1997. It was clear that deploying Fast Ethernet connections would not be possible if the CERN backbone itself could not handle multiple simultaneous connections. Gigabit Ethernet was available just in time.
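The "more than 11 MBps" figure follows from simple frame-overhead arithmetic. The sketch below assumes the standard Ethernet values (1500-byte maximum payload, 38 bytes of per-frame overhead: preamble, header, frame check sequence and inter-frame gap), which are not quoted in this article:

```python
# Best-case one-direction payload rate of full-duplex Ethernet,
# assuming standard framing: 8-byte preamble, 14-byte header,
# 4-byte FCS and a 12-byte inter-frame gap per 1500-byte payload.
MTU = 1500                    # maximum payload bytes per frame
OVERHEAD = 8 + 14 + 4 + 12    # 38 bytes of framing per frame

def payload_MBps(line_rate_mbps):
    """Payload throughput in (decimal) megabytes per second."""
    efficiency = MTU / (MTU + OVERHEAD)       # ~0.975
    return line_rate_mbps * efficiency / 8    # Mbit/s -> MB/s

print(round(payload_MBps(100), 1))   # Fast Ethernet: ~12.2 MB/s each way
```

At just over 12 MB/s of payload per direction, the claim of "more than 11 MBps" each way leaves a comfortable margin for protocol overhead above the Ethernet layer.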
Gigabit Ethernet is exactly like Fast Ethernet, but running 10 times faster (100 times faster than Ethernet). It can only run full duplex between two systems, meaning that the available bandwidth is more than 120 MBps in each direction at the same time. It is currently only available on fibre, but it is expected that the structured cabling and plugs will very soon be able to carry that speed. The first Gigabit Ethernet experiments were carried out at CERN in late 1997, and the technology was used in production for the first time for the NA48 Central Data Recording (CDR) in January 1998, between the North Area (Building 918 in Prevessin) and the CS2 in the Computer Centre. The switches used in this experiment were two FDDI/Fast Ethernet switches interconnected with Gigabit Ethernet over a 7 km link (almost twice the maximum length defined by the standard). Four systems in the NA48 DAQ were connected with FDDI to one of the switches, whilst the CS2 was connected to the other using four Fast Ethernet connections. The maximum effective bandwidth achieved (real data, real software) was over 35 MBps (about 30% of the Gigabit link), which was more than the experiment required. This setup ran successfully until the beginning of 1999, at which time NA48 was integrated into the general CDR Gigabit facility.
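The Gigabit figures can be checked the same way. This sketch uses decimal megabytes, as the article does, and assumes the standard 1500-byte MTU with 38 bytes of per-frame Ethernet overhead:

```python
# Check the two Gigabit Ethernet figures quoted above.
MTU, OVERHEAD = 1500, 38  # standard Ethernet payload and framing bytes

# Best-case one-direction payload rate of full-duplex Gigabit Ethernet:
gig_payload_MBps = 1000 * MTU / (MTU + OVERHEAD) / 8
print(round(gig_payload_MBps, 1))   # ~121.9, i.e. "more than 120 MBps"

# NA48 CDR: 35 MB/s of real data over the 1000 Mbit/s link:
utilisation = 35 * 8 / 1000
print(f"{utilisation:.0%}")         # 28%, roughly the 30% quoted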
Since then the Gigabit Ethernet infrastructure has been deployed progressively, driven strongly by the needs of the PC farms used in physics data analysis. Some of these farms comprise more than 50 PCs interconnected with Fast Ethernet, with the disk servers and tape servers connected to the switching system by Gigabit Ethernet. More than 10 such farms have been installed in the last 18 months, in the Computer Centre or in the physics DAQs.
Since the number of systems using Fast Ethernet or Gigabit Ethernet was rapidly increasing, a new generation of routers able to handle Gigabit speeds had to be introduced. The first two such routers were installed in the Computer Centre in February 1999: all central PC farms, as well as most of the NICE servers, are now connected to them. The migration of the various other servers in the Computer Centre to these new routers is planned for this year. At the same time, the central FDDI Gigaswitch was replaced by new FDDI/Gigabit Ethernet switches, so that the old FDDI backbone now has a fast connection to the new one.
The IT/CS group has started deploying the new generation of Campus backbone, based upon Gigabit routers interconnected with Gigabit links. The first places where this new backbone was deployed are the experimental halls. In Prevessin, the North Hall (around Building 887) hosts CMS, NA49, NA57, NA48 and COMPASS, all of which have their Data Acquisition systems connected with Gigabit Ethernet. The East Hall (Building 157) and the West Area (Building 180) are now also on the new backbone. This overall network setup was shown to sustain more than 110 MBps between the NA48 DAQ in Prevessin and a series of servers on Gigabit Ethernet in the Computer Centre.
This new backbone will in future be deployed to other places where required. It will provide high bandwidth to the structured cabling star-points, making it possible to introduce Fast Ethernet to the desk where necessary. It is currently planned to equip more than 10 star-points with this new technology by the end of this year.
For more information on this new project, please consult http://network.cern.ch (section "Projects").
For matters related to this article please contact the author.