Capacities, Facilities & Technologies

Background

Computing

Current particle physics experiments, in particular those installed at the LHC collider, require immense amounts of computing and storage resources. For example, LHCb, the smallest of the four main LHC experiments, generates about 250 GB of data per hour. Since these experiments are carried out by international collaborations, installing all of those resources in a single data centre has never been a viable option. High Energy Physics (HEP) groups, having pioneered the use and development of distributed computing techniques, established a collaboration known as the WLCG (Worldwide LHC Computing Grid) to address this challenge. The Grid is a collection of computing resources distributed around the world, together with interfaces that make it unnecessary to know their location in order to access them.
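As an illustration of this location transparency, below is a minimal sketch of a grid job submission using DIRAC, the middleware framework on which LHCb's grid tools are built. It assumes a configured DIRAC client and a valid grid proxy; the job name and payload are placeholders, and the exact calls should be checked against the DIRAC documentation.

```python
# Minimal sketch: submitting a job to the Grid with the DIRAC API.
# Assumes a configured DIRAC client and a valid grid proxy.
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)  # initialise the DIRAC environment

from DIRAC.Interfaces.API.Job import Job
from DIRAC.Interfaces.API.Dirac import Dirac

job = Job()
job.setName("hello-grid")  # placeholder job name
job.setExecutable("/bin/echo", arguments="Hello from the Grid")

# No site is specified: the Grid decides where the job runs, which is
# exactly the location transparency described above.
result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job with ID", result["Value"])
else:
    print("Submission failed:", result["Message"])
```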

We classify WLCG computing resources into four levels, or tiers (summarised in the code sketch after this list):

Tier 0

Located at CERN, it is in charge of safekeeping the raw data from the detectors, performing an initial data processing, and distributing the data to the different Tier-1s.

Tier 1

These are computing centres with high processing and storage capabilities, and they must provide support 24 hours a day, 7 days a week. Their primary role is the reconstruction of the tracks of particles involved in the collisions and the selection of those events of interest to scientists.

Tier 2

They can be formed by one or several sites. They generate simulation data and, depending on their size and the experiment they serve, can also perform data selection and reconstruction tasks.

Tier 3

Local computing resources provided only to the scientists associated with the hosting institution.
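Purely as an illustration (this is not an official WLCG schema; the role strings simply paraphrase the list above), the four-level hierarchy can be written down as a small data structure:

```python
# Illustrative summary of the WLCG tier hierarchy described above.
WLCG_TIERS = {
    "Tier 0": "CERN: raw-data safekeeping, initial processing, distribution to Tier-1s",
    "Tier 1": "Large 24/7 centres: track reconstruction, selection of interesting events",
    "Tier 2": "One or more sites: simulation, plus selection/reconstruction depending on size",
    "Tier 3": "Local resources for scientists of the hosting institution",
}

for tier, role in WLCG_TIERS.items():
    print(f"{tier}: {role}")
```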

In 2002, our institute put into operation a computer cluster to provide 60% of the computational resources of the Spanish LHCb Tier-2 (ES-LHCb-T2), with the University of Barcelona providing the remaining 40%. The Spanish LHCb Tier-2 centre was designed to offer around 6% of the total computational power provided by all Tier-2 centres to the LHCb experiment. This goal has been reached and quite often even surpassed. Currently, our institute provides roughly 1000 cores and 10 kHS06 to LHCb (HS06 is a computing-power benchmark developed by the HEP community on top of the SPEC benchmark to measure CPU capabilities in a way tailored to the needs of our community; see https://wiki.egi.eu/wiki/FAQ_HEP_SPEC06).
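As a rough back-of-the-envelope check of what these figures mean per core (using the approximate numbers quoted above):

```python
# Approximate figures from the text; purely illustrative arithmetic.
tier2_cores = 1000     # cores provided by the institute to LHCb
tier2_hs06 = 10_000    # total benchmarked capacity, ~10 kHS06

print(tier2_hs06 / tier2_cores)  # ~10 HS06 per core on average
```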

In 2008, a Tier-3 was put into operation to provide computing resources to the Nuclear, Theory and LHCb groups of our institute. Currently, this cluster provides close to 1000 cores, approximately 15 kHS06, and around 200 TB of storage space. Users can access around 33% of the computing resources interactively, while the rest is available through a queue system.
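The queue system is not named here; as a minimal sketch, assuming a Slurm-like batch scheduler, submitting an analysis job to the non-interactive share of the cluster could look as follows (the partition name and payload script are hypothetical):

```python
# Hypothetical sketch: submitting a batch job to the Tier-3 queue system,
# assuming a Slurm-like scheduler. Partition name and payload are made up.
import subprocess
import textwrap

script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=lhcb-analysis
    #SBATCH --partition=tier3
    #SBATCH --cpus-per-task=1
    #SBATCH --mem=2G
    ./run_analysis.sh   # hypothetical payload script
""")

with open("job.sh", "w") as f:
    f.write(script)

# Hand the script to the scheduler; it queues the job until a slot is free.
subprocess.run(["sbatch", "job.sh"], check=True)
```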

Microelectronics

Experiments searching for new phenomena and as-yet-unknown subatomic particles require dedicated detectors. These need to be developed for experimental setups such as the LHCb experiment at CERN, nuclear experiments like R3B at FAIR, and astroparticle experiments (dark matter and neutrino experiments).

Looking to the future, the institute is contributing to the high-luminosity run of the LHC (HL-LHC) through the LHCb Upgrade phase II, and is planning to contribute to the instrumentation of gravitational-wave experiments.

At IGFAE we have the infrastructure to design, prototype and, once produced, test electronics for all these types of research programmes.

There is a laboratory for the characterisation and construction of silicon detectors. It has a clean room (30 m², upgradable; class 100,000) with a wedge-bonding machine and various equipment to test silicon sensors. A second laboratory is devoted to microelectronics development (SMD population and rework), and a third to the development of readout systems, equipped with state-of-the-art DAQ systems and oscilloscopes.