Wednesday, 3 June 2009

Manna for Manno: CHF 172.5 million for a supercomputer in Ticino - without urgent need

On the Friday before Pentecost, the Federal Council decided to let manna rain down on the high-performance computing centre in Manno: 70 million for a machine that, in 2012 and in Cornaredo near Lugano, is supposed to be as fast as today's top two on the Top 500 list. Plus 50 million for housing the machine. Plus 5 million for cooling it. Plus 35 million for upgrading the physical network to and within Ticino and for training measures at the universities that will use the machine. That only adds up to 160 million, but the figures are no more detailed in the article in the print domestic section of Saturday's NZZ (unfortunately not online; regrettably, the science section reports nothing about the Federal Council's generous decision either), nor in the ETH Board's final report on the "Swiss National Strategic Plan for High-Performance Computing and Networking" of 4 July 2007 (the basis for the Federal Council's decision), nor in its Appendix G, which collects the answers of the experts invited to the consultation. The make-or-break question among the scant dozen they were asked to answer was the very first one:
Should Switzerland acquire a petaflop-level system? If so, what are the technical and scientific justifications in terms of major national projects? If not, what is the top-performing computing system that Switzerland should acquire in 2008-2011?
The official "vote count": 20 yes, 4 no, 18 "not explicit" (the report's terminology). From this the report concludes:
3.2 Does Switzerland need a system in the petaflops performance class in the period 2008–11?
If Switzerland wants to remain competitive in a globalised world of science, business and industry, the answer is a clear "yes". The current top machines in Switzerland reach a level of 20 teraflops, and it is to be expected that a need for petaflops computing will arise within three to four years. This need must be met by a scalable system that can reach petaflops level by 2010–11.
This conclusion rests on a broad consensus and on the declared intentions of numerous Swiss users. The consensus emerged from the various statements put forward at the stakeholders' open day in Bern in December 2006 (for the results of the open day see Appendices B and G). About half the participants expressed an explicit interest in petaflops performance before 2011. Most of the others expressed no clear opinion, and only a very small minority spoke out against the stated conclusion.

But is what it says there actually true? Is there really this "broad consensus"? Let us take a closer look at the 4 naysayers and the 18 "not explicit" respondents listed on page B-1 ff. They are the more interesting ones, because they resisted the obvious temptation simply to shout "Yes!" at the question "Do you want bigger, faster, better?". It turns out that at least one says no even though he is counted among the yes-sayers, that 15 of the 18 "not explicit" respondents belong rather to the no-and-sceptics camp, and that the other 3 abstain. A recount therefore yields 19 yes-sayers against 20 no-sayers, with 3 abstentions: 20 (-1) : 4 (+1 +15) : 18 (-15). A very different result, and no "broad consensus"! It is particularly striking that not one of the industry representatives consulted sees a need for a petaflop machine in Ticino. The 160 federal millions just approved therefore stand on considerably thinner ice than the report claims and the Federal Council implies. Incidentally, the history of the CSCS in Manno has repeatedly been accompanied, over the past decades, by loud background noise. Could it be that the 160 million about to flow to Ticino have more to do with regional politics than with cutting-edge research?
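For readers who want to check the arithmetic, here is a minimal sketch in Python of the recount just described. The reclassification figures (one mis-filed yes-sayer, fifteen sceptics and three abstentions among the "not explicit" answers) are my reading of the quotes below, not numbers taken from the report.

# Back-of-the-envelope check of the recount, assuming the reclassification
# argued above: one respondent filed under "yes" actually argues "no"
# (Ulrich Straumann), 15 of the 18 "not explicit" answers lean towards the
# sceptical/"no" camp, and the remaining 3 abstain.

official = {"yes": 20, "no": 4, "not_explicit": 18}

moved_yes_to_no = 1        # counted as "yes" in the report, argues "no"
not_explicit_to_no = 15    # sceptics among the "not explicit" answers
not_explicit_abstain = 3   # genuine abstentions

recount = {
    "yes": official["yes"] - moved_yes_to_no,                     # 19
    "no": official["no"] + moved_yes_to_no + not_explicit_to_no,  # 20
    "abstain": not_explicit_abstain,                              # 3
}

# Both tallies cover the same 42 respondents.
assert sum(official.values()) == sum(recount.values()) == 42
print(recount)   # {'yes': 19, 'no': 20, 'abstain': 3}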

Explizit "Nein!" zu einem Petaflop-Rechner sagen 2006:

Jörg Hutter, professor of physical chemistry, Uni ZH:
I don't think the investments into hardware of about 350 MSfr (estimated from money allocated for IBM and Cray to build a petaflop computer in the US) necessary for such a system can be justified today. The lack of a user community ready to use such a machine, together with the (most likely) imbalance of money spent for hardware instead of software and human resources will lead to a failure and will harm HPCN on the long term.
Victor Jongeneel, Swiss Institute of Bioinformatics:
The acquisition of computer hardware should always be driven by specific projects, and not the other way around. If a project emerges that is of clear national interest and requires petaflop-level compute resources, then such an acquisition makes sense. If not, it doesn’t. In my specialty area (life science informatics) there are to my knowledge no projects at this time that would require general-purpose petaflop compute power. However, in the foreseeable future, life science research will move in the direction of computational simulation of ever-larger and more complex systems, and within 5-10 years will require this level of computing power, if not more.
Henrik Nordborg, ABB Research Centre Switzerland:
A petaflop-system will probably have little immediate impact on more applied industrial research. The reason is that industrial research uses commercial software to a large extent, and this software is typically not available for the largest and fastest systems. The simulations typically also have the character of parameter studies, and it is therefore more important to be able to run many smaller jobs than one big one. Therefore, ABB would be more interested in flexible multi-purpose machines than a top-notch petaflop system.
Helena Van Swygenhoven, Paul Scherrer Institute:
Switzerland should have as goal to acquire a peta-flop system, however because the size of research budgets and the technological demand, acquisition should not be the immediate goal (2008-2011). The knowledge base at international level for effective use of petaflop platforms needs to be developed to avoid the use of peta-flop computers as a simple ensemble of tera-flop computers. Switzerland has high intellectual resources that should play an important role in the developments in the field of I/O, data mining, data visualization etc. necessary for the effective use of peta-flop computers. In other words, we should not jump blindly in the peta-flop race...
And then there are the 18 classified as "not explicit". Ron Appel, director of the Swiss Institute of Bioinformatics:
As often, there are two aspects to consider when deploying a large-scale nation-wide project. On one hand, developing a nation-wide High-Performance Computing and Networking infrastructure will allow many important projects to become realizable and thus will enable Swiss academic and industrial research and development to remain at the forefront of their respective fields. On the other hand, such an infrastructure will require large amounts of funds to be immobilized in hardware, software and human resources and thus the flexibility in setting up new innovative environments and projects that is absolutely necessary in today’s science might become difficult to maintain. This being stated, Switzerland should build the necessary infrastructure, funding scheme and competencies so that it can provide the required HPCN environment and resources when specific project require them.
Kim Baldridge, Uni ZH:
To decide this question, a careful and detailed analysis of the current situation of computational processing in the Swiss HPC centers, in particular at CSCS, should be performed. Which of the submitted jobs require an "expensive" supercomputer infrastructure, which could/should be executed on a "cheaper" cluster or grid infrastructure? How much necessary processing time cannot be granted to projects that are scientifically interesting? What could/should better run on individual University resources? How can the future needs be approximated by using current and previous such data? While there are several specific scientific domains that can exploit very high end computational infrastructure of the 'tera' and 'peta' scale, many areas of computing, petaflop-level systems are not efficient or employable, yet, access to general-purpose and commodity-style hardware, and grid oriented infrastructure is highly useful. Given strategic planning, it would be possible to bring a few of these latter domains into the higher end, but not without resources/people to restructure algorithms/applications.
Pawel Bednarek, formerly Uni Fribourg:
Grid computing projects should get stronger, since thousands of desktop computers are idle during the night and most of the day. I think that cluster solutions are more price efficient than big vector computers and this kind of system are more flexible. Of course mixed systems are also very useful.
Richard Bührer, president of the Fachhochschule Nordwestschweiz:
The Universities of Applied Sciences (UAS) in Switzerland heavily focus on application orientation (teaching, applied research and development). In order to provide high quality products the UAS base heavily on results from basic research being done at Universities and ETH. It is thus of great importance that basic research in Switzerland is done at top class level. For various research topics, the availability of high performance computing and networking at University level is thus fundamental. With respect to Universities and ETH, however, the availability of a petaflop system for UAS projects will be of much less importance.
Bastien Chopard, computer scientist, Uni GE:
The CSCS must play a central role in such a project. But all Universities, EPF, Technical school should be associated in a balance way. GRID technology must also be reinforced and supported by the Swiss government. Having a clear message from the Federal government is also instrumental to develop and coordinate existing projects at a local level and stimulate research institution to encourage local HPCN initiatives. I also think that financing a large computing infrastructure is important but new positions for young researchers and engineers must come together to exploit the potential, ensure novelty and continuity.
Doris Folini, then at EMPA, now at ETHZ, does not answer the make-or-break question at all. A pity. The economist Beat Hotz-Hart of Uni ZH:
Within the limits of the available funds, and in line with ongoing developments in Europe (e.g. pending decisions on setting up HPC centres), Switzerland should therefore make suitable resources for high-performance computing, together with the corresponding scientific advice, available to research. For reasons of efficiency, these resources should be allocated through a competitive mechanism to a small number of scientifically outstanding projects. Their results make an important contribution to the scientific output of Swiss research. High-performance computing in Switzerland must absolutely be coordinated among the relevant partners: the CSCS, the universities and the research institutions. The focus should be on the coordinated procurement and use of networked high-performance computers (supercomputers and high-performance clusters).
Mehdi Jazayeri, computer scientist at Uni Lugano, irritatingly makes a point of not commenting on question 1 at all:
My comments apply only to the points 2 through 6 in which I have some expertise or interest.
Markus Meuwly, chemist at Uni Basel, gives no answer to the central question but remarks:
As no institution – except maybe the ETH’s – have the HPCN facilities and knowledge right at their place, it is difficult for the majority of interested students to become familiar with HPCN.
Olivier Michielin of the bioinformatics institute answered with a single, rather cryptic presentation slide. Andrea Schlapbach of the insurance company Swiss Re:
The financial services industry's needs on HPC exist. These needs are primarily of operational and not R&D nature. While it is possible that increasing HPC needs lead to acquiring computing performance services from a 3rd party, it is rather unlikely that such an operational service is easily shared with academia as the service level needs of commercial enterprises are different, and potentially much higher outside of the pure performance metric. Furthermore, top-end HPC infrastructures often require very specific application architectures, leading to a footprint which can be assessed as being too high.
- The insurance industry will continue to have a rather intense R&D involvement as risk management - in contrast to investment banking - is still a very evolving discipline. For this reason, active collaboration with academia already exists and needs to continue, i.e. these skills must be available. Such skills include risk modelling, implementation and HPC.
Beat Schmid, formerly Uni SG:
We do not propose a specific solution, as Petaflops machine, or a grid of similar computing power. We leave such judgement to other schools, closer to technology.

Ulrich Straumann, physicist at Uni ZH, says No!, although he is, incomprehensibly, counted as a "Yes!"-sayer:
Do not buy a big system, which is supposed to serve everybody. The user acceptance is unpredictable, while the performance will be outdated soon. Instead invest in several small systems which are adapted to single projects or to a few projects with similar demands, and which can easily be scaled-up, when required.
Minh-Quang Tran, plasma physicist at EPFL:
The current emphasis on procurement of massively parallel systems based on peak performance measurements of simple mathematical algorithms fails to recognize that an important number of real applications that involve strong nonlinear and non-local effects achieve a very weak fraction of the peak performance quoted as a result of scalability problems to a large number of processors. The “Petaflop/s system” should be part of a much broader and long term vision about HPC in Switzerland. It can serve as a “flagship” and thus will hopefully attract the best human competences and stimulate the most challenging projects in the country and beyond. However, such a system alone will not be able to serve all HPCN needs, which are diverse and will almost certainly require several different types of computer architecture to be fulfilled. The recent evolution of computer architectures points to several crucial issues such as memory access and interconnect characteristics in order to assess the performance on real applications.
Rolf Wohlgemut, Siemens Schweiz, effectively says no, but is listed as "not explicit":
In the past, Siemens Schweiz AG has had no need for central supercomputing capacity, nor do we see any need in the near future. As a global group, we could rather imagine an internal grid solution, if the need arose, to pool our computing resources distributed around the world.

Dean Flanders, head of informatics at the Friedrich Miescher Institute of Novartis in Basel, is counted among the "Yes!"-sayers in the list, even though he argues the opposite:
I have yet to see a supercomputer center or grid succeed in the democratization of high performance computing (HPC) and we should learn from these failures. CSCS needs to take a lead role in HPC in Switzerland, and facilitate HPC and its use (but not control it). However, as we already know, if a large single computer system is built, it will only be used to answer a limited number of big questions, inaccessible to most researchers, completely saturated (no matter how many CPUs), and be outdated the minute it enters the computing center. This PC strategy should be different, it should be pragmatic, involve scientists in the process, focus on solving scientific problems (not developing new technologies), have no “sacred cows” (be open to commercial vendors of software), and not re-invent wheel. That is why I recommend a nodal approach, bringing compute resources near to researchers (and their data) and providing a broad portfolio of compute resources. The goal should not be a petaflop computer, but a petaflop of computing. I feel at the moment there are very few technical issues to overcome for this to happen, it is more of an implementation (and political) issue to establish a robust HPC infrastructure in Switzerland.
Rolf Würgler, rector of Uni Bern:
We see the role of CSCS as high performance computing center, not in a leadership of national or international HPCN activities. Leadership must be taken by all HPC centers, not just by one.
Mihaela Zavolan, bioinformatician at Uni Basel:
Whether Switzerland should acquire a peta-flop system.. in the long run it will happen. Whether it should be now or later, that depends on the user demands in the foreseeable future. As for the role of CSCS, that depends on too many things I do not know.
