Energy Aware Auto-Tuning for Scientific Applications
EASE (Energy Aware Auto-Tuning for Scientific Applications) is an energy-aware framework that applies auto-tuning mechanisms to HPC applications. The framework addresses performance prediction and energy modeling of HPC applications with the support of the Insieme compiler and the EnergyAnalyzer tool.
This is a bilateral collaborative project funded by the FWF (Austria) and the DST (India) to the tune of Rs. 3.5 crores (approx.). The project was sanctioned for 3 years starting May 2014. It comprises 5 work packages: WP1 – Management (both Indian and Austrian sides); WP2 – Compiler support (Austrian side); WP3 – Runtime system (Austrian side); WP4 – Energy Modeling (Indian side); WP5 – Online Performance Analysis (Indian side).
Principal Investigators: Prof. Dr. Thomas Fahringer (Austrian side) and Dr. Shajulin Benedict, Ph.D., Post-Doc (Germany) (Indian side)
Project Members (Indian Side): Mrs. Rejitha R.S. and Mrs. Preethi M.
Supporting Project Members (Indian Side): Mrs. Suja A. Alex
Project Members (Austrian Side): Dr. Radu Prodan (Assoc. Prof.) and Mr. Philipp Gschwandtner
Project weblink here.
RainCloud: Scientific Computing in the Cloud, Standortagentur Tirol
Despite the existence of many vendors that, similar to Grid computing, aggregate a potentially unbounded number of compute resources, Cloud computing remains a domain dominated by business applications (e.g. Web hosting, database servers) and whose suitability for scientific computing remains largely unexplored. In this project we plan to research basic methods that investigate the potential of Cloud infrastructures for high-performance scientific computing and apply them for operational daily use in a domain with high computational and QoS demands: meteorological weather forecast.
- Investigate the performance of the computational resources offered by commercial Cloud providers and devise models that assess whether their performance is sufficient for scientific computing;
- Research resource management and scheduling methods for scientific workflows on Cloud platforms by extending an existing Grid application development and computing environment;
- Research economically-viable SLAs that encapsulate a balance between the QoS offered by the resource providers and the cost of resource use;
- Quantify the benefits of using leased Cloud resources for scientific applications with respect to performance, reliability, and price, compared to traditionally owned supercomputers, clusters, and Grids;
- Validate the research methods for two real scientific applications from the meteorological and astrophysics domains;
- Use the researched methods and infrastructure in operational daily use at the Avalanche Service Tyrol and the Tyrolean Hydrographical Service for obtaining precipitation forecasts in mountainous terrains with a spatial resolution of 0.5 km and with extensive information about the uncertainty of the forecasts.
Benefits of this project:
For University Innsbruck
The results from this project can represent essential input for the University of Innsbruck in shifting its business model from operating an expensive self-owned data/supercomputing centre towards renting on-demand resources from specialised companies in the right amount, only when and for as long as they are needed. Through this ability, the University of Innsbruck can avoid capital expenditure on hardware (operation, maintenance, and over-provisioning), software, and services, instead paying a provider only for what it uses. Consumption is billed on a utility (e.g. resources consumed, like electricity) or subscription (e.g. time-based, like a newspaper) basis with little or no upfront cost. By combining a professionally run data centre with an economy of scale, Clouds promise to become a cheap alternative to supercomputers and specialised clusters, a much more reliable platform than Grids, and a much more scalable platform than the largest of commodity clusters or resource pools. This new paradigm can produce significant budget savings which the University of Innsbruck can redirect towards the real science (e.g. hiring additional research staff) rather than investing in non-profitable infrastructure hardware.
For Tirol Region
Two public services, which provide a user platform for the meteorological application of this project, will already test the product during its development phase:
- The daily avalanche bulletin of the Avalanche Service Tyrol (Lawinenwarndienst, LWD) has a huge potential impact on day-to-day operations of ski areas, tourism centres, and everyday life throughout Tirol in winter, e.g. through road blocks, necessary avalanche blasting, etc. The LWD product is based on automatic and human observations, as well as numerical weather forecasts. Providing additional support with probability information helps to make the decisions and forecasts more precise. The LWD needs a twice-daily updated, user-friendly visualization of the precipitation fields as well as the computed and simulated certainties/uncertainties. All this information has to be available before a specific time each day, as the LWD issues its avalanche bulletin at 7:30 AM;
- The Tyrolean Hydrographical Service (Hydrographischer Dienst) has, among many duties, the task of warning of flooding and landslide risks. Especially for extreme precipitation events with low recurrence periods, having additional fine-scale probability information about the expected amounts instead of just one (or a few) precipitation sums is very helpful. It can support emergency services in prevention measures and planning in case of such an event.
Edutain@grid is an exciting and ground-breaking project which aims to open the benefits of GRID technology to the wider public.
GRID technology enables high performance computing that until now has typically only been available to academia and large industry. Edutain@grid is developing middleware that will give other application developers access to this powerful technology without the need for Grid infrastructure management. Edutain@grid also recognises that application interactivity and responsiveness expectations will need to be maintained. Its success will be demonstrated through the development of two pilot applications for massively multi-player interactive gaming and e-learning, which the project defines as examples of Real-Time Online Interactive Applications (ROIAs).
The Distributed and Parallel Systems group of the University of Innsbruck is coordinating this project.
The CoreGRID Network of Excellence (NoE) aims at strengthening and advancing scientific and technological excellence in the area of Grid and Peer-to-Peer technologies. To achieve this objective, the Network brings together a critical mass of well-established researchers from forty-one institutions who have constructed an ambitious joint programme of activities. This joint programme of activity is structured around six complementary research areas that have been selected on the basis of their strategic importance, their research challenges and the recognised European expertise to develop next generation Grid middleware, namely:
- knowledge & data management;
- programming models;
- architectural issues: scalability, dependability, adaptability;
- Grid information, resource and workflow monitoring services;
- resource management and scheduling;
- Grid systems, tools and environments.
The Network is operated as a European Research Laboratory (known as the CoreGRID Research Laboratory) having six institutes mapped to the areas that have been identified in the joint programme of activity. The Network is thus committed to setting up this Laboratory and to making it internationally recognised and sustainable. It is funded by a European grant (8.2 M euros) assigned to the CoreGRID NoE, for a duration of four years (starting September 1st, 2004), to cover the integration costs, while the network partners cover the expense required to perform the research associated with the joint programme of activities.
The DPS group of the University of Innsbruck led by Prof. Thomas Fahringer joined CoreGRID on Sept 1st, 2006.
DPS is jointly conducting research with numerous other CoreGRID partners in the area of resource management and scheduling as well as Grid information, resource and workflow monitoring.
ASG – Adaptive Services Grid
EU IST Integrated Project (IST-2002-004617). The goal of Adaptive Services Grid (ASG) is to develop a proof-of-concept prototype of an open development platform for adaptive services discovery, creation, composition, and enactment. Based on semantic specifications of requested services by service customers, ASG discovers appropriate services, composes complex processes and, if required, generates software to create new application services on demand.
AURORA – Advanced Models, Applications and Software Systems for High Performance Computing. Long-term research program funded by the Austrian Science Fund. Project leader: software tools for the Grid (performance measurement/instrumentation/prediction/analysis, scheduling parameter studies, networking and testing).
The goals of the CEGC (Central European Grid Consortium) are to:
- coordinate Grid infrastructures of partner countries within a Central-European Grid Consortium
- jointly develop a Grid infrastructure
- jointly participate in EU 6th Framework Grid projects as well as in other international Grid projects.
The Internet communication infrastructure (the TCP/IP protocol stack) is designed for broad use; as such, it does not take the specific characteristics of Grid applications into account. This one-size-fits-all approach works for a number of application domains, however, it is far from being optimal – general network mechanisms, while useful for the Grid, cannot be as efficient as customised solutions. While the Grid is slowly emerging, its network infrastructure is still in its infancy. Thus, based on a number of properties that make Grids unique from the network perspective, the project EC-GIN (Europe-China Grid InterNetworking) will develop tailored network technology in dedicated support of Grid applications. These technical solutions will be supplemented with a secure and incentive-based Grid Services network traffic management system, which will balance the conflicting performance demand and the economic use of resources in the network and within the Grid.
EC-GIN – factsheet
The main target of EuroNGI – Design and Engineering of the Next Generation Internet is to create and maintain the most prominent European centre of excellence in Next Generation Internet design and engineering, establishing European leadership in this domain.
EU IST STREP Project (IST-2002-511385). The goal of the project is to provide a Grid environment which assists users in composing workflow Grid applications, and which schedules and executes the applications on the Grid. All phases of application processing will be strongly based on knowledge and semantic technologies. Applications used in K-Wf Grid belong to different fields, including scientific simulations (flood forecasting simulation) and industrial applications (ERP and traffic management).
Tailor-made Congestion Control (for the Grid)
In the Tailor-made Congestion Control project, we develop a “Network Adaptation Layer” which chooses and tunes congestion control mechanisms based on requirements and traffic specifications from applications. This project is funded by the Austrian Science Fund (FWF). We ensure applicability of the results to Grid Computing via the add-on project “Tailor-made Congestion Control for the Grid”, which is funded by the Tyrolean Science Fund.
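The project's actual Network Adaptation Layer is not detailed here, but the basic idea of choosing a congestion control mechanism from application requirements can be sketched. In the toy policy below, the function names and mapping rules are our own illustrative assumptions; only the per-socket `TCP_CONGESTION` socket option, used in `apply_to_socket`, is a real (Linux-only) kernel interface.

```python
import socket

def pick_congestion_control(bulk_transfer, latency_sensitive):
    """Illustrative policy: map coarse traffic requirements to standard
    Linux congestion-control module names. The project's real selection
    logic is driven by much richer requirement/traffic specifications."""
    if latency_sensitive:
        return "vegas"   # delay-based: keeps router queues short
    if bulk_transfer:
        return "cubic"   # fast window growth for high bandwidth-delay paths
    return "reno"        # conservative default

def apply_to_socket(sock, algo):
    """Linux-only: select the congestion control for one socket."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())
```

A real adaptation layer would additionally re-tune parameters at runtime as network conditions change, rather than choosing once at connection setup.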
APART: Real Tools
EU IST APART – Automatic Performance Analysis: Real Tools – specification and modeling of cluster and Grid architectures/applications.
In the MEP-VPN (Middlebox End-to-end Performance Enhancements for VPNs) project, mechanisms for dynamically and transparently adapting VPN data streams to changes in the network conditions will be developed. This project is carried out in collaboration with phion Information Technologies with funding from TransIT.
GEMSCLAIM – Greener Mobile Systems by Cross Layer Integrated energy management
Personal computing currently faces a rapid trend from desktop machines towards mobile services, accessed via tablets, smartphones and similar terminal devices. With respect to computing power, today's handheld devices are similar to Cray-2 supercomputers from the 1980s. Due to higher computational load (e.g. via multimedia apps) and the variety of radio interfaces (such as WiFi, 3G, and LTE), modern terminals are getting increasingly energy hungry. For instance, a single UMTS upload or a video recording process on today's smartphones may consume as much as 1.5 Watts, i.e. roughly 50% of the maximal device power. In the near future, higher data rates and traffic, advanced media codecs, and graphics applications will ask for even more energy than the battery can deliver. At the same time, the power density limit might lead to a significant share of “Dark Silicon” at 22nm CMOS and below. Obviously, disruptive energy optimizations are required that go well beyond traditional technologies like DVFS (dynamic voltage and frequency scaling) and power-down of temporarily unused components.
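The limits of plain DVFS can be seen from the textbook dynamic-power model for CMOS, P ≈ C·V²·f: for a fixed workload of N cycles the energy is E = P·(N/f) = C·V²·N, independent of frequency, so savings come from the voltage reduction that a lower frequency permits. A minimal numeric sketch (capacitance and workload values are purely illustrative):

```python
def dynamic_power(c_eff, v, f):
    """Dynamic CMOS power: P = C_eff * V^2 * f (leakage ignored)."""
    return c_eff * v * v * f

def energy_for_cycles(c_eff, v, f, cycles):
    """Energy to execute a fixed workload of `cycles` clock cycles."""
    t = cycles / f                      # execution time in seconds
    return dynamic_power(c_eff, v, f) * t

C = 1e-9   # effective switched capacitance in farads (illustrative)
N = 1e9    # workload: one billion cycles

e_high = energy_for_cycles(C, 1.0, 1e9, N)    # 1.0 V at 1 GHz
e_low  = energy_for_cycles(C, 0.8, 0.5e9, N)  # 0.8 V at 500 MHz

# Lowering f alone leaves E = C * V^2 * N unchanged; only the voltage
# reduction enabled by the lower frequency cuts the energy:
print(f"high: {e_high:.2f} J, low: {e_low:.2f} J")  # high: 1.00 J, low: 0.64 J
```

This is exactly why GEMSCLAIM looks beyond DVFS: once voltage scaling is exhausted, further savings must come from the compiler, OS and hardware layers jointly.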
The GEMSCLAIM project aims at introducing novel approaches for reducing this “greed for energy”, thereby improving the user experience and enabling new opportunities for mobile computing.
The focus is on three novel approaches:(1) cross layer energy optimization, ranging from the compiler over the operating system down to the target HW platform, (2) efficient programming support for energy-optimized heterogeneous Multicore platforms based on energy-aware service level agreements (SLAs) and energy-sensitive tunable parameters, and (3) introducing energy awareness into Virtual Platforms for the purpose of dynamically customizing the HW architecture for energy optimization and online energy monitoring and accounting.
GEMSCLAIM will provide new methodologies and tools in these domains and will quantify the potential energy savings via benchmarks and a HW platform prototype.
The contribution of the DPS group is the development of an energy-aware compiler that optimizes the energy consumption for mobile systems based on a language extension of OpenMP (OpenMP+).
More details are provided at the website of the GEMSCLAIM project.
One of the major problems in the practical application of industrial robots is the high effort which results from revising the handling task of the robot or from replacing the robot by a different type. The main reason is the complex relationship between the movement to be performed by the robot and the required movement of each of the robot's axes, the so-called inverse kinematics of the robot. It is a non-linear mathematical problem with no unique solution. Basic research at the Department of Geometry and CAD at the University of Innsbruck yielded a general algorithm for solving the inverse kinematics.
It is critical, in particular for mobile robots, that the algorithm is executed in real time within the robot controller as quickly as possible. The DPS group is investigating fundamental strategies of parallelizing the algorithm and developing parallel implementations on multiprocessor systems and multicore architectures. Possibilities for parallelism in the calculations will be evaluated and exploited. In addition, we are exploring implementations of the method not only in software, but also in hardware, by employing the potential of field programmable gate arrays (FPGAs) for use in high performance computing.
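The group's actual algorithm is not reproduced here. As a minimal sketch of the parallelization opportunity, the toy example below uses a two-link planar arm, whose closed-form inverse kinematics already exhibits the typical non-uniqueness (elbow-up vs. elbow-down branches), and solves the way-points of a path concurrently. The link lengths and the thread-based executor are illustrative assumptions; a process pool or, as in the project, an FPGA pipeline would provide true parallelism for CPU-bound work.

```python
import math
from concurrent.futures import ThreadPoolExecutor

L1, L2 = 1.0, 1.0  # link lengths (illustrative)

def ik_2r(point):
    """Closed-form inverse kinematics of a planar two-link (2R) arm.
    Returns both solution branches (elbow-up and elbow-down), or []
    if the target is out of reach."""
    x, y = point
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        return []  # target outside the arm's workspace
    s2 = math.sqrt(1 - c2 * c2)
    solutions = []
    for sign in (+1, -1):  # the two IK branches
        t2 = math.atan2(sign * s2, c2)
        t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2),
                                           L1 + L2 * math.cos(t2))
        solutions.append((t1, t2))
    return solutions

def solve_path(points):
    """Solve the IK for all way-points of a path concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(ik_2r, points))
```

The general algorithm for arbitrary kinematic chains has no such closed form, which is precisely why real-time execution in the controller calls for parallel software and FPGA implementations.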
The project started in October 2010.
UMIT, Institute of Automation and Control Engineering (Prof. Michael Hofbaur):
- modular robot test bed, specifications of criteria for path planning, numerical studies, experimental evaluation
LFUI, Department of Geometry and CAD of the Faculty of Civil Engineering (Prof. Manfred Husty):
- algorithmic formulations of inverse kinematics and optimal path planning
LFUI, Institute of Computer Science, Department of Distributed and Parallel Systems (Prof. Thomas Fahringer):
- parallel implementation of the kinematic engine
Article in “Der Standard”:
Static & Dynamic Multi-objective Optimizations for Many-core Chips
For the duration of a two-year project with Intel, the Distributed and Parallel Systems Group gained access to a new and experimental multicore chip, the Single-chip Cloud Computer (SCC).
The aim of the project is to study the effect of modifications in the program code being executed, as well as modifications of the hardware, on runtime, power and energy consumption, and other non-functional parameters of interest. In addition, research will be conducted on the automatic, simultaneous optimization of multiple, different objectives (multi-objective optimization). Such optimization strategies facilitate the development of software for multicore architectures since they require less knowledge about parallel programming while still offering the vast performance that is potentially available in the hardware.
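Multi-objective optimization as described above revolves around Pareto dominance: one program variant dominates another if it is no worse in every objective and strictly better in at least one; the optimizer's result is the set of non-dominated variants. A minimal sketch (the variant data is invented for illustration):

```python
def dominates(a, b):
    """True if variant a is at least as good as b in every objective
    (lower is better, e.g. (runtime, energy)) and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(variants):
    """Keep only variants not dominated by any other variant."""
    return [v for v in variants
            if not any(dominates(o, v) for o in variants if o is not v)]

# (runtime in s, energy in J) for hypothetical program variants
variants = [(10.0, 5.0), (8.0, 7.0), (12.0, 6.0), (9.0, 4.0)]
print(pareto_front(variants))  # [(8.0, 7.0), (9.0, 4.0)]
```

The front captures the genuine trade-offs (fastest vs. most energy-efficient variant), from which a runtime system or user can pick according to the current goal.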
The Intel SCC offers new possibilities: it consists of 48 cores that are interconnected via a high-performance on-chip network. Primarily intended as a research vehicle for parallel computing, it contains further distinguishing elements such as separate communication buffers that speed up communication between the cores. In addition to the high number of cores and its special architecture, it differs from other current multicore processors by providing new energy management functions. These enable the developer to change the operating frequency and voltage of groups of cores and therefore offer fine-grained power and energy consumption optimization capabilities with regard to the current workload. Furthermore, previously unavailable hardware optimization techniques are made possible by the chip's highly configurable architecture.
With these properties and the trend of increasing numbers of cores in future processors, the Intel SCC represents an interesting chip for research in the field of non-functional parameter studies and multi-objective optimization.
SHIWA – SHaring Interoperable Workflows for large-scale scientific simulations on Available DCIs
The SHIWA project’s main goal is to leverage existing workflow based solutions and enable cross-workflow and inter-workflow exploitation of DCIs by applying both coarse- and fine-grained strategies.
The coarse-grained (CG) approach makes it possible to reuse and combine existing workflow applications written in various workflow languages. The CG approach treats existing workflows as black-box systems that can be incorporated into other workflow applications as workflow nodes. The fine-grained approach addresses language interoperability by defining an intermediate representation to be used for translation of workflows across various systems (ASKALON, Pegasus, P-Grade, MOTEUR, Triana). SHIWA develops, deploys and operates the SHIWA Simulation Platform to offer users production-level services supporting workflow interoperability following both approaches. As part of the SHIWA Simulation Platform, the SHIWA Repository facilitates publishing and sharing workflows, and the SHIWA Portal enables their actual enactment. Use cases targeting various scientific domains will serve to drive and evaluate this platform from a user's perspective. The SHIWA project started on the 1st of July 2010 and lasts two years. Official website here.
Doctoral School (FWF DK-plus): Computational Interdisciplinary Modelling
The Doctoral School “Computational Interdisciplinary Modelling” initiated by the University of Innsbruck brings together leading scientists from various fields: applied and basic research areas from astro-, plasma, and molecular physics and engineering on one hand and the methodologically oriented fields mathematics and computer science on the other hand. Apart from interdisciplinarity, an innovative teaching concept combined with strong international networking are key to this programme. The project is funded by the Austrian Science Fund FWF and the University of Innsbruck for a period of up to 12 years.
A ManyCore Compiler for Industrial Engineering Stability Analysis
This project aims at solving the computationally-intensive nonlinear structural analysis problem of large and complex light weight structures by using modern parallel multi-core processors enhanced with hardware accelerators such as e.g. GPUs. The project therefore addresses the following two research gaps.
On the one hand, the structural response of lightweight structures is characterised by the occurrence of bifurcation points which determine the load carry capacity. A rigorous treatment of the load carrying behaviour of complex structures consists of three steps: (1) exact localisation of bifurcations in course of the equilibrium path continuation; (2) classification of the singularity and determining the direction(s) of the bifurcated branch(es); and (3) choosing a predictor for initialising the path following procedure in the post-buckling regime. While steps 1 and 2 are well understood from a mathematical point of view, step 3 relies essentially on engineering decisions. However, a reliable and robust assessment of highly loaded, imperfection-sensitive structures is still a challenging problem for the engineer applying commercial finite element programs. The nonlinear analysis of complex lightweight structures like the Front Skirt of the ARIANE 5 launcher with three million degrees of freedom requests a huge computing time. The detection of critical domains within the structural response claims a large part of the computing time using currently available commercial Finite Element Method (FEM) software like ABAQUS or ANSYS.
On the other hand, processor development encountered the three walls for serial performance: the power wall, the memory wall, and the instruction-level parallelism wall. This led to CPUs aggregating multiple processing cores as well as specialized hardware accelerators with very high peak performance gained through their massively parallel architecture. Unfortunately, each of these new devices offers a different programming interface and execution environment, which hinders applications from being ported to different platforms and makes the joint use of different tightly-coupled multi-core devices difficult or impossible. More info here.
Parallel Computing with Java for Manycore Computers
In a few years from now, most mainstream processors will be equipped with more than 100 cores (chip-level multiprocessors), and the number of cores will double approximately every 1.5 years. Such processors are commonly known as manycore processors. Every desktop computer, server, supercomputer, cell phone, game console (with the PlayStation 3 as a prominent representative), notebook, and PDA (personal digital assistant) will be based on manycore processors, which will be a fundamental turning point for computer architecture and software development. All applications used in industry, commerce, science, entertainment, education and the private sector will have to be parallel and should behave well in ranges between 100 and 10,000 cores. Many applications and algorithms will have to be restructured (in many cases at best semi-automatically) if those applications are to progress at the same speed as the hardware (increase of cores per processor) is progressing.
The goal of this project is to provide a novel development environment for Java programs that will run on manycore parallel architectures which will cover a wide variety of computers including desktop computers, servers, supercomputers, cell phones, game consoles, notebooks, and PDAs. As part of this project we plan to develop a programming paradigm that allows a programmer to control parallelism, load balancing and locality at a high level of abstraction, and a measurement and instrumentation framework to analyse Java programs.
More information can be found at project home web site.
The duration of this project is 3 years. It will start in Dec. 2008 and is funded by the Tiroler Zukunftsstiftung.
The goal of the AUSTRIAN GRID is to start and support grid computing in Austria in general, and to provide coordination and collaboration between research areas interested in grid computing.
In concrete, the AUSTRIAN GRID initiative aims at
- development and usage of grid computing infrastructures for diverse application areas, and
- installation and operation of a national grid testbed in Austria.
As decided at the 1st technical meeting for the Austrian Grid (4 December 2003), you can find the following information about:
- The Austrian Grid Certification Authority (contact ca[at]austriangrid.at)
- Software for the Austrian Grid (contact Alex Villazon @ Innsbruck)
- The Austrian Grid Demo Application (contact Paul Heinzlreiter @ Linz)
EGEE (Enabling Grids for E-sciencE)
The Enabling Grids for E-sciencE project brings together scientists and engineers from more than 240 institutions in 45 countries world-wide to provide a seamless Grid infrastructure for e-Science that is available to scientists 24 hours-a-day. Conceived from the start as a four-year project, the second two-year phase started on 1 April 2006, and is funded by the European Commission.
Expanding from originally two scientific fields, high energy physics and life sciences, EGEE now integrates applications from many other scientific fields, ranging from geology to computational chemistry.
Generally, the EGEE Grid infrastructure is ideal for any scientific research especially where the time and resources needed for running the applications are considered impractical when using traditional IT infrastructures.
The EGEE Grid consists of 41,000 CPUs available to users 24 hours a day, 7 days a week, in addition to about 5 PB of disk storage (5 million gigabytes) plus tape MSS, and sustains 100,000 concurrent jobs. Having such resources available changes the way scientific research takes place. The end use depends on the users' needs: large storage capacity, the bandwidth that the infrastructure provides, or the sheer computing power available.
The Distributed and Parallel Systems group of the University of Innsbruck has been a member of the EGEE-I (2004-2006) and EGEE-II (2006-2008) projects, and is currently participating in EGEE-III (2008-2010). We share our expertise in application porting, grid site administration, and Grid training.
High Performance Production Simulations in the Automotive Industry with Many-core Parallel Computing Systems
In this project we will restructure and redesign the simulation of the dip paint process for cars, which is an important part of the overall car production chain.
The main problem is that conventional dip paint simulation is too slow to be used effectively in the production process. For this purpose we are rebuilding the dip paint simulation, parallelizing and optimizing it for modern many-core parallel computers using state-of-the-art computer science methods. Through this approach we expect a dramatic reduction of the simulation time from currently up to one week to less than one day.
This dramatic improvement of simulation time will on the one hand enable effective usage of dip paint simulation for the car production industry. On the other hand, it will substantially reduce the time-to-market, reduce resource requirements and energy consumption, improve the reliability of cars as well as reduce the impact on the environment with respect to CO2.
The DPS coordinates this project funded by the FFG and is the main technology provider.
We develop innovative compiler technologies for homogeneous shared memory parallel architectures to tune the runtime behavior of two complex real-world simulation codes provided by Magna Powertrain, Engineering Center Steyr (ECS).
Energy Aware Computing
EN-ACT aims to drastically reduce energy consumption and CO2 emissions of applications and systems based on Information and Communication Technology (ICT).
The main goal of the project is to define specifications for the design of energy-aware software (software that explicitly accounts for its energy consumption, i.e., “green software”). Software will be characterized in terms of energy performance at the application level, based on the implementation of platform-independent metrics.
The DPS group of the University of Innsbruck contributes to this project by working on compiler technology for accelerator based systems. We work on auto-tuning for energy and runtime for OpenCL based codes on mobile devices and server compute systems.
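The En-Act tooling itself is not shown here, but the core auto-tuning loop can be sketched: enumerate the tunable parameters (here 2-D OpenCL work-group sizes) and keep the configuration that best balances runtime and energy. Everything below is an illustrative assumption; in particular `measure` is a synthetic stand-in for launching a kernel on the device and reading its energy counters.

```python
import itertools

def measure(wg_x, wg_y):
    """Hypothetical stand-in for executing an OpenCL kernel with the
    given work-group size and reading runtime/energy counters; a real
    tuner runs on the device instead of this analytic model."""
    runtime = 100.0 / (wg_x * wg_y) + 0.05 * (wg_x + wg_y)  # seconds
    energy = runtime * (1.0 + 0.01 * wg_x * wg_y)           # joules
    return runtime, energy

def autotune(sizes=(4, 8, 16, 32)):
    """Exhaustively search 2-D work-group sizes, keeping the
    configuration that minimizes the energy-delay product (EDP)."""
    best, best_edp = None, float("inf")
    for wg in itertools.product(sizes, repeat=2):
        runtime, energy = measure(*wg)
        edp = energy * runtime
        if edp < best_edp:
            best, best_edp = wg, edp
    return best
```

Exhaustive search is only feasible for small parameter spaces; for the larger spaces typical of real kernels, search heuristics or model-based pruning take its place.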
Details can be found at the En-Act home-page.