Postdoctoral positions
New technologies for data management in fundamental physics
- Contact person: Suren A. Chilingaryan csa@dside.dyndns.org
- Detailed announcement
- Apply Online
Complex distributed data acquisition systems are required to operate modern scientific experiments. Recently, a wide range of new technologies has been introduced, significantly increasing computing throughput (Parallel & GPU computing, Grids and Clouds) and enriching the interactive and communication capabilities of web applications (HTML5, WebGL, WebCL). With the rapid growth of data rates, the increasing complexity arising from the interaction between measurement and simulation subsystems, and the requirement to store scientific results in a traceable and referenceable way, there is a rising demand for the rapid adoption of these new technologies and for new concepts in the areas of data management and scientific computing.
Our objective is to design a flexible data management platform and to build customizable data portals on top of it for upcoming dark matter experiments. New methods for data organization utilizing the latest computing, database, visualization, and web technologies have to be developed. The platform should be tightly integrated with data-center-scale infrastructures such as the Large Scale Data Facility (LSDF) at KIT. To push cooperation within the collaborations to a new level, an intuitive web portal providing simulation, visualization, and analysis capabilities has to be developed. The combination of cutting-edge web technologies with advanced computing techniques and large-scale computer infrastructures will form the foundation for the next generation of analysis platforms.
Responsibilities:
- Analyze new computing technologies relevant for data management and actively contribute to scientific collaborations by proposing and developing new methods of data organization.
- Design and implement a data management system for EURECA and maintain the running systems of other experiments such as KATRIN, KITcube, and TOSKA.
- Present results at conferences, publish research and technical articles, write grant proposals, and supervise students.
Qualification and Required Skills:
- PhD in computer science, mathematics or physics.
- Expertise in high-performance computing, scientific data processing, parallel systems, and software optimization (knowledge of GPU computing is a plus)
- Knowledge of cutting-edge database and web technologies
PhD topics
Advanced large-scale data management infrastructure for science
- Contact person: Suren A. Chilingaryan csa@dside.dyndns.org
- Detailed announcement
- Apply Online
Huge quantities of information are produced by scientific experiments worldwide. Data formats, underlying storage engines, and sampling rates vary significantly. The components of large international experiments are often located in multiple, sometimes hardly accessible places and are developed by only loosely connected research groups. To handle the distributed and heterogeneous nature of modern experiments, a conceptually new design of the data management system is required. The goal of the project is to propose a new model for the storage of large and growing archives of multi-dimensional time series. The model has to be implemented as a flexible and customizable data management framework applicable to a wide range of experimental conditions. It should provide the building blocks required to organize the data flow in scientific experiments and include components simplifying the design of easy-to-use web interfaces for data analysis and visualization. The concept should be applied to the mobile meteorological experiment KITcube, which integrates about 30 complementary devices for atmospheric studies; application to further experiments is foreseen. The task requires knowledge of broad areas of computer technology, from high-speed parallel computing to database optimization and web-based visualization. Emerging technologies should be studied to bring a faster and more convenient interface to the users. In particular, WebCL and WebGL are of great importance for web-based data analysis and visualization. Sophisticated techniques of data preprocessing and storage should be developed to quickly generate previews summarizing large quantities of data. The whole range of client platforms, from high-end visualization stations to hand-held multi-touch smartphones, has to be supported.
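As an illustration of the preview generation mentioned above, the following minimal C sketch reduces a long time series into fixed-size min/max bins, one common way to summarize large archives for fast plotting. The function and type names are illustrative only and not part of any existing framework.

```c
/* Minimal sketch: collapse a long time series into min/max preview bins
 * so that a preview plot stays small but never misses spikes. */
#include <stdio.h>
#include <stdlib.h>
#include <float.h>

typedef struct { double min; double max; } preview_bin;

static void build_preview(const double *samples, size_t n,
                          preview_bin *bins, size_t n_bins)
{
    for (size_t b = 0; b < n_bins; b++) {
        size_t start = b * n / n_bins;
        size_t end = (b + 1) * n / n_bins;
        preview_bin pb = { DBL_MAX, -DBL_MAX };
        for (size_t i = start; i < end; i++) {
            if (samples[i] < pb.min) pb.min = samples[i];
            if (samples[i] > pb.max) pb.max = samples[i];
        }
        bins[b] = pb;
    }
}

int main(void)
{
    enum { N = 100000, NBINS = 10 };
    double *data = malloc(N * sizeof *data);
    preview_bin bins[NBINS];
    if (!data) return 1;

    for (size_t i = 0; i < N; i++)            /* synthetic sawtooth signal */
        data[i] = (double)(i % 1000);

    build_preview(data, N, bins, NBINS);
    for (size_t b = 0; b < NBINS; b++)
        printf("bin %zu: min=%.1f max=%.1f\n", b, bins[b].min, bins[b].max);

    free(data);
    return 0;
}
```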
Qualification and Required Skills:
- Master in Computer Science, Electrical Engineering, Physics, or Mathematics
- Web technologies, database design, parallel programming, 3D rendering, mathematical statistics
Advanced control system for ultra-fast X-ray imaging with GPU clusters
- Contact person: Suren A. Chilingaryan csa@dside.dyndns.org
- Detailed announcement
- Apply Online
The new Image beamline at KIT's synchrotron ANKA is dedicated to the investigation of structures in materials and organisms with high spatial and temporal resolution. The imaging station consists of an X-ray optical system, a high-precision mechanical system, and a set of high-speed cameras producing hundreds of thousands of frames per second and a data rate of up to 4 GB/s. A novel image processing framework has been developed to simplify the implementation of algorithms for modern parallel architectures with OpenCL, optimized for clustered environments. To operate this beamline, an intelligent fast control and data management system is required. The goal of this work is to build a beamline control system managing the data flow from the camera to the storage. Based on the image processing framework, image-driven control loops have to be developed. Additional algorithms should be implemented, optimized, and added to the framework. An intuitive control interface should allow easy customization of the control loops and the data processing chain. Real-time visualization and manual interventions during the experiment are mandatory. For permanent data storage, the Large Scale Data Facility (LSDF) at KIT will be used. It is necessary to connect the control system to the LSDF and to design methods allowing visual navigation through the stored data and fast access to selected datasets. The control system will finally serve as a prototype for the new generation of high-speed and high-throughput beamlines in the synchrotron community. The work is embedded in national and international collaborations for high data-rate processing.
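To make the idea of an image-driven control loop concrete, here is a minimal C sketch under the assumption of a hypothetical camera and motor API (grab_frame and move_stage are stubs invented for illustration); in the actual system the image metric would be computed on the GPU by the OpenCL-based processing framework.

```c
/* Minimal sketch of an image-driven control loop: grab a frame, compute a
 * simple metric, apply a proportional correction. All device functions are
 * illustrative stubs. */
#include <stdio.h>
#include <stdlib.h>

#define WIDTH  2048
#define HEIGHT 2048

/* Stub: fill the frame with synthetic data; a real implementation would
 * wrap the camera driver. Returns 0 on success. */
static int grab_frame(unsigned short *frame, int iteration)
{
    for (size_t i = 0; i < (size_t)WIDTH * HEIGHT; i++)
        frame[i] = (unsigned short)(1000 + 10 * iteration);
    return 0;
}

/* Stub: a real implementation would command the sample stage or optics. */
static void move_stage(double delta)
{
    printf("  moving stage by %.3f\n", delta);
}

/* Simple image metric: mean intensity of the frame. */
static double mean_intensity(const unsigned short *frame, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += frame[i];
    return sum / (double)n;
}

int main(void)
{
    const double target = 1100.0;   /* desired metric value */
    const double gain   = 0.01;     /* proportional gain */
    unsigned short *frame = malloc((size_t)WIDTH * HEIGHT * sizeof *frame);
    if (!frame) return 1;

    for (int it = 0; it < 5; it++) {
        if (grab_frame(frame, it) != 0)
            break;
        double m = mean_intensity(frame, (size_t)WIDTH * HEIGHT);
        printf("iteration %d: metric=%.1f\n", it, m);
        move_stage(gain * (target - m));    /* proportional correction */
    }

    free(frame);
    return 0;
}
```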
Qualification and Required Skills:
- Master in Computer Science, Mathematics or Physics
- Strong C and Python knowledge, parallel computing architectures, numerical algorithms in image processing, 3D visualization techniques, data management, control theory, good understanding of natural sciences
Master topics
Web-based monitoring of large-scale data in scientific experiments
- Contact person: Suren A. Chilingaryan csa@dside.dyndns.org
- Detailed announcement
- Required Skills: JavaScript & PHP; knowledge of OpenGL/WebGL is a plus
- Experience Gained: WebCL/WebGL, Data management in high energy physics experiments, Visualization of scientific data
Huge quantities of information are produced by scientific experiments worldwide. Data formats, underlying storage engines, and sampling rates vary significantly. At the Institute for Data Processing and Electronics, we develop a web-based visualization framework which handles multiple types of slow-control data and helps engineers and scientists to inspect device operation and examine the integrity and validity of the measurements. The framework is used in a wide range of applications, from fusion experiments and astroparticle physics to meteorological systems. State-of-the-art web browsers support a rich set of features to construct sophisticated interfaces using web technologies only, and with the introduction of WebGL, 3D visualization has become possible as well. The student is expected to design and implement a new module for real-time monitoring. The main challenge is to visualize multi-dimensional data sets and arrays of sensors mapped to 3D models. One example is an array of temperature sensors mapped to the model of the KATRIN tank to visualize the temperature distribution.
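The core of such a monitoring module is the mapping of scalar sensor readings onto colors of the 3D model's vertices. The following minimal sketch, written in C purely for illustration (the actual module would use JavaScript and WebGL, typically doing this per vertex in a shader or color buffer), shows one possible value-to-color mapping; the sensor values are invented.

```c
/* Minimal sketch: map scalar sensor readings to a blue (low) to red (high)
 * color gradient, the kind of mapping applied to model vertices. */
#include <stdio.h>

typedef struct { float r, g, b; } color;

static color value_to_color(float v, float v_min, float v_max)
{
    float x = (v - v_min) / (v_max - v_min);
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;
    color c = { x, 0.0f, 1.0f - x };
    return c;
}

int main(void)
{
    /* Hypothetical readings from an array of temperature sensors. */
    const float sensors[] = { 27.1f, 29.8f, 31.5f, 34.2f };
    const int n = (int)(sizeof sensors / sizeof sensors[0]);

    for (int i = 0; i < n; i++) {
        color c = value_to_color(sensors[i], 25.0f, 35.0f);
        printf("sensor %d: %.1f -> rgb(%.2f, %.2f, %.2f)\n",
               i, sensors[i], c.r, c.g, c.b);
    }
    return 0;
}
```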
Optimizing imaging algorithms for the latest parallel CPU and GPU architectures
- Contact person: Suren A. Chilingaryan csa@dside.dyndns.org
- Detailed announcement
- Required Skills: Good knowledge of the C programming language; knowledge of OpenCL and/or CUDA is a plus
- Experience Gained: Parallel programming, GPU programming, Image processing
Parallel computing has become increasingly important in the last several years. Standard servers nowadays include up to 64 computing cores. Modern GPUs are able to execute thousands of floating point operations in parallel and have become a valuable tool in multiple scientific fields that require high computational throughput. It is becoming more and more important to parallelize existing image processing algorithms and to tune the implementations for recent hardware architectures. It is crucial to take the details of the hardware architecture into consideration: the computational units may employ different types of cache hierarchies to accelerate memory access, and new processors often introduce new instruction sets accelerating specific operations. The student will select an algorithm from one of the ongoing projects and perform optimization and tuning for the hardware used. Available options include differential phase contrast imaging done in cooperation with the ANKA synchrotron, digital image correlation and tracking done in collaboration with the University of Pennsylvania, and X-ray CT done in collaboration with the Helmholtz-Zentrum Dresden-Rossendorf.
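As a minimal illustration of this kind of optimization work (not one of the project's actual algorithms), the sketch below parallelizes a simple 3x3 box blur over CPU cores with OpenMP and keeps the memory access pattern cache-friendly by traversing the image in row-major order; porting the same kernel to OpenCL or CUDA would be the natural next step on GPUs.

```c
/* Minimal sketch: 3x3 box blur parallelized over CPU cores with OpenMP.
 * Compile e.g. with: gcc -O2 -fopenmp blur.c */
#include <stdio.h>
#include <stdlib.h>

static void box_blur_3x3(const float *src, float *dst, int width, int height)
{
    /* Rows are independent, so the outer loop is parallelized; row-major
     * traversal keeps memory access cache-friendly. */
    #pragma omp parallel for schedule(static)
    for (int y = 1; y < height - 1; y++) {        /* skip 1-pixel border */
        for (int x = 1; x < width - 1; x++) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = sum / 9.0f;
        }
    }
}

int main(void)
{
    const int width = 1024, height = 1024;
    float *src = malloc((size_t)width * height * sizeof *src);
    float *dst = calloc((size_t)width * height, sizeof *dst);
    if (!src || !dst) return 1;

    for (int i = 0; i < width * height; i++)      /* synthetic test image */
        src[i] = (float)(i % 256);

    box_blur_3x3(src, dst, width, height);
    printf("center pixel after blur: %f\n",
           dst[(height / 2) * width + width / 2]);

    free(src);
    free(dst);
    return 0;
}
```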