Uni-Kassel
14 March 2017
Lecture: Introduction to Parallel Computing
Course Description
Parallel computing has become an almost ubiquitous way to perform computer simulations involving large amounts of data or intensive calculations. The basic purpose of using several processors is to speed up computations of large problems by distributing the work. But large problems typically involve vast quantities of data as well; by distributing the data across several processors, problems of previously unsolvable size can now be tackled in reasonable time.
This course will introduce both the basic aspects of parallel programming and the algorithmic considerations involved in designing scalable parallel numerical methods. The programming will use MPI (Message Passing Interface), today the most common library of parallel communication commands for any type of parallel machine architecture. We will discuss in detail several application examples that show how parallel computing can be used to solve large problems in practice.
Learning Goals
By the end of this course, you should:
understand and remember the key ideas, concepts, definitions, and theorems of the subject. Examples include understanding the purpose of parallel computing and why it can work, being aware of its potential limitations, and knowing the major types of hardware available. This information will be discussed in the lecture as well as in the textbook and other assigned reading.
have experience writing code for a Linux cluster using MPI in C, C++, and/or Fortran that correctly solves problems in scientific computing. The sample problems are taken from mathematics; first of all, your code has to compile without errors or warnings, run without errors, and give mathematically correct results. In addition, it needs to run on a Linux cluster without error, and you need to be able to explain its scalability, i.e., why it does or does not execute faster on several processors than in serial. Problems will be stated in different ways and drawn from various sources to expose you to as many issues as possible. This is the main purpose of the homework, and most learning will take place here.
have gained proficiency in delivering your code to others for compilation and use. This includes providing a README file with instructions on how to compile and run the code, as well as a sample output file that allows the user to check the results. We will work together in class to discuss best practices for transferring code for homework problems of increasing complexity. You will submit your homework code by e-mail to the instructor; for credit it needs to compile and run in parallel, and it is complemented by a report that shows and explains your results.
have some experience learning information from a research paper and discussing it with peers. Group work requiring communication for effective collaboration with peers and supervisors is a vital professional skill, and the development of professional skills is a declared learning goal of this course. I will supply some research papers carefully selected for their readability and relevance to the course; learning from research papers is a crucial skill to develop.
Required textbook: Peter S. Pacheco, Parallel Programming with MPI, Morgan Kaufmann, 1997. Associated webpage: http://nexus.cs.usfca.edu/mpi
Recommended book: William Gropp, Ewing Lusk, and Anthony Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, second edition, MIT Press, 1999. Associated webpage: http://www-unix.mcs.anl.gov/mpi/usingmpi/examples/main.htm
Recommended book: William Gropp, Ewing Lusk, and Rajeev Thakur, Using MPI-2: Advanced Features of the Message-Passing Interface, MIT Press, 1999.
Recommended book: Brian W. Kernighan and Dennis M. Ritchie, The C Programming Language, second edition, Prentice-Hall, 1988. Associated webpage: http://cm.bell-labs.com/cm/cs/cbook/
Remark
This course will be followed by the course Parallel Computing for Partial Differential Equations in Sommersemester 2012. The course will take place in the format 4 V + 2 Ü (four hours of lectures plus two hours of exercises per week) during the second half of the semester.
Prerequisites
Numerik I, fluency in programming in C, C++, or Fortran, and proficiency in using the Unix/Linux operating system, or consent of instructor
FB 10 Mathematik und Naturwissenschaften
Uni Kassel
WiSe 2011/12
Lehrveranstaltungspool FB 10
Informatik
Prof. Dr. Matthias Gobbert