Computational Methods of Analysis and Design
Computational Laboratory, Lesson 1: Introduction to MPI
http://ecourses.chemeng.ntua.gr/courses/computational_methods
Architectures of Parallel Computers: shared memory and distributed memory.
Programming Parallel Computers with MPI: the distributed-memory model.
A simple computing cluster.
MPI: Message Passing Interface

What is it? A library of subroutines that you can call from a FORTRAN or C code to carry out the exchange of information (communication) between processes.

Why MPI? It is standardized, efficient, and easy to use.
MPI: Start and Finish

    integer MyID, Nprocs, ierror

    call MPI_INIT(ierror)
    call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)
    ...
    call MPI_FINALIZE(ierror)

MPI_COMM_WORLD is the communicator that contains all processes; in a four-process run their ranks are 0, 1, 2 and 3.
MPI: Compilation and execution

    mpif90 myprogram.f90
    mpirun -np S a.out

where S is the number of processes on which the code will run (so MPI_COMM_SIZE returns Nprocs = S).
Example 1: My first MPI code

    program hello
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, ierror

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)
      print *, 'Hello world from ', MyID
      call MPI_FINALIZE(ierror)
    end
Collective communication: Reduce

    MPI_REDUCE(sendbuf, recvbuf, count, type, op, root, comm, ierror)
    MPI_ALLREDUCE(sendbuf, recvbuf, count, type, op, comm, ierror)

comm: MPI_COMM_WORLD
type: MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_COMPLEX, MPI_LOGICAL, MPI_CHARACTER
op:   MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD
Example 2: MPI_ALLREDUCE

    program allreduce
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, ierror
      real a, sum

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)
      a = MyID + 1
      call MPI_ALLREDUCE(a, sum, 1, MPI_REAL, MPI_SUM, MPI_COMM_WORLD, ierror)
      print *, sum
      call MPI_FINALIZE(ierror)
    end
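The slide above lists MPI_REDUCE as well, which differs from MPI_ALLREDUCE in that only the root process receives the result. A minimal sketch of that variant (the program name and the choice of root = 0 are illustrative, not from the original examples):

```fortran
    program reduce
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, root, ierror
      real a, sum

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      a = MyID + 1
      root = 0
      sum = 0.0
      ! note the extra root argument compared to MPI_ALLREDUCE
      call MPI_REDUCE(a, sum, 1, MPI_REAL, MPI_SUM, root, MPI_COMM_WORLD, ierror)
      if (MyID == root) print *, 'sum = ', sum   ! only the root holds the result

      call MPI_FINALIZE(ierror)
    end
```

On the other ranks, sum keeps its initial value; use MPI_ALLREDUCE when every process needs the result.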
Collective communication: Broadcast, Scatter and Gather

    MPI_BCAST(buffer, count, type, root, comm, ierror)
    MPI_SCATTER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)
    MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)

Broadcast copies the root's buffer (A) to every process; Scatter splits the root's send buffer (A B C D) into pieces, delivering one piece to each process; Gather is the inverse, collecting one piece (A, B, C, D) from each process into the root's receive buffer.
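MPI_BCAST has no worked example in these slides; a minimal sketch in the style of the other examples (program name and the broadcast value are illustrative) would be:

```fortran
    program bcast
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, root, ierror
      real a

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      root = 0
      a = 0.0
      if (MyID == root) a = 3.14          ! only the root has the value initially
      call MPI_BCAST(a, 1, MPI_REAL, root, MPI_COMM_WORLD, ierror)
      print *, 'Task ID= ', MyID, ' a = ', a   ! every rank now prints the same value

      call MPI_FINALIZE(ierror)
    end
```

Compile and run as before: mpif90 followed by mpirun -np S a.out.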
Example 3: MPI_SCATTER

    program scatter
      implicit NONE
      include 'mpif.h'
      integer size
      parameter(size=4)
      integer MyID, Nprocs, sendcount, recvcount, source, ierror
      real sendbuf(size,size), recvbuf(size)
      data sendbuf /  1.0,  2.0,  3.0,  4.0, &
                      5.0,  6.0,  7.0,  8.0, &
                      9.0, 10.0, 11.0, 12.0, &
                     13.0, 14.0, 15.0, 16.0 /

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      if (Nprocs == size) then
        source = 1
        sendcount = size
        recvcount = size
        call MPI_SCATTER(sendbuf, sendcount, MPI_REAL, &
                         recvbuf, recvcount, MPI_REAL, &
                         source, MPI_COMM_WORLD, ierror)
        print *, 'Task ID= ', MyID, ' Results: ', recvbuf
      else
        print *, 'Error. Must specify ', size, ' processors.'
      endif
      call MPI_FINALIZE(ierror)
    end
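MPI_GATHER, the inverse operation, is not demonstrated in the slides; a minimal sketch (names and the fixed buffer bound are illustrative assumptions) in which each rank contributes its own rank id and the root assembles them:

```fortran
    program gather
      implicit NONE
      include 'mpif.h'
      integer maxprocs
      parameter(maxprocs=64)
      integer MyID, Nprocs, root, ierror
      real sendbuf, recvbuf(maxprocs)

      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      root = 0
      sendbuf = real(MyID)
      ! recvbuf is only significant on the root; each rank contributes one value
      call MPI_GATHER(sendbuf, 1, MPI_REAL, recvbuf, 1, MPI_REAL, &
                      root, MPI_COMM_WORLD, ierror)
      if (MyID == root) print *, 'Gathered: ', recvbuf(1:Nprocs)

      call MPI_FINALIZE(ierror)
    end
```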
Point-to-Point Communication

Point-to-point communication takes place between exactly two processes. There are four communication modes: standard, buffered, synchronous, and ready. Every mode comes in a blocking and a non-blocking form:

                        Blocking form   Non-blocking form
    Synchronous send    MPI_SSEND       MPI_ISSEND
    Buffered send       MPI_BSEND       MPI_IBSEND
    Standard send       MPI_SEND        MPI_ISEND
    Ready send          MPI_RSEND       MPI_IRSEND
    Receive             MPI_RECV        MPI_IRECV
Point-to-Point Communication: Arguments

    Blocking send:         buffer, count, type, dest, tag, comm, ierror
    Non-blocking send:     buffer, count, type, dest, tag, comm, request, ierror
    Blocking receive:      buffer, count, type, source, tag, comm, status, ierror
    Non-blocking receive:  buffer, count, type, source, tag, comm, request, ierror
Example 4: Blocking message passing

    program ping
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, dest, source, tag, ierror
      integer status(MPI_STATUS_SIZE)
      character inmsg, outmsg

      outmsg = 'x'
      tag = 1
      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      if (MyID == 0) then
        dest = 1
        source = 1
        call MPI_SSEND(outmsg, 1, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierror)
        call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag, MPI_COMM_WORLD, status, ierror)
      endif
      if (MyID == 1) then
        dest = 0
        source = 0
        call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag, MPI_COMM_WORLD, status, ierror)
        call MPI_SSEND(outmsg, 1, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierror)
      endif
      print *, MyID, inmsg
      call MPI_FINALIZE(ierror)
    end
Example 5: Non-blocking message passing

    program iping
      implicit NONE
      include 'mpif.h'
      integer MyID, Nprocs, dest, source, tag, request, ierror
      integer status(MPI_STATUS_SIZE)
      character inmsg, outmsg

      outmsg = 'x'
      tag = 1
      call MPI_INIT(ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MyID, ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, Nprocs, ierror)

      ! each process sends to its right neighbour and receives from its left (a ring)
      dest = MyID + 1
      source = MyID - 1
      if (dest > Nprocs-1) dest = 0
      if (source < 0) source = Nprocs-1

      call MPI_ISSEND(outmsg, 1, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, request, ierror)
      call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag, MPI_COMM_WORLD, status, ierror)
      call MPI_WAIT(request, status, ierror)
      print *, MyID, inmsg
      call MPI_FINALIZE(ierror)
    end