Using MPI_Bcast for MPI communication

itboxs 2020. 11. 11. 08:24


I'm trying to use MPI_Bcast to broadcast a message from the root node to all of the other nodes. However, whenever I run this program it always hangs at the beginning. Does anybody know what's wrong with it?

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
        int rank;
        int buf;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if(rank == 0) {
                buf = 777;
                MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
        }
        else {
                MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("rank %d receiving received %d\n", rank, buf);
        }

        MPI_Finalize();
        return 0;
}

This is a common source of confusion for people new to MPI. You don't use MPI_Recv() to receive data sent by a broadcast; you use MPI_Bcast().

For example, what you want is this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
        int rank;
        int buf = -1;   /* sentinel so the pre-broadcast print is well-defined on non-root ranks */
        const int root=0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if(rank == root) {
           buf = 777;
        }

        printf("[%d]: Before Bcast, buf is %d\n", rank, buf);

        /* everyone calls bcast, data is taken from root and ends up in everyone's buf */
        MPI_Bcast(&buf, 1, MPI_INT, root, MPI_COMM_WORLD);

        printf("[%d]: After Bcast, buf is %d\n", rank, buf);

        MPI_Finalize();
        return 0;
}

For MPI collective communications, everyone has to participate; everyone has to call the Bcast, or the Allreduce, or what have you. (That's why the Bcast routine has a parameter that specifies the "root", or who is doing the sending; if only the sender called bcast, you wouldn't need this.) Everyone calls the broadcast, including the receivers; the receivers don't just post a receive.
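
The same rule holds for the other collectives mentioned above. As a minimal sketch (an illustration added here, not code from the original answer), every rank below makes the identical MPI_Allreduce call, and every rank receives the combined result:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank + 1;   /* each rank contributes its own value */
        int total = 0;

        /* every rank makes the same call; nobody posts a plain receive */
        MPI_Allreduce(&mine, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* every rank now holds the sum 1 + 2 + ... + size */
        printf("[%d]: total is %d\n", rank, total);

        MPI_Finalize();
        return 0;
}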

The reason for this is that the collective operations can involve everyone in the communication, so that you state what you want to happen (everyone gets one process's data) rather than how it happens (e.g., the root processor loops over all other ranks and does a send). That leaves scope for optimizing the communication patterns (e.g., a tree-based hierarchical communication that takes log(P) steps rather than P steps for P processes), as the sketch below illustrates.
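
To make the contrast concrete, here is a rough sketch (again my illustration, not code from the original answer; the helper name naive_bcast is hypothetical) of the "how it happens" version, where the root loops over all other ranks and sends:

#include <mpi.h>
#include <stdio.h>

/* naive_bcast: a hand-rolled broadcast built purely from point-to-point
 * calls; the root sends to each other rank in turn, so the number of
 * sequential steps grows linearly with the number of processes P. */
void naive_bcast(void* buf, int count, MPI_Datatype type,
                 int root, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        if (rank == root) {
                for (int i = 0; i < size; i++)
                        if (i != root)
                                MPI_Send(buf, count, type, i, 0, comm);
        } else {
                MPI_Recv(buf, count, type, root, 0, comm,
                         MPI_STATUS_IGNORE);
        }
}

int main(int argc, char** argv) {
        int rank;
        int buf = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) buf = 777;
        naive_bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("[%d]: buf is %d\n", rank, buf);
        MPI_Finalize();
        return 0;
}

An MPI implementation is free to replace this linear pattern with, for example, a binomial tree in which every rank that already holds the data forwards it onward, so a P-process broadcast can finish in roughly log2(P) steps instead of P-1.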


MPI_Bcast is a collective operation and it must be called by all processes in order to complete.

And there is no need to call MPI_Recv when using MPI_Bcast.

Reference: https://stackoverflow.com/questions/7864075/using-mpi-bcast-for-mpi-communication
