c – What is the displs argument in MPI_Scatterv?

The displs argument of the MPI_Scatterv() function is described as an "integer array (of length group size). Entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i".
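For reference, this is the prototype of the call being discussed (the MPI-3 C binding):

int MPI_Scatterv(const void *sendbuf, const int sendcounts[], const int displs[],
                 MPI_Datatype sendtype, void *recvbuf, int recvcount,
                 MPI_Datatype recvtype, int root, MPI_Comm comm);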

So let's say I have the send counts

int sendcounts[7] = {3, 3, 3, 3, 4, 4, 4};

The way I reason about it, the displs array should always start with the value 0, since the first entry's displacement relative to sendbuf is 0. So in the example above, displs should look like this:

int displs[7] = {0, 3, 6, 9, 12, 16, 20};

Is that right? I know this is a trivial question, but for some reason the web just isn't helping at all. There are no good examples out there, hence my question.
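(A quick way to check the arithmetic, added here for illustration rather than taken from the original question: build displs as the running sum of the preceding sendcounts and print it.)

#include <iostream>

int main() {
    int sendcounts[7] = {3, 3, 3, 3, 4, 4, 4};
    int displs[7];

    displs[0] = 0;                                  // first chunk starts at the beginning of sendbuf
    for (int i = 1; i < 7; i++)
        displs[i] = displs[i-1] + sendcounts[i-1];  // running sum of the previous counts

    for (int i = 0; i < 7; i++)
        std::cout << displs[i] << " ";              // prints: 0 3 6 9 12 16 20
    std::cout << std::endl;
    return 0;
}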

Best answer: Yes, the displacements give the root information about which items to send to a particular task, namely the offset of the item to start at. So in most simple cases (e.g., you would use MPI_Scatter but the counts don't divide evenly), this can be calculated immediately from the counts:

displs[0] = 0;              // offsets into the global array
for (size_t i=1; i<comsize; i++)
    displs[i] = displs[i-1] + counts[i-1];

But it doesn't need to be that way; the only restriction is that the data you send can't overlap. You could just as well count from the back:

displs[0] = globalsize - counts[0];                 
for (size_t i=1; i<comsize; i++)
    displs[i] = displs[i-1] - counts[i];

Or any arbitrary order would work as well.
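As a sketch of that last point (my own illustration, not from the original answer; the counts here are the same {2, 2, 2, 3} used in the full example below): you can lay the per-rank chunks out contiguously but hand them out in any permutation of the ranks, as long as no two chunks overlap.

// Hypothetical sketch: assign contiguous chunks to ranks in an arbitrary visiting order.
const int nranks = 4;
int counts[nranks] = {2, 2, 2, 3};
int displs[nranks];
int order[nranks]  = {2, 0, 3, 1};        // visit rank 2's chunk first, then 0's, 3's, 1's

int offset = 0;
for (int k = 0; k < nranks; k++) {
    displs[order[k]] = offset;            // rank order[k]'s chunk starts here...
    offset += counts[order[k]];           // ...and occupies counts[order[k]] items
}
// -> displs = {2, 7, 0, 4}: rank 2 gets items 0-1, rank 0 gets items 2-3,
//    rank 3 gets items 4-6, rank 1 gets items 7-8; nothing overlaps.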

In general, though, the calculation can be more complicated, because the types of the send buffer and the receive buffer have to be consistent but not necessarily the same; you typically run into this if you're sending slices of multidimensional arrays, for instance.
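To sketch what that can look like (again my own illustration, with N, matrix, localcols, counts, displs, myrank and root assumed to be set up elsewhere): to scatter whole columns of a row-major N x N matrix of doubles, the root sends a strided "column" type resized to an extent of one element, so that displs simply counts columns, while the receivers use plain contiguous MPI_DOUBLEs. The send and receive types are different, but their type signatures match.

// Hypothetical sketch: scatter whole columns of a row-major N x N matrix of doubles.
MPI_Datatype column, column_resized;
MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);                        // N elements with stride N: one column
MPI_Type_create_resized(column, 0, sizeof(double), &column_resized);  // extent = 1 double, so displs counts columns
MPI_Type_commit(&column_resized);

// counts[i] = how many columns rank i gets; displs[i] = index of its first column
MPI_Scatterv(matrix, counts, displs, column_resized,
             localcols, counts[myrank]*N, MPI_DOUBLE,
             root, MPI_COMM_WORLD);

MPI_Type_free(&column_resized);
MPI_Type_free(&column);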

As an example of the simple cases, below is code doing the forward and reverse versions:

#include <iostream>
#include <vector>
#include "mpi.h"

int main(int argc, char **argv) {
    const int root = 0;             // the processor with the initial global data

    size_t globalsize = 0;          // size of the global array; only root sets the real value
    std::vector<char> global;       // only root has this

    const size_t localsize = 2;     // most ranks will have 2 items; one will have localsize+1
    char local[localsize+2];        // everyone has this
    int  mynum;                     // how many items 

    MPI_Init(&argc, &argv); 

    int comrank, comsize;
    MPI_Comm_rank(MPI_COMM_WORLD, &comrank);
    MPI_Comm_size(MPI_COMM_WORLD, &comsize);

    // initialize global vector
    if (comrank == root) {
        globalsize = comsize*localsize + 1;
        for (size_t i=0; i<globalsize; i++) 
            global.push_back('a'+i);
    }

    // initialize local
    for (size_t i=0; i<localsize+1; i++) 
        local[i] = '-';
    local[localsize+1] = '\0';

    int counts[comsize];        // how many pieces of data everyone has
    for (size_t i=0; i<comsize; i++)
        counts[i] = localsize;
    counts[comsize-1]++;

    mynum = counts[comrank];
    int displs[comsize];

    if (comrank == 0) 
        std::cout << "In forward order" << std::endl;

    displs[0] = 0;              // offsets into the global array
    for (size_t i=1; i<comsize; i++)
        displs[i] = displs[i-1] + counts[i-1];

    MPI_Scatterv(global.data(), counts, displs, MPI_CHAR, // For root: proc i gets counts[i] MPI_CHARs from displs[i]
                 local, mynum, MPI_CHAR,                  // I'm receiving mynum MPI_CHARs into local
                 root, MPI_COMM_WORLD);                   // Rank root in MPI_COMM_WORLD is the root

    local[mynum] = '\0';
    std::cout << comrank << " " << local << std::endl;

    std::cout.flush();
    if (comrank == 0) 
        std::cout << "In reverse order" << std::endl;

    displs[0] = globalsize - counts[0];                 
    for (size_t i=1; i<comsize; i++)
        displs[i] = displs[i-1] - counts[i];

    MPI_Scatterv(global.data(), counts, displs, MPI_CHAR, // For root: proc i gets counts[i] MPI_CHARs from displs[i]
                 local, mynum, MPI_CHAR,                  // I'm receiving mynum MPI_CHARs into local
                 root, MPI_COMM_WORLD);                   // Rank root in MPI_COMM_WORLD is the root

    local[mynum] = '\0';
    std::cout << comrank << " " << local << std::endl;

    MPI_Finalize();
}

Running it gives:

In forward order
0 ab
1 cd
2 ef
3 ghi

In reverse order
0 hi
1 fg
2 de
3 abc
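(To reproduce this: with a working MPI installation, compiling with the mpicxx wrapper and launching with mpirun -np 4 should be enough. Since every rank writes to stdout independently, the lines may appear in a different order than shown above.)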