Getting Started with CUDA (3): A First Look at Threads and Blocks

Updated: 2022-06-25 14:06:18

When configuring a GPU, people generally look at its architecture, the number of stream processors, and the amount of video memory.
Taking NVIDIA GPUs as an example, architectures are usually named after scientists, such as Fermi, Kepler, and the currently mainstream Maxwell and Pascal. Different architectures differ mainly in low-level details such as texture units, stream processors, and memory bandwidth. For the topic of threads and blocks, what matters most is the streaming multiprocessor (SM) and the many scalar processors (SP), also called CUDA cores, that each SM contains. When the scheduler assigns a thread block to an SM, the SM's cores work in concert to process all of its threads in parallel.
In parallel computation, three concepts come up constantly: the grid, the thread block, and the thread. A grid contains one or more blocks, and each block contains multiple threads.
In practice, both the block and the grid can be described by three-dimensional vectors: the elements of a block are threads, and the elements of a grid are blocks. Of course, the number of threads per block is not unlimited: on the early G80, a block could hold at most 512 threads; the Fermi architecture raised the per-block limit to 1024 threads; and Kepler raised the number of threads resident on one SM to 2048 (the per-block limit stayed at 1024).
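These limits need not be memorized; they can be queried at runtime. A minimal sketch (assuming a single CUDA-capable device with index 0) using cudaGetDeviceProperties:

#include <stdio.h>
#include "cuda_runtime.h"

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0

    printf("max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("max block dims: %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("max grid dims:  %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    printf("SMs: %d, warp size: %d\n", prop.multiProcessorCount, prop.warpSize);
    return 0;
}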

[Figure: layout of a grid of blocks, each block a 3D arrangement of threads]
A grid or block like the one in the figure can be described with dim3, a three-component vector type:

dim3 gridSize(3, 2, 1);   // a 3 × 2 × 1 grid of blocks
dim3 blockSize(2, 2, 2);  // each block is 2 × 2 × 2 threads
kernel<<<gridSize, blockSize>>>();

If only two dimensions are needed, simply omit the third component when defining the dim3; if one dimension suffices, a plain int can be used directly for the number of threads and blocks, as in the sketch below.
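For example, a minimal sketch of a two-dimensional and a one-dimensional launch configuration (kernel here is just a placeholder name, as above):

dim3 gridSize2D(3, 2);     // third component defaults to 1
dim3 blockSize2D(16, 16);  // 256 threads per block, arranged 16 × 16
kernel<<<gridSize2D, blockSize2D>>>();

kernel<<<4, 256>>>();      // one-dimensional: 4 blocks of 256 threads each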
Threads within a kernel execute in parallel; whenever a thread must identify itself, or locate a value handled by another thread, the thread-index variables are needed:
The position of each block within the grid is available through the built-in variable blockIdx, the dimensions of a block through blockDim, and the position of a thread within its block through threadIdx.
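Combining the three, a thread usually flattens these variables into a global offset that tells it which element to work on. A minimal sketch for the two-dimensional case (the kernel and parameter names are illustrative only):

__global__ void kernel2D(float *data, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // global column index
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // global row index
    if (x < width && y < height)                    // skip threads past the edges
        data[y * width + x] *= 2.0f;                // each thread owns one element
}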

Now let's return to the earlier program that adds two arrays, this time using thread and block indices, with the arrays enlarged to 1000 elements:


#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdlib.h>
#include <stdio.h>

cudaError_t addWithCuda(int *c, const int *a, const int *b, unsigned int size);

__global__ void addKernel(int *c, const int *a, const int *b, unsigned int size)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;  // global thread index
    if (i < size)   // guard: the launch creates 1024 threads for 1000 elements
        c[i] = a[i] + b[i];
}

int main()
{
    const int arraySize = 1000;
    int a[arraySize];
    int b[arraySize];
    int c[arraySize] = { 0 };

    for (int i = 0; i < arraySize; i++)
    {
        a[i] = rand() % 100;
        b[i] = rand() % 100;
    }
    // Add vectors in parallel.
    cudaError_t cudaStatus = addWithCuda(c, a, b, arraySize);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "addWithCuda failed!");
        return 1;
    }
    for (int i = 0; i < arraySize; i++)
    {
        printf("c[%d]=%d\n", i, c[i]);
    }

    getchar();
    // cudaDeviceReset must be called before exiting in order for profiling and
    // tracing tools such as Nsight and Visual Profiler to show complete traces.
    cudaStatus = cudaDeviceReset();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaDeviceReset failed!");
        return 1;
    }

    return 0;
}

// Helper function for using CUDA to add vectors in parallel.
cudaError_t addWithCuda(int *c, const int *a, const int *b, unsigned int size)
{
    int *dev_a = 0;
    int *dev_b = 0;
    int *dev_c = 0;
    cudaError_t cudaStatus;


    // Choose which GPU to run on, change this on a multi-GPU system.
    cudaStatus = cudaSetDevice(0);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice failed!  Do you have a CUDA-capable GPU installed?");
        goto Error;
    }

    // Allocate GPU buffers for three vectors (two input, one output).
    cudaStatus = cudaMalloc((void**)&dev_c, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    cudaStatus = cudaMalloc((void**)&dev_a, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    cudaStatus = cudaMalloc((void**)&dev_b, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    // Copy input vectors from host memory to GPU buffers.
    cudaStatus = cudaMemcpy(dev_a, a, size * sizeof(int), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

    cudaStatus = cudaMemcpy(dev_b, b, size * sizeof(int), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

    // Launch the kernel on the GPU: 4 blocks of 256 threads each, i.e. 1024
    // threads in total; the kernel's bounds check idles the 24 extra threads.
    addKernel<<<4, 256>>>(dev_c, dev_a, dev_b, size);

    // Check for any errors launching the kernel
    cudaStatus = cudaGetLastError();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "addKernel launch failed: %s\n", cudaGetErrorString(cudaStatus));
        goto Error;
    }

    // cudaDeviceSynchronize waits for the kernel to finish, and returns
    // any errors encountered during the launch.
    cudaStatus = cudaDeviceSynchronize();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaDeviceSynchronize returned error code %d after launching addKernel!\n", cudaStatus);
        goto Error;
    }

    // Copy output vector from GPU buffer to host memory.
    cudaStatus = cudaMemcpy(c, dev_c, size * sizeof(int), cudaMemcpyDeviceToHost);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

Error:
    cudaFree(dev_c);
    cudaFree(dev_a);
    cudaFree(dev_b);

    return cudaStatus;
}

Here each block contains 256 threads and there are 4 blocks, giving 4 × 256 = 1024 threads in total, slightly more than the 1000 array elements; this is exactly why the kernel checks its global index against the array size before writing.
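In practice the block count is not hard-coded but derived from the problem size with a ceiling division, so that any array length is covered. A minimal sketch of that pattern, reusing the names from the listing above:

const int threadsPerBlock = 256;
// round up so that blocksPerGrid * threadsPerBlock >= size
const int blocksPerGrid = (size + threadsPerBlock - 1) / threadsPerBlock;
addKernel<<<blocksPerGrid, threadsPerBlock>>>(dev_c, dev_a, dev_b, size);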
When choosing the number of threads per block, the concept of the warp comes in: the threads of a block are themselves packed into warps, and each warp consists of 32 threads. To keep resources from being wasted, that is, to make full use of every thread lane in each warp, the thread count is usually set to a multiple of 32.
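To make the cost of an odd block size concrete, here is a small host-side arithmetic sketch (warps are allocated whole, so a block always occupies ceil(threads / 32) warps):

#include <stdio.h>

int main()
{
    int sizes[] = { 250, 256 };
    for (int k = 0; k < 2; k++) {
        int threads = sizes[k];
        int warps = (threads + 31) / 32;       // ceiling division
        int idleLanes = warps * 32 - threads;  // lanes left without work
        printf("%d threads -> %d warps, %d idle lanes\n", threads, warps, idleLanes);
    }
    return 0;
}

A 250-thread block still occupies 8 full warps, leaving 6 lanes idle every time those warps are scheduled, while 256 threads fill all 8 warps exactly.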