Research on Stream Processor Programming Models and Cluster Architecture

ABSTRACT (Chinese)
Today, as the representative programmable general-purpose stream processor, the GPU has improved in performance so rapidly that it has outpaced the Moore's-Law trajectory followed by CPU development. Moreover, the GPU exploits its programmability and extensibility to support complex computation and processing, a capability that is now widely recognized by industry. Architecturally, mainstream GPUs adopt a unified stream architecture and implement fine-grained inter-thread communication, which greatly broadens their range of general-purpose applications. Research on GPU-based general-purpose computing models has therefore become a focus of current research.
The emergence of the NVIDIA CUDA programming model has brought breakthrough progress to general-purpose computing on GPUs (GPGPU), and the traditional GPGPU programming models Cg and Brook+ are gradually leaving the stage. By exploiting the GPU's large-scale thread-level parallelism, the GPU can serve as a general-purpose computing platform; by building a high-performance cluster on top of it, even greater scientific computing capability can be provided.
The main work of this thesis is as follows:
1) An in-depth study of CUDA, the stream programming model for NVIDIA GPUs.
2) An analysis of a series of CUDA techniques built on the Tesla hardware architecture, including the CUDA hardware mapping mechanism, the SIMT execution model, the multi-level memory hierarchy, and the asynchronous execution mode.
3) Starting from the idea of separating control from computation, a stream processor cluster architecture that differs from the traditional x86 cluster is proposed, and its design patterns are explored under two parallel modes: MKSD (Multi-Kernel Single Data, multiple kernels over a single data stream) and MKMD (Multi-Kernel Multi-Data, multiple kernels over multiple data streams).
4) Using a hybrid MPI+CUDA programming model and a general stream-processing program loading model, control and computation in the stream processor cluster are separated and the multi-stream processing strategy is optimized, giving the stream processor cluster a several-fold performance improvement over the traditional x86 cluster.
Key Words: Stream Processor; GPU Programming Model; CUDA Technology; Stream Processor Cluster
ABSTRACT
Today, the GPU, as the representative programmable general-purpose stream processor, has improved in performance so rapidly that it has outpaced the Moore's-Law trajectory followed by CPU development. Moreover, the GPU exploits its programmability and scalability to support complex computation and processing, a capability that has been widely recognized by industry. Architecturally, mainstream GPUs are built on a unified stream architecture and implement fine-grained inter-thread communication, which greatly expands their scope of general-purpose application. Research on GPU-based general-purpose computing models has therefore become one of today's research focuses.
The emergence of the NVIDIA CUDA programming model has brought breakthrough progress to GPU-based general-purpose computing, and the traditional GPGPU programming models Cg and Brook+ are gradually withdrawing from the stage. By taking full advantage of the GPU's large-scale thread-level parallelism, the GPU can serve as a general-purpose computing platform; by building a high-performance cluster on top of it, even greater scientific computing capability can be provided.
The major work of this thesis is as follows:
1) An in-depth study of the CUDA stream programming model for NVIDIA GPUs.
2) An analysis of a series of CUDA techniques, including the CUDA mapping mechanism onto the Tesla hardware architecture, the SIMT execution model, the multi-level memory hierarchy, and the asynchronous execution mode.
3) Starting from the idea of separating control from computation, a stream processor cluster architecture that differs from the traditional x86 cluster is proposed, and its design patterns are explored under two parallel modes: MKSD (Multi-Kernel Single Data) and MKMD (Multi-Kernel Multi-Data).
4) Using a hybrid MPI+CUDA programming model and a general stream-processing program loading model, control and computation in the stream processor cluster are separated and the multi-stream processing strategy is optimized, giving the stream processor cluster a several-fold performance improvement over the traditional x86 cluster (a simplified sketch of this hybrid pattern is shown after the keywords below).
Key Words: Stream Processor; GPU Programming Model; CUDA Technology; Stream Processor Cluster
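The following is a minimal, illustrative sketch (not taken from the thesis) of the hybrid MPI+CUDA pattern referred to in point 4: an MPI control rank distributes chunks of a data stream to worker ranks, and each worker pushes its chunk through a CUDA kernel on its local GPU before returning the result. The chunk size, the scale() kernel, and the one-GPU-per-node assumption are all hypothetical.

```c
/*
 * Minimal MPI+CUDA sketch (illustrative only, not the thesis implementation).
 * Rank 0 acts as the control node: it owns the full data stream and sends one
 * chunk to each worker rank. Worker ranks act as compute nodes: each copies
 * its chunk to the local GPU, runs a kernel, and returns the result.
 * CHUNK and the scale() kernel are assumptions made for this example.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define CHUNK 1024  /* elements per worker (assumed) */

/* Example device kernel: each GPU thread scales one element of the chunk. */
__global__ void scale(float *d, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= k;
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Control node: distribute the stream, then collect the results. */
        int workers = size - 1;
        float *full = (float *)malloc((size_t)workers * CHUNK * sizeof(float));
        for (int i = 0; i < workers * CHUNK; ++i) full[i] = (float)i;

        for (int w = 1; w < size; ++w)
            MPI_Send(full + (w - 1) * CHUNK, CHUNK, MPI_FLOAT, w, 0,
                     MPI_COMM_WORLD);
        for (int w = 1; w < size; ++w)
            MPI_Recv(full + (w - 1) * CHUNK, CHUNK, MPI_FLOAT, w, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        free(full);
    } else {
        /* Compute node: receive a chunk, process it on the GPU, send it back. */
        float *chunk = (float *)malloc(CHUNK * sizeof(float));
        MPI_Recv(chunk, CHUNK, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        float *dev;
        cudaMalloc((void **)&dev, CHUNK * sizeof(float));
        cudaMemcpy(dev, chunk, CHUNK * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(CHUNK + 255) / 256, 256>>>(dev, CHUNK, 2.0f);
        cudaMemcpy(chunk, dev, CHUNK * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);

        MPI_Send(chunk, CHUNK, MPI_FLOAT, 0, 1, MPI_COMM_WORLD);
        free(chunk);
    }

    MPI_Finalize();
    return 0;
}
```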
CONTENTS
Chinese Abstract
ABSTRACT
Chapter 1 Introduction ........................................ 1
§1.1 Introduction ............................................. 1
§1.2 Multi-core parallelism ................................... 1
§1.3 Background and significance of the research .............. 2
§1.3.1 CPU multi-core parallelism ............................. 2
§1.3.2 From CPU multi-core to GPU many-core ................... 3
§1.3.3 From supercomputers, clusters, and distributed computing to stream processor clusters ... 3
§1.4 Main work and organization of the thesis ................. 4
Chapter 2 NVIDIA GPU Stream Computing Hardware Architecture ... 6
§2.1 Keys to making the GPU a high-performance programmable stream processor ... 6
§2.2 The Tesla computing architecture ......................... 7
§2.2.1 Compute units .......................................... 9
§2.2.2 The streaming multiprocessor (SM) ...................... 10
§2.2.3 Calculating theoretical performance .................... 12
§2.3 Summary .................................................. 12
Chapter 3 GPU General-Purpose Computing Techniques ............ 13
§3.1 General-purpose computing on GPUs ........................ 13
§3.2 Difficulties of traditional GPGPU development ............ 14
§3.3 The CUDA programming model ............................... 15
§3.3.1 Separation of host and device code ..................... 15
§3.3.1.1 The nvcc compiler .................................... 16
§3.3.2 Two-level parallelism of kernel functions .............. 17
§3.3.3 Thread organization and management ..................... 18
§3.3.4 CUDA hardware mapping .................................. 20
§3.3.5 The SIMT execution model ............................... 20
§3.3.5.1 Differences between SIMT and SIMD .................... 22
§3.3.5.2 Impact of branch prediction on SIMT execution efficiency ... 22
§3.4 The CUDA multi-level memory hierarchy .................... 22
§3.4.1 Register usage ......................................... 24
§3.4.2 Local memory ........................................... 24
§3.4.3 Shared memory .......................................... 25
§3.4.4 Global memory .......................................... 25
§3.4.5 Constant memory ........................................ 25
§3.4.6 Data loading and retrieval ............................. 26
§3.5 CUDA thread communication ................................ 27
§3.5.1 GPU thread synchronization functions ................... 27
§3.5.2 Inter-kernel communication ............................. 27
§3.5.3 Joint synchronization of CPU and GPU threads ........... 28
§3.6 CUDA asynchronous execution .............................. 28
§3.6.1 The asynchronous execution mechanism ................... 28
§3.6.2 Streams ................................................ 28
§3.6.3 Significance of asynchronous execution ................. 30
§3.7 Summary .................................................. 30
Chapter 4 Design and Construction of the MKSD Stream Processor Cluster Architecture ... 31
§4.1 A cluster architecture model based on stream processors .. 31
§4.2 Traditional cluster nodes vs. stream processor cluster nodes ... 32
§4.3 Hybrid MPI+CUDA programming .............................. 34
§4.3.1 The Message Passing Interface (MPI) .................... 34
§4.3.2 CUDA technology ........................................ 34
§4.3.2.1 Typical execution flow of a CUDA application ......... 35
§4.3.2.2 Task partitioning .................................... 35
§4.3.2.3 Thread configuration and stream processor allocation . 36
§4.3.3 The hybrid MPI+CUDA programming model .................. 38
§4.3.4 Initialization of MPI and CUDA ......................... 39
§4.3.5 Message data organization for MPI communication ........ 40
§4.4 Construction of the stream processor cluster ............. 41
§4.4.1 Hardware environment ................................... 41
§4.4.2 Software environment ................................... 42
§4.4.3 Cluster performance tests .............................. 42
§4.5 Summary .................................................. 44
Chapter 5 Design of the MKMD Stream Processor Cluster System and Related Experiments ... 45
§5.1 The general stream-processing program loading model ...... 45
§5.1.1 Design goals ........................................... 46
§5.2 Overall layout of the cluster system ..................... 46
§5.3 System functional requirements ........................... 47
§5.3.1 Functional requirements of the master node ............. 47
§5.3.2 Functional requirements of the slave nodes ............. 49
§5.4 System data-flow diagram ................................. 50
§5.5 Processing strategies of the cluster system .............. 53
§5.5.1 Development and runtime environment .................... 53
§5.5.2 Buffer pool design ..................................... 53
§5.5.3 Master node receive/send strategy ...................... 54
§5.5.4 Slave node data processing strategy .................... 55
§5.6 Data structure definitions ............................... 56
§5.6.1 Data structure mapping of the master node buffer pool .. 56
§5.6.2 Data type for video quality metric results ............. 57
§5.6.3 Task type data structure ............................... 57
§5.7 Design of the stream-processing task type table .......... 58
§5.8 Design of the communication configuration table .......... 58
§5.9 Design of the communication protocols .................... 60
§5.9.1 Protocol over named pipes .............................. 60
§5.9.2 Socket communication protocol .......................... 60
§5.10 Experiments and analysis of multi-stream, multi-kernel execution ... 61
§5.10.1 Experimental goals .................................... 61
§5.10.2 Kernel launch modes ................................... 61
§5.10.2.1 Multi-threaded kernel launch mode ................... 61
§5.10.2.2 Asynchronous kernel launch mode ..................... 62
§5.10.3 Data stream loading/retrieval modes ................... 62
§5.10.3.1 Synchronous mode .................................... 62
§5.10.3.2 Asynchronous mode ................................... 62
§5.10.4 Experimental environment .............................. 62
§5.10.5 Experimental strategy ................................. 63
§5.10.6 Experimental results and analysis ..................... 63
§5.10.6.1 Single stream, no data loading, homogeneous kernels . 63
§5.10.6.2 Single stream, no data loading, heterogeneous kernels ... 64
§5.10.6.3 Single stream, with data loading, homogeneous kernels ... 64
§5.10.6.4 Multiple streams, asynchronous data loading, homogeneous kernels ... 66
§5.10.6.5 Impact of stream size on performance ................ 68
§5.10.7 Improving data-loading I/O performance ................ 71
§5.10.7.1 The Linux in-memory file system ..................... 71
§5.10.7.2 I/O performance test experiments .................... 72
§5.11 Summary ................................................. 74
Chapter 6 Conclusions and Outlook ............................. 75
§6.1 Conclusions .............................................. 75
§6.2 Outlook .................................................. 75
§6.2.1 Better support for irregular streams ................... 75
§6.2.2 Multi-core scheduling based on the stream model ........ 76
§6.2.3 Inter-node communication mechanisms for stream processor clusters ... 77
§6.2.4 Development of general-purpose application domains ..... 77
References .................................................... 78
Publications, research projects, and achievements during the degree program ... 81
Acknowledgements .............................................. 82