Publications by Tobias Gysi

2016

dCUDA: Hardware Supported Overlap of Computation and Communication
Tobias Gysi, Jeremia Bär, Torsten Hoefler
Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Salt Lake City, UT, USA, November 2016
Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread ready for execution. This latency hiding technique is key to make best use of the available hardware resources. With dCUDA, we apply latency hiding at cluster scale to automatically overlap computation and communication. Our benchmarks demonstrate perfect overlap for memory bandwidth-bound tasks and good overlap for compute-bound tasks.
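
The latency-hiding technique the abstract describes is the standard CUDA over-subscription pattern. As a minimal illustration (plain CUDA, not the dCUDA API; the kernel name, launch configuration, and problem size are arbitrary), the kernel below launches far more threads than any GPU has execution units, so whenever one thread stalls on a memory access the hardware scheduler switches to another ready thread:

#include <cstdio>

// Illustrative grid-stride kernel: the problem is over-decomposed into
// many more threads than the GPU has execution units, so memory stalls
// in one thread are hidden by running others that are ready.
__global__ void axpy(int n, float a, const float *x, float *y) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x) {
        y[i] = a * x[i] + y[i];  // stalled loads overlap with other threads' work
    }
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Over-subscribe the device: far more blocks than streaming multiprocessors.
    axpy<<<4096, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

dCUDA extends this principle from a single device to a cluster: remote memory accesses issued from device code stall like ordinary loads and are overlapped the same way, which is how the paper achieves automatic computation/communication overlap.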
@inproceedings{gysi2016dcuda,
	abstract = {Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread ready for execution. This latency hiding technique is key to make best use of the available hardware resources. With dCUDA, we apply latency hiding at cluster scale to automatically overlap computation and communication. Our benchmarks demonstrate perfect overlap for memory bandwidth-bound tasks and good overlap for compute-bound tasks.},
	author = {Tobias Gysi and Jeremia B{\"a}r and Torsten Hoefler},
	booktitle = {Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis},
	title = {dCUDA: Hardware Supported Overlap of Computation and Communication},
	address = {Salt Lake City, UT, USA},
	year = {2016}
}

2015

MODESTO: Data-centric Analytic Optimization of Complex Stencil Programs on Heterogeneous Architectures
Tobias Gysi, Tobias Grosser, Torsten Hoefler
Proceedings of the 29th ACM on International Conference on Supercomputing, ICS'15, Newport Beach/Irvine, CA, USA, June 2015
@inproceedings{gysi2015modesto,
	author = {Tobias Gysi and Tobias Grosser and Torsten Hoefler},
	booktitle = {Proceedings of the 29th ACM on International Conference on Supercomputing, ICS{\textquoteright}15},
	address = {Newport Beach/Irvine, CA, USA},
	title = {MODESTO: Data-centric Analytic Optimization of Complex Stencil Programs on Heterogeneous Architectures},
	url = {http://doi.acm.org/10.1145/2751205.2751223},
	year = {2015}
}