ACIVS 2012

Advanced Concepts for Intelligent Vision Systems

GPU Optimization of Convolution for Large 3-D Real Images

Pavel Karas, David Svoboda, Pavel Zemcik

In this paper, we propose a method for computing the convolution of large 3-D images in the case of real-valued signals. The convolution is performed in the frequency domain using the convolution theorem. Owing to the properties of real signals, the algorithm can be optimized so that both time and memory consumption are halved compared to complex signals of the same size. The convolution is decomposed in the frequency domain using the decimation-in-frequency (DIF) algorithm. The algorithm is accelerated on graphics hardware by means of the CUDA parallel computing model, achieving up to a 10x speedup with a single GPU over an optimized implementation on a quad-core CPU.
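For orientation, the following is a minimal CUDA/cuFFT sketch of frequency-domain convolution of real 3-D images using real-to-complex (R2C) transforms, which store only the non-redundant half of the spectrum. It is not the authors' optimized implementation (it omits the DIF decomposition and any handling of images too large for GPU memory); the sizes, function names, and the assumption that both inputs are already zero-padded to a common size and resident on the GPU are illustrative only.

/* Sketch: FFT-based convolution of two real 3-D images with cuFFT (R2C path).
 * Illustrative only; assumes inputs are zero-padded to NX*NY*NZ and already
 * in GPU memory. Not the paper's DIF-decomposed, tiled implementation. */
#include <cufft.h>
#include <cuda_runtime.h>

#define NX 256
#define NY 256
#define NZ 256
#define NZ_FREQ (NZ / 2 + 1)   /* R2C keeps only the non-redundant half */

/* Pointwise complex multiplication in the frequency domain, including the
 * 1/(NX*NY*NZ) scaling required because cuFFT's inverse is unnormalized. */
__global__ void pointwise_multiply(cufftComplex *a, const cufftComplex *b,
                                   size_t n, float scale)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    cufftComplex x = a[i], y = b[i];
    a[i].x = (x.x * y.x - x.y * y.y) * scale;
    a[i].y = (x.x * y.y + x.y * y.x) * scale;
}

/* d_img, d_ker: real inputs of size NX*NY*NZ; d_img_f, d_ker_f: scratch
 * buffers of NX*NY*NZ_FREQ complex elements. Result overwrites d_img. */
void convolve_real_3d(cufftReal *d_img, cufftReal *d_ker,
                      cufftComplex *d_img_f, cufftComplex *d_ker_f)
{
    cufftHandle fwd, inv;
    cufftPlan3d(&fwd, NX, NY, NZ, CUFFT_R2C);
    cufftPlan3d(&inv, NX, NY, NZ, CUFFT_C2R);

    cufftExecR2C(fwd, d_img, d_img_f);
    cufftExecR2C(fwd, d_ker, d_ker_f);

    size_t n = (size_t)NX * NY * NZ_FREQ;
    float scale = 1.0f / ((float)NX * NY * NZ);
    pointwise_multiply<<<(unsigned)((n + 255) / 256), 256>>>(d_img_f, d_ker_f, n, scale);

    cufftExecC2R(inv, d_img_f, d_img);

    cufftDestroy(fwd);
    cufftDestroy(inv);
}

Compared with a complex-to-complex transform, the R2C/C2R pair needs only NX*NY*(NZ/2+1) complex values per spectrum, which is the source of the roughly halved memory and time mentioned above.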