Abstract
In this letter, we present a graphics processing unit (GPU)-based LDPC convolutional code (LDPC-CC) pipeline decoder with optimized parallelism. The proposed decoder exploits different granularities of decoding parallelism in both the compute unified device architecture (CUDA) kernel execution stage and the data transfer stage. Moreover, parameter selection criteria for the decoder implementation are devised to avoid an exhaustive search over all parameter combinations. Experiments are carried out on Nvidia GTX 460 and GTX 580 platforms. The results demonstrate that the proposed decoder achieves about a threefold speedup over existing GPU-based work.
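The pipeline decoding the abstract refers to follows the classic LDPC-CC schedule, in which several processors operate concurrently on shifted windows of the incoming code stream, each applying one message-passing iteration; on a GPU, each processor's node updates can additionally be parallelized across CUDA threads. The sketch below illustrates only the scheduling idea; the function name, window size, and processor count are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of an LDPC-CC pipeline decoding schedule: num_procs
# processors work concurrently, each one window behind the previous, so a
# window receives one more message-passing iteration at each pipeline stage.
# All names and parameters here are hypothetical.

def pipeline_schedule(stream_len, num_procs, window):
    """Return, per time step, the (processor, window_start) pairs active then."""
    schedule = []
    for t in range(stream_len + num_procs * window):
        active = []
        for p in range(num_procs):
            start = t - p * window  # processor p lags p windows behind the front
            if 0 <= start <= stream_len - window:
                active.append((p, start))
        schedule.append(active)
    return schedule

sched = pipeline_schedule(stream_len=12, num_procs=3, window=4)
print(sched[8])  # at t=8 all three processors are busy: [(0, 8), (1, 4), (2, 0)]
```

Once the pipeline is full, all processors run every time step, which is what makes the schedule attractive for a throughput-oriented GPU mapping.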
| Original language | English |
|---|---|
| Article number | 7289356 |
| Pages (from-to) | 2066-2069 |
| Number of pages | 4 |
| Journal | IEEE Communications Letters |
| Volume | 19 |
| Issue number | 12 |
| DOIs | |
| State | Published - 1 Dec 2015 |
Keywords
- GPU
- LDPC convolutional code
- Parallelism
- Pipeline decoder
Fingerprint
Dive into the research topics of 'High throughput pipeline decoder for LDPC convolutional codes on GPU'.