Dynamic Memory Management on Graphics Processing Units (GPUs)

Advantages:

  • Decentralizes dynamic memory allocation on GPUs, reducing memory bottlenecks in massively parallel computation.
  • Lowers allocation latency, speeding up data-intensive tasks and improving responsiveness.
  • Outperforms existing allocators by up to a hundredfold, delivering substantial application-level gains.

Summary: 

This technology addresses a key bottleneck in dynamic memory allocation on massively parallel systems such as GPUs. Instead of relying on centralized data structures that serialize parallel threads, it lets individual threads search for available memory on their own. Extensive testing and mathematical proofs confirm its effectiveness, consistently delivering faster performance than existing methods. More advanced designs handle complex allocation scenarios and improve performance further, outperforming current solutions by a significant margin. The approach has practical applications in real-world workloads, such as GPU-based hash join and group-by algorithms, where it significantly boosts performance. It holds great promise for enhancing data-intensive computing environments and making full use of parallel hardware across industries.

Figure: Demonstration of the random-walk page request algorithm. Paths belonging to the same thread share a color; blue marks free pages, red marks occupied pages.

Desired Partnerships:

  • License
  • Sponsored Research
  • Co-Development

Patent Information: