Show HN: Sparse Matrix-Vector Multiplication that works at 30–90% sparsity (github.com)

To get benefits from sparsity, you usually need very sparse matrices, a structure imposed on the sparsity pattern, or specialized hardware. None of this is the case if you want to run pruned LLMs on consumer devices. I wanted to see how far you can push it on a GPU and ended up with this.

Blog: https://www.grizzlytech.dev/blog/macko-spmv
Paper: https://arxiv.org/abs/2511.13061
Code (example with torch): https://github.com/vlejd/macko_spmv
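For a sense of why moderate sparsity is the hard regime, here is a minimal benchmark sketch. This is not the MACKO kernel, just PyTorch's stock CSR support (which dispatches to cuSPARSE under the hood); it assumes a CUDA device and a recent PyTorch. At ~50% unstructured sparsity the general-purpose sparse matvec is often slower than the plain dense one, because index overhead dominates:

    import time
    import torch

    n = 8192
    w = torch.randn(n, n, device="cuda")
    w[torch.rand_like(w) < 0.5] = 0.0   # ~50% unstructured sparsity
    x = torch.randn(n, 1, device="cuda")
    w_csr = w.to_sparse_csr()           # generic CSR, not MACKO

    for label, fn in [("dense", lambda: w @ x), ("csr", lambda: w_csr @ x)]:
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(100):
            fn()
        torch.cuda.synchronize()
        print(label, (time.perf_counter() - t0) / 100)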
Cool method. Before deep learning there was plenty of interesting research on sparse methods. What do you think we're missing for neural+sparse approaches to be more widely used?
I think the lack of efficient GPU kernels was the main problem. It is much, much easier to get a real speedup and memory reduction by quantizing from fp16 to fp8 than from 50% sparsity. For sparsity you needed structure (which makes your model worse) and special hardware support.
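To make that concrete, here is some back-of-envelope arithmetic (my numbers, not from the paper) on why naive sparse storage loses to quantization at moderate sparsity:

    # Bytes per original matrix element under each scheme.
    dense_fp16 = 2.0                # dense fp16 baseline
    dense_fp8  = 1.0                # fp8 quantization: 2x smaller, for free
    # Plain CSR at 50% sparsity: keep half the values (fp16, 2 bytes each)
    # plus an int32 column index (4 bytes) per nonzero; row pointers are
    # negligible. Result: *larger* than the dense fp16 matrix.
    csr_50 = 0.5 * (2 + 4)
    print(dense_fp16, dense_fp8, csr_50)   # 2.0 1.0 3.0

Smaller index types narrow the gap but don't remove the overhead, which is roughly the problem that formats like MACKO try to attack.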
Interesting approach -- thanks
Interesting approach! I've been thinking a lot about how often we get caught up in striving for extreme sparsity without considering the practical implications of running pruned models on consumer hardware. It reminds me of a project where we had to optimize for both performance and memory constraints and found ourselves deep in the weeds of matrix representation.
I'm curious about your performance numbers: did you hit any surprising edge cases with particular sparsity patterns on different GPUs? Folks running LLMs on consumer devices will appreciate any optimization that squeezes out more efficiency, especially with larger models. And you mentioned that the MACKO format works across all GPUs; that could really democratize these techniques. That's exciting!
Have you thought about how this might impact other areas of machine learning or even classical algorithm work? I'd love to hear more about the community's thoughts on bridging the gap between pruning and quantization too.