Accepted Papers

Spotlights

Posters

1. Randomly Initialized Subnetworks with Iterative Weight Recycling

2. Sparse and Binary Transformers for Multivariate Time Series Modeling 

3. MaskedKD: Efficient Distillation of Vision Transformers with Masked Images [arxiv, poster]

4. Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks [poster]

5. Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling [arxiv, poster]

6. Pruning CodeBERT for Improved Code-to-Text Efficiency

7. Playing the lottery with concave regularizers

8. Dynamic Sparsity Is Channel-Level Sparsity Learner

9. Convolutional Sparse Coding is improved by heterogeneous uncertainty modeling [code]

10. PopSparse: Accelerated block sparse matrix multiplication on IPU [arxiv, blog]

11. Pruning at initialization vs random pruning: Do they produce topologically different sparse neural networks? [poster]

12. Iterative magnitude pruning drives local receptive field formation by accentuating higher-order data statistics

13. SPARC: Understanding the True Cost of Sparse Accelerators

14. How to uncover the hierarchical modularity of a task through pruning and network analysis methods? [poster]

15. Towards stratified sparse training for large output spaces

16. Sequoia: Hierarchical Self-Attention Layer with Sparse Updates for Point Clouds and Long Sequences [paper, poster]

17. Simultaneous linear connectivity of neural networks modulo permutation [poster]

18. AutoSparse: Towards Automated Sparse Training of Deep Neural Networks [arxiv]

19. Revisiting Implicit Models: Sparsity Trade-offs Capability in Weight-tied Model

20. SGD with large step sizes learns sparse features [arxiv]

21. Efficient Real Time Recurrent Learning through combined activity and parameter sparsity [arxiv]

22. Bias in Pruned Vision Models Is Largely A Confidence Problem

23. CoS-NeRF: Co-Sparsification of Sampling and Model for Efficient Neural Radiance Fields

24. Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers

25. Accelerable Lottery Tickets with the Mixed-Precision Quantization

26. Towards Compute-Optimal Transfer Learning [paper, poster]

27. ZipLM: Hardware-Aware Structured Pruning of Language Models [arxiv]

28. Nerva: a Truly Sparse Implementation of Neural Networks

29. Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory [arxiv, code]

30. Can Less Yield More? Insights into Truly Sparse Training [poster]

31. Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability

32. Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models

33. Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning

34. Alternating Updates for Efficient Transformers

35. Efficient Backpropagation for Sparse Training with Speedup [arxiv]

36. Vision-based route following by an embodied insect-inspired sparse neural network [arxiv]

37. Event-based Backpropagation for Analog Neuromorphic Hardware [arxiv]

38. PGHash: Large-Scale Distributed Learning via Private On-Device Locality-Sensitive Hashing

39. Training Large Language Models efficiently with Sparsity and Dataflow [arxiv, poster]

40. SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models [arxiv, blog, poster]

41. JaxPruner: A modular library for sparsity research

42. Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time

43. SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training [arxiv, poster]

44. IDKM: Memory Efficient Neural Network Quantization via Implicit, Differentiable k-Means [poster]

45. Understanding the Effect of the Long Tail on Neural Network Compression

46. Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [arxiv]

47. Dynamic Sparse Network for Time Series Classification: Learning What to "See" [code, poster]

48. Massive Language Models Can Be Accurately Pruned in One-Shot [arxiv, code]

49. Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning [arxiv, code, poster]

50. Where to Pay Attention in Sparse Training for Feature Selection?

51. On the Origin of Simplicities

52. How to Prune Your Language Model: Recovering Accuracy on the "Sparsity May Cry" Benchmark

53. Finding Sparse, Trainable DNN Initialisations via Evolutionary Search

54. Importance Estimation with Random Gradient for Neural Network Pruning

55. Getting away with more network pruning: From sparsity to geometry and linear regions [arxiv]

56. LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation