Accepted Papers

  1. Spectral Tensor Train Parameterization of Deep Learning Layers [Poster]

  2. On the Transferability of Winning Tickets in Non-Natural Image Datasets [Poster]

  3. Doping: A technique for extreme compression of LSTM models using sparse structured additive matrices [Poster] [Website]

  4. Chasing Sparsity in Vision Transformers: An End-to-End Exploration [Poster] [Code]

  5. Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective [Poster] [Code]

  6. Are wider nets better given the same number of parameters? [Poster]

  7. SparseDNN: Fast Sparse Deep Learning Inference on CPUs [Poster]

  8. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration [Poster] [Code]

  9. Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning [Poster]

  10. Intragroup sparsity for efficient inference [Poster]

  11. Meta-learning sparse implicit neural representations [Poster]

  12. The Elastic Lottery Ticket Hypothesis [Poster] [Code]

  13. Search Spaces for Neural Model Training [Poster]

  14. Rate-Distortion Theoretic Model Compression: Successive Refinement for Pruning [Poster]

  15. Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer [Poster]

  16. The self-sparsification behavior of gradient descent for training two-layer neural networks [Poster]

  17. Extreme sparsity gives rise to functional specialization [Poster]

  18. Powerpropagation: A sparsity inducing weight reparameterisation [Poster]

  19. Multiplying Matrices Without Multiplying

  20. Sparse PointPillars: Exploiting Sparsity in Birds-Eye-View Object Detection [Poster] [Code]

  21. Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win [Poster]

  22. Sparse Spiking Gradient Descent [Poster]

  23. Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones? [Poster]

  24. Adapting by Pruning: A Case Study on BERT [Poster]

  25. Dynamic Sparse Pre-Training of BERT [Poster]

  26. Efficient Proximal Mapping of the 1-path-norm of Shallow Networks [Poster]

  27. Uncertainty Quantification for Sparse Deep Learning [Poster]

  28. Non-Convex Tℓ1 Regularization for Learning Sparse Neural Networks [Poster]

  29. Lottery Ticket Hypothesis in Random Features Models [Poster]

  30. Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation [Poster]

  31. On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning [Poster]

  32. SpaceNet: Make Free Space For Continual Learning [Poster] [Code]

  33. Dynamic Sparse Training for Deep Reinforcement Learning [Poster] [Code]

  34. AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks [Poster]

  35. Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders [Poster] [Code]

  36. One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget [Blog] [Code]

  37. Channel Permutations for N:M Sparsity [Poster]

  38. Understanding the effect of sparsity on neural networks' robustness [Poster]

  39. The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks [Poster]

  40. "How can we be so slow?" Realizing the performance benefits of Sparse networks [Poster]

  41. Structured Sparsity in Deep Neural Networks using Attention based Variance Regularization [Poster]

  42. Breaking BERT: Evaluating Sparsified Attention [Poster]

  43. A Generalized Lottery Ticket Hypothesis [Poster]

  44. Robustness of sparse MLPs for supervised feature selection [Poster]

  45. Pruning Convolutional Filters using Batch Bridgeout [Poster]

  46. Sparse embeddings for reduced communication costs in federated learning of language models [Poster]

  47. Finding Everything within Random Binary Networks [Poster]

  48. GreedyPrune: layer-wise optimization algorithms for magnitude-based pruning [Poster]

  49. Scaling Up Exact Neural Network Compression by ReLU Stability [Poster]

  50. FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity [Poster] [Code]

  51. Keep the Gradients Flowing: Using Gradient Flow to study Sparse Network Optimization [Poster]

  52. MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training [Poster] [Video]

  53. Scatterbrain: Unifying Sparse and Low-rank Attention Approximation [Poster]

  54. A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness [Poster]

  55. Why is Pruning at Initialization Immune to Reinitializing and Shuffling? [Poster]

  56. Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks [Code]

  57. Disentangled Sparsity Networks for Explainable AI [Poster]

  58. A Unified Analysis of Network Pruning through the Lens of Gradient Flow and Symmetry [Poster] [Paper 1] [Paper 2]

  59. On independent pruning of attention heads [Poster]

  60. Model-Invariant State Abstractions for Model-Based Reinforcement Learning [Poster]

  61. Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity [Poster]

  62. Going Beyond Classification Accuracy Metrics in Model Compression [Poster]