ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
More data or more parameters? Investigating the effect of data structure on generalization
Transformed CNNs: recasting pre-trained convolutional layers with self-attention
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Scaling description of generalization with number of parameters in deep learning
Triple descent and the two kinds of overfitting: where & why do they appear?
Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias
Jamming transition as a paradigm to understand the loss landscape of deep neural networks