
Clayton Sanford
Clayton Sanford is a Research Scientist at Google with a focus on studying the fundamental capabilities and limitations of modern sequential architectures.
Research Areas
Authored Publications
Depth-Width tradeoffs in Algorithmic Reasoning of Graph Tasks with Transformers
Gilad Yehudai
Maya Bechler-Speicher
Orr Fischer
Ran Gilad-Bachrach
2025
Transformers can be used to solve complex algorithmic problems, including graph-based tasks. For such tasks, a key question is the minimal size of a transformer that can implement a given task. Recent work has begun to explore this question for graph-based tasks, showing that with sub-linear embedding dimension (i.e., model width), logarithmic depth suffices. An open question, which we address here, is what happens when the width is allowed to grow linearly. We analyze this setting and provide the surprising result that with linear width, constant depth suffices for solving a host of graph-based problems. This suggests that a moderate increase in width can allow much shallower models, which are advantageous in terms of inference time. For other problems, we show that quadratic width is required. Our results demonstrate the complex and intriguing landscape of transformer implementations of graph-based algorithms. We support our theoretical results with empirical evaluations.
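To make the depth-width tradeoff concrete, the sketch below compares the sizes of the two regimes described in the abstract for a graph with n nodes. The specific choices (width sqrt(n) for the sub-linear regime, depth 1 for "constant") are illustrative assumptions, not the paper's exact bounds.

```python
# Illustrative comparison of the two scaling regimes discussed above.
# Assumptions (not from the paper): sub-linear width taken as sqrt(n),
# constant depth taken as 1.
import math

def regime_sizes(n: int) -> dict:
    """Return (width, depth) pairs for the two hypothetical regimes."""
    sublinear = (math.isqrt(n), max(1, math.ceil(math.log2(n))))  # width ~ sqrt(n), depth ~ log n
    linear = (n, 1)                                               # width ~ n, constant depth
    return {"sublinear_width_log_depth": sublinear,
            "linear_width_const_depth": linear}

for n in (16, 256, 4096):
    print(n, regime_sizes(n))
```

For a 4096-node graph, the first regime uses width 64 and depth 12, while the second uses width 4096 and a single layer; the tradeoff is fewer sequential layers at the cost of wider ones.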
While prior work focused on the computational separation between Transformers and fully connected networks, here we focus on a purely statistical separation and ask: what function class can Transformers learn with fewer samples than fully connected networks, even with infinite compute? We define a toy task, q-sparse token retrieval, and prove that the sample complexity of transformers solving this task is smaller than that of feedforward and recurrent neural networks.
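As a rough illustration of the flavor of such a task, the sketch below generates instances of a hypothetical key-value retrieval problem: given n (key, value) token pairs and a query of q keys, the target output is the q corresponding values. The exact definition of q-sparse token retrieval in the paper may differ; all names and parameters here are assumptions for illustration.

```python
# Hypothetical toy instance generator for a q-sparse retrieval task.
# NOT the paper's exact task definition; purely illustrative.
import random

def make_instance(n: int, q: int, vocab: int = 100, seed: int = 0):
    """Generate n distinct keys with random values, plus a query of q keys."""
    rng = random.Random(seed)
    keys = rng.sample(range(vocab), n)            # n distinct keys
    values = [rng.randrange(vocab) for _ in range(n)]
    query = rng.sample(keys, q)                   # q keys to retrieve
    target = [values[keys.index(k)] for k in query]
    return keys, values, query, target

def retrieve(keys, values, query):
    """Ground-truth solver: look up each queried key's value."""
    table = dict(zip(keys, values))
    return [table[k] for k in query]

keys, values, query, target = make_instance(n=8, q=3)
assert retrieve(keys, values, query) == target
```

The statistical question in the abstract is then how many labeled instances of this kind each architecture needs before it generalizes, independent of compute.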