Video understanding is a challenging problem with great impact on real-world applications. Yet, solutions so far have been computationally intensive, with the fastest algorithms running at a few hundred milliseconds per video snippet on powerful GPUs. We use architecture search to build highly efficient models for videos, called Tiny Video Networks, which run at unprecedented speeds while remaining effective at video recognition tasks. Tiny Video Networks run faster than real time, e.g., at less than 20 milliseconds per video on a GPU, and are much faster than contemporary video models. These models not only provide new tools for real-time applications such as mobile vision and robotics, but also enable fast research and development in video understanding. The project site is available at https://sites.google.com/view/tinyvideonetworks.