The tradeoff between word error rate (WER) and latency is critical for online automatic speech recognition (ASR) applications. We want the system to endpoint and close the microphone as quickly as possible, without degrading WER. In conventional ASR systems, the endpointer is a separate model from the acoustic, pronunciation, and language models (AM, PM, LM); this separation often causes endpointing problems, manifesting as either a higher WER or a larger latency. Following the all-neural spirit of end-to-end (E2E) models, which fold the AM, PM, and LM into a single neural network, in this work we fold the endpointer into the E2E model as well. On a large-vocabulary Voice Search task, we show that jointly optimizing the endpointer with the E2E model results in no quality degradation and reduces latency by more than a factor of 2 compared to using a separate endpointer with the E2E model.