Theodore Ts'o
Theodore Ts'o is the first North American Linux kernel developer, having started working with Linux in September 1991. He also served
as the tech lead for the MIT Kerberos V5 development team, chair for
the IP Security working group in the IETF, and was the architect at
IBM in charge of bringing real-time Linux in support of
real-time Java to the US Navy. He previously served as CTO for the
Linux Foundation, and is currently employed at Google. Theodore is a
Debian Developer, and is the maintainer of the ext4 file system in the
Linux kernel. He is the maintainer and original author of the
e2fsprogs userspace utilities for the ext2, ext3, and ext4 file
systems.
Authored Publications
Evolving Ext4 for Shingled Disks
Abutalib Aghayev
Garth Gibson
Peter Desnoyers
15th USENIX Conference on File and Storage Technologies (FAST 17) (2017), pp. 105-120
Drive-Managed SMR (Shingled Magnetic Recording) disks offer a plug-compatible, higher-capacity replacement for conventional disks. For non-sequential workloads, these disks show bimodal behavior: after a short period of high throughput they enter a continuous period of low throughput.
We introduce ext4-lazy, a small change to the Linux ext4 file system that significantly improves the throughput in both modes. We present benchmarks on four different drive-managed SMR disks from two vendors, showing that ext4-lazy achieves 1.7-5.4x improvement over ext4 on a metadata-light file server benchmark. On metadata-heavy benchmarks it achieves 2-13x improvement over ext4 on drive-managed SMR disks as well as on conventional disks.
Disks form the central element of Cloud-based storage, whose demand far outpaces the considerable rate of innovation in disks. Exponential growth in demand, already in progress for 15+ years, implies that most future disks will be in data centers and thus part of a large collection of disks. We describe the “collection view” of disks and how it and the focus on tail latency, driven by live services, place new and different requirements on disks. Beyond defining key metrics for data-center disks, we explore a range of new physical design options and changes to firmware that could improve these metrics.
We hope this is the beginning of a new era of “data center” disks and a new broad and open discussion about how to evolve disks for data centers. The ideas presented here provide some guidance and some options, but we believe the best solutions will come from the combined efforts of industry, academia and other large customers.