Sedpack - Scalable and Efficient Dataset Library


Abstract

ML models have shown significant promise in their ability to identify side channels leaking from a secure chip. However, the datasets used to train these models present unique challenges. Existing file formats often lack the ability to record metadata, which impedes the reusability and reproducibility of published datasets. Moreover, training pipelines for deep neural networks often require specific data-iteration patterns that these file formats do not provide.

In this presentation, we share lessons learned from our research on side-channel attacks, including insights gained from our mistakes in data structuring and iteration strategies. We also present Sedpack, our open-source dataset library, which encapsulates these lessons to help avoid such oversights. Sedpack is optimized for speed, as we will demonstrate with preliminary benchmarks. It scales to datasets larger than local storage, which matters increasingly as post-quantum cryptography (PQC) pushes dataset sizes up. Nor is it limited to ML pipelines: it can just as easily drive classical attacks. Join us to try Sedpack; we hope it saves you time in your side-channel research. To get you started, we also publish in this format several datasets used in our publication "Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning" (CHES 2024).
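
To give a feel for the intended workflow, below is a minimal sketch of streaming one split of a dataset from Python, the pattern used by both ML training loops and classical attacks. The names in the sketch (`sedpack.io.Dataset`, `as_numpy_iterator`, the `trace` attribute, and the example path) are illustrative assumptions, not Sedpack's documented API; please consult the Sedpack documentation for the exact calls.

```python
# A minimal sketch of streaming examples from a Sedpack dataset split.
# The module path, constructor, attribute name ("trace"), and the
# as_numpy_iterator call are assumptions for illustration only.
import sedpack.io

# Assumed: open a dataset that was previously written to storage.
dataset = sedpack.io.Dataset("path/to/dataset")

# Assumed: iterate the "test" split lazily, so the full dataset never
# needs to fit into memory (or even local storage) at once.
for example in dataset.as_numpy_iterator(split="test"):
    trace = example["trace"]  # one side-channel measurement
    # ... feed `trace` into an ML model or a classical attack
```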