On the Local Geometry of Deep Generative Manifolds

Ibtihel Amara
Golnoosh Farnadi
Mohammad Havaei
ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling

Abstract

Is it possible to evaluate a pre-trained generative model, especially a large text-to-image model, without access to its original training data or to human evaluators? We address this challenge by introducing a self-assessment framework based on the theory of continuous piecewise-affine (CPA) spline generators. We investigate three theoretically motivated geometric descriptors of neural networks, local scaling ($\psi$), local rank ($\nu$), and local complexity ($\delta$), which characterize the uncertainty, dimensionality, and smoothness of the learned manifold using only the network's weights and architecture. We demonstrate how these descriptors relate to generation quality, aesthetics, diversity, and bias, providing insight into how these aspects manifest for different sub-populations of the generated distribution. Moreover, we observe that the geometry of the learned manifold reflects the training distribution, enabling out-of-distribution detection, model comparison, and reward modeling to control the output distribution. We believe our framework will help elucidate the relationship between the learned manifold geometry, the training data, and the downstream behavior of pre-trained generative models.
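To make the three descriptors concrete, the sketch below shows one plausible way they could be estimated for a small ReLU generator under the CPA view, where the network is affine within each activation region and the descriptors can be read off the local Jacobian and the activation patterns. This is an illustrative assumption-based sketch, not the authors' implementation: the toy network `G`, the dimensions `d` and `n`, and the thresholds `tol`, `radius`, and `n_samples` are all hypothetical choices, and the exact definitions of $\psi$, $\nu$, and $\delta$ in the paper may differ.

```python
# Minimal sketch (not the paper's code) of Jacobian-based estimates of
# local scaling, local rank, and local complexity for a ReLU generator.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, n = 8, 64  # latent and ambient dimensions (illustrative only)
G = nn.Sequential(nn.Linear(d, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, n))

def jacobian(G, z):
    # Jacobian of G at a single latent point z; shape (n, d).
    return torch.autograd.functional.jacobian(G, z)

def local_scaling(G, z, eps=1e-6):
    # psi(z): log volume change of the local affine map, taken here as the
    # sum of log nonzero singular values of the Jacobian (equivalently
    # 0.5 * log det(J^T J) restricted to the nonzero spectrum).
    s = torch.linalg.svdvals(jacobian(G, z))
    return torch.log(s[s > eps]).sum().item()

def local_rank(G, z, tol=1e-4):
    # nu(z): numerical rank of the Jacobian, i.e. an estimate of the
    # dimension of the manifold's tangent space at G(z).
    s = torch.linalg.svdvals(jacobian(G, z))
    return int((s > tol * s.max()).sum())

def local_complexity(G, z, radius=0.1, n_samples=256):
    # delta(z): proxy for local non-smoothness -- the number of distinct
    # ReLU activation patterns (CPA linear regions) hit by random samples
    # in a small ball around z.
    zs = z + radius * torch.randn(n_samples, z.shape[-1])
    patterns = set()
    for zi in zs:
        h, code = zi, []
        for layer in G:
            h = layer(h)
            if isinstance(layer, nn.ReLU):
                code.append((h > 0).to(torch.uint8))
        patterns.add(tuple(torch.cat(code).tolist()))
    return len(patterns)

z = torch.randn(d)
print(f"psi(z)   = {local_scaling(G, z):.3f}")
print(f"nu(z)    = {local_rank(G, z)}")
print(f"delta(z) = {local_complexity(G, z)}")
```

Note that all three quantities depend only on the weights and architecture, consistent with the abstract's claim that no training data or human evaluation is needed.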