Learning Transferable Features for Implicit Neural Representations
Abstract
Implicit neural representations (INRs) have demonstrated success in a variety of
applications, including inverse problems and neural rendering. An INR is typically
trained to capture one signal of interest, resulting in learned neural features that
are highly attuned to that signal. Although such features are often assumed to be
less generalizable, we explore their transferability for fitting similar signals.
We introduce a new INR training framework, STRAINER, which learns transferable
features for fitting INRs to new signals from a given distribution, faster and with
better reconstruction quality. Owing to the sequential layer-wise affine operations
in an INR, we propose to learn transferable representations by sharing initial
encoder layers across multiple INRs with independent decoder layers. At test
time, the learned encoder representations are transferred as initialization for an
otherwise randomly initialized INR. We find that STRAINER yields an extremely powerful
initialization for fitting images from the same domain, allowing a gain of ≈ +10 dB
in signal quality early in training compared to an untrained INR. STRAINER
also provides a simple way to encode data-driven priors in INRs. We evaluate
STRAINER on multiple in-domain and out-of-domain signal-fitting tasks and inverse
problems, and further provide a detailed analysis and discussion of the transferability
of STRAINER’s features. Our demo can be accessed here.
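To make the shared-encoder / per-signal-decoder setup concrete, the following is a minimal PyTorch sketch of the training and transfer procedure described above. The layer widths, sine activation, optimizer, number of training signals, and the dummy coordinate/RGB tensors are all illustrative assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn as nn

class Sine(nn.Module):
    """SIREN-style sine activation; the exact activation and frequency
    used by STRAINER are assumptions here."""
    def forward(self, x):
        return torch.sin(30.0 * x)

def mlp(sizes):
    """Plain coordinate MLP; no activation after the final layer."""
    layers, pairs = [], list(zip(sizes[:-1], sizes[1:]))
    for i, (d_in, d_out) in enumerate(pairs):
        layers.append(nn.Linear(d_in, d_out))
        if i < len(pairs) - 1:
            layers.append(Sine())
    return nn.Sequential(*layers)

# Dummy per-image data: P pixel coordinates in [-1, 1]^2 and RGB targets.
n_train, P = 10, 1024
coords_list = [torch.rand(P, 2) * 2 - 1 for _ in range(n_train)]
rgb_list = [torch.rand(P, 3) for _ in range(n_train)]

# One shared encoder (the initial INR layers) and one decoder per signal.
encoder = mlp([2, 256, 256, 256, 256])
decoders = [mlp([256, 3]) for _ in range(n_train)]

params = list(encoder.parameters()) + [p for d in decoders for p in d.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)

for step in range(1000):  # illustrative schedule
    loss = 0.0
    for xy, rgb, dec in zip(coords_list, rgb_list, decoders):
        pred = dec(encoder(xy))   # every INR routes through the shared encoder
        loss = loss + ((pred - rgb) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Test time: the learned encoder initializes a fresh INR; only the
# decoder starts from random weights. Fit with a standard MSE loop.
new_inr = nn.Sequential(copy.deepcopy(encoder), mlp([256, 3]))
```

Because only the decoder is randomly initialized at test time, the fresh INR starts from features already tuned to the signal distribution, which is what produces the early-iteration quality gain reported above.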