We present a Message Passing based Learning Protocol (MPLP) for artificial neural networks. Under this protocol, every synapse (weight or bias) and every activation is treated as an independent agent, responsible for ingesting incoming messages, updating its own state, and emitting n-dimensional messages to its neighbours. We show how this protocol can be used in place of traditional gradient-based learning for feed-forward neural networks, and present a framework that generalizes neural networks to explore more flexible architectures. We meta-learn the MPLP through end-to-end gradient-based meta-optimisation. Finally, we discuss where the strengths of MPLP lie, and where we foresee possible limitations.
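The agent abstraction described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`Agent`, `ingest`, `update`, `emit`) and the toy averaging update rule are assumptions; in MPLP the update would itself be a learned, meta-optimised function.

```python
class Agent:
    """One agent per synapse or activation, holding its own state vector."""

    def __init__(self, msg_dim):
        self.state = [0.0] * msg_dim  # per-agent internal state
        self.inbox = []               # messages received this step

    def ingest(self, message):
        # Collect an incoming n-dimensional message from a neighbour.
        self.inbox.append(message)

    def update(self):
        # Toy update rule (assumption): average incoming messages
        # into the state. The real protocol would learn this function.
        if not self.inbox:
            return
        n = len(self.inbox)
        self.state = [
            sum(msg[i] for msg in self.inbox) / n
            for i in range(len(self.state))
        ]
        self.inbox.clear()

    def emit(self):
        # Outgoing n-dimensional message for neighbours; here,
        # simply a copy of the current state.
        return list(self.state)
```

A learning step would then consist of each agent ingesting messages from its neighbours, updating its state, and emitting new messages, repeated until convergence.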