Felipe M. G. França
Electronics Engineer from Universidade Federal do Rio de Janeiro (1982), M.Sc. in Computer Science from Universidade Federal do Rio de Janeiro (1987), and Ph.D. in Neural Systems Engineering from Imperial College of Science, Technology and Medicine (1994). He is a Visiting Faculty Researcher at Google and a retired Full Professor at the Systems Engineering and Computer Science Program, COPPE, Universidade Federal do Rio de Janeiro. From June 2022 to June 2024 he was a Researcher at Instituto de Telecomunicações, Universidade do Porto, Portugal. He has published over 300 scientific papers and holds 2 granted patents. He has successfully supervised 33 PhD and 67 MSc students, all of whom now hold industry, government, and academic positions in Brazil, China, Canada, Germany, Japan, Portugal, Sweden, the USA, and the UK. He has experience in Computer Science and Electronics Engineering, working on the following subjects: artificial intelligence, artificial neural networks, weightless neural networks, computer architecture, dataflow computing, distributed algorithms, collective robotics, and intelligent transportation systems.

As a System Analyst/Research Assistant at PESC, COPPE, UFRJ, from 1984 to 1996, he participated in the NCP I (Design and Implementation of a High-performance Parallel Computer) FINEP project, where he introduced the idea of a "communication virtual processor" in 1988. This concept was later rediscovered under the name "active messages". As Assistant Professor, he proposed a new approach to asynchronous digital design in which any existing circuit designed under synchronous timing assumptions becomes a candidate for conversion to asynchronous operation, at very low design cost and generally with the reward of performance gains. This pioneering work marks the beginning of GALS (Globally Asynchronous Locally Synchronous) circuits and resulted in the first patent ever granted to a Brazilian university in the Computer Science area.
Authored Publications
Mainstream artificial neural network models, such as Deep Neural Networks (DNNs), are computation-heavy and energy-hungry. Weightless Neural Networks (WNNs) are natively built with RAM-based neurons and represent an entirely distinct type of neural network computing compared to DNNs. WNNs are extremely low-latency, low-energy, and suitable for efficient, accurate edge inference. The WNN approach draws implicit inspiration from the decoding process observed in the dendritic trees of biological neurons, making neurons based on Random Access Memories (RAMs) and/or Lookup Tables (LUTs) ready-to-deploy neuromorphic digital circuits. Since FPGAs are abundant in LUTs, LUT-based WNNs are a natural fit for implementing edge inference on FPGAs.
WNNs have been demonstrated to be an energy-efficient AI model, both in software and in hardware. For instance, the most recent DWN (Differential Weightless Neural Network) model demonstrates up to 135× reduction in energy costs in FPGA implementations compared to other multiplication-free approaches, such as binary neural networks (BNNs) and DiffLogicNet, up to 9% higher accuracy in deployments on constrained devices, and up to 42.8× reduction in circuit area for ultra-low-cost chip implementations. This tutorial will help participants understand how WNNs work and why WNNs were underdogs for so long, introduce the most recent members of the WNN family, such as BTHOWeN, LogicWiSARD, COIN, ULEEN, and DWN, and contrast them with BNNs and LogicNets.
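To make the RAM-neuron idea above concrete, the following is a minimal sketch of a classic WiSARD-style weightless classifier in Python. It is a simplification for intuition only, not an implementation of any of the models named in the abstract (BTHOWeN, ULEEN, DWN, etc.); all class names and parameters here are hypothetical.

# Minimal WiSARD-style weightless neural network sketch (illustrative only).
import random

class Discriminator:
    """One per class: a set of RAM neurons, each addressed by an n-tuple of input bits."""
    def __init__(self, input_bits, tuple_size, mapping):
        self.tuple_size = tuple_size
        self.mapping = mapping  # pseudo-random permutation of input bit positions
        # Each RAM is modeled as the set of addresses where a 1 has been written.
        self.rams = [set() for _ in range(input_bits // tuple_size)]

    def _addresses(self, x):
        bits = [x[i] for i in self.mapping]  # permute the input bits
        for r in range(len(self.rams)):
            chunk = bits[r * self.tuple_size:(r + 1) * self.tuple_size]
            yield r, int("".join(map(str, chunk)), 2)  # n-tuple -> RAM address

    def train(self, x):
        for r, addr in self._addresses(x):
            self.rams[r].add(addr)  # training = writing a 1; no gradients involved

    def score(self, x):
        return sum(addr in self.rams[r] for r, addr in self._addresses(x))

class WiSARD:
    def __init__(self, input_bits=16, tuple_size=4, seed=0):
        rng = random.Random(seed)
        self.mapping = rng.sample(range(input_bits), input_bits)
        self.args = (input_bits, tuple_size)
        self.discriminators = {}

    def train(self, x, label):
        d = self.discriminators.setdefault(label, Discriminator(*self.args, self.mapping))
        d.train(x)

    def classify(self, x):
        # Inference = a sum of RAM lookups per class; highest score wins.
        return max(self.discriminators, key=lambda lbl: self.discriminators[lbl].score(x))

# Usage: train on two toy binary patterns, then classify a one-bit-noisy variant.
net = WiSARD()
net.train([1] * 8 + [0] * 8, "A")
net.train([0] * 8 + [1] * 8, "B")
print(net.classify([1] * 7 + [0] * 9))  # expected: "A"

Note that training only writes bits into RAM and inference is a sum of RAM lookups, with no multiplications anywhere; this is what makes LUT-rich FPGAs such a natural target for WNNs.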