What is “Brain Engineering”?
"If I cannot build it, I do not understand it." That was Nobel laureate Richard Feynman -- and by his metric, we understand a bit about physics, less about chemistry, and precious little about biology.
The Brain Engineering Laboratory has as its goal a fundamental understanding of the brain: its mechanisms, operation, and behaviors. Brain circuits are circuits, albeit relatively unusual ones from an engineering standpoint. Enormous advances in recent years have provided data on brain architectures and mechanisms, advanced hypotheses of how thought arises from neural operation, and enabled initial derivation and formal analysis of candidate constituent algorithms carried out by brain circuitry.
Inevitably, as a scientific field arrives at an understanding of its object of study, that understanding can be put to proactive use: to construct synthetic models of the system; to enhance its effectiveness; sometimes to fix it when it breaks. For instance, as biological systems have become increasingly well understood, new approaches have arisen to diagnose diseases, to develop treatments for them, and to build devices that mimic and can even supplant biological operation, such as prosthetic cochleas and retinas.
The fields of medicine and pharmacology have grown from these fundamental biological findings. The future of brain science will be no less productive, and no less dramatic in its effect on our understanding and its influence on our day-to-day lives. We verge on understanding the brain as an engineering system, moving toward increasing capabilities to engineer brains.
How are “Brain Engineering” and “Neural Networks” Related?
The fields of machine learning (ML), artificial intelligence (AI), and artificial neural networks (ANNs) were once relatively distinct, but the terminology has become increasingly tangled as ML and ANN systems have been applied directly to what were initially AI topics. The current generation of systems exhibits fluent natural language use via transformer architectures, enabling users to converse and collaborate with them.
How do transformers resemble brains, if at all? Are there two (or more) distinct ways of achieving fluent language use, or are brains and transformers tapping into the same underlying mechanisms? Do language abilities also confer logic and reasoning? If not, how are these combined in humans, and how could they be achieved in transformers or related architectures? Why were all deep learning systems before transformers unable to produce competent language use, and what was it about transformers that abruptly achieved these abilities?
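For readers who want a concrete picture of the mechanism these questions refer to, the sketch below shows scaled dot-product self-attention, the core operation of transformer architectures. It is a minimal illustration only: the function name, the dimensions, and the random weights are hypothetical stand-ins for what a trained model would learn, and it is not drawn from any of the work cited here.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one token sequence.

    X          : (seq_len, d_model) input token embeddings
    Wq, Wk, Wv : (d_model, d_head) learned projection matrices
    Returns    : (seq_len, d_head) context-mixed token representations
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V        # each output is a weighted mix of all tokens

# Illustrative usage: 5 tokens, 16-dim embeddings, random (untrained) weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)               # shape: (5, 8)
```

Every token attends to every other token in a single step; this all-to-all, content-based mixing, rather than the recurrence of earlier architectures, is the feature most often credited for transformers' language abilities.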
Our lab investigates the detailed designs of real brains: their actual anatomical cell types and circuit layouts, and their intrinsic physiological activity patterns -- not simply convenient artificial neural network designs, nor artificially "spiking" cells, but the actual structures and patterns that arise naturally in real brain circuitry (e.g., Pathak et al., 2024). We ask how these can give rise to perceptual and cognitive abilities (Bowen et al., JoV 2022; Rodriguez & Granger, JoV 2020; Granger, arXiv 2020), and what novel computational algorithms and architectures appear to emerge from these circuits, beyond those of standard artificial networks (Bowen, Granger & Rodriguez, AAAI 2023; Bowen & Granger, U.S. Patent 2023; Hokenmaier et al., IEEE Microelectronics 2024).
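As a point of contrast, the sketch below shows a generic leaky integrate-and-fire unit, the kind of simplified, artificially "spiking" cell that the paragraph above distinguishes from real biological circuitry. All names and parameter values are illustrative textbook defaults, not taken from the cited papers.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
               v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Generic leaky integrate-and-fire unit (an artificially 'spiking' cell).

    input_current : (n_steps,) injected current in amps, one value per dt
    Returns       : membrane voltage trace (volts) and spike times (steps)
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Voltage leaks toward rest while being driven by the input current.
        v += (dt / tau) * (v_rest - v + r_m * i_in)
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # voltage resets after each spike
        voltages.append(v)
    return np.array(voltages), spikes

# A constant 2 nA input over 100 ms yields a regular spike train.
v_trace, spike_steps = lif_neuron(np.full(100, 2e-9))
```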
In general, then, how is it that brains achieve the uniquely human array of abilities: language, logic, reasoning? As these questions are addressed, we come closer both to i) the engineering goal of producing systems with intelligent capabilities, and ii) the scientific goal of understanding brains, including how brain challenges and disorders may be treated or cured. We build artificial systems, but we also increasingly grow our understanding of the natural system: the existence proof that human-level intelligence can exist.
How do you know that the brain areas you study are actually carrying out the computational operations that you think they are?
Of course, we don't know; we, like all scientists, generate hypotheses from the extant data, and continue to test those hypotheses as new data arise. We study particular brain circuits "bottom-up," hoping that a circuit's natural operation will suggest the computation it is carrying out. As we construct simplified models, we try to stay alert to biological features that, when added in, are either consistent with (or even enhance) the hypothesized functions, or inconsistent with them -- indicating whether we are on a fruitful track. We also attempt, as much as possible, to identify predictions arising from the models that can be tested via biological or behavioral means (e.g., Pathak et al., 2024), though it is rare that models make sufficiently specific predictions, or that such predictions are testable with current methods. Nonetheless, as new biological data become available, we continue to check each model against the known constraints: strengthening the model, modifying it, or, if necessary, discarding a refuted model for a particular brain structure and beginning again.
If your models are actually doing what brain circuits do, do they turn out to be useful in applications?
The algorithms derived from various brain areas have turned out to be so unexpectedly effective and efficient that they have found use in a range of real-world applications, from military to industrial to medical. Many brain-circuit derivations have led to algorithms that are direct alternatives to standard artificial network methods and, as might be expected from a highly evolved biological system, are substantially more power- and cost-efficient, and more robust, than their artificial counterparts (e.g., Chandrashekar & Granger, 2012; Granger, Rodriguez, Bowen & Felch, 2017; Bowen, Tofel, Parcak & Granger, 2017; Lohn-Jaramillo et al., 2021, 2022; Bowen, Rodriguez, Sowinski & Granger, 2022; Bowen, Granger & Rodriguez, 2023).