
Hard-Constrained Neural Networks
October 17, 2025 @ 11:00 am - 12:00 pm
Navid Azizan (MIT)
E18-304
Abstract: Incorporating prior knowledge and domain-specific input-output requirements, such as safety or stability, as hard constraints into neural networks is a key enabler for their deployment in high-stakes applications. However, existing methods often rely on soft penalties, which do not guarantee constraint satisfaction, especially on out-of-distribution samples. In this talk, I will introduce hard-constrained neural networks (HardNet), a general framework for enforcing hard, input-dependent constraints by appending a differentiable enforcement layer to any neural network. This approach enables end-to-end training and, crucially, is proven to preserve the network’s universal approximation capabilities, so model expressivity is not sacrificed. We demonstrate the versatility and effectiveness of HardNet across various applications: learning with piecewise constraints, learning optimization solvers with guaranteed feasibility, and optimizing control policies in safety-critical systems. The framework applies even to problems where the constraints themselves are not fully known and must be learned from data in a parametric form; for this setting, I will present two key applications: data-driven control with inherent Lyapunov stability and learning chaotic dynamical systems with guaranteed boundedness. Together, these results demonstrate a unified methodology for embedding formal constraints into deep learning, paving the way for more reliable AI.
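To make the idea of an appended enforcement layer concrete, here is a minimal, hypothetical sketch, not the speaker's exact HardNet construction: a base network is followed by a differentiable layer that rescales its raw output into input-dependent bounds, so lb(x) ≤ f(x) ≤ ub(x) holds by construction for every input while the whole model still trains end to end. The names BoundEnforcementLayer, lb_fn, and ub_fn are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BoundEnforcementLayer(nn.Module):
    """Illustrative differentiable enforcement layer (assumed construction).

    Given hypothetical functions lb_fn(x) and ub_fn(x) that return per-input
    lower/upper bounds, a sigmoid rescaling guarantees lb(x) <= output <= ub(x)
    for every input, in- or out-of-distribution, while staying differentiable.
    """
    def __init__(self, lb_fn, ub_fn):
        super().__init__()
        self.lb_fn, self.ub_fn = lb_fn, ub_fn

    def forward(self, x, raw_out):
        lb, ub = self.lb_fn(x), self.ub_fn(x)
        return lb + (ub - lb) * torch.sigmoid(raw_out)

class HardConstrainedNet(nn.Module):
    """Any base network followed by the enforcement layer; trained end to end."""
    def __init__(self, base, enforce):
        super().__init__()
        self.base, self.enforce = base, enforce

    def forward(self, x):
        return self.enforce(x, self.base(x))

# Usage: outputs are guaranteed to satisfy 0 <= f(x) <= 1 + ||x|| by construction.
base = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
model = HardConstrainedNet(
    base,
    BoundEnforcementLayer(
        lb_fn=lambda x: torch.zeros(x.shape[0], 1),
        ub_fn=lambda x: 1.0 + x.norm(dim=-1, keepdim=True),
    ),
)
y = model(torch.randn(8, 3))  # constraints hold for any input, trained or not
```

Because the enforcement layer is differentiable, gradients flow through it to the base network during training, which is what allows end-to-end optimization under the hard constraint.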
Bio:
Navid Azizan is the Alfred H. (1929) and Jean M. Hayes Assistant Professor at MIT, where he holds dual appointments in the Department of Mechanical Engineering (Control, Instrumentation & Robotics) and the Schwarzman College of Computing’s Institute for Data, Systems, and Society (IDSS), and is a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). Previously, he held the Esther and Harold E. Edgerton (1927) Chair. His research interests lie broadly in machine learning, systems and control, mathematical optimization, and network science. His research lab focuses on reliable intelligent systems, with an emphasis on principled learning and optimization algorithms and applications to autonomy and sociotechnical systems. He obtained his PhD in Computing and Mathematical Sciences (CMS) from the California Institute of Technology (Caltech) in 2020, his MSc in electrical engineering from the University of Southern California in 2015, and his BSc in electrical engineering with a minor in physics from Sharif University of Technology in 2013. Prior to joining MIT, he completed a postdoc at Stanford University in 2021, and he was a research scientist intern at Google DeepMind in 2019. His work has been recognized by several awards, including Research Awards from Google, Amazon, MathWorks, and IBM, and Best Paper awards and nominations at conferences including ACM GreenMetrics and Learning for Dynamics & Control (L4DC). He was named to CDO Magazine’s list of Outstanding Academic Leaders in Data in two consecutive years, 2023 and 2024, received the 2020 Information Theory and Applications (ITA) “Sun” (Gold) Graduation Award, and was named an Amazon Fellow in AI in 2017 and a PIMCO Fellow in Data Science in 2018. His mentorship has been recognized with the Frank E. Perkins Award for Excellence in Graduate Advising (an MIT Institute Award) in 2025 and the UROP Outstanding Mentor Award in 2023. Early in his academic career, he was the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran. During the pandemic, he founded and co-organized the popular “Control Meets Learning” Virtual Seminar Series.