NeurIPS 2025 Workshop on Constrained Optimization for Machine Learning

Workshop at the Conference on Neural Information Processing Systems (NeurIPS) 2025

About the Workshop

As AI systems are increasingly deployed in safety-critical domains—including credit scoring, medical diagnosis, and autonomous systems—there is a growing demand to ensure their fairness, safety, robustness, and interpretability, alongside stronger calls for regulation. Constrained optimization offers an accountable framework for enforcing these requirements by embedding them directly into the training process, steering models to satisfy explicit constraints. This framework facilitates compliance with regulatory, industry, or ethical standards, which can be easily verified by checking constraint satisfaction.

This workshop explores constrained optimization as a principled method for enforcing desirable properties in machine learning models. It brings together experts in optimization, machine learning, and trustworthy AI to address the algorithmic and practical challenges of scaling constrained methods to modern deep learning settings, which are often large-scale, non-convex, and stochastic.
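As a concrete (and deliberately simplified) illustration of the general idea, the sketch below enforces a single inequality constraint during training via gradient descent on a Lagrangian combined with dual ascent on the multiplier. The toy problem, step sizes, and variable names are illustrative choices, not a method presented at the workshop.

```python
# Illustrative sketch: enforcing a requirement as an explicit constraint.
# Minimize f(x) = (x - 3)^2  subject to  g(x) = x - 1 <= 0.
# The constrained optimum is x* = 1 (with multiplier lam* = 4).

def f_grad(x):          # gradient of the objective
    return 2.0 * (x - 3.0)

def g(x):               # constraint value; feasible when g(x) <= 0
    return x - 1.0

x, lam = 0.0, 0.0       # primal variable and dual multiplier
eta_x, eta_lam = 0.05, 0.05
for _ in range(2000):
    # Primal descent on the Lagrangian L(x, lam) = f(x) + lam * g(x)
    x -= eta_x * (f_grad(x) + lam)
    # Dual ascent: grow lam while the constraint is violated, clip at 0
    lam = max(0.0, lam + eta_lam * g(x))

print(round(x, 2))  # converges near the constrained optimum x* = 1
```

Checking constraint satisfaction at the end of training (here, g(x) <= 0 up to tolerance) is exactly the kind of verifiable compliance check the paragraph above describes.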

We invite contributions that advance the state of the art in Constrained Learning. For details, see the Call for Contributions.


News

  • The list of accepted contributions is now available on the Papers page.
  • We are accepting extended abstracts for oral and poster presentations. See the Call for Contributions for more details.
  • If you are interested in reviewing for the workshop, please fill out the reviewer nomination form by August 21st.

Dates & Deadlines

The workshop will be held on December 7th, 2025 in San Diego, California as part of NeurIPS 2025.

  • Extended abstract submission deadline: Aug 28, 2025 (AOE) (extended from Aug 21, 2025)
  • Author notification: Sep 22, 2025
  • Notification to oral presenters: Sep 29, 2025
  • NeurIPS financial assistance application deadline: Oct 1, 2025
  • NeurIPS early registration deadline: Oct 11, 2025 (AOE)
  • Camera-ready version: Oct 31, 2025 (AOE)

Speakers

Frank E. Curtis

Lehigh University

Short Bio

Frank E. Curtis is a Professor in the Department of Industrial and Systems Engineering at Lehigh University. His research focuses on the design, analysis, and implementation of numerical methods for solving large-scale nonlinear optimization problems. He received an Early Career Award from the U.S. Department of Energy (DoE), and has received funding from the U.S. National Science Foundation (NSF). He received the 2021 SIAM/MOS Lagrange Prize in Continuous Optimization. He was awarded the 2018 INFORMS Computing Society Prize. He currently serves as Area Editor for Continuous Optimization for Mathematics of Operations Research and serves as an Associate Editor for Mathematical Programming, SIAM Journal on Optimization, and Mathematical Programming Computation.


Talk Title

Stochastic Algorithms for Nonlinearly Constrained Optimization

Abstract

I will present the latest contributions of my research group on the design and analysis of stochastic algorithms for solving nonlinearly constrained optimization problems. The signature feature of our algorithms is that they handle constraints as constraints, rather than through penalty or augmented Lagrangian functions. I will discuss how our algorithms can accelerate stochastic methods for constrained optimization and informed learning, and describe various extensions of our core algorithmic methodology. I will close with open questions that remain to be explored.

Luiz Chamon

Hi! Paris & École Polytechnique

Short Bio

Luiz F. O. Chamon is an assistant professor (tenure-track) and Hi! PARIS chair holder in the Center for Applied Mathematics (CMAP) of École Polytechnique, France. He received the Ph.D. degree in electrical and systems engineering from the University of Pennsylvania (Penn), USA. He received both the best student paper and the best paper awards at IEEE ICASSP 2020. In 2022, he received the Young Investigators award from the Division of Engineering and Applied Sciences, Caltech. In 2025, he received the S.S. Chern Young Faculty Award. He is currently an ELLIS Scholar of the European Laboratory for Learning and Intelligent Systems. His research interests include optimization, signal processing, machine learning, statistics, and control.


Talk Title

The 5 W's and H of constrained learning

Abstract

Machine learning (ML) and artificial intelligence (AI) now automate entire systems rather than individual tasks. As such, ML/AI models are no longer responsible for a single top-line metric (e.g., prediction accuracy), but must face a growing set of potentially conflicting system requirements, such as robustness, fairness, safety, and alignment with prior knowledge. These challenges are exacerbated in uncertain, data-driven settings and further complicated by the scale and heterogeneity of modern ML/AI applications, which range from static, discriminative models (e.g., neural network classifiers) to dynamic, generative models (e.g., Langevin diffusions used for sampling). This keynote defines WHAT constitutes a requirement and explains WHY incorporating them into learning is critical. It then shows HOW to do so using constrained learning and illustrates WHEN and WHERE this approach is effective by presenting use cases in ML for science, safe reinforcement learning, and sampling. Ultimately, this talk aims to convince you (WHO) that constrained learning is a key tool for building trustworthy ML/AI systems, enabling a shift from a paradigm of artificial intelligence that is supposed to implicitly emerge from data to one of engineered intelligence that explicitly does what we want.

Ferdinando Fioretto

University of Virginia

Short Bio

Ferdinando (Nando) Fioretto is an assistant professor of Computer Science at the University of Virginia. He leads the Responsible AI for Science and Engineering (RAISE) lab, whose research focuses on addressing foundational challenges to advance artificial intelligence, privacy, safety, and the intersection between machine learning and optimization for scientific applications. His work has been recognized with the 2025 DARPA disruptive ideas award, the 2022 Caspar Bowden PET award, the IJCAI-22 Early Career spotlight, and several best paper awards. Nando is also a recipient of the NSF CAREER award, the Google Research Scholar Award, the Amazon Research Award, and the ACP Early Career Researcher Award in Constraint Programming. He is a board member of the Artificial Intelligence Journal (AIJ) and an associate editor of the Journal of Artificial Intelligence Research (JAIR).


Talk Title

Constraint-Aware Generative Models

Abstract

Generative AI has recently attracted significant attention for its potential to accelerate a broad range of scientific and engineering domains. However, while these models produce statistically plausible outputs, they often fail to adhere to physical principles, conservation laws, or safety constraints. Such violations result in suggested designs that may be impractical, unstable, or even hazardous. This talk presents our current efforts to address these challenges by introducing a new class of training-free, constraint-aware diffusion models that integrate differentiable optimization techniques with generative modeling. We will review the mathematical foundations for incorporating both static and dynamic constraints into diffusion models, extend these results to the case of discrete diffusion models, and present case studies of inverse design in microstructural materials, protein-pocket design, multi-robot motion planning, and synthetic chemistry with safety and reliability constraints.
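One common training-free idea in this space, shown here only as a generic sketch and not as the speaker's method, is to re-impose constraints by projecting each intermediate iterate of the sampler back onto the constraint set. The "denoising" update, the constraint, and all names below are toy stand-ins.

```python
# Toy sketch of training-free constraint handling in a sampler: after every
# update step, project the iterate onto the constraint set. The "physics"
# constraint here is a toy conservation law: the components must sum to 1.

import random

def project_to_budget(x, budget=1.0):
    """Euclidean projection onto the affine set {x : sum(x) = budget}."""
    shift = (sum(x) - budget) / len(x)
    return [xi - shift for xi in x]

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(4)]  # start from pure noise
for step in range(100):
    # Stand-in for a denoising / Langevin update (drift toward the origin
    # plus small noise); a real diffusion model would use a learned score.
    x = [0.9 * xi + random.gauss(0.0, 0.05) for xi in x]
    x = project_to_budget(x)  # re-impose the constraint after each step

print(abs(sum(x) - 1.0) < 1e-9)  # the final sample satisfies the constraint
```

Because the projection is applied at every step, no retraining of the generative model is needed, which is the appeal of training-free approaches.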

Emily Ruth Diana

Carnegie Mellon University

Short Bio

Emily Ruth Diana is an Assistant Professor in the Operations Research group at CMU's Tepper School of Business and a faculty affiliate of the AI Measurement Science and Engineering (AIMSEC) initiative at CMU. She received her Ph.D. in Statistics and Data Science from the Wharton School of the University of Pennsylvania, where she was advised by Michael Kearns and Aaron Roth. Her research focuses on the intersection of ethical algorithm design and socially responsible machine learning. She is the recipient of the 2022 Wharton School's J. Parker Memorial Bursk Prize for Excellence in Research and the 2024 FORC Best Paper Award, and has been recognized as both a Rising Star in EECS by MIT and a Future Leader in Data Science by the University of Michigan.


Talk Title

Minimax Fairness in Strategic Classification

Abstract

In strategic classification, agents manipulate their features, at a cost, to receive a positive classification outcome from the learner's classifier. The goal of the learner in such settings is to learn a classifier that is robust to strategic manipulations. While the majority of works in this domain consider accuracy as the primary objective of the learner, in this work, we consider learning objectives that have group fairness guarantees in addition to accuracy guarantees. We work with the minimax group fairness notion that asks for minimizing the maximal group error rate across population groups. We formalize a fairness-aware Stackelberg game between a population of agents consisting of several groups, with each group having its own cost function, and a learner in the agnostic PAC setting in which the learner is working with a hypothesis class H. When the cost functions of the agents are separable, we show the existence of an efficient algorithm that finds an approximately optimal deterministic classifier for the learner when the number of groups is small. This algorithm remains efficient, both statistically and computationally, even when H is the set of all classifiers. We then consider cost functions that are not necessarily separable and show the existence of oracle-efficient algorithms that find approximately optimal randomized classifiers for the learner when H has finite strategic VC dimension. These algorithms work under the assumption that the learner is fully transparent: the learner draws a classifier from its distribution (randomized classifier) before the agents respond by manipulating their feature vectors. We highlight the effectiveness of such transparency in developing oracle-efficient algorithms. We conclude by verifying the efficacy of our algorithms on real data through an experimental analysis.
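For intuition about the minimax group-fairness objective itself, the toy sketch below approximates a minimax-optimal randomized classifier with a generic multiplicative-weights game over groups. The per-group error numbers are made up, and this generic game is not the oracle-efficient algorithm from the talk.

```python
# Toy illustration of minimax group fairness: minimize the maximum group
# error rate by playing a zero-sum game between a learner (picks a
# classifier) and an adversary (weights the groups).

import math

# Per-group error rates of three candidate classifiers on two groups
# (made-up numbers for illustration).
errors = {
    "h0": [0.10, 0.50],
    "h1": [0.50, 0.10],
    "h2": [0.30, 0.30],
}

n_groups, eta, T = 2, 0.5, 500
w = [1.0] * n_groups               # adversary's weights over groups
counts = {h: 0 for h in errors}    # how often the learner picks each h

for _ in range(T):
    total = sum(w)
    p = [wi / total for wi in w]
    # Learner best-responds to the current group weighting.
    h = min(errors, key=lambda k: sum(pg * e for pg, e in zip(p, errors[k])))
    counts[h] += 1
    # Adversary shifts weight toward groups the chosen classifier hurts.
    w = [wi * math.exp(eta * e) for wi, e in zip(w, errors[h])]

# The empirical mixture over the learner's choices is a randomized classifier.
mix = {h: c / T for h, c in counts.items()}
group_err = [sum(mix[h] * errors[h][g] for h in errors)
             for g in range(n_groups)]
print([round(e, 2) for e in group_err])  # both group errors near 0.30
```

Here the randomized classifier drives both group error rates toward the minimax value of 0.30, which no single group's error can beat without raising the other's above it.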



FAQ

Questions?

Contact us at constrainedml@gmail.com or @constrained_ml.