Don’t Trade Off Safety: Diffusion Regularization for Constrained Offline RL

TL;DR: A new approach to offline safe reinforcement learning with high performance and safety guarantees

Abstract: Constrained reinforcement learning (RL) seeks high-performance policies under safety constraints. We focus on an offline setting in which the agent learns from only a fixed dataset, a common requirement in realistic tasks where unsafe exploration must be avoided. To address this setting, we propose Diffusion-Regularized Constrained Offline Reinforcement Learning (DRCORL), which first uses a diffusion model to capture the behavioral policy from offline data and then extracts a simplified policy to enable efficient inference. We further apply gradient manipulation for safety adaptation, balancing the reward objective against constraint satisfaction. This approach leverages high-quality offline data while incorporating safety requirements. Empirical results show that DRCORL achieves reliable safety performance, fast inference, and strong reward outcomes across robot learning tasks. Compared to existing safe offline RL methods, it consistently meets cost limits and performs well with the same hyperparameters, indicating practical applicability in real-world scenarios.
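The abstract does not spell out the gradient-manipulation rule, so the sketch below is only a rough illustration of one common form of such manipulation: projecting the reward gradient away from a conflicting cost gradient, and prioritizing the cost whenever the constraint is violated. The function name `combine_gradients`, its arguments, and the projection rule are illustrative assumptions, not the authors' algorithm.

```python
import torch


def combine_gradients(reward_grad: torch.Tensor,
                      cost_grad: torch.Tensor,
                      cost_value: float,
                      cost_limit: float) -> torch.Tensor:
    """Return a single ascent direction for the policy parameters.

    reward_grad: flattened gradient that increases expected reward.
    cost_grad:   flattened gradient that increases expected cost.
    """
    safety_grad = -cost_grad  # ascent direction for safety (lower cost)
    if cost_value > cost_limit:
        # Constraint violated: update purely toward satisfying the constraint.
        return safety_grad
    dot = torch.dot(reward_grad, safety_grad)
    if dot < 0:
        # Objectives conflict: project the reward gradient onto the plane
        # orthogonal to the safety gradient so the step does not increase cost.
        reward_grad = reward_grad - dot / (safety_grad.norm() ** 2 + 1e-12) * safety_grad
    return reward_grad
```

In practice the two inputs would be the flattened policy-parameter gradients computed from reward and cost critics, and the returned direction would be fed to the optimizer in place of the plain reward gradient.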

Authors

Junyu Guo

Zhi Zheng

Donghao Ying

Ming Jin

Shangding Gu

Costas Spanos

Javad Lavaei

Published

December 1, 2025