Ming Jin
News & Updates
NSF supports Embodied Optimization project
Our project on Embodied Optimization for Decision-Making in Dynamic and Uncertain Environments has been selected for support by NSF. Thanks, NSF!
Apr 1, 2025
Safe Reinforcement Learning tutorial at IJCAI 2024
Slides from our Safe Reinforcement Learning tutorial at IJCAI 2024 are now available.
Aug 1, 2024
IJCAI 2024 Workshop: Trustworthy Interactive Decision Making with Foundation Models
Join us at IJCAI 2024 Workshop on Trustworthy Interactive Decision Making with Foundation Models (Call for Contributions)
Mar 1, 2024
CCI supports project on LLMs for Supply Chain Cybersecurity
Our project on LLMs for Supply Chain Cybersecurity, in collaboration with Prof. Peter Beling, has been selected for support by Commonwealth Cyber Initiative (CCI).
Jan 1, 2024
NSF SLES supports safe RL for power systems
Our project on safe RL for power systems, in collaboration with Prof. Javad Lavaei, has been selected for support by NSF under the Safe Learning-Enabled Systems (SLES) program.
Nov 1, 2023
Paper: Optimization Autoformalism using LLMs
Paper on Optimization Autoformalism that uses large language models to craft optimization solutions for decision-making.
Aug 15, 2023
Amazon–VT Initiative supports Safe RL for Interactive Systems
Our project on Safe RL for Interactive Systems with Stakeholder Alignment has been selected for support by the Amazon–VT Initiative in Efficient and Robust Machine Learning.
Aug 1, 2023
Paper accepted: Certified robustness for neural ODE (L-CSS)
Paper on certified robustness for neural ODE accepted in IEEE Control Systems Letters (L-CSS).
Mar 1, 2023
Two papers at IFAC 2023: Sobolev training theory and decision-focused VI
Two papers, on the theoretical analysis of Sobolev training and on decision-focused variational inequalities, at the IFAC World Congress 2023.
Mar 1, 2023
Paper: Derivative-free meta blackbox optimization on manifold (L4DC 2023, oral)
Paper on derivative-free meta black-box (nonconvex) optimization on manifolds at L4DC 2023 (oral presentation).
Mar 1, 2023
USENIX Security 2023: adversarial ML (sifting clean from poisoned)
One paper on adversarial ML (sifting out clean data from poisoned data) at USENIX Security 2023.
Feb 1, 2023
ICLR 2023: three papers on meta-safe RL, data valuation, and adversarial ML
Three papers on meta-safe reinforcement learning (spotlight), model-agnostic data valuation (spotlight), and adversarial ML (certified robustness against UAP/backdoors) at ICLR 2023.
Jan 1, 2023