CARE: Constraints-Abiding Explainable Reinforcement Learning

CARE aims to devise inherently interpretable safe RL methods providing transparency with regard to operational constraints.

The main activities include:

– Study symbolic representations for constrained RL policy models.

– Design, implement and validate an interpretable safe RL-based solution in constrained settings.

CARE is an ENFIELD Exchange Scheme project on Human-Centric AI, carried out in collaboration with Eindhoven University of Technology (TU/e) and the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC). It will deliver symbolic models for inherently interpretable safe RL methods in constrained settings.

Duration

2024-2025

Innovations foreseen:

– Extend safe RL methods to exploit symbolic models so that they safely adhere to operational constraints that are either learnt from demonstrations or made explicit in some form (depending on the data available from the use case to be decided).

– Devise an inherently interpretable safe RL method that can provide explanations making explicit, among other contextual factors, the operational constraints.
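As a purely illustrative sketch of the first idea, a symbolic operational constraint (here, a hypothetical "forbidden states" rule in a toy gridworld) can be used to mask unsafe actions during policy execution, and the same rule can be cited verbatim in a human-readable explanation. The environment, names, and constraint form below are assumptions for illustration, not CARE's actual design:

```python
# Illustrative only: an explicit symbolic constraint (forbidden states)
# used both to mask unsafe actions and to generate explanations.
FORBIDDEN = {(1, 1)}  # symbolic rule: the agent must never enter cell (1, 1)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def next_state(state, action):
    """Deterministic transition on a 3x3 grid, clamped at the borders."""
    dx, dy = ACTIONS[action]
    x, y = state
    return (min(max(x + dx, 0), 2), min(max(y + dy, 0), 2))

def safe_actions(state):
    """Mask out any action whose successor violates the symbolic constraint."""
    return [a for a in ACTIONS if next_state(state, a) not in FORBIDDEN]

def explain(state, action):
    """Explanation that explicitly references the operational constraint."""
    successor = next_state(state, action)
    if successor in FORBIDDEN:
        return f"{action} blocked: successor {successor} violates the forbidden-state rule"
    return f"{action} allowed: successor {successor} satisfies all constraints"
```

Because the constraint is a symbolic object rather than a learnt weight, it can be inspected directly and quoted in every explanation, which is the kind of transparency the project targets.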

Impact:

  • Train safe policies that are interpretable with respect to constraints, allowing inspection of the operational constraints learnt and of adherence to them.
  • Increase humans’ ability to maintain control in safety-critical settings by offering human-understandable explanations that respect operational needs and constraints.
  • Increase situational awareness and system trustworthiness by offering explanations that explicitly indicate the operational constraints ensuring safety of operations.
