CARE aims to devise inherently interpretable safe reinforcement learning (RL) methods that provide transparency with regard to operational constraints.
The main activities include:
– Study symbolic representations for constrained RL policy models.
– Design, implement and validate an interpretable safe RL-based solution in constrained settings.
CARE is an ENFIELD Exchange Scheme project on Human-Centric AI, carried out in collaboration with Eindhoven University of Technology (TU/e) and the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC). It will deliver symbolic models for inherently interpretable safe RL methods in constrained settings.
2024-2025
– Extend safe RL methods to exploit symbolic models so that they safely adhere to operational constraints that are either learnt from demonstrations or made explicit in some form, depending on the data available from the use case to be selected (see the sketch after this list).
– Devise an inherently interpretable safe RL method able to provide explanations that account for, among other contextual factors, operational constraints.
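To make the idea of exploiting symbolic models for safe adherence to operational constraints more concrete, the following minimal Python sketch shows one common pattern: a symbolic "shield" that masks actions violating explicit, human-readable constraints before the RL agent acts, while recording which rule blocked which action so explanations can be produced. All names here (Constraint, SymbolicShield, the obstacle-distance rule) are hypothetical illustrations, not artifacts of the CARE project.

```python
# Illustrative sketch only: a symbolic shield that filters an RL agent's
# candidate actions through explicit operational constraints.
# All names are hypothetical and not taken from the CARE project.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import random


@dataclass
class Constraint:
    """A human-readable operational constraint over (state, action) pairs."""
    name: str
    holds: Callable[[Dict, str], bool]  # True if the pair satisfies the constraint


class SymbolicShield:
    """Keeps only constraint-satisfying actions and traces which rule
    blocked each discarded action (useful material for explanations)."""

    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def safe_actions(self, state: Dict, actions: List[str]) -> Tuple[List[str], List[Tuple[str, List[str]]]]:
        allowed, blocked = [], []
        for a in actions:
            violated = [c.name for c in self.constraints if not c.holds(state, a)]
            if violated:
                blocked.append((a, violated))
            else:
                allowed.append(a)
        return allowed, blocked


# Example constraint: never move forward when an obstacle is closer than 0.5 m.
constraints = [
    Constraint("keep_distance",
               lambda s, a: not (a == "forward" and s["obstacle_dist"] < 0.5)),
]
shield = SymbolicShield(constraints)

state = {"obstacle_dist": 0.3}
safe, blocked = shield.safe_actions(state, ["forward", "left", "right"])
# The RL policy (a random placeholder here) chooses only among safe actions.
action = random.choice(safe)
print("chosen:", action, "| blocked:", blocked)
```

In this pattern the constraints could equally be rules learnt from demonstrations rather than hand-written, and the `blocked` trace provides the raw material for the constraint-aware explanations the project targets.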