Our work focuses on developing, evaluating, and validating novel deep reinforcement learning (DRL) and inverse reinforcement learning (IRL) methods that enable intelligent agents to perform tasks in collaboration with humans. These methods aim to align agent behaviour with human preferences, objectives, and constraints, promoting AI trustworthiness, safety, and humans' situation awareness in jointly performed tasks.
Interpretability, transparency, and explainability of decision making are also central to our work.
Specific Topics
- Imitation learning and inverse reinforcement learning with deep machine learning methods
- Modelling trajectories in arbitrary domains by means of mixtures of policy models
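To illustrate the last topic, below is a minimal sketch of trajectory modelling with a mixture of policies. It is a hypothetical toy example, not our actual method: two linear-Gaussian policies are assumed, and a trajectory of state-action pairs is softly assigned to the mixture components by posterior responsibility.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_loglik(traj, W, sigma=0.1):
    """Log-likelihood of a (states, actions) trajectory under a
    linear-Gaussian policy a = W s + Gaussian noise (toy assumption)."""
    states, actions = traj
    resid = actions - states @ W.T
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def mixture_responsibilities(traj, policies, weights):
    """Posterior probability that each mixture component generated traj."""
    logp = np.array([np.log(w) + policy_loglik(traj, W)
                     for w, W in zip(weights, policies)])
    logp -= logp.max()          # stabilise before exponentiating
    p = np.exp(logp)
    return p / p.sum()

# Two hypothetical 2-D policies; data is generated by the first one.
W1 = np.array([[1.0, 0.0]])
W2 = np.array([[-1.0, 0.0]])
states = rng.normal(size=(20, 2))
actions = states @ W1.T + 0.05 * rng.normal(size=(20, 1))

resp = mixture_responsibilities((states, actions), [W1, W2], [0.5, 0.5])
print(resp)  # component 0 dominates, since it generated the data
```

In a full pipeline, responsibilities of this kind would feed an EM-style loop that re-estimates each policy from its softly assigned trajectories; here only the assignment step is shown.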