Sports come naturally to humans; however, no existing robot learning algorithm achieves human-level ability at playing sports. In this project,
we investigate integrating dynamics prediction and control in AI agents that play sports, making them more adaptable to changing dynamics. Our contributions
are as follows. First, we develop Tennis2D, a new learning environment for developing reinforcement learning (RL) algorithms that play tennis. Tennis2D features realistic dynamics and poses dynamics-adaptation challenges for both single-agent and multi-agent RL. Second, we propose a new dynamics prediction model, the Contrastive Dynamics Model (CDM), which uses contrastive representation learning to estimate unknown physical parameters and incorporates the estimates to improve prediction accuracy. Lastly, we propose Dynamics-aware RL (DynARL), which
combines model-based prediction with model-free learning. DynARL improves generalisation to both in-distribution and out-of-distribution dynamics, and DynARL-trained agents are more robust to windy conditions.
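The contrastive-estimation idea behind CDM can be illustrated with a toy sketch. Everything below (the linear encoder, random-projection weights, and the simulated environments) is our illustrative assumption, not the project's actual CDM: windows of transitions drawn from the same environment share hidden physical parameters, so an InfoNCE-style contrastive loss pulls their embeddings together while treating windows from other environments as negatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(window, W):
    """Toy encoder: flatten a transition window and project it to an embedding."""
    z = window.reshape(-1) @ W
    return z / (np.linalg.norm(z) + 1e-8)  # unit-normalise for cosine similarity

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive against the batch."""
    logits = anchors @ positives.T / temperature           # (B, B) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                    # i-th anchor matches i-th positive

# Simulate transition windows from environments that differ only in a hidden
# physical parameter: each environment scales its transitions by its own value.
params = rng.uniform(0.5, 2.0, size=8)                     # 8 environments
base = rng.normal(size=(8, 2, 5, 4))                       # 2 windows per env, 5 steps, 4 dims
windows = base * params[:, None, None, None]

W = rng.normal(size=(20, 16))                              # random projection (untrained)
anchors = np.stack([encode(w, W) for w in windows[:, 0]])
positives = np.stack([encode(w, W) for w in windows[:, 1]])
loss = info_nce(anchors, positives)
```

In a full method, minimising this loss would train the encoder so that the embedding encodes the hidden parameter, and a dynamics model conditioned on that embedding could then specialise its predictions to the current environment.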