This slide recaps what a Markov decision process (MDP) is: a formal way to describe an agent repeatedly making decisions in a stochastic environment. The key idea is the Markov property: given the current state and chosen action, the distribution over the next state and reward does not depend on any earlier history.

$$ p(s',r\mid s,a) \doteq \Pr\{S_t = s',\, R_t = r \mid S_{t-1} = s,\, A_{t-1} = a\} $$
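This four-argument dynamics function can be made concrete as a lookup table. The sketch below uses a hypothetical two-state MDP (states `"A"`, `"B"`; actions `"stay"`, `"go"`) invented purely for illustration; the transition probabilities and rewards are arbitrary assumptions, not from the source.

```python
import random

# Tabular dynamics p(s', r | s, a) for a toy, made-up two-state MDP.
# Keys are (s, a) pairs; values map outcomes (s', r) to probabilities.
P = {
    ("A", "stay"): {("A", 0.0): 0.9, ("B", 1.0): 0.1},
    ("A", "go"):   {("B", 1.0): 0.8, ("A", 0.0): 0.2},
    ("B", "stay"): {("B", 0.0): 1.0},
    ("B", "go"):   {("A", 0.0): 0.5, ("B", 2.0): 0.5},
}

def step(s, a, rng=random):
    """Sample (s', r) from p(. , . | s, a).

    The sample depends only on the current (s, a) pair, never on
    earlier states or actions -- that is the Markov property.
    """
    outcomes, probs = zip(*P[(s, a)].items())
    (s_next, r), = rng.choices(outcomes, weights=probs, k=1)
    return s_next, r

# Sanity check: p(. , . | s, a) is a proper distribution for each (s, a).
for (s, a), dist in P.items():
    assert abs(sum(dist.values()) - 1.0) < 1e-12
```

Storing the joint distribution over (s', r), rather than separate transition and reward tables, mirrors the equation above: reward can depend on where the transition lands.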

Meaning of the symbols