# What does the Markov assumption state about the history of sequences?

Does the Markov assumption say that the conditional probability of the next state depends only on the current state, or does it say that it depends on a fixed, finite number of previous states?

As far as I understand from the related Wikipedia article, the probability of transitioning to the next state s′ depends only on the current state s.

However, in the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig, on page 568, they say: "Markov assumption — that the current state depends on only a finite fixed number of previous states".

To me, the second statement seems to contradict the first, because it implies that a state can depend on a history of states as long as that history has a fixed length. For example, the current state could depend on the last state and the one before it, i.e. on 2 sequential previous states (a finite number of states).

Are the Markov assumption and the Markov property the same thing?

The first-order version I have in mind is:

$$p(s_{t+1} \mid s_t, s_{t-1:1}) = p(s_{t+1} \mid s_t), \quad \forall t$$
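To make the comparison concrete, here is a minimal sketch (in Python, with hypothetical made-up transition probabilities) of a second-order chain over states {0, 1}, where the next state depends on the last two states. If you treat the *pair* of recent states as the state itself, the same process is first-order Markov, which suggests the two phrasings differ mainly in what counts as "the state":

```python
import random

# Hypothetical second-order transition probabilities:
# P(next state = 1 | (s_{t-1}, s_t)) for each pair of recent states.
# These numbers are made up purely for illustration.
p_next_is_1 = {
    (0, 0): 0.1,
    (0, 1): 0.6,
    (1, 0): 0.4,
    (1, 1): 0.9,
}

def step(pair):
    """Sample the next state given the pair (s_{t-1}, s_t).

    Returning the new pair (s_t, s_{t+1}) shows the first-order view:
    the "state" is the pair, and the next pair depends only on the
    current pair, not on anything earlier.
    """
    prev, cur = pair
    nxt = 1 if random.random() < p_next_is_1[(prev, cur)] else 0
    return (cur, nxt)

random.seed(0)
pair = (0, 0)
for _ in range(5):
    pair = step(pair)
# The chain over pairs satisfies the first-order Markov property,
# even though the chain over single states is second-order.
print(pair)
```

This is the standard state-augmentation trick: any chain that depends on a fixed, finite number k of previous states can be rewritten as a first-order chain over k-tuples of states.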