- Keras Reinforcement Learning Projects
- Giuseppe Ciaburro
One-dimensional random walk
In a one-dimensional random walk, we study the motion of a point-like particle that is constrained to move along a straight line in one of only two directions (right and left). At each (random) movement, it can move one step to the right with a fixed probability p or one step to the left with probability q. Each step is of equal length and is independent of the others, as shown in the following diagram:

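To make the mechanism concrete, the following sketch simulates such a walk in Python; the number of steps, the value of p, and the seed are illustrative assumptions, not values taken from the book.

```python
import random

# Minimal sketch of a one-dimensional random walk: the particle starts at the
# origin and, at each step, moves +1 with probability p or -1 with probability
# q = 1 - p. The parameters below are illustrative assumptions.
def random_walk(n_steps, p=0.5, seed=None):
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        step = +1 if rng.random() < p else -1  # Z(n) = +1 with prob. p, -1 with prob. q
        position += step
        path.append(position)
    return path

print(random_walk(10, p=0.5, seed=42))
```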
The position of the point after n steps, identified by its abscissa X(n), obviously contains a random term. We want to calculate the probability that, after n movements, the particle will return to the starting point (it should be noted that nothing assures us with any certainty that the point will actually return to that position). To do this, we will use the X(n) variable, which gives the abscissa of the point on the straight line after it has taken n steps. Obviously, this is a discrete random variable, and its law follows from the binomial distribution of the number of steps taken to the right.
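As a rough check of this return probability: the particle can be back at the origin only after an even number of steps n = 2m, in which case it must have taken exactly m steps to the right and m to the left, an event with binomial probability C(2m, m) p^m q^m. The sketch below computes this value; the choice of p and of the step counts is an illustrative assumption.

```python
from math import comb

# Probability that the walk is back at the origin after n steps:
# zero for odd n, and C(2m, m) * p^m * q^m for n = 2m.
def prob_at_origin(n, p=0.5):
    if n % 2 == 1:
        return 0.0          # an odd number of steps cannot bring the particle back
    m = n // 2
    q = 1.0 - p
    return comb(n, m) * (p ** m) * (q ** m)

for n in (2, 4, 10):
    print(n, prob_at_origin(n, p=0.5))
```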
The scheme of this variable is as follows: at every instant n, the particle takes a step to the right or to the left according to the result of a random variable, Z(n), which takes the value +1 with probability p > 0 and the value -1 with probability q, with p + q = 1, as shown in the previous diagram. Suppose that the random variables Z(n), with n = 1, 2, ..., are independent and all have the same distribution. Then the position of the particle at instant n is given by the following equation:

X(n) = X(n-1) + Z(n) = Z(1) + Z(2) + ... + Z(n), with X(0) = 0
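A minimal sketch of this recursion, assuming X(0) = 0 and drawing the increments Z(n) with NumPy, is shown below; the sample size, p, and the seed are illustrative assumptions.

```python
import numpy as np

# Sketch of the recursion X(n) = X(n-1) + Z(n): sample the i.i.d. increments
# Z(1), ..., Z(N) and take their cumulative sum to obtain the whole trajectory.
rng = np.random.default_rng(0)
N, p = 20, 0.5
Z = rng.choice([1, -1], size=N, p=[p, 1 - p])  # Z(n) = +1 w.p. p, -1 w.p. q
X = np.cumsum(Z)                               # X(n) = Z(1) + ... + Z(n), X(0) = 0
print(X)
```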
The X(n) variable represents a Markov chain because, to determine the probability that the particle is in a certain position at the next instant, we only need to know where it is at the current instant; knowing where it was at all the instants prior to the current one adds nothing.
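The sketch below illustrates this property under stated assumptions: the law of X(n+1) is obtained from the law of X(n) alone, using only the one-step probabilities p and q. The dictionary representation of the distribution is an implementation choice for illustration, not code from the book.

```python
# Markov property in action: each update uses only the current distribution
# over positions and the one-step transition probabilities p (right) and q (left).
def step_distribution(dist, p=0.5):
    q = 1.0 - p
    new_dist = {}
    for x, prob in dist.items():
        new_dist[x + 1] = new_dist.get(x + 1, 0.0) + prob * p
        new_dist[x - 1] = new_dist.get(x - 1, 0.0) + prob * q
    return new_dist

dist = {0: 1.0}                 # the particle starts at the origin with certainty
for n in range(4):              # propagate the law of X(n) one step at a time
    dist = step_distribution(dist)
print(dict(sorted(dist.items())))
```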