Machine Learning News Hubb
Rubik and Markov. Here we obtain the probability of… | by Eduardo Testé | Aug, 2023

September 15, 2023
in Artificial Intelligence


A Markov process on depth classes

To find p(d|N) we imagine the depth classes as sites of a Markov process. Let me explain:

Randomly moving the cube faces is described as a Markov process (one dimensional random walk) between depth classes. Image by the author.

A depth class d is the set of all the cube's states at depth d (the minimal number of moves to the solved state). If we randomly choose a state in depth class d and turn a random face in a random direction, we get either a state in class d + 1, with probability p_d, or a state in class d - 1, with probability q_d. In the quarter-turn metric there are no moves that stay within a class.

That defines a Markov process in which each site is a whole depth class. In our case, only contiguous depth classes are connected by a single jump. To be precise, this is a discrete-time birth-death Markov chain. Because the number of sites is finite and every class can be reached from every other, the chain is irreducible, and a unique stationary distribution exists.

We assume the random move at each step is drawn uniformly. That induces transition probabilities p_d, q_d (to be computed) between the depth classes. The number of random moves N is the discrete time of the Markov process. This is also a one-dimensional random walker: at every site (depth class d), the probability of moving forward is p_d and the probability of moving backward is q_d. Roughly speaking, this one-dimensional chain is the "radial" direction in the Rubik's graph (organized in the depth-radial layout).

The transition matrix

Any Markov process is encoded in a transition matrix M. The (i, j) entry of M is the probability of jumping from site i to site j. In our case the only entries different from zero are M_{d, d+1} = p_d and M_{d, d-1} = q_d.

Here p_0 = 1: from depth class 0 (containing just the solved state) we can only jump to depth class 1 (there is no class -1). Also, q_26 = 1: from depth class 26 we can only jump to depth class 25 (there is no class 27). For the same reason, there is no p_26 or q_0.
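As a quick illustration, here is a minimal sketch of this matrix in Python. The p_d values used are placeholders, not the cube's actual probabilities (those are computed later in the article); the only structural constraints are p_0 = q_26 = 1 and that every row sums to one.

```python
import numpy as np

n = 27                     # depth classes 0..26
p = np.full(n, 0.9)        # placeholder forward probabilities
q = 1.0 - p
p[0], q[0] = 1.0, 0.0      # no class -1, so p_0 = 1
p[26], q[26] = 0.0, 1.0    # no class 27, so q_26 = 1

M = np.zeros((n, n))
for d in range(n):
    if d < n - 1:
        M[d, d + 1] = p[d]   # jump outward: d -> d+1
    if d > 0:
        M[d, d - 1] = q[d]   # jump inward:  d -> d-1

# Every row is a probability distribution over the next class.
assert np.allclose(M.sum(axis=1), 1.0)
```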

The stationary distribution

We mapped the action of randomly moving the cube to a one-dimensional random walker that jumps back and forth between depth classes with probabilities q_d and p_d. What happens after a long walk? How often does the walker visit a particular site? In real life: how often is a depth class visited when the cube undergoes random turns?

In the long run, and no matter what the starting point was, the time the walker spends in the depth class d is proportional to the population D(d) of that depth class. This is the main point here:

the (normalized) depth-population list D(d) should be interpreted as the vector representing the stationary distribution of our depth class Markov process.

Mathematically, D(d) is a left eigenvector of M with eigenvalue 1: D M = D.

This matrix equation will give us 26 linear equations, from which we will get the p_i’s and q_i’s.
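To make the eigenvector statement concrete, here is a toy four-site birth-death chain (with made-up probabilities, not the cube's) whose populations, built from detailed balance, indeed satisfy D M = D:

```python
import numpy as np

# Illustrative probabilities for a 4-site chain, sites 0..3.
p = [1.0, 0.7, 0.4]   # p_0, p_1, p_2 (forward)
q = [0.3, 0.6, 1.0]   # q_1, q_2, q_3 (backward)

M = np.zeros((4, 4))
M[0, 1] = p[0]
M[1, 0], M[1, 2] = q[0], p[1]
M[2, 1], M[2, 3] = q[1], p[2]
M[3, 2] = q[2]

# Build the populations by detailed balance: D_{i+1} = D_i p_i / q_{i+1}.
D = [1.0]
for i in range(3):
    D.append(D[i] * p[i] / q[i])
D = np.array(D) / sum(D)     # normalize to a distribution

# D is a left eigenvector of M with eigenvalue 1.
assert np.allclose(D @ M, D)
```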

Taking into account that p_0 = q_26 = 1, we can rewrite these as

D_{i-1} p_{i-1} = D_i q_i,   i = 1, …, 26.

These are known as detailed balance equations: the flux, defined as the stationary site population times the jumping probability, is the same in both directions. Solving them recursively gives

q_i = (D_{i-1} − D_{i-2} + D_{i-3} − … ± D_0) / D_i,

and p_i is obtained using p_i + q_i = 1.
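We can sanity-check the detailed balance solution against the first few exactly known depth populations of the cube in the quarter-turn metric (D_0, …, D_5 = 1, 12, 114, 1068, 10011, 93840). The recursion reproduces the first entries of the q_i list reported below:

```python
# Exactly known depth-class populations, quarter-turn metric, d = 0..5.
D = [1, 12, 114, 1068, 10011, 93840]

p = {0: 1.0}    # from the solved state every move goes outward
q = {}
for i in range(1, len(D)):
    q[i] = D[i - 1] * p[i - 1] / D[i]   # detailed balance: D_{i-1} p_{i-1} = D_i q_i
    p[i] = 1.0 - q[i]

print([round(q[i], 6) for i in range(1, 6)])
# -> [0.083333, 0.096491, 0.096442, 0.096394, 0.096398]
```

Note how q_1 = 1/12: of the 12 quarter turns available from a depth-1 state, exactly one undoes the move and returns to the solved state.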

Some conditions on the population of a depth class

There is something interesting about these solutions. Because q_i is a probability, we must have 0 ≤ q_i ≤ 1, and that translates into the following condition on the populations D_k:

0 ≤ D_{i-1} − D_{i-2} + D_{i-3} − … ± D_0 ≤ D_i.

This is a tower of inequalities, one for each i, that the depth populations D_k must fulfill.

In particular, the last two inequalities are

0 ≤ D_25 − D_24 + D_23 − … − D_0 ≤ D_26,
0 ≤ D_26 − D_25 + D_24 − … + D_0 ≤ D_27.

Because D_27 = 0, the lower and upper bounds in the second line are equal, so

D_0 + D_2 + … + D_26 = D_1 + D_3 + … + D_25.

The sum of the population of the even sites should be equal to the sum of the population of the odd sites!

We can see this as a detailed balance between even and odd sites: every move goes to a different, contiguous depth class. Any jump takes you from the odd super-class (the union of all odd depth classes) to the even super-class (the union of all even depth classes), so the odd-to-even jump occurs with probability 1 (and vice versa). Since the probabilities are one in both directions, the populations must be equal by detailed balance.

For the same reason, the Markov process attains a period-two "stationary distribution" that switches between even and odd sites after every move (discrete time step N).
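A tiny example makes the period-two behavior explicit. On any birth-death chain without self-loops, a walker started at site 0 has all of its probability mass on odd sites after an odd number of steps and on even sites after an even number of steps. The probabilities below are illustrative, not the cube's:

```python
import numpy as np

# 4-site chain, sites 0..3; forward probabilities p, backward q = 1 - p.
M = np.zeros((4, 4))
M[0, 1] = 1.0                 # p_0 = 1
M[1, 0], M[1, 2] = 0.3, 0.7   # q_1, p_1
M[2, 1], M[2, 3] = 0.6, 0.4   # q_2, p_2
M[3, 2] = 1.0                 # q_3 = 1

x = np.array([1.0, 0.0, 0.0, 0.0])   # start at site 0
for N in range(1, 7):
    x = x @ M
    odd_mass = x[1] + x[3]
    # all mass on odd sites at odd N, on even sites at even N
    assert np.isclose(odd_mass, 1.0 if N % 2 else 0.0)
```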

A problem with the data

The depth populations D_d reported in our data source are approximate for d = 19, 20, 21, 22, 23, 24. So there is no guarantee they satisfy all these conditions (inequalities). Don't be surprised if we get some probabilities q_i outside the range [0, 1] (as is indeed the case!). In particular, if we check the last condition (the even-odd population equality), it is off by a large number! (update: see note at the end)

A way out

The odd classes seem to be underpopulated (a consequence of the approximation the authors chose when reporting the data). To make things work (i.e., get probabilities in the range [0, 1]), we add that large discrepancy to the population of depth class 21 (the odd class with the greatest population, i.e., the one that will notice the addition the least). With this correction, all the obtained probabilities are valid (which means the inequalities are also satisfied).

The jumping probabilities are:

p_i = 
{1., 0.916667, 0.903509, 0.903558, 0.903606, 0.903602, 0.90352, 0.903415,
0.903342, 0.903292, 0.903254, 0.903221, 0.903189, 0.903153, 0.903108,
0.903038, 0.902885, 0.902409, 0.900342, 0.889537, 0.818371, 0.367158,
0.00342857, 6.24863e-12, 0.00022, 0.0833333}
# i from 0 to 25

q_i =
{0.0833333, 0.0964912, 0.0964419, 0.096394, 0.0963981, 0.0964796,
0.096585, 0.096658, 0.0967081, 0.0967456, 0.0967786, 0.0968113,
0.0968467, 0.0968917, 0.0969625, 0.0971149, 0.0975908, 0.0996581,
0.110463, 0.181629, 0.632842, 0.996571, 1., 0.99978, 0.916667, 1.}
# i from 1 to 26

Notice that almost all of the first p_i (up to i = 21) are close to 1. These are the probabilities of moving away from the solved state. The probabilities of moving closer to the solved state, the q_i, are almost 1 for i greater than 21. This puts in perspective why it is so difficult to solve the cube by turning faces at random: the random walker (the randomly moved cube) will be "trapped forever" in a neighborhood of depth class 21.
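As a final sanity check, simulating the walk with the q_i list above (and p_i = 1 − q_i, p_0 = 1) shows the walker quickly drifts out to the neighborhood of depth class 21 and essentially never returns:

```python
import random

# Backward probabilities q_i for i = 1..26, copied from the list above.
q = [0.0833333, 0.0964912, 0.0964419, 0.096394, 0.0963981, 0.0964796,
     0.096585, 0.096658, 0.0967081, 0.0967456, 0.0967786, 0.0968113,
     0.0968467, 0.0968917, 0.0969625, 0.0971149, 0.0975908, 0.0996581,
     0.110463, 0.181629, 0.632842, 0.996571, 1.0, 0.99978, 0.916667, 1.0]

random.seed(0)
d, visits = 0, [0] * 27
for _ in range(200_000):
    if d == 0 or random.random() >= q[d - 1]:  # p_0 = 1; else forward w.p. 1 - q_d
        d += 1
    else:                                      # backward w.p. q_d
        d -= 1
    visits[d] += 1

deep = sum(visits[18:25]) / sum(visits)
print(f"fraction of time at depths 18-24: {deep:.3f}")
```

With this seed the walker spends well over 99% of its time at depths 18 through 24, in line with the stationary distribution being concentrated around depth 21.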





© Machine Learning News Hubb All rights reserved.