Symmetry of the cube

Discussion in 'Physics & Math' started by arfa brane, Jan 22, 2019.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    This switching model of course speaks to solving the cube. But also to coding theory, Cayley hash tables and encryption, among other things.
    It represents, I think, a fairly general method of solving the problem of finding a path through a graph, by finding n paths for n inputs to n outputs, in parallel or sequentially, through a switching network fabricated from subnetworks (i.e. a switching fabric). The parallel solution speaks to parallel computation, or n algorithms acting together (so along the same timelike surface).

    So what you really do when you solve, or try to solve, the Rubik's cube puzzle, is glue some abstract points to a boundary on which the identity acts to preserve a partial solution. So if you take a "scrambled" 3 x 3 x 3 cube and reconnect, say, two vertices and an edge into a 3 x 1 row or column of "points on a boundary", you preserve this partial solution and act on the remainder with the root of the identity, on a "lower boundary".

    This idea is made concrete by physically gluing a vertex to something (an "upper boundary", as say a metal rod with a shaped end you can glue a corner to).

    Do this with the smallest puzzle, the 2 x 2 x 2. Now there are 7 permutable elements remaining; you want to glue at least one of these to the upper boundary and leave them there. That is, you quotient the space of rotations by reducing the number of elements you can rotate, until there is only one rotation left. Then you're in the first fibre, and you can close the "outputs" by leaving the puzzle one graph move from the permutation identity, a string of letters.
    (in the abstract, of course; physically you can claim the puzzle is solved, thus removing the identity permutation by asserting you don't have to go there).
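    As a quick check of the "glue one corner" idea (a sketch in Python, assuming the standard counting for the 2 x 2 x 2: fixing one corner removes whole-cube rotations, the remaining 7 corners permute freely, and only 6 of their twists are independent):

```python
from math import factorial

# With one corner glued to the "upper boundary", the remaining 7 corners
# can be permuted freely (7!), and 6 of their twists are independent (3^6);
# the twist of the last corner is forced by the others.
states = factorial(7) * 3**6
print(states)  # 3674160 -- the usual count of 2 x 2 x 2 positions
```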
     
  3. Neddy Bate Valued Senior Member

    Messages:
    1,840
    I don't really understand anything in this thread, but I was hoping it would eventually get around to talking about solving the Rubik's cube. I can solve a scrambled 3x3x3 cube by doing one layer at a time (top layer, middle layer, bottom layer).

    That sounds similar to what you are talking about above, because once a layer is solved, it is important to use a sequence of moves which ultimately preserves the solved layer(s). I don't know how anyone ever determined those particular kinds of sequences of moves on their own, because I had to learn them from other people. I doubt that it was originally done by trial and error, so I think there must have been a more systematic approach to figuring them out.

    There are some sequences of moves which I did figure out on my own, from trial and error. For example, when you have a solved cube that you do not want to scramble, there are a number of different sequences of moves which can be performed which will ultimately bring the cube back around to being solved again. The most obvious one would be to rotate a particular face four times in the same direction, but there are also quite a few other such sequences of moves which are far more interesting.
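    The "bring the cube back around to solved" behaviour can be checked mechanically: every move sequence is a permutation of the pieces, and the number of repeats needed to return to solved is that permutation's order, the least common multiple of its cycle lengths. A sketch in Python; the dict encoding is mine, not any standard cube notation:

```python
from math import lcm

def perm_order(p):
    """Order of a permutation given as a dict position -> image:
    how many repeats bring every piece back home."""
    order, seen = 1, set()
    for start in p:
        if start in seen:
            continue
        length, x = 0, start
        while True:          # walk one cycle and measure its length
            seen.add(x)
            x = p[x]
            length += 1
            if x == start:
                break
        order = lcm(order, length)
    return order

# The "obvious" sequence: one 90-degree face turn 4-cycles that face's
# pieces, so four repeats restore them.
quarter_turn = {0: 1, 1: 2, 2: 3, 3: 0}
print(perm_order(quarter_turn))  # 4

# A sequence with a 3-cycle on some pieces and a 2-cycle on others
# needs lcm(3, 2) = 6 repeats -- why some sequences take longer.
mixed = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}
print(perm_order(mixed))  # 6
```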

    Some sequences will take more moves than others in order to bring the cube back to being solved. Some sequences result in interesting color patterns during the process, but other sequences do not appear to generate any particular patterns until the cube is back to solved. I wonder if there are different 'classes' of sequences, or if there is a mathematical model for how they work. Perhaps that is what you are doing with this thread, but I don't know, because as I said, I don't understand it.
     
  5. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    A coincidence, I have to say. I'm looking at the different classes of what I'd prefer to call algorithms: words that correspond to walking through a larger graph--which isn't there, of course, but the algorithms are, because you need time, and so do algorithms.

    Well, that isn't strictly true either: algorithms only make sense in a timelike way. There is a context of stepwise computation; some information is being "processed", logical variables are being "tested". If they don't change over time, there isn't any "sequence", no direction from an input to an output.

    So the big graph, the poset graph, becomes timelike when you map algorithms to it. Since it was produced by a computer program the timelike surfaces in it have all vanished . . . ergo it's the spacelike derivative of an algorithm.

    What you are doing, solving the puzzle, is walking through this graph, in an abstract context. The big problem you have finding any solution is, first of all, that you didn't keep a record of the moves that "scrambled" the puzzle--you did a free fall to "somewhere" in the graph.

    A computer algorithm, of course, does a controlled descent--it tracks where it is--and uses some group theory tricks, multiplying a lot of results without counting all of them, and saving processing time.

    So this physical puzzle has its own built-in timelike surfaces: these are the union of all the lines lying on each inner surface as the surfaces rotate against each other. The timelike surface is then a 3-dimensional surface of revolution that looks like a catenoid, or has a 2-d cross-section that is a catenary. Or like a soap film between a pair of loops, but solid. It vanishes everywhere along the path from one permutation to another, in the "cube group".
     
    Last edited: Mar 1, 2019
  7. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    But here's an interesting observation about all that. If the graph is really "timeless" only because it has been calculated by a machine, then does it exist, and in what sense?

    Is the graph a proof of something? Something like, there has to be an algorithm defined on this graph because it has to be a physical derivative of some timelike function (possibly periodic like a system clock), such that any timelike surfaces vanish, or in other words the numbers don't exist until something counts them . . .?

    So is the existence of the number 1, a proof that something counted to one? Is a countable set a proof that "something" can count, and output a number or set of numbers?
     
  8. Neddy Bate Valued Senior Member

    Messages:
    1,840
    Attempting to answer my own questions / curiosity about 'classes' of algorithms...

    Starting with a solved Rubik's cube, the algorithms which can be repeated until the cube is back to solved again appear to be either of a symmetric type, or a non-symmetric type. The symmetric algorithms seem to be the ones which bring about symmetric color patterns which are easily identifiable. The non-symmetric algorithms seem to have no identifiable color patterns but only because they are not symmetric, and so the "pattern" is not really an identifiable pattern.

    Another distinction would be algorithms which use 90 degree turns as opposed to algorithms which exclusively use 180 degree turns. By starting with a solved cube and exclusively using 180 degree turns on any chosen face of the cube, (and never 90 degrees), the cube never gets extremely scrambled. The resulting color patterns always consist of mixtures of opposite colors, (colors which are normally on opposites sides of a solved cube). Eventually even a beginner should be able to get these types of scrambles back to solved. Especially so if the algorithms used are always symmetric, but even if they are not, it really makes it easier to use algorithms which exclusively use 180 degree turns.

    With algorithms which use 90 degree turns, the cube will quickly become very scrambled looking, unless the algorithms used are always symmetric. So we seem to have four classes of algorithms which scramble a solved cube in different ways:

    1. Symmetric type, exclusively using 180 degree turns, which never results in a very scrambled cube, (the symmetric color patterns make it easy to return to solved again)
    2. Non-symmetric type, exclusively using 180 degree turns, which never results in a very scrambled cube, (the color patterns make it easy to return to solved again, even though the color patterns are not always symmetric)
    3. Symmetric type, using 90 degree turns, which soon results in a scrambled cube, (but one with some color patterns, which make it possible to eventually return to solved again)
    4. Non-symmetric type, using 90 degree turns, which soon results in a very scrambled cube, (which makes it nearly impossible to return to solved again -- something like my three layer process will be needed)
     
  9. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    You might have noticed that the moves which exclude 90° rotations are even more "symmetric" if you restrict the number of faces, say by not rotating the upper or lower faces, but only those around the cube, like permuting the sides of an open box. This leaves the symmetry in one of 3 copies of the symmetric group on 4 letters, otherwise known as \( S_4 \). You have three choices for the first 180° rotation, corresponding to the first element in each copy. Combining two of these and not using the third leaves you in one of the three copies, because the second choice of a 180° rotation has to be from another copy. You choose which copy by choosing two 180° rotations on adjacent faces; then you stay in that copy, and there are 24 elements in it.
    The G-set is then the four sets of 3 x 1 columns or 'edges' of the box, all the edges are parallel. So effectively you choose four parallel edges of the cube (there are three ways to do that) and permute them.

    Do this for each of the three copies of \( S_4 \), call these say an X, a Y, and a Z copy. You get 24 elements no matter what you do. This 3 x \( S_4 \) structure is a group product of the underlying symmetries, one in each dimension, called a wreath product.
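    The "24 elements no matter what you do" claim can be sanity-checked by brute force. A sketch in Python, treating each vertical column of the cube as a single letter 0..3 going around the cube, and each side-face 180° turn as a swap of the two columns on that face (the flip inside a column is ignored here, which is my simplification):

```python
from itertools import permutations

def compose(p, q):
    # apply q first, then p; permutations as tuples index -> image
    return tuple(p[q[i]] for i in range(len(q)))

# The four side-face 180-degree turns each swap one adjacent pair of
# vertical columns, labelled 0..3 going around the cube.
gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2), (3, 1, 2, 0)]
group = set(gens)
frontier = set(gens)
while frontier:  # close under composition with the generators
    new = {compose(g, h) for g in gens for h in frontier} - group
    group |= new
    frontier = new

print(len(group))  # 24 -- the full symmetric group on the four columns
assert group == set(permutations(range(4)))
```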

    When you use only 90° rotations, that puts you in \( S_8\), with respect to the 8 "corners" of the cube, which is what the graph I've called \( \mathfrak G \) represents. Include the rest of the 3 x 3 x 3 puzzle and the graph is much larger: the maximum distance (God's number) proved for the 3 x 3 x 3 is 20 (in the half-turn metric), whereas for the 2 x 2 x 2 it's 14 (in the quarter-turn metric).

    \( \mathfrak G \) is a map of the numbers of equivalence classes for the 2 x 2 x 2, with 3-coloured vertices. If you use 1-coloured vertices, everything except the positions disappears from the graph. This is the map of the (2,2,2) triangle group, or 8 octants on the sphere.

    What I've been trying to explore here, is basically what the consequences are of keeping everything positive, not using complex numbers, etc.
    I think the result there indicates that complex numbers, roots of unity and so on, are just too useful to avoid. As long as the graph (a projection, recall) stays in the positive quadrant, the 3-dimensional graph stays in the positive octant of the n-sphere.

    And the way I've formulated the multiplication tables of the group generators, as polynomials, I think is ok, but I see that writing out the tables when the iterant gets to 11 is going to be a job I should hand over to a computer program, if I get around to writing one.
     
    Last edited: Mar 2, 2019
  10. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    The significance of my four basis vectors is this: since each is an outer product of the canonical basis in \( \mathbb R^2 \), then their multiplication, in the tables or anywhere, includes multiplication in \( \mathbb R^2 \), because of how multiplication is defined over 2 x 2 or n x n matrices.
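    Concretely (a sketch in Python, spelling out the outer products as the four matrix units \( E_{ij} = e_i e_j^T \), which is how I read the four basis vectors):

```python
def outer(u, v):
    """Outer product of two vectors in R^2, as a 2x2 nested tuple."""
    return tuple(tuple(a * b for b in v) for a in u)

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

e = [(1, 0), (0, 1)]  # canonical basis of R^2
E = {(i, j): outer(e[i], e[j]) for i in range(2) for j in range(2)}

# Matrix-unit rule: E_ij * E_kl = E_il when j == k, else the 0-matrix.
# The inner index contraction is exactly multiplication (inner products)
# happening down in R^2, as claimed.
assert matmul(E[0, 1], E[1, 0]) == E[0, 0]
assert matmul(E[0, 1], E[0, 1]) == ((0, 0), (0, 0))
```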

    So with the two relations and the graphs I made, I can now invoke the relations as being functions acting on vertices. I define these as follows:

    \( R(e_i) = \bar e_j | i = j \)
    \( C(e_i) = \bar e_j | i \ne j \)

    \( C(R(e_i)) = e_j | i \ne j \)
    \( R(C(e_i)) = e_j | i \ne j \)
    And we see that when C acts anywhere the indices change. Only R by itself doesn't do this, so we can characterise an identity acting on indices.

    I should also, since the multiplication table has zeros in it (actually the 2 x 2 0-matrix), construct a relation that deals with a vertex having a weight of 0, or a way to avoid this, if that's what I need to do. One approach is to see that the 0-matrix is its own row- and column-transpose, and moreover its own matrix transpose.

    The matrix transpose as a function can also apply here, after all the vertices in the graphs are matrices. When I include that, things get more interesting because of the extra d.f.
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    Then you realise the matrix transpose is another way to define the relation R∧C.
    It's the same relation, but only for the \( \bar e_j \), these are the basis elements with different upper and lower indices. Another kind of relation is then the metric distance between them, which I can write as \( \bar e_i | {-\epsilon}_{ij} | \bar e_j,\, i \ne j \).

    "Upstairs" as long as I don't rotate everything, is \( e_i | -\delta_{ij} | e_j, \, i \ne j \). So everything is invariant under vertical reflections.
     
  12. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    How to characterise a 5-fold symmetry in the Rubik's cube (or any n x n x n cube puzzle):

    Choose a pair of adjacent faces. Then you have 6 vertices to permute. If you rotate each face layer by 90° in the same direction and call two adjacent rotations in this direction a single element of the permutation group, then one vertex is left in the same position and the five others 'orbit' around it.

    By fixing one vertex element and permuting the positions of five others, you get the position identity acting on all six vertices, so that they return to where they started every 5 iterations of the group element, which abstractly is one of: \( \{g_ig_j | i,j \in \{1,2,3\}\}\).
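    This 5-fold behaviour is easy to verify on position labels alone (a sketch in Python, assuming standard clockwise quarter-turns of two adjacent faces U and R, acting on the six involved corner positions and ignoring twist):

```python
def apply(perm, x):
    return perm.get(x, x)

def cycle(names):
    """Permutation sending each name to the next, last back to first."""
    return {a: b for a, b in zip(names, names[1:] + names[:1])}

# Clockwise quarter turns of two adjacent faces, on corner positions.
U = cycle(["URF", "UFL", "ULB", "UBR"])
R = cycle(["URF", "UBR", "DBR", "DFR"])

def double_move(x):      # do R first, then U: one group element
    return apply(U, apply(R, x))

corners = ["URF", "UFL", "ULB", "UBR", "DBR", "DFR"]
orbit = [c for c in corners if double_move(c) != c]
print(double_move("URF"))  # URF -- one corner is left in place
print(len(orbit))          # 5   -- the other five orbit around it

def iterate(x, n):
    for _ in range(n):
        x = double_move(x)
    return x

# Five iterations return every position home -- the position identity.
assert all(iterate(c, 5) == c for c in corners)
```

With the 3-fold twists of each corner included (which this position-only sketch ignores), the pieces repeat with orientation only every 15 double moves, consistent with the three 3rd roots per vertex.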

    This can be represented as six 3rd roots of unity, one in each position (since each vertex has 3-fold rotational symmetry). The fixed vertex is a centre, and the five other 3rd roots 'orbit' around this centre.

    So again abstractly, you draw a star graph with 5 terminals around a central point, the centre is where the fixed vertex goes as one of six 3rd roots.

    Then each vertex is a 3rd root, and each of 6 vertices has a position and orientation in the star graph of 6 x 3rd roots as a 5th root of the centre + 1 central root. Since the centre has three distinct roots, the whole root system has 15 distinct roots.
     
  13. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    . . . the relations as functions acting on vertices. I can ask why should I now introduce scalar multiplication to these functions? Another question I can ask is, why not?

    It's easy to show that the relation C, is R acting on the right:
    \( R(e_i) = \bar e_j | i = j \)
    \( C(e_i) = \bar e_j | i \ne j \)

    \( C(R(e_i)) = e_j | i \ne j \)
    \( R(C(e_i)) = e_j | i \ne j \)

    \( e_i(R) = \bar e_j | i \ne j \)
    \( R(e_i(R)) = e_j | i \ne j \)

    So I can now introduce the old complex roots of unity, and I get a kind of algorithmic structure--graph moves--in an algebra of vertices acting on edges and vice-versa (a dual action, or co-action).

    So that I can just define a general root, or better, a pair of roots \( W = \{\alpha^m, \beta^n | \alpha^m = \beta^n = 1 \}\).

    Then I can have \( \alpha R(e_i) = \alpha\bar e_i, \alpha R(\alpha \bar e_i) = \alpha^2 e_i,\, . . . \), and \( e_j(\beta R) = \beta \bar e_j,\, . . .\)
     
    Last edited: Mar 5, 2019
  14. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    Now if the \( {\alpha}^p, {\beta}^q \) are complex numbers, then R is a complex number, or an almost complex number. It's the 2 x 2 matrix \(\begin{pmatrix} 0 && 1 \\ 1 && 0 \end{pmatrix} \), acting on the left or the right by ordinary matrix multiplication.
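    As a check (a sketch in Python with plain tuples): this \( R \) squares to the identity--unlike a true complex structure \(\begin{pmatrix} 0 && -1 \\ 1 && 0 \end{pmatrix} \), which squares to minus the identity, hence "almost" complex--and it acts on the left by swapping rows, on the right by swapping columns.

```python
def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

R = ((0, 1), (1, 0))   # the swap matrix standing in for the relation R
I = ((1, 0), (0, 1))
J = ((0, -1), (1, 0))  # a genuine complex structure, for contrast

A = ((1, 2), (3, 4))
assert matmul(R, R) == I                   # R^2 = 1: an involution
assert matmul(J, J) == ((-1, 0), (0, -1))  # J^2 = -1: truly complex
print(matmul(R, A))  # ((3, 4), (1, 2)) -- left action swaps rows
print(matmul(A, R))  # ((2, 1), (4, 3)) -- right action swaps columns
```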

    So I want to show how to construct a graph, a star graph, of the 5th roots of unity with each vertex a 3rd root, such that the central point is degree 5 but is its own star product, or *-product; the *-product of a vertex element in this star graph depends on whether it's a basis element, or whether it gets multiplied by a scalar.

    So we have already:

    \( (e_i)^* = e_i, \,(\bar e_i)^* = 0 \)
    Hence
    \( (R(e_i))^* = R^*(e_i)^* = R^*(e_i) = \{ e_i, \bar e_j \}\)
    \( ((e_i)R)^* = e_i(R)^* = \{ e_i, \bar e_j \} \)

    And so
    \( R( (R(e_i))^* ) = RR^*(e_i) = \{ e_i, \bar e_j \} = R( (R(e_i))^* )^* = R^*( (R(e_i))^* )\)
    I have two cyclic groups, \( 2 \times \mathbb Z/4 \), using a slightly different notation.
    So multiplicatively, these are two 4th roots of unity, say \( u^4 = v^4 = 1 \).

    So \( u^* = \{ u^0, u^1, u^2, u^3 \}, v^* = \{v^0, v^1, v^2, v^3 \} \).

    But I want these two cyclic roots to combine together; I want a graph product, as some action on two order-4 graphs, that gives me an order-6 graph with 5 terminals and one centre.


    If I start with the central vertex, I need it to be a 3rd root, so that means the graph product will have the orbit of this element (it oscillates between the same pair of vertices) in it as a function of R.

    This is: \( R(e_1) = \alpha^{\frac {1} {2}}\bar e_1 \), where \( e_1\) is chosen as the centre of the graph product--when the two 4th roots act on each other, or when their actions intersect.

    So we have: \( R^*(e_1) = \{ e_1, \,\bar e_1,\, \alpha^*e_1,\, \alpha^* \bar e_1 \}; R^2 = \alpha \)

    And if \( \alpha^3 = 1 \) then \( \alpha^2 = R^4, R^6 = 1 \)

    for the central vertex.
     
    Last edited: Mar 7, 2019
  15. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    Going around the loop again:

    The ideas I'm trying to use here are along the lines of proving there is a minimal algorithm which traverses the whole of \(\mathfrak G^n \).
    The graph itself, as mentioned, is the output of an algorithm whose purpose was to count all the elements of the permutation group and assign them to equivalence classes; the equivalence relation here is, "is the same distance from (0,0)".

    There are lots of fancy sounding names for the graph's structure, because it has a basis--a vector basis--variously the set of permutations is of a principal G-bundle, a sliced 3-ball, a foliation of the n-sphere, etc.

    But if what you want is a set of data types you can write a program with, you can call them what you like. The problem of writing a program--any program--depends on various preconditions.
    These include the existence of computers, and the chosen computer must have an instruction set, memory and so on. You will need to be able to encode some sequence of instructions--a list--so the computer can iterate over it; the system clock and certain "system level instructions" form a path that takes a program on a controlled descent into a graph. You have to leave a trail, or trace, in local memory, of where you've been and how you got there.
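    The "controlled descent leaving a trace" is essentially breadth-first search with a predecessor map. A toy sketch in Python over \( S_4 \) (standing in for the much larger cube graph); the generator encoding is mine:

```python
from collections import deque

def compose(p, q):
    # apply q first, then p; permutations as tuples index -> image
    return tuple(p[q[i]] for i in range(len(q)))

# Breadth-first search from the identity, over S4 generated by the
# adjacent transpositions. The trace (predecessor map) is the trail in
# "local memory": any reached permutation can be walked back home --
# unlike a free-fall scramble, where that record was never kept.
gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]
start = (0, 1, 2, 3)
trace = {start: None}
queue = deque([start])
while queue:
    p = queue.popleft()
    for g in gens:
        q = compose(g, p)
        if q not in trace:
            trace[q] = p
            queue.append(q)

print(len(trace))  # 24 -- every element of S4 was reached

def path_home(p):
    path = [p]
    while trace[p] is not None:
        p = trace[p]
        path.append(p)
    return path

# The full reversal needs 6 generator steps (its inversion count).
print(len(path_home((3, 2, 1, 0))) - 1)  # 6
```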

    Then, once ordinary 3-space has the graph in it, you can bump the number of dimensions. Next up is the 4-sphere (viz, the hypercube Rubik's puzzles), but why stop there?

    And this might have something to do with what information is, how to encrypt messages efficiently, and how to find a path through a switching network such that no blocking occurs (there are no delays) . . .
     
  16. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    I'm being sort-of intentionally sloppy with the notation again.

    \( (e_i)^* = e_i, \,(\bar e_i)^* = 0 \)
    Hence
    \( (R(e_i))^* = R^*(e_i)^* = R^*(e_i) = \{ e_i, \bar e_j \}\) . . . should be \( (R(e_i))^* = R^*(e_i)^* = R^*(e_i) = \{\bar e_j e_i\}^* \)
    a set of finite strings composed of a single iterated "character";

    So \( ((e_i)R)^* = e_i(R)^* = \{ e_i\bar e_j \}^* \)

    And so
    \( R( (R(e_i))^* ) = RR^*(e_i) = \{ e_i \bar e_j \}^* = R( (R(e_i))^* )^* = R^*( (R(e_i))^* )\)

    So we have: \( R^*(e_1) = \{ e_1 \bar e_1 \alpha^1 e_1 \alpha^1 \bar e_1 \alpha^2 e_1 \alpha^2 \bar e_1 \}^*; R^2 = \alpha \)

    It's sort-of excusable if you think of the uncorrected version as listing the only characters that can appear in any string--multiplication will turn any repeats into zeros, so these are the only two characters and they have to alternate.
     
    Last edited: Mar 8, 2019
  17. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    Show me the numbers.

    Because I can use the 'data structure' of a graph--I can represent different graphs as different objects, in an object-oriented context--I don't need to write a program that uses complex numbers, or use a computer with "real" data types or numbers with sign and exponent.

    Nay, I say, a complex number is a graph. A minimal graph of a complex number has two vertices in it, all I need is a copy of \( K_2 \), and some rules about colouring the vertices, so my graph walks through a larger graph. I can use a symbolic language which doesn't need real or complex vectors in it, because it captures numbers in an iterant (an upper index or exponent for the symbol), over a set of graph walks.

    But it's all discrete so I don't need pairs of real numbers, in fact these aren't actually real numbers but an interpretation of a string of bits. So in general all the proposed programming language needs is a set of discrete graphs, with an algebra defined on them, which need not involve the actual multiplication of 2 x 2 matrices, but their meet and join over a discrete lattice with 16 matrices in it.
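    A sketch in Python of that last structure (assuming "the 16 matrices" are the 2 x 2 matrices over {0, 1}, with meet and join taken elementwise--my reading, since the lattice isn't spelled out):

```python
from itertools import product

# The sixteen 2 x 2 matrices over {0, 1}, flattened to 4-tuples (a,b,c,d).
lattice = list(product((0, 1), repeat=4))

def meet(A, B):
    return tuple(a & b for a, b in zip(A, B))  # elementwise AND

def join(A, B):
    return tuple(a | b for a, b in zip(A, B))  # elementwise OR

print(len(lattice))   # 16
bottom = (0, 0, 0, 0)  # the 0-matrix
top = (1, 1, 1, 1)

# Lattice sanity checks: the absorption laws hold for every pair, and
# the 0-matrix is the bottom element -- no matrix multiplication needed.
assert all(meet(A, join(A, B)) == A for A in lattice for B in lattice)
assert all(join(A, meet(A, B)) == A for A in lattice for B in lattice)
assert all(join(bottom, A) == A and meet(top, A) == A for A in lattice)
```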
     
  18. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    So that, since I can assert a language exists which is sets of graphs, or sets of sets of vertices and edges, with a set of operations (R on the left or right), in which I can write programs that walk a \( K_2 \) graph through a root system, I can also assert a minimal Turing machine exists which does the equivalent.

    Moreover and notwithstanding . . .
    This TM has a representation as a graph where vertices are states, and (directed) edges are transitions between states. There is an input and an output state, and the program walks through this graph. There is a possibly infinite set of graphs equivalent to this minimal graph, but one and only one is minimal.
     
  19. Willem Registered Senior Member

    Messages:
    253
    Where are the real and imaginary lines? I took out the link to the picture in the quote.
     
  20. arfa brane call me arf Valued Senior Member

    Messages:
    6,259
    If you want to see an imaginary line, just multiply a real line by √-1.
    I don't know what else I can tell you.
     
