Is Boundary Logic interesting?

Discussion in 'Physics & Math' started by arfa brane, Aug 21, 2018.

  1. arfa brane call me arf Valued Senior Member

    Suppose I claim that logic is as useful as we need it to be, at the time we need it.
    Computers are useful, programs that run correctly are useful, quantum algorithms are useful. That's why we have all the algebra and all the different representations of quantum symmetries.

    It's why there are people trying to build 'topological' quantum computers.
    Or it's why there are so many papers on arxiv about braid groups, the Yang-Baxter equation, and TL algebras.

    This is all the stuff computer graduates will need to know about if they want to program a QC "correctly and usefully", much as you can program a classical computer correctly and usefully if you know one or two computer languages, and maybe can write assembler code. Maybe you know about the electronics as well, and how transistors work.
    Last edited: Nov 16, 2018
  3. Speakpigeon Registered Senior Member

    Ah, see, that's much better. Still, I'm sure these people do a lot of useful things, but that doesn't entail that everything they do is useful. Maybe many ideas about logic are not useful, for example. Usefulness is an empirical matter and as such difficult to assess easily and properly beyond the kind of informal assessment we all do all the time. Science overall has proved its usefulness, but that doesn't mean that all scientific work is useful, except perhaps, trivially, as an example of what you shouldn't do.
    If I had been you, I would have defended the usefulness of boundary logic as part of the theoretical research that is necessary for science, because it may end up finding a useful application. However, arguably that was also true of all the metaphysics that went on prior to the advent of science, possibly even of some of the religious ideas of the time. I guess "interesting" should then mean you're convinced it will prove useful at some point. But then, only specialists would come to believe that.
  5. arfa brane call me arf Valued Senior Member

    I don't know that it's about believing that something is useful.

    If you want to be a computer programmer, it will be useful to know about a few languages. You might also be interested in electronics, but it won't be useful to you if you write software unless that software is going to be used to design electronic circuits, say.

    Being interesting doesn't entail being useful, as you more or less say. What's useful is obviously context-dependent. Claiming something is useful means you also claim to know what it's good for, what kind of tool it is.

    Louis Kauffman:
  7. arfa brane call me arf Valued Senior Member

    So my claim now is that LoF is something that can be derived from an interpretation of the Jordan curve theorem, and from the set of points you get when you construct a straight line that passes completely through some closed curve. The curve can be as convoluted or "folded" back on itself as you like, but it can have no self-intersections.
    The line passes completely through the curve if both ends of the line can be extended indefinitely without the extensions intersecting the curve again. The line can't be tangent anywhere to this curve.

    But if you take this set of points to be 2n points in an algebraic space and rotate one half by 180°, keeping the 'trace' intact, you have an expression that you can reduce by removing pairs of points. An arrow signifies where to do this as part of the original curve gets straightened or smoothed. "Shrink fit" the curve this way and you should see the two rules.

    But every element of every TL algebra on 2n points is closed by the trace, so it's closed when n is odd; so the conjecture is that TLn, or Mn (the diagram monoid on n points), with trace closure, has LoF arithmetically bound by the "arrow / removal of two points / smoothing" operation.
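
    As a sanity check on the two rules, here is a minimal sketch (my own illustrative code, assuming nothing from Spencer Brown or Bricken beyond the two arithmetic rules themselves) that treats a LoF expression as a string of balanced parentheses and rewrites it to a fixed point:

    ```python
    def reduce_lof(expr: str) -> str:
        """Reduce a LoF arithmetic expression written with parentheses.

        Rule 1 (calling):  "()()" -> "()"  two adjacent empty crosses condense.
        Rule 2 (crossing): "(())" -> ""    a cross within a cross vanishes.
        Both rules hold at any depth, so plain substring replacement,
        repeated to a fixed point, is enough.
        """
        while True:
            reduced = expr.replace("(())", "").replace("()()", "()")
            if reduced == expr:
                return reduced
            expr = reduced

    # Every finite expression settles to "()" (marked) or "" (unmarked):
    assert reduce_lof("((()))") == "()"   # ((( ))) reduces to ( )
    assert reduce_lof("(()())") == ""     # (( )( )) = (( )) = void
    ```

    Termination is guaranteed because each rewrite strictly shortens the string, and any nonempty expression other than "()" always contains one of the two patterns somewhere.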
    Last edited: Nov 19, 2018
  8. iceaura Valued Senior Member

    How does that deal with the self-referential or "imaginary" valued expressions in LoF?
  9. arfa brane call me arf Valued Senior Member

    A good question. I'm not sure.

    I'm still not satisfied with what I've seen. Complex numbers are used directly in TLn to parameterise loops. TLn(x) counts the number of free loops with powers of x. A choice for x is, like the choice of a trial program loop invariant, something intuitive. It's the kind of x that means an unknown but which has an exponent because the algebra is multiplicative.

    You are free to choose x to be whatever is convenient; the logic of TL is not something that looks at all Boolean. But is the square root of NOT Boolean?
    Inversion, or logical complement, is an involution. An involutive function in some TLn, where we can draw a diagram and count loops by hand, isn't an easy thing to make out. But, I suppose, the second rule in BL says (( )) 'contracts' to a point or vanishes, and the first rule says ( ) ( ) means one loop vanishes while the other doesn't.

    Bricken says the second rule is an involution like logical inversion: not not. I can get my head around doing Kauffman's curve-slicing-then-restoring exercise. I can even get that inside each cap (or cup) is nothing, or that if there is anything inside, it's "false". One problem with that is that a loop inside a loop is the same as a loop outside a loop in TL, because loops can move around. So there's possibly a slight problem with the context-switch between TL logic and BL.

    I.e. between \( ( \,)\; (\,) = ((\, )) = x^2 \) and the two rules of LoF.
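
    For reference, the standard presentation of \( TL_n(x) \), with the thread's x as the loop parameter: generators \( 1, e_1, \dots, e_{n-1} \) subject to

    \[ e_i^2 = x\,e_i, \qquad e_i e_{i \pm 1} e_i = e_i, \qquad e_i e_j = e_j e_i \quad (|i - j| \geq 2). \]

    Each closed loop formed under composition contributes one factor of x, which is exactly why two free loops give \( x^2 \) regardless of nesting.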
    P.S. Perhaps you want the square root of false (resp. true) to be like a superposition, partly true and partly false, but in one of two ways, such that true and false are two things that are involutions of each other.
    Last edited: Nov 20, 2018
  10. iceaura Valued Senior Member

    The first difficulty is that the second degree expressions in LoF are often not representable on the plane without crossings, and Jordan curves (planar) do not cross.
  11. arfa brane call me arf Valued Senior Member

    Is Boundary logic a way to look at how much logic lives in the plane? That is, how much logic is "in" those systems which are based on planar representations? Logic with ambient isotopy (?)

    There is a connection between those planar representations of 'physical systems' and (some form of) logic, clearly. The surprise is that there is a connection between braid groups and planar diagram monoids.

    If TLn(x) and its connection to the Yang-Baxter equation mean quantum (non-)logic has a planar representation (which is a feature of braid diagrams, despite the crossings being embedded in three dimensions), then imaginary true and imaginary false must show up as they do in quantum experiments.

    Perhaps a direct representation of imaginary truth-values, rather than being a kind of switching algorithm in a complex-valued domain, is a different ballpark if you want the representation to be planar.
  12. iceaura Valued Senior Member

    Laws Of Form, published fifty years ago, features a chapter describing the extension of its logic to self-referential expressions which cannot be accurately represented on the plane.
  13. arfa brane call me arf Valued Senior Member

    I still think it's a bit of a shame Spencer Brown didn't show how to use his imaginary truth-values in a real world example.

    I know Kauffman goes over the ideas in Laws of Form, but again with no connection to some kind of switching operation that has a real-world example, like a railway system.

    Although TL has connections to quantum 'topology', to the Yang-Baxter equation (a big deal, apparently), and to other kinds of algebras with planar representations (of fundamental or irreducible operators or generators), there is also a paper I've seen that connects TL, as a category, to cut-elimination, with a lambda calculus (i.e. another kind of algebra).
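
    For reference, the braid-form Yang-Baxter equation (the standard statement; nothing here is specific to that paper) for an invertible operator \( R : V \otimes V \to V \otimes V \):

    \[ (R \otimes I)(I \otimes R)(R \otimes I) = (I \otimes R)(R \otimes I)(I \otimes R). \]

    One family of solutions comes straight from TL: \( R = A\,\mathrm{id} + A^{-1} e_i \) satisfies the braid relation when the loop value is \( x = -A^2 - A^{-2} \) (the Kauffman-bracket form), which is one way the TL-to-Yang-Baxter connection is usually made.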
  14. iceaura Valued Senior Member

    He did.

    He illustrated the use of them in a circuit that could count without time delays or clock regulation, an expanded version of which, according to LoF, he patented in 1965, and which he claims was, at the time of writing, in use by British Railways.

    Yet another response to your queries that consists of referring to that one 140 page introductory book now fifty years old.

    (This was discussed in one of your links above: a more recent analyst you linked to pointed out that such circuits required nonstandard transistors and manufacturing procedures to take full advantage of the imaginary truth values (available, existing tech, but custom) and thus would be uneconomical to introduce into current mass-produced gear. At that juncture I pointed out that this looked like a qwerty phenomenon, one of the obvious possible explanations for the absence of Boundary Logic in today's engineering and mass-market gear.)
  15. arfa brane call me arf Valued Senior Member

    Where in this 140 page introductory book does he use or describe a circuit that counts without time delays or a clock?
  16. iceaura Valued Senior Member

    Chapter 11, pages 65 - 68, in my trade paperback edition.

    Or you could refer back to Parmalee's post 48, page 3 of this thread.

    We are beginning to repeat, here. May I repeat the suggestion that you read the book?
    Last edited: Dec 5, 2018
  17. parmalee peripatetic artisan Valued Senior Member

    Here's a link to a pdf of Laws of Form from archive_dot_org: of Form.pdf

    It's the 1972 edition--I think the more recent ones just have updated prefaces.

    The flip-flop is described from page 78 (of the pdf; page 54 of the book) onwards, and also in one of the appendices.
  18. parmalee peripatetic artisan Valued Senior Member

  19. arfa brane call me arf Valued Senior Member

    Maybe I should say it's a shame that Spencer Brown's work remains obscure. Working from his 140-page introductory text and a handful of patents, reconstructing his use of imaginary logic in a consistent, understandable, "non-obscure" way that corresponds to standard logical arguments isn't easy for me.

    Perhaps iceaura or parmalee have done this and can explain Spencer Brown's ideas. A problem I came across in his book is that you have to learn what all his non-standard terminology means when you already know what a switching function is.

    You say Spencer Brown devotes three whole pages to this counting circuit? I have looked at them, but they're hard to understand; it's hard to relate what he says about his equations to his diagrams. It just looks like too much work, and I'm not happy to have to reconstruct his reasoning by first learning the semantics of the language he uses. Overall his book is somewhat rambling and unfocused.

    Nonetheless others like Kauffman and Bricken have tried to explain LoF in their own words. And it's still somewhat obscure. In particular, the difference in language and tone between what Kauffman says about, say, knot theory and what he says about LoF is, to me, remarkable. It's just a notation, right? Why should a notation, a representation of some kind of logic, be so mysterious or metaphysical? Logic is just logic; I don't think logic is meant to say anything more than what we want it to, type of thing.

    Sure, it suggests the perceiver of a system is part of the system, but plenty of other contexts exist where this is also true; quantum mechanics is one of them.
  20. arfa brane call me arf Valued Senior Member

    TLn as a diagram monoid is something you can deduce (to some extent) from this diagram:

    [image: the diagram discussed below; hidden behind a forum login]

    Again, start with the upper left diagram. The lower diagram represents a 180° rotation of the set of points on the left line. This 'operation' has an algebraic representation which labels the set of lines ("cups") below the arrow as a morphism that takes 6 + 6 points to 0 points (with 'time' pointing downward in the diagram). The 6 points to the right of the arrow represent the dual space of the 6 points to the left. Call the left-hand set A, the right-hand set A*.

    Now do some more stretching on the right so you have 6 vertical lines and the right capform above the line (A*). Draw another pair of lines directly above the two in the diagram, and repeat the 180° rotation so the right capform is directly above the left one (as a cupform).
    To be a bit more rigorous, relabel the two upper lines A and A*, the two lower lines B and B* (so you can distinguish them in the algebra).

    The new construction represents a composition of elements from TL6. The 6 + 6 → 0 morphism at the bottom and the 0 → 6 + 6 morphism at the top are in a sense, a consequence of going from n to 2n points; in fact you can add points freely (hence the "free additive" algebra).

    Since you want a multiplicative algebra, addition is (re)defined as \( n + n := n \otimes n \), and horizontal 'composition' is a tensor product.

    But all that isn't relevant to the logic you get when you remove pairs of points by 'lifting' a part of the curve above the lines, although such an operation also has an algebraic rep. in TLn.

    Why do planar diagrams represent lots of physics? Who can say? I would hazard that neither Temperley nor Lieb thought their ideas would be as useful as they've turned out to be.
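
    To make the composition concrete, here is a sketch in code, under an assumed encoding of my own (not from any of the sources above): a TLn diagram is a perfect matching on 2n points, 0..n-1 across the top and n..2n-1 across the bottom; stacking two diagrams traces each strand through the glued middle row and counts the closed loops left behind, each of which contributes a factor of x:

    ```python
    def e(i, n):
        """TL generator e_i on n strands: cap joining top points i, i+1,
        cup joining bottom points n+i, n+i+1, vertical lines elsewhere."""
        p = {i: i + 1, i + 1: i, n + i: n + i + 1, n + i + 1: n + i}
        for j in range(n):
            if j not in (i, i + 1):
                p[j] = n + j
                p[n + j] = j
        return p

    def compose(d1, d2, n):
        """Stack d1 above d2 (glue d1's bottom row to d2's top row).
        Return (resulting matching, number of closed loops formed)."""
        mid_seen = set()  # d1-bottom points already traversed

        def follow(layer, pt):
            # Walk one strand through the middle until it exits at the
            # new top (d1's top row) or the new bottom (d2's bottom row).
            while True:
                if layer == 1:
                    q = d1[pt]
                    if q < n:
                        return q            # exits at top point q
                    mid_seen.add(q)
                    layer, pt = 2, q - n    # cross into d2's top row
                else:
                    q = d2[pt]
                    if q >= n:
                        return q            # exits at bottom point q
                    mid_seen.add(q + n)
                    layer, pt = 1, q + n    # cross back into d1's bottom row

        result = {}
        for t in range(n):                  # strands leaving the new top
            if t not in result:
                q = follow(1, t)
                result[t], result[q] = q, t
        for b in range(n, 2 * n):           # remaining bottom-bottom strands
            if b not in result:
                q = follow(2, b)
                result[b], result[q] = q, b

        loops = 0                           # cycles trapped in the middle
        for m in range(n, 2 * n):
            if m not in mid_seen:
                loops += 1
                p = m
                while True:
                    mid_seen.add(p)
                    q = d1[p]               # a cup of d1 (bottom-bottom arc)
                    mid_seen.add(q)
                    p = d2[q - n] + n       # a cap of d2 leads to the next cup
                    if p == m:
                        break
        return result, loops

    # Relations check: e1 e2 e1 = e1 (no loop), e1^2 = x * e1 (one loop).
    e0, e1 = e(0, 3), e(1, 3)
    d, k1 = compose(e0, e1, 3)
    d, k2 = compose(d, e0, 3)
    assert d == e0 and k1 + k2 == 0
    d, k = compose(e(0, 2), e(0, 2), 2)
    assert d == e(0, 2) and k == 1          # the closed loop is the factor x
    ```

    The loop count is the exponent of x in the thread's sense: composing a cap form with its rotated cup form closes circles in the middle, and those circles are exactly what TLn(x) keeps track of.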
    Last edited: Dec 14, 2018
  21. arfa brane call me arf Valued Senior Member

    And so, LoF and Boundary algebras are obviously connected.

    But someone else is asking the question: where does logic come from?
    Which is like asking what difference there is between numbers and logic, or whether one or the other existed "first", since they seem to be consequences of each other.

    In my opinion, your Honour.
  22. arfa brane call me arf Valued Senior Member

    About topological entanglement.

    This can be represented first in a heuristic way, then more formally as an algebra which is planar, and which is a subset of more general algebras over objects called tangles.

    That is, TL and BL are planar tangles, with extra structure--the algebra of abstract states.
    Tangles only make sense in two or more dimensions; there is a, shall we say, semantic restriction.

    Moreover, eversion is possible in three dimensions, because three is the minimum you need to twist a boundary over itself. Otherwise in two dimensions, all you have is intersection.

    Representing twist diagrammatically and algebraically is (for some reason) important if you want it to describe, say, Bell states of a pair of particles.
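
    One concrete version of that correspondence, standard in the quantum-topology literature (e.g. Kauffman's work) and offered here as an illustration rather than a claim about this thread's particular construction: the cup prepares an (unnormalised) Bell state and the cap is the dual effect, so diagrammatic connectivity models entanglement:

    \[ \cup \;\longmapsto\; |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle), \qquad \cap \;\longmapsto\; \langle\Phi^{+}| . \]

    Twist (framing) then shows up, roughly, as a phase on such states, which is one way the third dimension enters the algebra.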
