Is Boundary Logic interesting?

Discussion in 'Physics & Math' started by arfa brane, Aug 21, 2018.

  1. iceaura Valued Senior Member

    Messages:
    30,994
    That would make it identical with the representation of "false".
    Instead, one can extend the BL notation and arithmetic to include - and assign a truth value to - self referential statements. Which Brown does, in LoF.
    It appears to clear up a confusion.
     
  3. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    This confusion you speak of: is it related at all to how LoF appears to make "nothing" into something, even saying it's ok to say A is nothing, and maybe (A) (B) is too, etc.?

    How do you make a mark or define a boundary, a thing that obviously has structure, in nothing? This to me is the strongest criticism I can level at the notation, and still it works.

    may the Void be with you.
     
  5. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,270
    I see it the other way round: LoF is characterized by not making nothing into "something", i.e., by the very action of not designating nothing with a symbol--naming what can't be named. At first, this seems kinda unfamiliar, in the context of logic systems at least (though not so with iconic systems), but it correlates more readily with other fields--from semiotics to anthropology to philosophy (I'm thinking mostly non-analytic schools, here--esp. Continentals from Agamben to Derrida, or neo-Wittgensteinians like Diamond, Cavell, et al) to ethology (again, Uexkuell's umwelt).

    W. Bricken

    Let the Void-space be your canvas.
     
    Last edited: Sep 12, 2018
  7. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    From this, I get the idea that the semantic rule: (( )) =
    . . . is not labeling the void, but something else. I also assume it calls into question what the "=" means.

    I can interpret the rule ( ) ( ) = ( ) as saying "making the same distinction twice is the same as making it once", or somesuch. Or I can think of it as a reduction rule.
    I can leave the whole void thing alone as long as I don't have an expression which evaluates (reduces) to nothing.

    Say I have:
    1. ( ) ( ) = ( )
    1a. ( ( ) ( ) ) = ( ( ) )
    2. ( (( )) ) = (( ( ) )) = ( )
    => ( ) ( ) = ( (( )) ) = (( ( ) ))

    Bricken says I'm allowed to use symbols to label any expression, any combination that follows the rules. So he's saying labeling (( )) is allowed.
    Indeed the operations of involution: ((A)) = A, occlusion: (( )A) = (( )), and pervasion: (A) A = ( ) A, don't seem to make sense unless A is an expression which is either "( )", or "(( ))".
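    For what it's worth, here's a minimal sketch of the pure arithmetic in code (the representation is mine, not Bricken's or Brown's): a space is a tuple of the forms it contains, a mark is a tuple of its contents, and the two rules are applied bottom-up. Labels/variables are deliberately left out.

    # A minimal sketch of boundary-arithmetic reduction (my representation):
    # a space is a tuple of forms, a mark is a tuple of its contents, and the
    # empty tuple () is the inside of an empty mark.
    #   calling:  ( ) ( ) = ( )    -- repeated empty marks condense to one
    #   crossing: (( ))  =         -- a mark around an empty mark cancels to void
    def reduce_space(space):
        forms = [reduce_space(f) for f in space]      # reduce contents first (bottom-up)
        forms = [f for f in forms if f != ((),)]      # crossing: drop any (( ))
        if forms.count(()) > 1:                       # calling: condense ( ) ( ) ... to ( )
            forms = [f for f in forms if f != ()] + [()]
        return tuple(forms)

    print(reduce_space(((), ())))        # ( ) ( )   ->  ((),)  i.e. ( )
    print(reduce_space((((),),)))        # (( ))     ->  ()     i.e. the void
    print(reduce_space(((((),),),)))     # ( (( )) ) ->  ((),)  i.e. ( )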
    Something happens when you introduce symbols to an otherwise iconic "language"; not recognising this (never mind characterising what exactly changes) is also a mistake--labeling "nothing", or writing "This sentence doesn't exist", are logical mistakes (perhaps you need to make them). Clearly the sentence in quotes does exist; it took me some time and energy to type it out, it has an information content, but logically it's a contradiction or a lie.
     
  8. iceaura Valued Senior Member

    Messages:
    30,994
    It means "interchangeable in all defined expressions", basically.
    Grounding interpretations of symbols in instruction, action, and the like - rather than static states of being - brings clarity to BL. imho
    Which may be why the step from arithmetic to algebra is explicitly recognized and justified, and a chapter devoted to it, in LoF.
    These issues and questions of yours are dealt with, explicitly, in LoF among other places.
    It has one of the truth values available in the space of self referential statements.
    The interpretation of "existence" in logical arguments, and therefore in the arithmetic and algebra of BL, is complex - again, LoF addresses that issue (not thoroughly).
     
    Last edited: Sep 13, 2018
  9. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,270
    Again, I'm seeing echoes of Wittgenstein here: meaning is use. Our ability to define something with abstractions is irrelevant; rather, our ability to use the term correctly, in a meaningful way, is what matters.

    It seems the biggest obstacle here, and Bricken mentions this repeatedly as do commentators on LoF, is a lack of familiarity.
     
  10. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    What does LoF say about such things as inductive proofs of program correctness?

    Would it be useful to the task of finding so-called loop invariants? This isn't a straightforward thing to do, usually. It's something humans do by hand because it's a hard problem to automate. If a boundary algebra simplifies things, maybe it would make it easier to derive an invariant.

    I notice LoF isn't a hot topic; there are only a handful of authors. Louis Kauffman is one (he's a topologist, knot theory etc). What I've read of Kauffman's writings on LoF is sometimes a bit too metaphysical for me. In fact the metaphysical "cosmic implications" are quite a common theme.

    Another topic I uncovered is that of imaginary logical values: the "square root" of not, say. Again not many logicians have written a whole lot about it.
     
    Last edited: Sep 17, 2018
  11. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,270
    There seemed to have been a lot more stuff on LoF on the net up to the early/mid-aughts. I honestly don't know what became of all that stuff, but, yeah, Louis Kauffman's page provides a fair number of links, as do the enolagaia pages-- http://www.enolagaia.com/GSB.html -- compiled by Randall Whitaker. From what I can tell, he seems to have salvaged a lot of stuff that had disappeared from the net.

    I think you'll find more stuff that interests you on Kauffman's page; Whitaker's links seem to be more of the systems theory, cybernetics, Maturana/Varela, and psychoanalytic variety--not to mention some John Lilly stuff (you know, the dolphin and isolation tank guy--he was a bit of a nutter, but some of his work is quite substantive). I'm certainly partial to the latter stuff (from work in philosophy (mostly continental), anthropology, critical animal studies, and so forth), but I can certainly appreciate that a lot of people tend to regard it as nonsensical crap.

    Edit: Just a suggestion: when reading the latter stuff (or even some of Kauffman's), try to abandon some preconceptions, and/or biases, about the metaphysical and "cosmic implications." Most of this work seems pretty grounded, and pretty free of woo. Fields like semiotics, and biosemiotics, and zoosemiotics, and so forth, have kinda gotten a bad rap. Some of it deserved, frankly, but by no means all. The field emerges largely from Peirce, de Saussure, Uexkuell, et al.; don't let the mentions of Lacan or Barthes, for instance, cloud your judgement.

    In the notes to chapter 11 (with the flip-flop circuit), Brown notes that the circuit, which was used by British Railways, suggests that the application may in fact be the first instance of imaginary values used for such a purpose. He writes:

    LoF, p. 123 (in the copy I've got).
     
    Last edited: Sep 18, 2018
  12. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Kauffman likes to start with a geometric kind of object and develop it into an algebraic kind (this is according to him, what topology is really about). He uses a so-called re-entrant mark as the "solution" to the problem X = (X):
    --http://homepages.math.uic.edu/~kauffman/Laws.pdf
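    A crude way to see what the re-entrant mark does (my own toy illustration, not Kauffman's construction): read the cross as negation, so X = (X) becomes X = not X, which has no fixed point in {true, false}. Iterating it gives an oscillation in time, which is one way the "imaginary" value gets interpreted.

    # Toy illustration (mine): X = (X), read as X = not X, has no Boolean fixed
    # point, so iteration oscillates -- the temporal reading of the re-entrant mark.
    def iterate_reentry(x=False, steps=8):
        history = []
        for _ in range(steps):
            history.append(x)
            x = not x                 # X <- (X)
        return history

    print(iterate_reentry())          # [False, True, False, True, ...] never settles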
     
  13. iceaura Valued Senior Member

    Messages:
    30,994
    A chapter or two in LoF - again.

    One finds little analysis of the square roots of negative numbers by those using Roman Numeral notation.
    The notations used in standard formal logic likewise make such analysis difficult. At one point, not too long ago, self-referential or cyclic expressions were simply disallowed.
     
  14. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,270
    This sounds like second-order cybernetics, I think?

    "... a brain is required to write a theory of a brain. From this follows that a theory of the brain, that has any aspirations for completeness, has to account for the writing of this theory. And even more fascinating, the writer of this theory has to account for her or himself. Translated into the domain of cybernetics; the cybernetician, by entering his own domain, has to account for his or her own activity. Cybernetics then becomes cybernetics of cybernetics, or second-order cybernetics"
    ---Heinz von Foerster

    Also:

    "... essentially your ecosystem, your organism-plus-environment, is to be considered as a single circuit."
    ---Mead and Bateson (from an interview in CoEvolution Quarterly--Stewart Brand's extension of the Whole Earth Catalog)
     
  15. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Returning to the topic of program correctness. Here, you can apply a calculus of correctness proofs.

    But it's something that complicates even quite simple while or for loops; the nature of correctness proofs is such that writing a program (a correct program) to "find" loop invariants is hard, so humans do it using "intuition".

    For example, a proof for the following code section of some program, a while loop plus one initial assignment, runs to a page or two in the calculus in question (based on Floyd-Hoare logic).

    i := 0
    while ( i ≤ n - 1 )
    {
        i := i + 1
        A(i) := A(i) + B(i)
    }
    And here, from my tutorial notes, is a 'trial solution' for the invariant:

    ( ∀j ∈ {1, 2, ..., i} : A(j) = A₀(j) + B(j) ) ∧ ( ∀j ∈ {i+1, ..., n} : A(j) = A₀(j) ) ∧ ( i ≤ n ) ∧ ( i ≥ 0 ), where A₀(j) is a copy of the array A's initial values. This copy doesn't actually need to exist; only the proof "requires" it.
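    As a sanity check, here's a sketch of my own (only the loop and the trial invariant above come from the notes; A₀ plays the role of the "ghost" copy) that runs the loop and asserts the invariant on entry and after every iteration:

    # Runs the loop above (1-based A(i) becomes 0-based A[i-1]) and checks the
    # trial invariant at every step. A0 is the "ghost" copy of A's initial values;
    # it exists only for the check, just as it exists only for the proof.
    def run_with_invariant(A, B):
        n = len(A)
        A0 = list(A)
        def invariant(i):
            return (all(A[j] == A0[j] + B[j] for j in range(i)) and
                    all(A[j] == A0[j] for j in range(i, n)) and
                    0 <= i <= n)
        i = 0
        assert invariant(i)           # holds on entry (precondition established)
        while i <= n - 1:
            i = i + 1
            A[i - 1] = A[i - 1] + B[i - 1]
            assert invariant(i)       # preserved by each iteration
        return A

    print(run_with_invariant([1, 2, 3], [10, 20, 30]))    # [11, 22, 33]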
     
    Last edited: Sep 24, 2018
  16. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Look at (for) differences. There is the obvious difference between 0 and 1 (off and on), but computationally there is also the difference between doing nothing and doing something, branching or not branching, etc. For a logic of programs, you need 1 to be a program that does nothing, i.e. that only halts. So a program that doesn't halt or fails is in the state 0.

    This "state" can then be mapped to any program that halts, with the help of a calculus of partial and total correctness, along with the algebra that comes along with making programs into algebraic 'functions', binary operators on (ordered) pairs of sets of conditions.

    Roughly, if α and β are pre- and post-conditions, resp., then α P β says P is a relation between them; you could also consider α β to represent a logical boundary for P.
    And there's a sense in which you "go through" α to get to β. P is like a path between these logical predicates, such that α P ¬β is a logical contradiction.

    More concretely, you want (if αPβ then αP) to be true when (if αP then αPβ) is true.
    This is program algebra, or program-as-proposition.
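    Here's a rough sketch of that reading in code (the state model and names are my assumptions): a triple α P β is treated as a claim to be checked by running P on states satisfying α and testing β on the result, so exhibiting a state where α P ¬β holds falsifies the triple.

    # Toy check of a triple α P β: alpha and beta are predicates on states,
    # P is a function from state to state.
    def triple_holds(alpha, P, beta, states):
        for s in states:
            if alpha(s):
                if not beta(P(s)):    # found α P ¬β: the triple fails
                    return False
        return True

    # Example: P is x := x + 1, α is x ≥ 0, β is x ≥ 1.
    P = lambda s: {**s, "x": s["x"] + 1}
    alpha = lambda s: s["x"] >= 0
    beta = lambda s: s["x"] >= 1
    print(triple_holds(alpha, P, beta, [{"x": k} for k in range(-2, 5)]))   # True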
     
    Last edited: Sep 27, 2018
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Why look at program correctness? LoF implies that there is a simplification of "ordinary" logic (predicate calculus, say); program correctness proofs definitely fall on the other side: you complicate things initially, derive an invariant, and then simplify.

    Why is an algorithm to derive a useful invariant (which is one that implies the required result when the loop terminates) hard to write or design?

    This Floyd-Hoare logic for programs defines a semigroup structure: the set of all (possible) programs under composition. A semigroup need not have an identity or inverses, but by defining 1 as a program that "skips" (say, executes a NOP instruction), you get an object which acts as an identity:

    P1 = 1P = P
    Similarly, with 0 as the program that aborts:

    P0 = 0P = 0
    . . . which says P followed by an abort is the same as an abort followed by P; it's a kind of expression of a required logical symmetry.

    The semigroup structure means you can easily derive "laws" of composition and union, for all programs P, Q, R:

    (S1) P(QR) = (PQ)R (composition is associative)
    (S2) P0 = 0P = 0
    (S3) P1 = 1P = P
    (U1) (P ∪ Q)R = PR ∪ QR (composition distributes over union* of programs)
    (U2) P(Q ∪ R) = PQ ∪ PR
    (U3) P ∪ Q = Q ∪ P (union is commutative, like addition)
    (U4) (P ∪ Q) ∪ R = P ∪ (Q ∪ R) (and associative)
    (U5) P ∪ 0 = P
    (U6) P ∪ P = P
    You can also define Si and Uj in terms of implication.
    If P(QR) then (PQ)R
    If P0 then 0
    etc.
    * Note that union of programs can just be considered a second kind of operation or logical connective; composition is "AND-like", union is "OR-like", in the context of relations. What you end up with is less a ring than an idempotent semiring (a Kleene-algebra-like structure): composition becomes a monoid once 1 is added but still has no inverse programs, and union is a commutative, idempotent operation with identity 0 (so no additive inverses either), with composition distributing over union.
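    One concrete model in which S1-U6 all hold (this model, the toy state space, and the sample programs P and Q are my assumptions, not from the notes) is to take a program to be a relation, i.e. a set of (input state, output state) pairs, with 0 the empty relation (abort) and 1 the identity relation (skip):

    # Programs as relations on a toy state space; 0 = abort, 1 = skip.
    STATES = [0, 1, 2, 3]
    ZERO = set()
    ONE = {(s, s) for s in STATES}

    def compose(P, Q):                # PQ: run P, then Q
        return {(a, c) for (a, b) in P for (b2, c) in Q if b == b2}

    def union(P, Q):                  # P ∪ Q: nondeterministic choice
        return P | Q

    P = {(s, (s + 1) % 4) for s in STATES}     # sample program: increment mod 4
    Q = {(s, 0) for s in STATES}               # sample program: reset to 0

    assert compose(P, ONE) == compose(ONE, P) == P                          # S3
    assert compose(P, ZERO) == compose(ZERO, P) == ZERO                     # S2
    assert compose(union(P, Q), P) == union(compose(P, P), compose(Q, P))   # U1
    assert union(P, ZERO) == P and union(P, P) == P                         # U5, U6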

    Also interesting is how the operation of union "interacts with" this identity (program). First you need to define the closure of a program; this is another operation, which corresponds to iteration. The Kleene closure of P is P*; it satisfies the following 'special' predicates:

    (U5*) P* = 1 ∪ PP* (an iteration of P is the union of 1 with PP* which is P followed by P*)
    (U6*) αP = αPα => αP* = αP*α (if α holds before and after P, then it holds before and after an iteration of P)​

    U5* is self-referencing, but "divide both sides" by P* and you have: 1 = 1 ∪ P. U6* is the first axiom that requires notation of preconditions and postconditions. In fact these are just left out of the definitions for convenience; it's understood that composition PQ actually means αPβQγ, for some conditional predicates α, β, γ.
    Now the re-entrant stuff:

    (1*) If program R = if α then P else Q, then R = αP ∪ α'Q.
    (2*) If program R = while α P, then R = (αP)*α'. Where α' is the complement of α.

    The complements of conditionals are useful; the complements of programs (?), probably not so much. So one way to look at all this is that you get to write nice, succinct logical forms for if-then-else and while statements.


    What about other kinds of loops? Apparently they all have the same form in this algebra, i.e. they all look like 2*.
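    Continuing in the same toy relational spirit as before (again, the state space, the guard trick, and the sample programs are my assumptions), a conditional α can be modelled as a partial identity relation, which makes 1* and 2* directly computable:

    # Guards as partial identity relations, so that
    #   if α then P else Q  =  αP ∪ α'Q      (1*)
    #   while α do P        =  (αP)* α'       (2*)
    STATES = [0, 1, 2, 3]
    ONE = {(s, s) for s in STATES}

    def compose(P, Q):
        return {(a, c) for (a, b) in P for (b2, c) in Q if b == b2}

    def guard(pred):                   # α as a relation: keep states where α holds
        return {(s, s) for s in STATES if pred(s)}

    def star(P):                       # Kleene closure P* = 1 ∪ P ∪ PP ∪ ... (fixpoint)
        result = set(ONE)
        while True:
            grown = result | compose(result, P)
            if grown == result:
                return result
            result = grown

    P = {(s, (s + 1) % 4) for s in STATES}          # increment mod 4
    Q = {(s, 0) for s in STATES}                    # reset to 0
    a, not_a = guard(lambda s: s < 3), guard(lambda s: s >= 3)

    if_then_else = compose(a, P) | compose(not_a, Q)      # αP ∪ α'Q
    while_loop = compose(star(compose(a, P)), not_a)      # (αP)*α'
    print(sorted(while_loop))       # every start state is driven to 3, where α fails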
     
    Last edited: Sep 28, 2018
  18. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    It really is all about differences, and how you have different differences (different domains for sets of certain "computational" functions).

    What is meant by the difference between 0 and 1 depends on context. This is especially apparent in computer languages, where there is always a difference between assignments and tests of conditional expressions. Assignment is a function that doesn't need logical premises, but the calculus requires it to have a precondition and a postcondition, so there is a triple like α P β.

    But α has to be β[E/x] if x is the program variable being assigned a value (i.e. if P = {x := E}).
    A logical test is "assignment-like" in that any test of a logical expression returns 0 or 1 (interpreted in the logical subspace of P as false or true).
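    For the assignment case, the substitution β[E/x] can be made quite literal (the helper names here are mine, for illustration): the precondition is just β evaluated in a state where x already holds E's value.

    # Weakest precondition of {x := E} w.r.t. postcondition β is β[E/x]:
    # "β with E substituted for x", i.e. β evaluated after pretending x = E.
    def wp_assign(var, E, beta):
        return lambda s: beta({**s, var: E(s)})

    # Example: P = {x := x + 1}, β = (x > 0); then β[E/x] is (x + 1 > 0).
    E = lambda s: s["x"] + 1
    beta = lambda s: s["x"] > 0
    alpha = wp_assign("x", E, beta)
    print(alpha({"x": 0}), alpha({"x": -1}))     # True False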
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So the proof calculus is based on a number of distinctions. It also says quite a lot about what a program is: something that can be reduced to an ordered set of algebraic expressions, which, because it 'requires' conditional statements (Boolean expressions), has at least a meet-join lattice structure over those statements.

    You can distinguish three kinds of program statements: expressions (E), conditional expressions (B), and commands (C). There are three distinct syntactic domains in "program-space".

    Each of E, B, and C has a Backus-Naur form; recursive (or re-entrant) productions in the BNF generate the grammar (a standard toy example is sketched below).
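    For concreteness, here is one conventional way to write them for a toy "while" language (this particular grammar is my illustration, not from the notes); the recursion in each production is what generates the whole language:

    E ::= n | x | E + E | E - E | E * E
    B ::= true | false | E = E | E ≤ E | ¬B | B ∧ B
    C ::= skip | x := E | C ; C | if B then C else C | while B do C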

    Floyd-Hoare logic and its associated algebra sort of make the claim that any programming language can be described by its axioms, all of which can be written as if-then statements. If-then-else can of course be represented like this:

    For abstract programs U, V, P, Q, R.
    U = if α then P
    V = if α' then Q
    R = if α then P else Q

    => R = UV.
    Then, since if α then P else Q = αP ∪ α'Q, clearly the algebraic union of programs is needed in the calculus because branching is needed in the program space. Indeed a computer whose instruction set doesn't include branching and the control of it doesn't sound very useful.

    Likewise, a computer that could not branch back to the beginning of a code section so it can iterate that section doesn't sound very useful. Computers need controlled branching and looping. What you would have left in the algebra is simple sequential composition, and a really tedious grammar.

    Now, the idea that an F-H triple (which I write as α P β) is heuristically a path P from α to β implies that P is a path made out of discrete steps (instructions, commands, tests, etc). Perhaps it therefore lends itself to Kauffman's writings on a discrete ordered calculus, in which the "heuristic" of a commutator is the notational basis.

    Here is a link to one such at arxiv: https://arxiv.org/pdf/quant-ph/0303058.pdf

    P.S. The only notation in the program algebra I've posted here from my notes that looks like it corresponds vaguely to a commutator is the following expression:

    αP ∪ α'Q := α[P,Q]
    . . . it sort of says you can define the difference between two branches like a commutator element. There's another def. too, for looping, but it's additional notation and doesn't really add anything except convenience.
     
    Last edited: Oct 2, 2018
  20. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Logic. It's something that comes equipped with structure and symmetries.

    Whatever boundary logic asserts about its arithmetic of differences, F-H logic asserts that if P is a program that runs and halts on a computer, then it has a precondition and a postcondition even if these aren't in the encoding, because you can put them in as, say, comments an interpreter or compiler will ignore.

    Moreover, F-H logic asserts there is, for any program P, an invariant which implies the postcondition and is implied by the precondition. This assertion is the key to deriving a logical proof that P not only halts (is totally correct) but satisfies its design specification (i.e. some predicate).

    Solutions for loop invariants are hard to find, branch invariants not so hard, and assignment invariants are easy.

    Suppose P = {x := E}, where E is an arithmetic expression. F-H says P is really αPβ, and β is really β∧(x = E).
    But α is α∧(E = E). So that β∧(x = E)∧(E = E) is also true.

    By inspection the invariant is (E = E) whatever α or β might also "contain". Real simple.
    Oops, forgot that now you need to show that α and β are equivalent except that α "replaces x with E", so you can write, say, β[E/x] {x := E} β.

    Branching and looping make this a lot less trivial. Why, though?
     
    Last edited: Oct 2, 2018
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Here's something that came up about notation (as I recall it): if we assume P is a program which is guaranteed to halt, then {P} is notation for a program that runs on some computer; i.e. we distinguish between a program-as-contract and a program-as-transaction, type of thing.

    So now the commutator-like form of αP ∪ α'Q can be written like this: [P,Q] := {P}Q ∪ P{Q}, which says branch P is taken and branch Q is not, or branch Q is taken and branch P is not. There is an assumed conditional expression, just as P = αP says that if P halts then there is a precondition which must be true before it starts, or: if P then α{P}.

    Looping can be expressed like this: (α : P) := (αP)*α' = {αP}*α'.

    Just sayin'
     
    Last edited: Oct 3, 2018
  22. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Now some even more out-there shit.

    By 'bracketing' a program with logical predicates, whether it runs on a computer or is only guaranteed to run and halt, you are giving it "logical coordinates", in the same sense a vector or tensor in geometric theories is given a system of coordinates.

    Hence, a "theory of program correctness" is a "theory" of invariants of a symmetric group.
     
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    It may have occurred to some that the axioms I laid out above for some "laws" of programming define a logical space with composition and union of programs, along with iteration and branching.

    But because there are no inverse programs or program complements (why not?), the algebra is not Boolean.
    Suppose you introduce inverses anyway. Define the inverse of a program P as the same set of ordered instructions in P but backwards, such that if \( P^{-1} \) is the inverse of \( P \), then \( PP^{-1} = 1\), and composition forms a group.

    Then what does \( P \cup P^{-1} \) equal? Do we have a problem because programs aren't in general, run in reverse?

    Further, invariants for loops which, as in the example program above, are linear, have a straightforward (after learning the rules) inductive solution. What if the array index i is a random number from 0 to n? Does the theory of program correctness extend to statistical programs? This is an active area of research, apparently. Here is a thesis from Cambridge on that very subject: https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-566.pdf
     
