Is Boundary Logic interesting?

Discussion in 'Physics & Math' started by arfa brane, Aug 21, 2018.

  1. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Sure, but I can start with a BL expression, rewrite it in Boolean logic, transform it, and then write it as another BL expression. It shouldn't matter, apparently, what kind of logic I use in between, as long as it corresponds to BL. Bricken does say more than once that there are several Boolean algebras available, for instance.

    --http://iconicmath.com/mypdfs/bl-containment-v47.091209.pdf

    (ibid)
     
    Last edited: Aug 28, 2018
  3. iceaura Valued Senior Member

    Messages:
    30,994
    They arise as natural terms, descriptive, metaphorical in a sense, in the course of the proofs and demonstrations. Just the ordinary English words are involved - no specific source.

    Brown taught his theory while it was still in development, and at least some of the terminology came from student slang or shorthand reference.

    Again: Brown's somewhat more patiently intuitive approach and notation - although not suited to keyboarding, as the basic symbol does not exist in any font I know of - clears up some of these matters.
     
  5. iceaura Valued Senior Member

    Messages:
    30,994
    It does. You lose significant advantages in notational clarity, concision, and so forth.
    I compared it to Arabic numerals replacing Roman ones, above. It's not that much of an exaggeration.
     
  7. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    The flip-flop described in chapter 11 (of LoF) seems to support what you are saying here, but... I'm curious about implementation. Bob Pease, the late IC designer and minor folk hero amongst hardware hackers, wrote:

    https://www.electronicdesign.com/digital-ics/whats-all-flip-flop-stuff-anyhow

    OK. But he's criticizing something that was written more than a quarter century prior (the article is from '94), so... (Also, crankiness was kind of his schtick)

    I guess I'm just asking about implementation in the "hard sciences." LoF is cited frequently by Maturana and Varela (The Tree of Knowledge esp.), also Luhmann, but cognition, language, and systems theory are a ways from circuit design and programming.
     
  8. iceaura Valued Senior Member

    Messages:
    30,994
    No idea. I'm out of touch.
    Three:
    Pease is talking about a fairly specific engineering circumstance that looks, from the outside, like a possible qwerty catch.

    What little commentary on LoF's approach I've seen from hard science and engineering folks has been (like Pease's) secondhand, in a sense - looking at some final result they have had provided to them, whose origin and justification and so forth is new to them, and finding it either over-hyped and flawed for their needs or somehow surprising in its validity - surprising in that it "works". There isn't much visible evidence of hard scientists or engineers being raised with this approach, being familiar with it and able to use it. And anyone who used it would face a communication barrier - they would have to translate their arguments, etc, into the standard vernacular. (US rocket scientists at the top of the field, people designing satellite and planet exploration and military gear at the very high end of sophistication, may still be using feet and inches and pounds and miles per hour as they were just a few years ago).

    One sees odd hints of stuff - a couple years ago I ran into a circuit design problem posed as a contest, like a sudoku with a prize, in a magazine on a coffee table in a high tech corporation's lobby (a physical magazine, yes), in notation and layout almost identical to that in the last chapter of LoF. Something like a zeppelin or Stirling cycle lobby may exist. Influence unknown.
     
  9. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Despite advantages or disadvantages, there seems to be a fairly natural way to go from BL to Boolean arithmetic, hence Boolean algebra, hence a Boolean ring. Although it's only one way (the paths are many).

    But I have to accept that the use of two numeric symbols, 0 and 1, is an abuse of notation. I can say I have an isomorphism only if I make the "mistake" of defining 0 = (( )), where the RHS in BL is actually nothing.

    Now Bricken says ( ) ( ) = ( ) and (( )) =   are a pair of arithmetic rules, like addition and multiplication. The addends and multiplicands are boundaries, not numbers or logical variables. There is no concept of value or numeric distance between variables; there is only the concept of difference or distinction in the "arithmetic".
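    The two rules can be run mechanically. Here's a toy sketch in Python (my own illustration, not code from Bricken's paper), treating an expression as a string of parentheses:

    ```python
    # A toy rewriter for the two arithmetic rules quoted above:
    #   calling:  ( ) ( ) = ( )
    #   crossing: (( ))   =        (the void)
    # Any balanced parens-only string reduces to "()" (the mark) or "" (the void).

    def reduce_bl(expr: str) -> str:
        expr = expr.replace(" ", "")
        while True:
            new = expr.replace("()()", "()").replace("(())", "")
            if new == expr:
                return expr
            expr = new

    print(reduce_bl("( ) ( )"))      # calling: prints "()"
    print(reduce_bl("( ( ) ( ) )"))  # reduces to "(())", then to "" (the void)
    ```

    String replacement is safe here because any adjacent "()" is a complete form, so the patterns only ever match siblings in the same space.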

    It needs to be decorated with T/F or 1/0 to get propositional calculus, but then you are strictly not in BL any more so there is no isomorphism (or simple change of notation). Which is not to say BL can't be the foundation of some formal logic or solution to a logic problem--the five liars problem or the three wise men or Einstein's puzzle . .
     
    Last edited: Aug 29, 2018
  10. iceaura Valued Senior Member

    Messages:
    30,994
    There's a chapter in LoF that takes you through the process - which is "simple" - of laying out traditional logic problems in the new notation, solving them, and translating back. Bricken also addresses that, in the links above.
    There are some fairly deep issues involved, which can be ignored in practice.
    One doesn't move from BL to propositional calculus.
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Interesting. If one doesn't "move" between them, how then does one lay out traditional problems in the new notation?

    I mean, Bricken illustrates that BL corresponds to PC, even if there is no isomorphism. There isn't because one is obliged to change BL, or "decorate" it as I say, with symbols.

    But there's the thing, I can rename a symbol like "( )" with the symbol "1", and it clearly doesn't change anything. As soon as I try to label the void I'm obviously outside BL, except if 1 = ( ) is valid, why isn't 0 = (( ))? Why does the fact that "(( ))" means "there is no symbol" make it impossible to find an isomorphism between BL and PC?
    Using "1" doesn't change BL, but using "1,0" does; obviously it isn't a simple change of notation but a change of logic.
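    That asymmetry can be shown mechanically too (a sketch of my own, not from Bricken): renaming the mark "()" as "1" round-trips cleanly, but the void is just the empty string, so there is never anything for a "0" to relabel.

    ```python
    # Renaming the icon "()" as the symbol "1" is a reversible relabeling;
    # the void has no icon, so a "0" would name something with nothing to swap back in.

    def to_symbol(expr: str) -> str:
        return expr.replace("()", "1")

    def to_icon(expr: str) -> str:
        return expr.replace("1", "()")

    print(to_symbol("(())"))            # "(1)"
    print(to_icon(to_symbol("()()")))   # round-trips to "()()"
    print(to_symbol(""))                # the void stays "": no "0" ever appears
    ```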
     
  12. iceaura Valued Senior Member

    Messages:
    30,994
    One moves from the other notations and descriptions to the new notation - or simply from the problem, not yet set up in any formal notation, directly to the new notation. There is no need to move "back", or "between" - although it is possible, if you want to, as proved.
     
  13. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    From what I think I've understood about what the linguistics people are saying here, BL is an iconic system--the "sign" is a thing that represents a real boundary (an enclosure)--and Boolean 2 (i.e. the set {0,1} with extra structure, a meet-join lattice) is a symbolic one--a pair of "signs" with more abstract meaning.

    The third type (there are only three types of sign, apparently), the index, I guess means the sign indicates something. I think this index type of sign is the one that includes any physical kind of signal. Anyhoo, what you do when you introduce symbols like "1", "A", etc, into BL is connected to what information is, its entropy and so on.

    It has to be, or we can't talk about the information content of a string of characters. Since, in BL, all 'iconic' strings are either the empty string or "( )", it has "infinite" information entropy, since in either case an infinite string can be substituted which is still either empty or a single boundary icon. In BL, though, linear strings don't exist; "writing" ( ( ) ( ) ) on a line is just a topological (graphical) convenience. But using the rules of BL, that's the same amount of information as an empty string.

    Or Something.
     
    Last edited: Sep 4, 2018
  14. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    From the link on icons, indices, and symbols:
    Maturana/Varela (drawing from Brown's LoF) and Bateson seem to disagree here. (As would most contemporary ethologists and persons working in animal studies/critical animal studies, I think.) Bateson wrote:
    http://www.yavanika.org/classes/reader/batesonplay.pdf

    Discussing play among dogs and wolves, he says "the playful nip denotes a bite" and he emphasizes the inherent paradox in that playing animals are not only emphatically not meaning what they say, but they are communicating something (the bite) which does not (yet, at least) even exist. Maturana/Varela write of animals employing or generating "linguistic domains" which are the basis of language, but not necessarily (yet?) language (?).

    So the differences between human and non-human communication systems are differences of degree rather than of kind. Somehow (with M/V, at least) they arrive at this through Brown's system, whereas the explanation of signs in the link seems derived more from Peirce. Though the author of the link also seems to focus largely upon "word-like symbols."
     
    Last edited: Sep 4, 2018
  15. iceaura Valued Senior Member

    Messages:
    30,994
    In LoF, the "sign" is interpreted as an instruction - it represents an action.
    When used in interpreting the arithmetic of BL - and meaning derives from context - the boundary involved is hypothetical as all basic interpretative entities of mathematics are.
    (If you had three apples, and then you give one to Johnny, how many apples would you have?)
    A significant enough difference in degree would become a difference in kind, at some point. This might be a distinction drawn between some animals and the rest, rather than between humans and animals, naturally. Can a dog use a play nip to suggest not "this is play" but "remember when you were a puppy"?
     
  16. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Quite. An example is ordinary words, such as "bear": the interpretation depends on whether you mean it as a noun or a verb.

    But information and interpretation (of it) are different things. Information is also context-dependent, however.
    Yes, obviously it's heuristic, so is using logic gates to add binary "numbers" together, the context is differences between inputs and outputs.
     
  17. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    Agreed. The difference in kind is more apparent and meaningful when discussing, say, bees and rodents (and the respective complexity of their nervous systems, concentration of neural tissue, etc.)

    I don’t think we presently have sufficient knowledge or understanding to really answer this. Autonoetic consciousness has been observed in a number of species, mammals and birds esp., and there is considerable evidence of metamemory (knowledge of what one remembers) in animals—this paper—https://www.repository.cam.ac.uk/bi...bridge-repository-.pdf?sequence=1&isAllowed=y —addresses the validity of using Morgan’s Canon to argue against episodic memory in non-human animals, noting particularly that the Canon (as with all principles of parsimony) is not evidence itself and cannot be used to flat out deny a more complex explanation.

    Borrowing Bateson’s language again, the real significance of what a person says often lies in the kinesics and paralinguistics, and not so much in the words themselves. Yet, with the notion of thinking and conveying “outside” language—how do we talk about such (apart from simply describing the gestures, body language, etc.)? And how might the dog communicate, “remember when you were (or I was) a puppy?” For us, the world exists by way of language; at least, we necessarily (attempt to) explain and comprehend by way of language.

    That said—re: can a dog say “remember when you were a puppy?”—I, personally, would answer in the affirmative. I would support this by relating innumerable personal anecdotes, as well as the anecdotes and supportive evidence of others. But also, we have a shared history and established social relationship that goes back 30-40 thousand years (spoken language is not that much older). Yet I think few would disagree with the contention that dogs, generally, have a better understanding of what humans are conveying than do (most) humans with respect to understanding what dogs are saying. (I think this inability in humans is actually learned or conditioned; historically, more of us were certainly better at it). Wittgenstein, consistent with “saying only what can be said,” doesn’t say that the lion can’t talk; rather, he says we couldn’t understand her if she did talk. But... some of us do.


    (Incidentally, Brown’s “severance” (and Maturana/Varela’s “distinction(ing)”) also seems to share a lot with Jakob von Uexkuell’s umwelt, from which most all subsequent study of non-human animal signifying acts and processes derive.)
     
  18. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    So, linguistically the significance of the difference between an icon and a symbol has something to do with what boundary logic is.

    The arithmetic is the logic; start with the two rules and build arbitrary iconic strings:

    ( ) ( ) = ( ) -> ( ( ) ( ) ) = ( ( ) ), . . .

    Bricken then kind of blithely introduces the idea that symbols can be used for any arbitrary ('parens') expression, but doesn't comment on the fact that now there is another kind of semantic difference, the one between A and B, or any other symbol used for a "pure" BL form. A and B have information entropy because each might be "equal to" ( ), or not.
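    That uncertainty can be made concrete (a sketch of my own, assuming only the two arithmetic rules): substituting each of the two possible values for A into the form (A) yields different reductions, which is exactly the difference a symbol conceals.

    ```python
    # Substituting the two possible arithmetic values of a symbol A into
    # the form (A) gives different results, so A carries information.
    # Reduction applies ( ) ( ) = ( ) and (( )) =  until nothing changes.

    def reduce_bl(expr: str) -> str:
        expr = expr.replace(" ", "")
        while True:
            new = expr.replace("()()", "()").replace("(())", "")
            if new == expr:
                return expr
            expr = new

    for a in ("( )", ""):                     # A = ( )  or  A = void
        print(repr(reduce_bl("( {} )".format(a))))
    # A = ( ):  (( )) reduces to '' (the void)
    # A = void: ( )   stays '()' (the mark)
    ```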
     
  19. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    With regards to communicating (verbally) an iconic system to another person, the symbolic is inevitable. If you're teaching a class full of students, say, and you've got this-- ((( a ( b )) ( a c )) --on the chalkboard, how are you gonna say it? Most everyone is gonna say, "if a then b, else c." How else could a person express it aloud?

    Not sure what precisely that has to do with your observation, really, but... I think what you're saying would be true for any iconic system.

    (Sorry, I'm not sure where I was going with that, apart from noting the necessity of adding abstractions when conveying something communicated in such a manner.)
     
  20. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Well, sure. Perhaps the teacher could point out that the latter statement isn't really what the expression says. (I think you have one too many left parentheses in that expression)

    Just by assuming that a = ( ), say, then you have (( ( ) ( b )) ( ( ) c )). The symbols a, b, and c don't change anything structural; the notational change is an isomorphism. But, any of the symbols might be literally nothing. Hence, their existence is false, a convenience.
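    The substitution can be pushed all the way through mechanically if one also borrows two of Bricken's algebraic rules as I read them from his papers, dominion (A ( ) = ( )) and involution (((A)) = A). A sketch of my own, with symbols as strings and each boundary as a tuple of its contents:

    ```python
    # Reducing the substituted expression with dominion and involution
    # (my reading of Bricken's algebraic rules; the code is my own sketch):
    #   dominion:   A ( ) = ( )    a mark in a space dominates it
    #   involution: ((A)) = A      a double boundary cancels
    # A space is a tuple of forms and atoms; an atom (symbol) is a string;
    # the empty mark is the empty tuple ().

    def reduce_form(space):
        out = []
        for item in space:
            if isinstance(item, str):
                out.append(item)              # symbols stay opaque
                continue
            inner = reduce_form(item)         # reduce the boundary's contents
            if len(inner) == 1 and isinstance(inner[0], tuple):
                out.extend(inner[0])          # involution: ((A)) = A
            else:
                out.append(inner)
        if () in out:                         # dominion: A ( ) = ( )
            return ((),)
        return tuple(out)

    # (( ( ) ( b )) ( ( ) c ))  --  the expression with a = ( ) substituted:
    expr = ((((), ("b",)), ((), "c")),)
    print(reduce_form(expr))                  # the mark: ((),)
    ```

    With a marked (i.e. a = ( )), the expression as written collapses to the single mark ( ) under these rules.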

    But obviously Boolean variables do exist. You could probably safely claim that boundary logic is a way to simplify Boolean expressions. More, BL can be applied to many Boolean systems. I guess because iconic logic is somehow really a semi-logic.
     
  21. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Another take on why BL is useful: it can separate true from false logical statements. The false statements literally don't exist.
    Hence that BL is semilogical, or perhaps quasilogical. It doesn't have two states, it has boundaries "acting on" boundaries.

    --https://www.jstor.org/stable/10.14321/contagion.22.1.0065?seq=1#page_scan_tab_contents
     
  22. iceaura Valued Senior Member

    Messages:
    30,994
    Alternatively, Boolean notation could be described as a voluntary complication, useful in certain specialized circumstances but very awkward if dealing with - say - multivalued (added "dimensions") reasoning, self-referential expressions, and the like.

    It's difficult to do arithmetic in Boolean notation - so the advantages of arithmetical sophistication are largely unavailable.
    That would make it more rigorously or fully "logical" - since the positing of two states creates a third (undefined), which is difficult to either handle (this sentence is false) or avoid (self-reference is often accidental).
    The "boundaries" don't act. They can be described as themselves actions.
     
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Well, "this sentence is false" becomes "this sentence doesn't exist".
    Also, more than a few critics of BL say that it's unavoidably self-referential; indeed it seems to be a kind of basis for reentrant computation, in which every valid "expression" is reduced.
    Isn't that merely a semantic distinction? Recall that BL is a logic of distinction, its basis is difference and information is also based on difference--between characters in a string, between strings in a language, between languages, . . ..
     
