ChatGPT Dialog

Discussion in 'Intelligence & Machines' started by mathman, Jun 11, 2023.

  1. Seattle Valued Senior Member

    Messages:
    8,874
    I'm not sure what you were expecting from it? Did you think it was a sentient being?

    It's a large language model. It is pretty good (compared to what came before) at taking human input and replying in kind.

    It is pretty good at finding associations and working with them. It is quick and can roam through the internet more quickly than you can.

    My approach has been to figure out what its strengths and weaknesses are and to work with those. You can't (or shouldn't, IMO) try one example and then give up on it just based on that.

    It isn't the be-all and end-all of computing, but it is pretty useful in many ways. Ask it to summarize every subject that you studied in college. My approach would be to have it do that for one subject. If it isn't getting at the details that you want, ask follow-up questions. If you like the current response, then ask it to do that for xxx and list every course that you took.

    Have it limit each summary to one page. Now read those pages and you've just gotten a pretty good refresher course on your university days.

    Do that for subjects that you didn't study. That is useful as well.

    Maybe you are thinking about politics? Make some statement that reflects your viewpoint and ask it to pick out the flaws. Go back and forth a few times. You might find it helpful or interesting. It's not that it's going to be "right" and you're going to be "wrong". It just might be an interesting enough exercise to be worth your time.

    Maybe you heard someone mention a novel that you read long ago and you remember that you liked it but you can't remember the details. Ask ChatGPT to summarize it in one page.

    Maybe you didn't have a lot of history in school and now you are more interested. Ask Chat to summarize the civilizations from x to y. Tell it to give more detail (or less detail). Ask for more information about a specific era that you are most interested in.

    Ask it to critique Max Tegmark.

    It can do math. It can program (code). You will probably have to check its work. It helps if it's a subject that you understand somewhat in the first place.

    Sometimes it's just 100% wrong. The various models differ as well. There are ChatGPT 3.5, ChatGPT 4, Bravo, and Bing Chat. They can all do different things. They can all be wrong or annoying.

    What are you comparing them to, though? Google Search, or HAL from 2001: A Space Odyssey?

    Maybe you are feeling especially opinionated on a subject and the person you are talking to is really opinionated in the opposite direction. Run your thoughts through Chat first and you might catch a few opinions that you now realize were a bit off base.

    I don't know if you have ever checked out Quora. The questions are usually stupid. The answers are usually pretty good, but it's a tough forum to spend any time on. Just imagine if all questions were first run by Chat and the first answer to every question was the Chat response.

    In most cases, there would be no need for any other responses.

    Last edited: Feb 5, 2024
  2. DaveC426913 Valued Senior Member

    Messages:
    18,961
    How can you make these statements in the same post?

    The math required to match 10 teams once each across five boards is less than trivial*. It hardly even qualifies as math; it's just combinations, akin to making sure every setting at a table has one knife and one fork.

    So the idea that it "can do math and even code" but cannot "place each of ten teams on five boards" is virtually contradictory. And that's even after being corrected multiple times.


    *it's just a little tedious
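
    For the record, here is how little "math" it takes. The standard "circle method" fixes one team in place and rotates the rest, and five pairings fall out of each round. A minimal sketch in Python (team letters are just placeholders):

    # Round-robin pairings for 10 teams on 5 boards, via the circle method.
    teams = list("ABCDEFGHIJ")            # 10 teams
    n = len(teams)
    for rnd in range(1, n):               # 9 rounds for 10 teams
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)]
        print(f"Round {rnd}: " + ", ".join(
            f"Board {b}: {a} vs {c}" for b, (a, c) in enumerate(pairs, 1)))
        # rotate everyone except the first ("fixed") team
        teams = [teams[0], teams[-1]] + teams[1:-1]

    Nine rounds, five boards per round, and every team meets every other team exactly once.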
     
  3. Seattle Valued Senior Member

    Messages:
    8,874
    It can do math. If it's an algebra problem, it can solve it. I didn't say it could also solve all mathematical word problems. It's not good at everything. There is also not just one "it". Some models do things others can't.

    Sometimes, and in some situations, it's very bad at doing something. So it can't solve your word problem?

    It's hard to make a blanket statement from that, isn't it?

    Ask it to code a basic ping pong routine and it can do it. You have to know enough about coding to use a physics module and a graphics module, and to add boundaries so the ball doesn't go flying all over the room.

    If you don't know anything about it, you can still use it to tell you what you don't know and what you need to know, and eventually you could make it work. It will tell you that you need a compiler and how to find one.
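
    The boundaries part is the easy bit to sanity-check yourself. A bare-bones sketch of the bounce logic (all numbers are arbitrary illustration values):

    # Reflect the velocity at each wall so the ball stays in play.
    WIDTH, HEIGHT = 800, 600
    x, y = 400.0, 300.0       # ball position
    vx, vy = 4.0, 3.0         # velocity per frame

    for frame in range(1000):
        x += vx
        y += vy
        if x <= 0 or x >= WIDTH:      # left/right edges (paddles in a real game)
            vx = -vx
        if y <= 0 or y >= HEIGHT:     # top/bottom walls
            vy = -vy

    print(f"after 1000 frames the ball is still at ({x:.0f}, {y:.0f})")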

    I just don't know how one could conclude that ChatGPT is lame, if that is what your view is.

    Of course it doesn't actually "understand" anything; it's not sentient. But you already knew that.
     
  4. Seattle Valued Senior Member

    Messages:
    8,874
    I haven't checked every permutation, but is this wrong?


    Certainly! Creating a Round Robin schedule for a 10-team dart tournament with 5 boards can be done as follows. I'll represent teams as A, B, C, D, E, F, G, H, I, and J. Each letter corresponds to a team.

    Round 1:

    • Board 1:
      • Match 1: A vs. B
      • Match 2: C vs. D
    • Board 2:
      • Match 3: E vs. F
      • Match 4: G vs. H
    • Board 3:
      • Match 5: I vs. J
    Round 2:

    • Board 1:
      • Match 1: A vs. C
      • Match 2: B vs. D
    • Board 2:
      • Match 3: E vs. G
      • Match 4: F vs. H
    • Board 3:
      • Match 5: I vs. J
    Round 3:

    • Board 1:
      • Match 1: A vs. D
      • Match 2: B vs. C
    • Board 2:
      • Match 3: E vs. H
      • Match 4: F vs. G
    • Board 3:
      • Match 5: I vs. J
    Round 4:

    • Board 1:
      • Match 1: A vs. E
      • Match 2: B vs. F
    • Board 2:
      • Match 3: C vs. G
      • Match 4: D vs. H
    • Board 3:
      • Match 5: I vs. J
    Round 5:

    • Board 1:
      • Match 1: A vs. F
      • Match 2: B vs. E
    • Board 2:
      • Match 3: C vs. H
      • Match 4: D vs. G
    • Board 3:
      • Match 5: I vs. J
    This pattern repeats for rounds 6 through 9. Adjust the team matchups according to the same logic, making sure each team plays every other team exactly once. In the final round (Round 10), you'll have a similar setup.

    This schedule ensures that every team plays every other team exactly once and distributes the matches evenly across 5 boards.
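
    One way to check this without trying every permutation is to tally the pairings quoted above for rounds 1 through 5 (a small Python sketch):

    from collections import Counter

    # Pairings exactly as quoted, rounds 1-5.
    rounds = [
        [("A","B"),("C","D"),("E","F"),("G","H"),("I","J")],   # Round 1
        [("A","C"),("B","D"),("E","G"),("F","H"),("I","J")],   # Round 2
        [("A","D"),("B","C"),("E","H"),("F","G"),("I","J")],   # Round 3
        [("A","E"),("B","F"),("C","G"),("D","H"),("I","J")],   # Round 4
        [("A","F"),("B","E"),("C","H"),("D","G"),("I","J")],   # Round 5
    ]
    counts = Counter(frozenset(p) for rnd in rounds for p in rnd)
    print({tuple(sorted(p)): c for p, c in counts.items() if c > 1})
    # -> {('I', 'J'): 5}

    The tally shows I vs. J appearing in all five quoted rounds, which already breaks the "exactly once" claim.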
     
  5. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    I haven't experimented much with ChatGPT (or the various other AI bots), but my understanding is that it can't really do maths. It has probably seen various mathematical proofs in its training data, so when it comes to predicting what ought to come next in the sentence/paragraph etc., it can regurgitate those proofs to a greater or lesser degree. But it is not able to reason mathematically, or anything like that. I assume that it's just as likely to "hallucinate" incorrect maths as it is to hallucinate non-existent "facts".

    If you tell it that it got some maths wrong, it won't go looking through the logic to find or correct the mistake, because it didn't use logical reasoning to construct the maths in the first place. What it will do, instead, is to use the "correction" as a new prompt and then do what it did before: generate a plausible-sounding sequence of words and maths that is in no way guaranteed to be correct or true.

    A number of people have suggested that these large language model AIs should be given the capacity to use online tools like Wolfram Alpha to do actual math when required, which would no doubt make them a bit more competent and reliable when it comes to maths. They would effectively be farming out the parts of their "reasoning" that require that particular type of knowledge and processing.
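
    The "farming out" idea is easy to sketch. Nothing below is a real vendor API; ask_llm and evaluate_math are hypothetical stand-ins for a language model and an external engine such as Wolfram Alpha:

    import re

    def ask_llm(prompt: str) -> str:
        # Stand-in for the language model: instead of guessing at arithmetic,
        # it emits the expression it wants evaluated inside <math>...</math> tags.
        return "The answer is <math>17 * 24</math>."

    def evaluate_math(expr: str) -> str:
        # Stand-in for an external maths engine.
        return str(eval(expr, {"__builtins__": {}}))   # toy evaluator, demo only

    def answer(question: str) -> str:
        draft = ask_llm(question)
        # Replace each delegated expression with the engine's exact result.
        for expr in re.findall(r"<math>(.*?)</math>", draft):
            draft = draft.replace(f"<math>{expr}</math>", evaluate_math(expr))
        return draft

    print(answer("What is 17 times 24?"))   # -> The answer is 408.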
     
  6. Write4U Valued Senior Member

    Messages:
    20,099
    Yes, it is also unreasonable to expect mathematical expertise from a human trained in languages. Why should we demand this capacity of a language-oriented AI?
    Yep, just like a person going to college for advanced instruction in mathematics. Where is the problem?

    Training an AI is exactly the same as training a human intelligence. "Garbage in --> Garbage out".
    Yes just like a human brain it can and must have specialized training for performing specialized tasks.
    It takes most humans some 25 years to "earn" a title indicating the quality and depth of knowledge one has "acquired" during the "study phase" of training the "intelligence", human or artificial.
    How would one know if an AI understands anything?
    Note that human exam answers on a complex subject are seldom 100% correct. So at what point does complete understanding become manifest? With time and study? Are self-training AIs capable of understanding anything at all, and by what minimum standard?
    Question:
    a) How would one know if an AI is becoming sentient, if you do not believe that it understands what it is saying?
    b) Does a human baby understand anything other than hunger and discomfort?
    c) How do we judge human understanding as compared to other complex data processors?
    d) How do we judge AI complex data as compared to human complex data processing?
     
  7. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    Write4U:
    I agree.
    What "we" want is a more capable AI, able to handle language and mathematics.

    Already, it is clear that some people believe that ChatGPT can do math, even though it really can't. If they are going to rely on it to do math, they will be misled.

    An AI able to interact using natural language and able to do some maths would be a more useful general-purpose AI. That's the motivation. Isn't it obvious?
    Did I mention a problem?
    No. It's not "exactly the same". Even a superficial understanding of how large language models are trained would give you some appreciation of how different the training processes are, compared to the training of human beings.
    That is true.
    That's a very difficult question to answer, actually.

    Looking at the design of an artificial neural network is a bit like looking at a human brain. We know in principle how the different parts interact with one another. We know how the "processing" works, at the base level of neurons or connection weights. But there's no way to unpack exactly how the trained knowledge is encoded in the neural network (or the brain). In other words, even the designers of AIs can't say exactly how they "know" what they know.
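
    To make the "base level" concrete: a single artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity. A toy sketch (the weights are arbitrary; inspecting them tells you nothing about what a trained network of billions of them "knows"):

    import math

    def neuron(inputs, weights, bias):
        s = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-s))    # sigmoid activation

    print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))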

    The Turing test was supposed to be a test of understanding. If an AI could successfully pass itself off as a human being in a conversation, then we could conclude that the AI understood enough about language and the world to at least pass as something that understands it (at least as well as the human conversing with it). But is that real understanding?

    Let me ask you a question: how would you know if a particular human being understands anything?

    On the philosophical implications of this, you might like to google "Searle's Chinese room".
    If, as you suggest, human understanding of a subject cannot be 100% correct, how could we ever know that something has "complete" understanding? Does it even matter?
    Maybe you should try to define what it means to you to understand something. Is there any reason an AI can't meet your criteria?
    Sentience is not the same as understanding. That's a different question.

    How do you know that a human being has become sentient?
    Yes. In fact, human babies are born with quite a lot of built-in capacities for understanding a number of things. You've heard of instincts, I assume. Also, a baby is not a blank slate when it is born. It already has a brain that has been forming for 9 months.
    You tell me. How would you do it?
    Judge in what way? What are you trying to measure?
     
  8. Seattle Valued Senior Member

    Messages:
    8,874
    I won't argue vehemently that LLMs can do math, but I don't think it's really accurate to say that they can't do it at all. You do have to check their work, and they're not good at it, so I agree with all of the above comments. It is about language, after all, but if you are using a query with some basic math, it can generally do it. It may not be doing it efficiently, and it is often wrong, but it is often right as well.

    Some LLMs also do interact with math modules. I don't know which do or don't, though. The ones I've played around with are ChatGPT 3.5, Bravo, and Bing Chat. My interest is not in using them for math, however.
     
  9. Write4U Valued Senior Member

    Messages:
    20,099
    But you are asking for an AI that can do it all, and that is overreach. Humans get qualified in specialties; perhaps we should allow AIs to develop their own special areas "of interest".
    But you are talking about ChatGPT, which is expert in languages. This is indicated by the label ChatGPT, not MathGPT.
    And there are already MathGPTs that are expert in maths but less proficient in languages. As with humans, there is only a limited amount of memory available for either discipline. Apparently the new GPT6 has astounding capabilities.
    An AI able to interact using natural language and able to do some maths would be a more useful general-purpose AI. That's the motivation. Isn't it obvious?
    Oh, but there are already general-purpose AIs for everyday use; you are asking for advanced expertise in both areas, and that is unfair. We don't ask this of humans.

    This is encouraging:

    https://www.indiatoday.in/technolog...e-unsolvable-math-problems-2476284-2023-12-15


    Why do AI models hallucinate?


    https://www.ibm.com/topics/ai-hallucinations#


    And surprise, a MathGPT:

    MATHGPT: FIRST-YEAR STUDENTS BUILD AI-POWERED MATH TUTOR AT HARVARD HACKATHON

    FRI 09.08.23 / SARAH OLENDER
    https://www.khoury.northeastern.edu...d-ai-powered-math-tutor-at-harvard-hackathon/
     
    Last edited: Feb 7, 2024
  10. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    Write4U:

    I would imagine that's the target for AI researchers. Why do you think it's overreach?
    That would require an AI that is able to understand things well enough to decide, in the first place.
    Yes. Wasn't I clear? I said that if people think ChatGPT understands math, they are mistaken.
    Are there? Is a MathGPT accessible publicly?
    I don't know what you mean.
    Like what?
    Didn't you just say there's only a limited amount of memory available, implying that therefore general purpose AI is impossible?


    What is this general purpose AI you are thinking of?

    Why? The memory problem?
    Advanced expertise in more than one area, you mean? Don't we?
    Try to focus. You're drifting off topic again.
     
