ChatGPT Dialog

I asked ChatGPT to help me pick teams for my darts tournament.
The criteria:
  • ten teams
  • arrange a Round Robin so that every team plays every other team exactly once
  • distribute the team pairs evenly across 5 boards

A very simple process. It is trivial to do the first two steps manually. The only reason I asked ChatGPT is that the last step is a bit tedious to do by brute force.
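For reference, here is a rough sketch of the standard "circle method" that would handle the whole job; the team numbers, board labels, and function name are my own illustration, not anything ChatGPT produced.

```python
# Sketch of the circle method: team 1 stays fixed, the others rotate one place
# each round, and pairing first-with-last, second-with-second-last, and so on
# guarantees every pair meets exactly once over 9 rounds.
def round_robin(n_teams=10):
    fixed, rest = 1, list(range(2, n_teams + 1))
    schedule = []
    for _ in range(n_teams - 1):                      # 9 rounds for 10 teams
        line = [fixed] + rest
        pairs = [(line[i], line[-1 - i]) for i in range(n_teams // 2)]
        # 5 pairs per round -> one match per board keeps the 5 boards evenly
        # loaded; rotating the board labels between rounds would also vary
        # which board each team plays on, if that matters.
        schedule.append(list(enumerate(pairs, start=1)))  # (board, (team, team))
        rest = rest[-1:] + rest[:-1]                  # rotate the non-fixed teams
    return schedule

for rnd, matches in enumerate(round_robin(), start=1):
    print(f"Round {rnd}: " + ", ".join(f"B{b}: {a} v {c}" for b, (a, c) in matches))
```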

It was incapable of getting this basic arrangement right. More to the point: it didn't know it was incapable; it just kept offering different (wrong) permutations forever.

I would tell it, "you've got team 8 playing team 1 twice." It would correct that - by making a different error elsewhere.

It never learned, no matter how many prompts I gave it, and it never knew or cared that it was making mistakes.

It was an object lesson for me that ChatGPT really, really does not do math and really does not understand what it is processing.

It does not know truth from falsehood.

It does not know that 1+1=2. All it knows is that, when it has seen "1+1" before, the response it has seen others supply has usually been "2".

I'm not sure what you were expecting from it? Did you think it was a sentient being? :) It's a large language model. It is pretty good (compared to what came before) at taking human input and replying in kind.

It is pretty good at finding associations and working with them. It is quick and can roam through the internet more quickly than you can.

My approach has been to figure out what its strengths and weaknesses are and work with them. You can't (or shouldn't, IMO) try one example and then give up on it based on that alone.

It isn't the end-all, be-all of computing, but it is pretty useful in many ways. Ask it to summarize every subject that you studied in college. My approach would be to have it do that for one subject first. If it isn't getting at the details that you want, ask follow-up questions. If you like the current response, then ask it to do that for xxx and list every course that you took.

Have it limit each summary to one page. Now read those pages and you've just got a pretty good refresher course on your university days.

Do that for subjects that you didn't study. That is useful as well.

Maybe you are thinking about politics? Make some statement that reflects your viewpoint and ask it to pick out the flaws. Go back and forth a few times. You might find it helpful or interesting. It's not that it's going to be "right" and you're going to be "wrong". It just might be an interesting enough exercise to be worth your time.

Maybe you heard someone mention a novel that you read long ago and you remember that you liked it but you can't remember the details. Ask ChatGPT to summarize it in one page.

Maybe you didn't have a lot of history in school and now you are more interested. Ask Chat to summarize the civilizations from x to y. Tell it to give more detail (or less detail). Ask for more information about a specific era that you are most interested in.

Ask it to critique Max Tegmark:)

It can do math. It can program (code). You will probably have to check its output. It's helpful if it's a subject that you understand somewhat in the first place.

Sometimes it's just 100% wrong. The various models differ as well. There are ChatGPT 3.5, ChatGPT 4, Bravo, and Bing Chat. They can all do different things. They can all be wrong or annoying.

What are you comparing them to, though? Google Search, or HAL from 2001: A Space Odyssey?

Maybe you are feeling especially opinionated on a subject and the person you are talking to is really opinionated in the opposite direction. Run your thoughts through Chat first and you might catch a few opinions that you now realize were a bit off base.

I don't know if you have ever checked out Quora? The questions are usually stupid. The answers are usually pretty good but it's a tough forum to spend any time on. Just imagine if all questions were first run by Chat and the first answer to every question was the Chat response.

In most cases, there would be no need for any other responses.:)
 
I'm not sure what you were expecting from it?
....
It can do math. It can program (code).
How can you make these statements in the same post?

The math required to match 10 teams once each across five boards is less than trivial*. It hardly even qualifies as math; it's just combinations, akin to making sure every setting at a table has one knife and one fork.

So, the idea that "it can do math and even code", but cannot "place each of ten teams on five boards" is virtually contradictory. And that's even after being corrected multiple times.


*it's just a little tedious
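A quick count (my own sketch, not from either post) backs this up:

```python
from math import comb

matches = comb(10, 2)        # every pair of 10 teams meets once: 45 matches
rounds = matches // 5        # spread over 5 boards: 9 rounds, one match per board
print(matches, rounds)       # 45 9
```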
 
How can you make these statements in the same post?

The math required to match 10 teams once each across five boards is less than trivial*. It hardly even qualifies as math; it's just combinations, akin to making sure every setting at a table has one knife and one fork.

So, the idea that "it can do math and even code", but cannot "place each of ten teams on five boards" is virtually contradictory. And that's even after being corrected multiple times.


*it's just a little tedious

It can do math. If it's an algebra problem, it can solve it. I didn't say it could also solve all mathematical word problems. It's not good at everything. There is also not just one "it". Some do things others can't do.

Sometimes, and in some situations, it's very bad at doing something. So, it can't solve your word problem? :) It's hard to make a blanket statement from that, isn't it?

Ask it to code a basic ping pong routine and it can do it. You have to know enough coding to use a physics module and a graphics module to add walls so the ball doesn't go flying all over the room.
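A bare-bones illustration of that last point (entirely my own sketch, not anything ChatGPT wrote): the "walls" just reverse a velocity component when the ball reaches a boundary.

```python
# Minimal bounce logic for a pong-style ball, no graphics or physics library.
WIDTH, HEIGHT = 80, 24
x, y, vx, vy = 40.0, 12.0, 1.5, 1.0

for _ in range(200):
    x, y = x + vx, y + vy
    if x <= 0 or x >= WIDTH:     # left/right wall: flip horizontal velocity
        vx = -vx
    if y <= 0 or y >= HEIGHT:    # top/bottom wall: flip vertical velocity
        vy = -vy

print(x, y)  # the ball is still within (roughly) 0..80 by 0..24
```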

If you don't know anything about it, you can still use it to tell you what you don't know and what you need to know, and eventually you could make it work. It will tell you that you need a compiler and how to find one.

I just don't know how one could conclude that ChatGPT is lame, if that is your view. :) Of course it doesn't actually "understand" anything; it's not sentient, but you already knew that.
 
I haven't checked every permutation, but is this wrong?


Certainly! Creating a Round Robin schedule for a 10-team dart tournament with 5 boards can be done as follows. I'll represent teams as A, B, C, D, E, F, G, H, I, and J. Each letter corresponds to a team.

Round 1:

  • Board 1:
    • Match 1: A vs. B
    • Match 2: C vs. D
  • Board 2:
    • Match 3: E vs. F
    • Match 4: G vs. H
  • Board 3:
    • Match 5: I vs. J
Round 2:

  • Board 1:
    • Match 1: A vs. C
    • Match 2: B vs. D
  • Board 2:
    • Match 3: E vs. G
    • Match 4: F vs. H
  • Board 3:
    • Match 5: I vs. J
Round 3:

  • Board 1:
    • Match 1: A vs. D
    • Match 2: B vs. C
  • Board 2:
    • Match 3: E vs. H
    • Match 4: F vs. G
  • Board 3:
    • Match 5: I vs. J
Round 4:

  • Board 1:
    • Match 1: A vs. E
    • Match 2: B vs. F
  • Board 2:
    • Match 3: C vs. G
    • Match 4: D vs. H
  • Board 3:
    • Match 5: I vs. J
Round 5:

  • Board 1:
    • Match 1: A vs. F
    • Match 2: B vs. E
  • Board 2:
    • Match 3: C vs. H
    • Match 4: D vs. G
  • Board 3:
    • Match 5: I vs. J
This pattern repeats for rounds 6 through 9. Adjust the team matchups according to the same logic, making sure each team plays every other team exactly once. In the final round (Round 10), you'll have a similar setup.

This schedule ensures that every team plays every other team exactly once and distributes the matches evenly across 5 boards.
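For what it's worth, a small checker (my own sketch, not part of the quoted output) answers the question mechanically: every pair of the ten teams should appear exactly once, and already the first two quoted rounds repeat I vs. J.

```python
from collections import Counter
from itertools import combinations

def check(matches, teams="ABCDEFGHIJ"):
    # Count how often each unordered pair appears, and how many pairs are missing.
    seen = Counter(frozenset(m) for m in matches)
    expected = {frozenset(p) for p in combinations(teams, 2)}
    dupes = {tuple(sorted(p)): n for p, n in seen.items() if n > 1}
    missing = expected - set(seen)
    return dupes, len(missing)

# Rounds 1 and 2 exactly as quoted above:
rounds_1_2 = [("A","B"), ("C","D"), ("E","F"), ("G","H"), ("I","J"),
              ("A","C"), ("B","D"), ("E","G"), ("F","H"), ("I","J")]
print(check(rounds_1_2)[0])   # {('I', 'J'): 2} -- I vs. J is scheduled twice
```

Feeding all five quoted rounds through the same check shows I vs. J repeated five times, so the schedule as given is indeed wrong.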
 
I haven't experimented much with Chat GPT (or the various other AI bots), but my understanding is that it can't really do maths. It has probably seen various mathematical proofs in its training data, so when it comes to predicting what ought to come next in the sentence/paragraph etc., it can regurgitate those proofs to a greater or lesser degree. But it is not able to reason mathematically, or anything like that. I assume that it's just as likely to "hallucinate" incorrect maths as it is to hallucinate non-existent "facts".

If you tell it that it got some maths wrong, it won't go looking through the logic to find or correct the mistake, because it didn't use logical reasoning to construct the maths in the first place. What it will do, instead, is to use the "correction" as a new prompt and then do what it did before: generate a plausible-sounding sequence of words and maths that is in no way guaranteed to be correct or true.

A number of people have suggested that these large language model AIs should be given the capacity to use online tools like Wolfram Alpha to do actual math when required, which would no doubt make them a bit more competent and reliable when it comes to maths. They would effectively be farming out the parts of their "reasoning" that require that particular type of knowledge and processing.
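A rough illustration of that "farming out" idea (the structure and names here are my own, with sympy standing in for an external engine like Wolfram Alpha): the language model would only have to produce the expression, and the tool does the actual solving.

```python
import sympy as sp

# Hypothetical tool call: the chat model hands over a formula as text,
# and the symbolic engine does the maths and hands back an exact answer.
def solve_delegated(expression: str, variable: str = "x"):
    x = sp.symbols(variable)
    return sp.solve(sp.sympify(expression), x)

print(solve_delegated("x**2 - 5*x + 6"))   # [2, 3]
```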
 
A number of people have suggested that these large language model AIs should be given the capacity to use online tools like Wolfram Alpha to do actual math when required, which would no doubt make them a bit more competent and reliable when it comes to maths.
Yes, it is also unreasonable to expect mathematical expertise from a human trained in languages. Why should we demand this capacity of a language-oriented AI?
They would effectively be farming out the parts of their "reasoning" that require that particular type of knowledge and processing.
Yep, just like a person going to college for advanced instruction in mathematics. Where is the problem?

Training an AI is exactly the same as training a human intelligence. "Garbage in --> Garbage out".
It can do math. If it's an algebra problem, it can solve it. I didn't say it could also solve all mathematical word problems. It's not good at everything. There is also not just one "it". Some do things others can't do.
Yes, just like a human brain, it can and must have specialized training for performing specialized tasks.
It takes most humans some 25 years to "earn" a title indicating the quality and depth of knowledge one has "acquired" during the "study phase" of training the "intelligence", human or artificial.
Of course it doesn't actually "understand" anything, it's not sentient but you already knew that.
How would one know if an AI understands anything?
Note that human exams of understanding a complex subject are seldom 100% correct. So at what point does complete understanding become manifest? With time and study? Are self-training AI capable of understanding anything at all and by what minimum standard?
Question:
a) How would one know if an AI is becoming sentient, if you do not believe that it understands what it is saying?
b) Does a human baby understand anything other than hunger and discomfort?
c) How do we judge human understanding as compared to other complex data processors?
d) How do we judge AI complex data as compared to human complex data processing?
 
Write4U:
Yes, it is also unreasonable to expect mathematical expertise from a human trained in languages.
I agree.
Why should we demand this capacity of a language-oriented AI?
What "we" want is a more capable AI, able to handle language and mathematics.

Already, it is clear that some people believe that ChatGPT can do math, even though it really can't. If they are going to rely on it to do math, they will be misled.

An AI able to interact using natural language and able to do some maths would be a more useful general-purpose AI. That's the motivation. Isn't it obvious?
Yep, just like a person going to college for advanced instruction in mathematics. Where is the problem?
Did I mention a problem?
Training an AI is exactly the same as training a human intelligence.
No. It's not "exactly the same". Even a superficial understanding of how large language models are trained would give you some appreciation of how different the training processes are, compared to the training of human beings.
"Garbage in --> Garbage out".
That is true.
How would one know if an AI understands anything?
That's a very difficult question to answer, actually.

Looking at the design of an artificial neural network is a bit like looking at a human brain. We know in principle how the different parts interact with one another. We know how the "processing" works, at the base level of neurons or connection weights. But there's no way to unpack exactly how the trained knowledge is encoded in the neural network (or the brain). In other words, even the designers of AIs can't say exactly how they "know" what they know.

The Turing test was supposed to be a test of understanding. If an AI could successfully pass itself off as a human being in a conversation, then we could conclude that the AI understood enough about language and the world to at least pass as something that understands it (at least as well as the human conversing with it). But is that real understanding?

Let me ask you a question: how would you know if a particular human being understands anything?

On the philosophical implications of this, you might like to google "Searle's Chinese room".
Note that human exams of understanding a complex subject are seldom 100% correct. So at what point does complete understanding become manifest?
If, as you suggest, human understanding of a subject cannot be 100% correct, how could we ever know that something has "complete" understanding? Does it even matter?
With time and study? Are self-training AI capable of understanding anything at all and by what minimum standard?
Maybe you should try to define what it means to you to understand something. Is there any reason an AI can't meet your criteria?
Question:
a) How would one know if an AI is becoming sentient, if you do not believe that it understands what it is saying?
Sentience is not the same as understanding. That's a different question.

How do you know that a human being has become sentient?
b) Does a human baby understand anything other than hunger and discomfort?
Yes. In fact, human babies are born with quite a lot of built-in capacities for understanding a number of things. You've heard of instincts, I assume. Also, a baby is not a blank slate when it is born. It already has a brain that has been forming for 9 months.
c) How do we judge human understanding as compared to other complex data processors?
You tell me. How would you do it?
d) How do we judge AI complex data as compared to human complex data processing?
Judge in what way? What are you trying to measure?
 
I'll not argue vehemently that an LLM can do math, but I don't think it's really accurate to say that it can't do it at all. You do have to check it, and it's not good at doing it. So I agree with all of the above comments. It is about language, after all, but if you are using a query with some basic math it can generally do it. It may not be doing it efficiently, and it is often wrong, but it is often right as well.

Some LLMs also do interact with math modules. I don't know which do or don't though. The ones I've played around with are ChatGPT 3.5, Bravo, and Bing Chat. My interest is not using them for math however.
 
What "we" want is a more capable AI, able to handle language and mathematics.
But you are asking for an AI that can do it all and that is overreach. Humans get qualified in specialties, perhaps we should allow AI to develop their own special areas "of interest".
Already, it is clear that some people believe that ChatGPT can do math, even though it really can't. If they are going to rely on it to do math, they will be misled.
But you are talking about ChatGPT, which is expert in languages. This is indicated by the label ChatGPT, not MathGPT.
And there are already MathGPT that are expert in maths but less proficient in languages. As with humans there is only a limited amount of memory required for either discipline. Apparently the new GPT6 has astounding capabilities.
An AI able to interact using natural language and able to do some maths would be a more useful general-purpose AI. That's the motivation. Isn't it obvious?
Oh, but there are already general purpose AI for everyday use, but you are asking advanced expertise in both areas and that is unfair. We don't ask this of humans.

This is encouraging:
Google scientists create an exciting AI model that can solve unsolvable math problems
Google DeepMind researchers have unveiled FunSearch, an artificial intelligence (AI) model capable of solving intricate mathematical problems previously deemed unsolvable.

https://www.indiatoday.in/technolog...e-unsolvable-math-problems-2476284-2023-12-15


Why do AI models hallucinate?

If an AI model is trained on a dataset comprising biased or unrepresentative data, it may hallucinate patterns or features that reflect these biases. AI models can also be vulnerable to adversarial attack, wherein bad actors manipulate the output of an AI model by subtly tweaking the input data.
https://www.ibm.com/topics/ai-hallucinations#


And surprise, a MathGPT,

MATHGPT: FIRST-YEAR STUDENTS BUILD AI-POWERED MATH TUTOR AT HARVARD HACKATHON

FRI 09.08.23 / SARAH OLENDER
The idea the trio decided on was MathGPT, an AI-powered tutoring platform that educates users on math problems and provides feedback so they can identify and fix their mistakes. It was intended to be a teaching experience where students could put on their augmented reality or virtual reality goggles and feel like they were in a learning environment.
The trio wanted the user to be able to look at a problem, ask for help, and have the AI explain the steps needed to solve the problem as the user works through it — just as a human tutor would.
more..
https://www.khoury.northeastern.edu...d-ai-powered-math-tutor-at-harvard-hackathon/
 
Write4U:
But you are asking for an AI that can do it all and that is overreach.

I would imagine that's the target for AI researchers. Why do you think it's overreach?
Humans get qualified in specialties, perhaps we should allow AI to develop their own special areas "of interest".
That would require an AI that is able to understand things well enough to decide, in the first place.
But you are talking about ChatGPT, which is expert in languages. This is indicated by the label ChatGPT, not MathGPT.
Yes. Wasn't I clear? I said that if people think ChatGPT understands math, they are mistaken.
And there are already MathGPT that are expert in maths but less proficient in languages.
Are there? Is a MathGPT accessible publicly?
As with humans there is only a limited amount of memory required for either discipline.
I don't know what you mean.
Apparently the new GPT6 has astounding capabilities.
Like what?
Oh, but there are already general purpose AI for everyday use...
Didn't you just say there's only a limited amount of memory available, implying that therefore general purpose AI is impossible?


What is this general purpose AI you are thinking of?
... but you are asking advanced expertise in both areas and that is unfair.

Why? The memory problem?
We don't ask this of humans.
Advanced expertise in more than one area, you mean? Don't we?
Why do AI models hallucinate?
Try to focus. You're drifting off topic again.
 