The car is working, even if not everybody likes it. It's clear the gaskets are too expensive to fix right now, but not the brakes, since all that requires is a change to the rules.
A car really shouldn't be used without working gaskets. And if someone isn't going to fix those, why add more things to the list?
I don't want to discuss people either. I want to discuss ideas.
Then debate what is written. That's it. The idea only forms when the symbols on the screen take shape in your head. Until then, whether typed by a person or just copied from LLM output, there is no "idea" as far as you're concerned.
But ideas do not come from chatbots; they come from people. At best they are generated from what chatbot users put in, so the users are the ones I want to talk with.
So you want to chat with people, rather than chat about things. Okay.
We know for a fact that chatbots do not produce logical arguments; they mimic - they copy - logical arguments. Badly. We should not have the site turn into a big game of broken telephone.
This is factually wrong.
They are better at producing logical arguments than many on this site. That they don't understand them, and that it is just predictive text, doesn't negate that they produce them. Forgers still produce. Mimics still produce.
Yes it does. Not because it comes from a chatbot per se, but because we are basically arguing with a million monkeys on a million typewriters.
So what?!
If what is produced makes sense, why do you care? If it doesn't make sense, call it out. If a poster keeps not making sense, report them. Simples.
Imagine you, Sarkus, were presented with a manuscript that looked a lot like Shakespeare's Hamlet but were told it came from the room of a million monkeys with a million typewriters.
Are you going to treat it as you would a Shakespearean work, going through it stanza by stanza, analyzing first whether it is a "real" line from Shakespeare's Hamlet, not a random typo, and only then deciding to analyze its style? Or are you going to say, "Look, the monkeys are extremely error-prone; we know this. You know what I do trust? I trust this published Hamlet paperback by Penguin Classics. It may still have errors, but they will be the exception rather than the rule."
This is disingenuous, methinks. We are not talking here about whether the LLM produces things accurately. If I need to rely on something I would go to a trusted source. Just as if any poster here makes a claim we ask for a trusted source for that claim.
So we're not here talking about the LLM being considered a trusted source. I mean, if what you're against is "well, chatGPT says so then it must be true!" then sure, I'd agree, but this is again no different to saying "well, if DaveC says so then it must be true!".
So what we're talking about, afaict, is just someone delegating their thought to predictive text. So, no, the Hamlet example isn't really applicable to this forum.
I get you. In principle. If we were immortal and retired. But in practice, we should cut out the low-quality stuff to raise the quality of the site.
Oh, I agree. Unfortunately for this site, chatGPT is often more sensible, less fallacious, and also less dishonest. It's not creative, for sure, but then, well, other posters.
We should force users to do their own critical thinking - something that is lacking in posts of LLM regurgitation.
No. We shouldn't force anybody to do anything of the sort. We should encourage, sure. But, to me at least, an argument presented is an argument presented, irrespective of source.
This is a pathetic excuse for a discussion.
And you'd be more than justified in putting the poster on ignore. But not all LLM output is quite so ridiculous. Again, if you disagree with the facts, post sources, and if the person/LLM refuses to budge then report them. Simples.
But it didn't. The user has been indoctrinated into thinking the idea makes sense because they think chatbots are smart.
And the person would quickly find themselves on ignore lists, warned, and possibly banned, all within existing rules.
The chatbot is short-circuiting their critical thinking (that's their post #1), and then encouraging them to commit an 'appeal to authority' fallacy (that's everything they post from post #3 onward).
If only critical thinking was a requirement for this site!

If they are using the LLM as an authority then that is an issue, but the LLMs don't do that, the poster does.
Again, nobody said anything about "debating people". That is a category error.
Unfortunately it's what you're saying when you deny input produced by things that are not people.
But I assume you do want to debate thoughts.
No, ideas. Which, for me, are created when I make sense of the symbols I see on the screen. I couldn't care less what put those symbols there. If I respond and get a reasonable reply, the discussion continues. If not, I'll get bored and move on.
They are not arguments. They mimic human speech patterns. There is a qualitative difference. Don't make the same mistake so many are making about what chatbots are doing.
I'm not making any such mistake, but you are misunderstanding what an argument is.
Okay, is this an argument?
P1: Bob is a dog.
P2: All dogs are blue.
C: Bob is blue.
I consider this to be a valid argument. Not sound, but certainly valid. Assuming you agree that it is an argument (specifically a syllogism), then what specifically turns it into a non-argument when I tell you that it was produced by chatGPT?
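As an aside: validity here is purely structural, so even a mechanical proof checker will accept it regardless of who, or what, typed it. A minimal Lean sketch, with illustrative names (`Dog`, `Blue`, `bob`) of my own choosing:

-- P1: Bob is a dog.  P2: All dogs are blue.  C: Bob is blue.
-- The conclusion follows from the premises by structure alone;
-- nothing about the source of the text enters into it.
variable {α : Type} (Dog Blue : α → Prop) (bob : α)

example (p1 : Dog bob) (p2 : ∀ x, Dog x → Blue x) : Blue bob :=
  p2 bob p1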
If you don't agree there is a fundamental difference, go back to my million monkeys example and ask yourself how many monkey manuscripts you will patiently wade through, knowing they are just ersatz simulacra.
I've explained above why I consider that to be a disingenuous example.
Question: does it make a difference if I tell you that much of this response was suggested by chatGPT? If it doesn't, and you think this post an acceptable response, are you really not just talking about formatting and use of colloquial language?
Just asking.
