Site rule addendum: restrictions on AI use - an open discussion

If you don't like what someone posts, either its content or its source (e.g. an LLM), then here's an idea: don't respond to it. Say "thanks very much, but I'd like to have a discussion with a human and not an LLM", and leave it at that. If the person almost exclusively uses LLMs in lieu of thinking for themselves, then put them on ignore.
That has the overall effect of degrading the quality of the site, which affects membership and activity.

I mean, you could say the same things about spam and porn. Just ignore it.

I can't see that having a rule specifically to disallow it is worthwhile,
Well, to encourage the use of this site for its intended purpose: discussion.
 
That has the overall effect of degrading the quality of the site, which affects membership and activity.
So does not adequately moderating the existing rules. And thus this site is where it is. One more rule inadequately moderated is not going to affect things.
I mean, you could say the same things about spam and porn. Just ignore it.
Porn is different, as there are age restrictions, and issues of legality, that one needs to consider.

The difference between spam and the use of LLMs would seem to be two-fold:
- Firstly, relevance. Spam is rarely, if ever, relevant to the issues being discussed. The use of an LLM is neutral with regard to relevance - i.e. what matters is the use to which it's put.
- Secondly, there would be the issue of prevalence. Sure, there was one thread where I've seen a user do nothing but post ChatGPT responses, but that is the only thread where it would seem to have been an issue. There are other instances in other threads, but the usage is hardly prevalent. Unless I just don't frequent the threads where it's becoming an issue.
Well, to encourage the use of this site for its intended purpose: discussion.
The use of LLMs can achieve that. It may not be the person's own thoughts or words, but, well, they are words. If there is an argument against proposition X, why should it matter if that argument is put together by a person or by an LLM? Is the idea of discussion to debate ideas, or to debate people?
 
So does not adequately moderating the existing rules. And thus this site is where it is. One more rule inadequately moderated is not going to affect things.
Justifying inaction due to other problems.

"My gasket head is leaking; there's no point in fixing the brakes."


Porn is different, as there are age restrictions, and issues of legality, that one needs to consider.
OK, soft porn.

It may not be the person's own thoughts or words, but, well, they are words.
I don't think the forum should encourage the discussion of just any words. If that were so, we might as well start a dictionary forum, and go through the words alphabetically.

No, I think the words and thoughts of people should be a goal.

If there is an argument against proposition X, why should it matter if that argument is put together by a person or by an LLM?
Because they're not arguments. They're predictive chat.

Because this isn't a forum for talking to third parties by way of mouthpieces. The third parties are not here to explain - let alone defend - themselves.


It is not endemic to artificial sources, by the way. Historically, people have tried quoting human authors, only to find themselves unable to defend the quotes, because the words and thoughts are not their own.

"Well Aristotle said X, and who are you to disagree?"
"The problem is that Aristotle was unaware of Y. We don't know what he thinks. Let's ask him!"
"You can't; he's dead."
"That's right. He is not here. You are here, making your argument here, in this thread. It is your assertion being made, and up to you to defend them, here, now."



Is the idea of discussion to debate ideas, or to debate people?
LLMs do not have "ideas". That's the point.

A user is certainly free to be inspired by their own discussions with an LLM. They can be inspired to write a post about their ideas, inspired by the LLM, but passed through their own critical minds, and into their own words, which they take personal responsibility for. Think of it like pre-processing a nugget of an idea.

Finally, when their own ideas are challenged, a user can say "I thought of that" rather than "I have no idea. Let me go insert your question verbatim into my favourite chatbot and copy-paste the answer back here."
 
Justifying inaction due to other problems.
No, suggesting that the focus should be on correcting the existing problems, especially when I don't see that the LLMs are a problem.
Add the LLMs to the list if you want, but if the car isn't working because of the gaskets, and you're not even working on them, then why switch to the brakes??
OK, soft porn.
It's still not relevant to discussion, so should be excluded.
The product of LLMs can be relevant, just as a Google search could be relevant.
I don't think the forum should encourage the discussion of just any words. If that were so, we might as well start a dictionary forum, and go through the words alphabetically.

No, I think the words and thoughts of people should be a goal.
Then we disagree on the purpose of discussion here. I come to discuss issues. Not people.
Because they're not arguments. They're predictive chat.
And that, DaveC, is a genetic fallacy - arguing against the source of what is said by the poster rather than what they actually post.

Let's be quite clear: the argument is the set of statements, or premises, and the conclusion thereby reached. Whether it comes from predictive chat or not doesn't alter that.

So if an argument is presented, one shouldn't care whether it is put together by a person or by an LLM. However it gets to what the member posts, the argument (and it is an argument) is presented and should surely be discussed on its merits.
Because this isn't a forum for talking to third parties by way of mouthpieces. The third parties are not here to explain - let alone defend - themselves.
The member who makes the post is there to explain themselves. If someone posts something then they have to defend it. It shouldn't matter what the source of what they type/post is. If what they post is nonsense, argue against it. Point out the flaws. If the poster doesn't respond, so be it. If they come back with more, who cares about the source? If what the person posts is bollocks, though, report the poster for talking bollocks. If they come back with a reasonable argument, debate the argument, irrespective of where it has originally come from.
It is not endemic to artificial sources, by the way. Historically, people have tried quoting human authors, only to find themselves unable to defend the quotes, because the words and thoughts are not their own.
And at the point they can't continue defending what they type, treat them the same as anyone else who can't defend themselves. If you feel they're not addressing points, or misrepresenting (because if they don't understand things they may well do), or being factually incorrect and not correcting, then report them. Still, the source of what they produce should not matter.
LLMs do not have "ideas". That's the point.
So what? If an idea forms in your head, who cares whether the source of the words that put it there was an LLM or a person? You should focus on that, not the source. If you disagree with it, debate it. If the poster digs themself into a hole with it and refuses to budge, report them. The same as anyone else who does likewise, regardless of source.
A user is certainly free to be inspired by their own discussions with an LLM. They can be inspired to write a post about their ideas, inspired by the LLM, but passed through their own critical minds, and into their own words, which they take personal responsibility for. Think of it like pre-processing a nugget of an idea.

Finally, when their own ideas are challenged, a user can say "I thought of that" rather than "I have no idea. Let me go insert your question verbatim into my favourite chatbot and copy-paste the answer back here."
I honestly can't see the issue. But then I'm here to debate / discuss ideas, not people. I'll look at whatever argument is posted, not the source. If I find a poster can't defend what they type, I'll react accordingly. Irrespective of what the source is.

But, meh, whatever.
 
No, suggesting that the focus should be on correcting the existing problems, especially when I don't see that the LLMs are a problem.
Add the LLMs to the list if you want, but if the car isn't working because of the gaskets, and you're not even working on them, then why switch to the brakes??
Car is working, even if not everybody likes it. It's clear the gaskets are too expensive to fix right now, but not the brakes, since all that requires is a change to the rules.

Regardless, this is a matter of personal taste, mine vs yours. Neither of us is wrong. Except you, because this is my thread. :D

It's still not relevant to discussion, so should be excluded.
The product of LLMs can be relevant, just as a Google search could be relevant.

Then we disagree on the purpose of discussion here. I come to discuss issues. Not people.
I don't want to discuss people either. I want to discuss ideas.

But ideas do not come from chatbots; they come from people. At best, they are generated by what chatbot users put in, so the users are the ones I want to talk with.

And that, DaveC, is a genetic fallacy - arguing against the source of what is said by the poster rather than what they actually post.
No. I am not against chatbots simply because they are chatbots.

We know for a fact that chatbots do not produce logical arguments; they mimic - they copy - logical arguments. Badly. We should not have the site turn into a big game of broken telephone.


Let's be quite clear: the argument is the set of statements, or premises, and the conclusion thereby reached. Whether it comes from predictive chat or not doesn't alter that.
Yes it does. Not because it comes from a chatbot per se, but because we are basically arguing with a million monkeys on a million typewriters.

Imagine you, Sarkus, were presented with a manuscript that looked a lot like Shakespeare's Hamlet but were told it came from the room of a million monkeys with a million typewriters.

Are you going to treat it as you would a Shakespearean work, going through it stanza by stanza, analyzing first whether it is a "real" line from Shakespeare's Hamlet, not a random typo, and only then deciding to analyze its style? Or are you going to say, "Look, the monkeys are extremely error-prone; we know this. You know what I do trust? I trust this published Hamlet paperback by Penguin Classics. It may still have errors, but they will be the exception, rather than the rule."

The member who makes the post is there to explain themselves. If someone posts something then they have to defend it. It shouldn't matter what the source of what they type/post is. If what they post is nonsense, argue against it. Point out the flaws. If the poster doesn't respond, so be it. If they come back with more, who cares about the source? If what the person posts is bollocks, though, report the poster for talking bollocks. If they come back with a reasonable argument, debate the argument, irrespective of where it has originally come from.
I get you. In principle. If we were immortal and retired. But in practice, we should cut out the low-quality stuff to raise the quality of the site.

We should force users to do their own critical thinking - something that is lacking in posts of LLM regurgitation.

And at the point they can't continue defending what they type, treat them the same as anyone else who can't defend themselves. If you feel they're not addressing points, or misrepresenting (because if they don't understand things they may well do), or being factually incorrect and not correcting, then report them.
It is a poor use of anyone's time on SciFo.

"To be or not to ba."
"This is wrong."
"To be or not to bb."
"This is also wrong."
"Chabot says it is bb, so it's bb."
"The correct line is 'to be or not to be'."
"Hang on. I'm putting that into my chatbot. It says to be or not to bc."
"The correct line is 'to be or not to be'."
"Chatbot disagrees."

This is a pathetic excuse for a discussion.

So what? If an idea forms in your head,
But it didn't. The user has been indoctrinated into thinking the idea makes sense because they think chatbots are smart.

Chatbot is short-circuiting their critical thinking (that's their post #1), and then encouraging them to commit an 'appeal to authority' fallacy (that's everything they post from post #3 onward).

I honestly can't see the issue. But then I'm here to debate / discuss ideas, not people.
Again, nobody said anything about "debating people". That is a category error.

But I assume you do want to debate thoughts.


I'll look at whatever argument is posted,
They are not arguments. They mimic human speech patterns. There is a qualitative difference. Don't make the same mistake so many are making about what chatbots are doing.


If you don't agree there is a fundamental difference, go back to my million monkeys example and ask yourself how many monkey manuscripts you will patiently wade through, knowing they are just ersatz simulacra.
 
Car is working, even if not everybody likes it. It's clear the gaskets are too expensive to fix right now, but not the brakes, since all that requires is a change to the rules.
A car really shouldn't be used without working gaskets. And if someone isn't going to fix those, why add more things to the list?
I don't want to discuss people either. I want to discuss ideas.
Then debate what is written. That's it. The idea is only formed when you take the symbols on the screen and they form in your head. Until then, whether typed by a person or just a copy of LLM output, there is no "idea" as far as you're concerned.
But ideas do not come from chatbots; they come from people. At best, they are generated by what chatbot users put in, so the users are the ones I want to talk with.
So you want to chat with people, rather than chat about things. Okay.
We know for a fact that chatbots do not produce logical arguments; they mimic - they copy - logical arguments. Badly. We should not have the site turn into a big game of broken telephone.
This is factually wrong.
They are better at producing logical arguments than many on this site. That they don't understand them, and that it is just predictive text, doesn't negate that they produce them. Forgers still produce. Mimics still produce.
Yes it does. Not because it comes from a chatbot per se, but because we are basically arguing with a million monkeys on a million typewriters.
So what?!
If what is produced makes sense, why do you care? If it doesn't make sense, call it out. If a poster keeps not making sense, report them. Simples.
Imagine you, Sarkus, were presented with a manuscript that looked a lot like Shakespeare's Hamlet but were told it came from the room of a million monkeys with a million typewriters.

Are you going to treat it as you would a Shakespearean work, going through it stanza by stanza, analyzing first whether it is a "real" line from Shakespeare's Hamlet, not a random typo, and only then deciding to analyze its style? Or are you going to say, "Look, the monkeys are extremely error-prone; we know this. You know what I do trust? I trust this published Hamlet paperback by Penguin Classics. It may still have errors, but they will be the exception, rather than the rule."
This is disingenuous, methinks. We are not talking here about whether the LLM produces things accurately. If I need to rely on something I would go to a trusted source. Just as if any poster here makes a claim we ask for a trusted source for that claim.
So we're not here talking about the LLM being considered a trusted source. I mean, if what you're against is "well, chatGPT says so then it must be true!" then sure, I'd agree, but this is again no different to saying "well, if DaveC says so then it must be true!". ;)

So what we're talking about, afaict, is just someone delegating their thinking to predictive text. So, no, the Hamlet example isn't really applicable to this forum.
I get you. In principle. If we were immortal and retired. But in practice, we should cut out the low-quality stuff to raise the quality of the site.
Oh, I agree. Unfortunately for this site, chatGPT is often more sensible, less fallacious, and also less dishonest. It's not creative, for sure, but then, well, other posters. ;)
We should force users to do their own critical thinking - something that is lacking in posts of LLM regurgitation.
No. We shouldn't force anybody to do anything of the sort. We should encourage, sure. But, to me at least, an argument presented is an argument presented, irrespective of source.
This is a pathetic excuse for a discussion.
And you'd be more than justified in putting the poster on ignore. But not all LLM output is quite so ridiculous. Again, if you disagree with the facts, post sources, and if the person/LLM refuses to budge then report them. Simples.
But it didn't. The user has been indoctrinated into thinking the idea makes sense because they think chatbots are smart.
And the person would quickly find themselves on ignore lists, warned, and possibly banned, all within existing rules.
Chatbot is short-circuiting their critical thinking (that's their post #1), and then encouraging them to commit an 'appeal to authority' fallacy (that's everything they post from post #3 onward).
If only critical thinking was a requirement for this site! ;)
If they are using the LLM as an authority then that is an issue, but the LLMs don't do that, the poster does.
Again, nobody said anything about "debating people". That is a category error.
Unfortunately it's what you're saying when you deny input produced by things that are not people.
But I assume you do want to debate thoughts.
No, ideas. Which, for me, are created when I make sense of the symbols I see on the screen. What put those symbols there I couldn't care less about. If I respond and get a reasonable reply, the discussion continues. If not, I'll get bored and move on.
They are not arguments. They mimic human speech patterns. There is a qualitative difference. Don't make the same mistake so many are making about what chatbots are doing.
I'm not making any such mistake, but you are misunderstanding what an argument is.
Okay, is this an argument?
P1: Bob is a dog.
P2: all dogs are blue.
C: Bob is blue.

I consider this to be a valid argument. Not sound, but certainly valid. Assuming you agree that it is an argument (specifically a syllogism) then what specifically turns it into a non-argument when I tell you that it was produced by chatGPT?
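(As an aside, the validity point is mechanical enough to check with a proof assistant. Here is a minimal sketch in Lean 4 - my own formalisation, not anything from this thread, with "Dog", "Blue" and "bob" as placeholder names - showing the inference goes through for any predicates whatsoever, so validity cannot depend on who, or what, wrote the premises down. Whether P2 is actually true, i.e. soundness, is a separate matter.)

Code:
-- Placeholder predicates and an arbitrary individual; nothing here is claimed to be true of the world.
variable {Thing : Type} (Dog Blue : Thing → Prop) (bob : Thing)

-- P1: Dog bob, P2: ∀ x, Dog x → Blue x, C: Blue bob.
-- The conclusion follows by applying P2 to P1, regardless of the source of the premises.
example (p1 : Dog bob) (p2 : ∀ x, Dog x → Blue x) : Blue bob :=
  p2 bob p1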
If you don't agree there is a fundamental difference, go back to my million monkeys example and ask yourself how many monkey manuscripts you will patiently wade through, knowing they are just ersatz simulacra.
I've explained above why I consider that to be a disingenuous example.


Question: does it make a difference if I tell you that much of this response was suggested by chatGPT? If it doesn't, and you think this post an acceptable response, are you really not just talking about formatting and use of colloquial language?

Just asking. ;)
 
Question: does it make a difference if I tell you that much of this response was suggested by chatGPT?
No, that's fine. If I disagree with you, you're not going to hide behind a chatbot as your authority though. The buck stops with you, right?



If it doesn't, and you think this post an acceptable response, are you really not just talking about formatting and use of colloquial language?

Just asking. ;)
You said "much" of this response. Not all of it. I'm not talking about word count here - I'm saying you - with some degree of intelligence - have exercised executive control over what, ultimately, was communicated.

And you are taking personal responsibility for every single word you posted here, regardless of whether or not a chatbot offered them to you.

That's perfectly fine. It's what I have been saying all along.
 
No, that's fine. If I disagree with you, you're not going to hide behind a chatbot as your authority though. The buck stops with you, right?

...

You said "much" of this response. Not all of it. I'm not talking about word count here - I'm saying you - with some degree of intelligence - have exercised executive control over what, ultimately, was communicated.

And you are taking personal responsibility for every single word you posted here, regardless of whether or not a chatbot offered them to you.

That's perfectly fine. It's what I have been saying all along.
If you're happy that the content can be dressed up as my own, but not if it's a straight cut/paste, then it really is just formatting and language, rather than content, that you're railing against. And I don't think there should necessarily be rules on style, which is what your proposal would be.

As alluded to previously, I would probably get a better and more honest response from ChatGPT than some who frequent this site. Beyond that, just treat the poster as you would any other. Ignore them for reasons you would any other. Report them for reasons you would any other.

One of those reasons would be the poster not taking responsibility for what they write, or insisting upon chatGPT as an authority it doesn't have etc. This is no different regardless of poster, surely? If I just kept posting what "my brother" told me to write, and didn't understand it, and kept saying "well, if my brother says it then it must be correct" etc, then you'd tire of them, report them (eventually, I guess). Would that be any different?
 
If you're happy that the content can be dressed up as my own, but not if it's a straight cut/paste, then it really is just formatting and language, rather than content, that you're railing against.
That is your claim, not mine. I stated in my opening post and several times thereafter that it encourages much more than "formatting and language"; it encourages review, critical thinking, editing, paring, brevity, clarity of thought, comprehension, executive control and personal ownership. All things in line with the ethos of SciFo. It would directly raise the content quality of the site for interested readers.

As alluded to previously, I would probably get a better and more honest response from ChatGPT than some who frequent this site.
The point being that the chatbot is not a member here.

Beyond that, just treat the poster as you would any other. Ignore them for reasons you would any other. Report them for reasons you would any other.
Rules are made to address common problems. There are many subrules to address specific common problems. Have a read; they are itemized. That way, threads don't have to get polluted with explaining them, long-form, every single time.

I would not care to have to explain to every user that tries it why we don't tolerate spam. The threads would be unreadable. And remember, it's not just the immediate two participants at-issue here.

One of those reasons would be the poster not taking responsibility for what they write, or insisting upon chatGPT as an authority it doesn't have etc. This is no different regardless of poster, surely? If I just kept posting what "my brother" told me to write, and didn't understand it, and kept saying "well, if my brother says it then it must be correct" etc, then you'd tire of them, report them (eventually, I guess). Would that be any different?
And if that were a frequent problem - frequent, specific, and employed by multiple users, enough that it has to be addressed in posts time and time again - yes, encapsulating it in a rule would be very useful.
 
That is your claim, not mine.
Again, as with others, this would be a case of you not realising quite what your words are saying; if one can dress up LLM output in format and language that makes it acceptable then it is, regardless of what you otherwise claim, just about formatting and language.
I stated in my opening post and several times thereafter that it encourages much more than "formatting and language"; it encourages review, critical thinking, editing, paring, brevity, clarity of thought, comprehension, executive control and personal ownership. All things in line with the ethos of SciFo. It would directly raise the content quality of the site for interested readers.
"Encourages" is great. Banning content simply due to its source is, imo, not. Genetic fallacy.
And when you can't spot LLM input due to a camouflage of language and format, it rather makes the rule petty, as you are in essence just banning a certain formatting and language style.
If it looks like LLM output, and reads like LLM output, then bad. If it doesn't look or read like LLM output, then good. There need be no thought, no critical thinking, no editing, paring, brevity, comprehension, beyond the formatting and language. But every poster takes personal ownership of what they post, irrespective of source. They take ownership by making the post.
The point being that the chatbot is not a member here.
No, but the poster is, and the poster is entitled to use whatever source they want. If the poster misbehaves by appealing to a fallacious authority, by not correcting mistakes, by showing repeated inability to understand: report them. Their source is surely up to them. If they want to post wholesale, let them. If you don't want to respond to what you think is LLM output, don't.
Rules are made to address common problems. There are many subrules to address specific common problems. Have a read; they are itemized. That way, threads don't have to get polluted with explaining them, long-form, every single time.
It's hardly a common problem, DaveC. There was one poster that I've been aware of that repeatedly trotted out LLM output, and I quickly ignored them. Simples. The same I would ignore any poster who doesn't understand what they're talking about.
And there's no need to pollute the thread with explaining the rules: just report them and their breach. If the moderator wants to intervene, they will. It's not rocket science.
I would not care to have to explain to every user that tries it why we don't tolerate spam. The threads would be unreadable. And remember, it's not just the immediate two participants at-issue here.
I'm not sure why this tangent. The issue is about whether something should be a rule, not whether it takes time to explain to every user the rule they've broken etc. Just report a post and move on. The issue is whether posting LLM output should be acceptable or not. If it's acceptable then your concern with having to explain a breach is moot, isn't it??
And if that were a frequent problem - frequent, specific, and employed by multiple users, enough that it has to be addressed in posts time and time again - yes, encapsulating it in a rule would be very useful.
It isn't a frequent "problem", because it is not against the rules. To accept that it is a "frequent problem" is to beg the question that it is, in the first instance, a problem. And I don't see it. There is poster behaviour already encapsulated in the rules that cover poor usage of LLM output. And given that you seem happy to accept LLM output that has an acceptably stylistic camouflage, I can't see that adding this to the list of rules makes sense.

Killing people = bad
Killing people but making it look like an accident = acceptable
;)

Anyhoo - I've said my piece on the matter, and we're probably just going in circles. :)
 
Chat GPT, just now. Question posed verbatim:

How much time would pass on Earth while a passenger in a spaceship made a fifty light year round trip journey at constant 1g acceleration, experiencing only one subjective year of time?

Answer 1:
Approximately 59 Earth years would pass while a spaceship undergoing 1g constant acceleration and deceleration over a 50-light-year round trip allows the passenger to experience only 1 year of proper time.

Exit chatbot, so it forgets. Restart.

Answer 2.
About 2.09 Earth years will pass.

(Oh look ChatGPT just invented faster than light travel!!)

Exit chatbot, so it forgets. Restart.

Answer 3.
To make a 50 light-year round trip at 1g while experiencing only 1 year, the Earth time would be astronomically huge — millions of years.
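For reference, here is roughly what the standard relativistic rocket formulas give - a minimal sketch, assuming a symmetric accelerate-then-decelerate profile on each leg, c = 1, and 1g ≈ 1.03 ly/yr²; the variable names are mine, not the chatbot's. At a constant 1g, a 50-light-year round trip works out to about 13 ship years and about 54 Earth years, so the "one subjective year" premise isn't even achievable at 1g, and no answer under 50 Earth years can possibly be right.

Code:
import math

# Sketch of the textbook relativistic rocket relations (c = 1, years and light-years).
g = 1.03                      # 1g expressed in ly/yr^2 (approximate)
leg = 50.0 / 4.0              # distance covered in each of the four accel/decel phases

tau_phase = math.acosh(1.0 + g * leg) / g             # proper (ship) time per phase
t_phase = math.sqrt((1.0 + g * leg) ** 2 - 1.0) / g   # coordinate (Earth) time per phase

print(f"Ship time:  {4 * tau_phase:.1f} years")   # ~12.9 years
print(f"Earth time: {4 * t_phase:.1f} years")     # ~53.7 years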
 