Use of AI on sciforums. Policy discussion

James R

Just this guy, you know?
Staff member
The incidence of members of the forum pasting in responses from AI chatbots into threads on sciforums has been gradually increasing in recent months. A not-unusual pattern is that a person will query a chatbot about a topic that is under discussion and post the AI reply that they receive into the discussion thread more or less verbatim.

I would like to develop a policy, to apply going forward, as to what is and what is not acceptable use of AI in sciforums discussions. To this end, I would appreciate your thoughts on the matter.

I think it should be a no-brainer that any AI content in a member's post should be clearly flagged as such by the person who posts it. We have already seen a number of instances of people trying to pass off an AI-generated response as their own words, although this hasn't been very common so far.

The more general question of the extent to which AI-generated texts should be acceptable in discussions here - especially verbatim cut-and-pastes, which usually strike me as very low-effort on the part of the people who post them - is an important one.

There can be value in having AI summaries of particular lines of argument on certain topics, or in having some basic facts or definitions produced by an AI bot. However, I suspect that many people who post AI material here are not especially vigilant about the accuracy or reliability of the AI material they are posting under their user name.

I would like, therefore, to hear your thoughts and suggestions on how we could possibly work some guidelines on the use of AI into our Posting Guidelines. That is: what guidelines or policies would you like to see implemented (and, potentially, moderated), if any?
 
I was skeptical but use it a lot now.

Agree that AI search results should be cited as AI, preferably put in quotation marks to distinguish them from the rest of the post.
Also cite the citations: Copilot, for instance, will reference sources at the end of an answer. If it does not do this automatically, you can ask it to provide them.

Asking AI a question and then just posting what it spits out should not be allowed.
It's a supplement, not a substitute.

Pretty much what you said, warnings for ignoring this rule.

As these things get more sophisticated they will be harder to spot.
 
Sometimes the AI produces nonsense: things that simply aren't true.
 
It is good for stats as a rule; nothing too cutting-edge in physics, conceptually.
This is why it is a good idea to request sources, so you can check which sites it searched and then check for yourself.
 
Just yesterday I asked Alisa (that's our home-grown AI): what is the largest German Shepherd in the world? She told me about some shepherd, in some kennel, weighing 45 kg. But my German Shepherd puppy already weighed more than that at 9 months, and I know plenty of adult males that weigh over 60 kg. In other words, who knows where Alisa gets her information.
 
Ask her what sources she searched.
 
She lists her sources at the bottom. There was some website there. I think she just picks them at random, by matching words or phrases.
 
Test her, some are better than others.

When I first tried it, I decided to mess with it and gave it some maths, which it answered incorrectly. Now it is telling me to be more precise with my questions!
 
Do you have Alisa too? Actually, she's not bad, and I have also started turning to her often, especially when I need to find something quickly. But you always have to remember that she can be wrong too, and not take her answers as gospel.
 
I use Copilot. It has changed a lot in just a few months.

Copilot original: made maths mistakes, and some other mistakes, and could not post mathematical formulae. References were hit and miss.

First upgrade: three settings, quick search, deeper search and essay, taking about 3 seconds, 30 seconds and ten minutes respectively. You guess which would suit the question; a paper comparison needs the deeper search, for instance.

Second upgrade: four settings now, but it, not you, chooses which to use based on the question. All references cited, and it can do maths with formulae. I have not caught it out yet, but I need to do more trials in that area.
 
For us it's all different. You just tap Alisa and she gives you a result.
 
I have a visceral dislike (not necessarily shared by others) for these AI-based passages that are posted, and my response to them is to scroll past them, and probably the rest of the post as well.

If they are to be allowed, I would prefer them to be submitted in the form of footnotes rather than taking centre stage as something one has to wade through.

Failing that, I would like them to be very clearly demarcated with spacing (and clear quotation marks, even formatting) so that they do not interrupt the post itself.

Possibly it should be expected that the poster summarize what in particular they think is relevant about the AI passage to their post and to the topic.

Agreed that citations need to be included (and understood by the poster, ideally).

I do use the AI results myself when I search the internet for simple stuff like opening times, and to look for results that might otherwise be buried under the search results, so I feel a little hypocritical. But this is only going to get worse, or at least harder to deal with.
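One lightweight way to implement the demarcation suggested above is with standard forum BBCode. The exact tag behaviour on sciforums may differ, so treat this as a sketch rather than a prescription:

```
My own comment on the topic goes here, in my own words.

[QUOTE="ChatGPT, asked about topic X"]
...the verbatim AI output, clearly fenced off from my own words...
[/QUOTE]

Source the AI cited: [URL]https://example.com[/URL]
```

The quote attribution makes the AI origin and the prompt visible at a glance, and keeps the pasted material visually separate from the poster's own commentary.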
 
That is: what guidelines or policies would you like to see implemented (and, potentially, moderated), if any?
AI is simply an automated system which pastes together human-generated material on a topic, using stochastic algorithms. Citing the primary source(s) which the AI used should be required; this also benefits the websites and content providers which may depend on clicks. (Content providers should be directly credited, not plagiarized.) AI can also hallucinate, overlook the obvious, and come up with gross errors, so it should be required that anyone researching with AI seek further verification from primary sources. Finally, failing to do one's own data gathering and reasoning is a trend that is doing cognitive damage to an entire generation; all forums should stand against this horrible trend.
 
Amen to all that.

I would add that AI text is nearly always prolix and full of buzzwords designed to impress or bamboozle - the opposite of clarity and brevity. It seems to me that people can by all means use AI as a souped-up search engine, but the references it consults should always be cited and accompanying text should be written by the poster. Maybe we can allow AI to straighten out grammar and syntax for non-native English speakers, but whole-cloth AI text generation should not be permissible.

One thing we should definitely not allow is thread responses written by AI. I am not wasting my time talking to some fucking robot.
 
I think it should fall under the same rule as simply posting a video. In both cases the poster is effectively delegating their response to someone else and adding nothing of their own. In both cases it's not the content of what they post that is really the issue: we can ignore the content or not at our own discretion. The issue is that if they can't talk about what they post, can't defend it, can't support it, and in some cases can't even comprehend what they are posting, then that's a problem, just as it would be with any other post they make, AI-generated or otherwise.

At the moment it's relatively easy to tell when someone is just posting AI content, but that's mostly a matter of formatting and linguistic styling. I'm sure you could get ChatGPT to give the same response in a more conversational, colloquial, minimally formatted manner that would more closely resemble a typical person's post. (It would still lack character, but would be less obviously AI.)
At that point the issue is no longer really that it is AI-generated, but rather the poster's inability or unwillingness to support it, to talk coherently about it, to cite sources, to own what they have posted, and so on. And that should be the same regardless of where the content has come from.

So, basically, my issue in this regard would be with the poster's behaviour, not with the copy/pasting of AI itself. Insofar as the copy/paste can be used as a proxy for the other unwanted behaviour? Sure, okay. But I'd give the benefit of the doubt until a clear(er?) pattern emerges.

Hope that makes sense? And none of it a copy/paste of AI... except the bits that were! ;)
 
Yes I think the analogy with posting videos without explanatory commentary is quite apt. In both cases the poster has lazily outsourced their thinking and their communication, leaving respondents reacting to something to which the poster has made minimal personal input.

There may be difficulties monitoring this, as you say, once LLMs get better stylistically, but at the moment it is fairly obvious. I would suggest the rule should be that AI output is not allowed and that it is up to the poster to ensure their posts do not look like AI output, because if they do, the mods have carte blanche to send them to the cesspool. I suppose if the day ever comes that AI output looks as good as a human post, then we would be less annoyed by it, so letting the odd one slip through would not be so much of a problem.
 
Is it possible to run any post through an app that will spit out whether or not the content is AI?

BTW, I have noticed that individual people (whom I happen to know*) get described by AI summaries. If these summaries are incorrect, defamatory, or even just unwelcome, would that person have recourse to the law?
Seems open to manipulation and abuse.

* they drill down very deep and in double quick time.
 
Yes.

GPTZero is one such service.

I just ran your post through it and it concluded your post is 100% human.

Then I ran this post through it:


And it thinks it is at least 50% AI generated.
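For anyone who wants to automate that kind of check, detectors like GPTZero also expose HTTP APIs. The sketch below is a minimal Python example; the endpoint URL, header name, and response shape are assumptions based on GPTZero's published API, not verified here, so check the current docs before relying on it.

```python
# Hedged sketch: querying a GPTZero-style AI-text detector.
# The URL, "x-api-key" header, and response shape are assumptions.
import json
import urllib.request

GPTZERO_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint

def build_request(post_text: str, api_key: str) -> urllib.request.Request:
    """Build the HTTP POST request for the detector; does not send it."""
    payload = json.dumps({"document": post_text}).encode("utf-8")
    return urllib.request.Request(
        GPTZERO_URL,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
    )

def ai_probability(response: dict) -> float:
    """Extract the 'probably AI-generated' score from an assumed
    response shape: {"documents": [{"completely_generated_prob": ...}]}"""
    return float(response["documents"][0]["completely_generated_prob"])

# Example with a canned response, so no network call is made:
canned = {"documents": [{"completely_generated_prob": 0.5}]}
print(ai_probability(canned))  # 0.5
```

To actually send the request you would pass the built `Request` to `urllib.request.urlopen` and feed the parsed JSON to `ai_probability`; note that a single probability score is not proof either way, as the posts above illustrate.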
 
Is there any policy in place at the moment? (We seem to be getting overrun.)

I am going to suggest that any unsignalled predictive text in any poster's content should attract an immediate warning, followed by a perma-ban in very short order in the event of non-compliance.

This is creepy stuff.

Do we have just the one AI poster at present?
 