Members of the forum have increasingly been pasting responses from AI chatbots into sciforums threads in recent months. A not-unusual pattern is that a person queries a chatbot about a topic under discussion and then posts the AI's reply into the discussion thread more or less verbatim.
I would like to develop a policy, to apply going forward, as to what is and is not acceptable use of AI in sciforums discussions. To this end, I would appreciate your thoughts on the matter.
I think it should be a no-brainer that any AI content in a member's post should be clearly flagged as such by the person who posts it. We have already seen a number of instances of people trying to pass off an AI-generated response as their own words, although this hasn't been very common so far.
The more general question of the extent to which AI-generated text should be acceptable in discussions here - especially verbatim cut-and-pastes, which usually strike me as very low-effort on the part of the people who post them - is an important one.
There can be value in having AI summaries of particular lines of argument on certain topics, or in having some basic facts or definitions produced by an AI bot. However, I suspect that many people who post AI material here are not especially vigilant about the accuracy or reliability of the material they are posting under their user name.
I would like, therefore, to hear your thoughts and suggestions on how we might work some guidelines on the use of AI into our Posting Guidelines. That is: what guidelines or policies would you like to see implemented (and, potentially, moderated), if any?