There’s nothing wrong with lengthy paragraphs or word-salad posts if someone makes the effort of writing them themselves. People can read, laugh, or skip.
It’s mainly that abusing AI can generate low-quality text in vastly bigger volume and with much less effort than it takes others to read or even skip, which is what makes it such awful spam.
Just add it to the rules and act on it when someone shows signs of over-using AI to spam large amounts of text. It happens often.
For me, an official statement in the forum rule set, together with a new flag category for such posts, would be the right way.
My guess is that forum users would already flag the relevant AI posts themselves, so the effort to detect or verify them wouldn’t be that high. Falsely reported posts could even be marked directly as checked and AI-free, with the flag button disabled for that post as long as it stays unchanged; posts deemed AI slop could be taken down.
And as Gerard stated above, the term slop for me includes “generated low quality text in vastly bigger volume and with much less effort than it takes others to read”.
It just eats your lifetime, and if you start a discussion with the poster it leads nowhere. Some even feed the answers back into the AI and slop you further.
If ISD are willing to put that work in, then I wish them every success. Again, I agree with your sentiment. If the issue has reached the point that Gerard points out:
Just add it to the rules and act on it when someone shows signs of over-using AI to spam large amounts of text. It happens often.
…the effort required might be more than ISD anticipate. As you said, having a policy on the books, whether or not it’s possible to fully enforce it, might be preferable to the absence of policy. Particularly when many other media platforms have recently announced official positions one way or the other.
I am a little concerned for ISD’s collective work-life balance in that event.
I find AI is usually wrong. Not by a lot sometimes, but still enough not to be trusted. While I watch some just in the hope I “might” find some education, it usually proves disappointing. Ban AI misinformation from the forums. Meh…
Edit: There’s already enough player misinformation on the forums!
For now, one can tell the difference more often than not. At the same time, it is getting harder and will only get harder as AI grows more sophisticated.
The real question is this: if we can no longer trust what a search engine pumps out, where and how do we get information that is real? What program out there is not, in part or in whole, either influenced by AI or built with an AI component?
That is mainly a problem of the younger generations that chose to rely on social media, TikTok, and now search engines for news. Of course… rebellion against what older people use and such things… but then came AI, and nothing is trustworthy anymore. Everywhere, slop is presented as fact.
Guess what? The old structures are still in place. There are news agencies with reporters. There are newspapers, magazines, TV news, and libraries, and all the other infrastructure shunned by young people is still available and can be trusted further than any of the “new media”. Of course, in some countries this is also difficult, and you have to know which media follow an agenda and which are neutral. Maybe in your country there is no such media left.
But still, we live in a global world where you can reach out to other media. You just have to learn to use it.
But how do you know your reporters aren’t reporting falsehoods? Whether it be conventional propaganda or because their source was a convincing AI fake on Telegram, how can you trust any information?
The problem with today’s reporters is that they are too personally opinionated in their own newscasts. That is the result of some severe indoctrination during recent presidential terms. It’s really hard for me to believe anything I read nowadays.
It just spreads like wildfire from one source to another. The younger phone zombies are sucking it all up and relaying it with their own personal opinions mixed in. The world isn’t looking very good for the next 15 to 20 years, knowing that I may have to actually rely on these people in one form or another.
As far as AI is concerned…..
I roll my eyes every time I hear ideas about how AI can run our company better in meetings I’m in. It’s like nobody can leave our systems alone long enough for anything to settle anymore. It’s just mayhem all the time now.
I’ll be retiring in a few more years, so I’ll be thankful not to have to worry about herding phone zombies around to do simple tasks that they are learning off TikTok.
The problem there is that a lot of reporters and news outlets are buying into the AI craze. Whether it lasts remains to be seen. But simply saying “trust reporters and news outlets, and learn to use them” only adds a step that arrives at the same place.
Trusting a news outlet or reporters without understanding that they themselves are far too often using AI is, in my opinion, a little naive. AI is being marketed as the latest and greatest, and for now it is barely regulated, hardly understood, and becoming the go-to place.
The old saw goes: build a better mousetrap and the world will beat a path to your door. The better mousetrap has been built, and the mice are lining up to try it.
The only advice I have heard for overcoming AI mis/disinformation is to use critical thinking skills and verify, as much as you are able, what is being presented.
Even then, be aware that some AI have faked research and studies, and some have actually been published in peer-reviewed journals. In this age of mis/disinformation, it is critical not just to check the source, but to check the source itself. Scary? It gets far worse than that, as no one has any real idea how this tech works, no one is sure where it can go, or what happens when we get there.