Adapt the forum rules to ban AI usage

Several posts have recently appeared in this forum that were clearly and detectably AI-generated. The posters also show a general lack of understanding of how to use AI to generate text that gets to the point.

The effect is that they post large walls of text, far longer than any sane person would write by hand on any given topic. Readers are understandably put off by having to read through those texts, and this then gets discussed in those threads instead of the actual topic.

Also, the users posting the AI output don’t cross-check it for correctness and validity. The posts usually contain age-old information on game mechanics that was changed 5-10 years ago, or plain false information.

This forum should be about people interacting with people, not AI interacting with people. To me it is acceptable to run a text you have written through a translation tool before posting, but not to have an AI formulate a long, wrong wall of text that offends users who come to the forums for a discussion.

Please ban AI usage in the forums. Or, if that is not acceptable within your vision for the forums, please put other measures in place to mitigate the effects of AI usage, such as a stricter limit on post sizes, or having an AI analyze the posts themselves and reject them if they contain multiple different topics that can’t be covered in a standard forum discussion.

12 Likes

I use AI for a tonne of things. It can quickly collect articles on topics I’m interested in, it can summarise longer articles into something more digestible, it can quickly write little scripts or small applications for me, and it can generate music and pictures.
What I don’t let it do is spew walls of text for me to copy-paste at people I disagree with. I like arguing on the internet, but arguing with an LLM is utterly unsatisfying, and the people using it for this don’t seem to understand that you need to explicitly set a limit on how long the output should be, lest it simply spew paragraph after paragraph of slop utterly devoid of content or arguments. If someone were using it to back their own arguments with data, I wouldn’t mind, nor would I mind if they wrote their own posts based on what the LLM tells them.
So I agree, we need rules against posting slop on the forums. There are many services online that can automatically detect AI slop, and plugins for forums that make use of them. Perhaps add something so that if a post registers as more than 25% LLM-written, it cannot be posted. This would prevent “LLM arguments against ganking thread #3780” while not stopping people from using it simply to check their spelling, correct their grammar, or translate their posts: while using an LLM to edit a human-written text may still register as AI slop, it will do so at a much lower rate than if the entire thing were made by the chatbot.
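For what it’s worth, the threshold idea could be wired up as a simple pre-submission hook. This is only a sketch: `detect_ai_fraction` is a placeholder for whatever score a real detection service (GPTZero or similar) would return over its own API, and the 0.25 cutoff is just the number floated above.

```python
# Sketch of a threshold-based pre-post hook. The detector is a placeholder:
# a real forum plugin would POST the text to a detection service's API and
# parse the returned score. Everything here is illustrative, not a real API.

THRESHOLD = 0.25  # reject posts scored as more than 25% LLM-written


def detect_ai_fraction(text: str) -> float:
    """Placeholder for a call to an external AI-detection service.

    A real implementation would send `text` to the service and return its
    estimated LLM-written fraction (0.0 to 1.0). We return 0.0 here so the
    sketch stays self-contained and runnable.
    """
    return 0.0


def should_reject(text: str, threshold: float = THRESHOLD) -> bool:
    """Return True if the post should be blocked from submission."""
    return detect_ai_fraction(text) > threshold
```

A human-edited post scoring, say, 0.15 would pass, while a fully generated wall scoring 0.9 would bounce; the whole scheme of course stands or falls with how reliable the detector actually is.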

For example the above text, when fed into https://gptzero.me/, will register as 100% human. This is accurate, I didn’t use a chatbot at all.

Here is the text translated into German by ChatGPT:

Ich nutze KI für eine Menge Dinge. Sie kann schnell Artikel zu Themen sammeln, die mich interessieren, längere Artikel in etwas leichter Verdauliches zusammenfassen, mir schnell kleine Skripte oder kleine Anwendungen schreiben und für mich Musik und Bilder erzeugen.
Was ich sie nicht machen lasse, ist, Textwände auszuspucken, die ich dann per Copy-paste an Leute schicke, mit denen ich nicht einer Meinung bin. Ich diskutiere gern im Internet, aber mit einem LLM zu diskutieren ist völlig unerquicklich, und die Leute, die es dafür benutzen, scheinen nicht zu verstehen, dass man ausdrücklich eine Begrenzung dafür setzen muss, wie lang die Ausgabe sein soll, damit es nicht einfach Absatz um Absatz von inhalts- und argumentlosem Geschwafel ausspuckt. Wenn jemand es benutzen würde, um die eigenen Argumente mit Daten zu untermauern, hätte ich nichts dagegen, und ebenso wenig hätte ich etwas dagegen, wenn jemand seine eigenen Beiträge auf Grundlage dessen schreibt, was das LLM ihm sagt.
Also stimme ich zu: Wir brauchen Regeln dagegen, solchen Müll in den Foren zu posten. Es gibt viele Onlinedienste, die KI-Müll automatisch erkennen können, und Plugins für Foren, die sie dafür nutzen können. Vielleicht sollte man etwas hinzufügen, sodass ein Beitrag nicht gepostet werden kann, wenn er als zu mehr als 25 % von einem LLM geschrieben erkannt wird. Das würde „LLM-Argumente gegen Ganking-Thread Nr. 3780“ verhindern, ohne Leute daran zu hindern, es einfach nur zum Prüfen ihrer Rechtschreibung, zum Korrigieren ihrer Grammatik oder zum Übersetzen ihrer Beiträge zu benutzen; denn obwohl die Nutzung eines LLM zur Bearbeitung eines von Menschen geschriebenen Textes weiterhin als KI-Müll registriert wird, geschieht das in einem deutlich geringeren Maß, als wenn der ganze Text vollständig vom Chatbot erzeugt wurde.

This, too, registers as 100% human. If I instead ask the LLM to rephrase my entire post, I get:

I use AI for a wide range of tasks. It is useful for quickly gathering articles on topics of interest, summarizing longer pieces into more accessible formats, writing small scripts or lightweight applications, and generating music or images.
What I do not support is using it to produce long, copy-pasted responses aimed at people one disagrees with. I enjoy online debate, but engaging with LLM-generated arguments is deeply unsatisfying. Many people who use these tools in that way also seem unaware that output length needs to be constrained explicitly; otherwise, the result often becomes paragraph after paragraph of verbose but low-substance text with little real content or argumentation. By contrast, I would have no objection to someone using AI to support their own arguments with data, or to write their own posts based on information an LLM helped them gather.
For that reason, I agree that forums should adopt rules against posting AI-generated low-quality content. There are already many online services that can detect likely AI-written text, as well as forum plugins that can integrate such detection. One possible approach would be to prevent posts from being submitted if they are assessed as more than 25% LLM-generated. That would help reduce repetitive AI-generated debate threads without preventing people from using these tools for more limited purposes such as spell-checking, grammar correction, or translation. While AI-assisted editing of a human-written post may still register to some degree as machine-generated, it would likely do so at a much lower rate than content produced almost entirely by a chatbot.

This, too, registers as 100% “mixed”, i.e. the base text was originally written by a human, but all of it has gone through an LLM. If I instead ask the LLM to just write me a post about why we shouldn’t use LLMs to post on forums, I get:

Using LLMs to post on forums is usually a bad idea because forums are not just information repositories; they are social spaces built around human judgment, accountability, and context.
The first problem is authenticity. A forum post is normally read as a statement from a person who actually holds the view, has the experience, and can defend it. If the text was generated by an LLM, that assumption becomes false or at least weakened. Even when the content is technically accurate, the implied claim of authorship is misleading. The reader is not just evaluating words; they are evaluating whether another human actually thought them.
The second problem is epistemic pollution. LLMs are very good at producing plausible language, not reliably grounded knowledge. On a forum, that matters. Bad posts do not need to be obviously wrong to be harmful. It is enough that they are shallow, overconfident, or subtly distorted. Once generated text is mixed into discussion at scale, readers spend more effort filtering style from substance. The signal-to-noise ratio drops.
Third, LLM posts tend to flatten local context. Good forum contributions are usually shaped by tacit norms: what has already been discussed, what counts as evidence there, which points are tired, which jokes or references matter, what kind of precision the community expects. LLM output often imitates generic “helpful internet prose” instead of the actual culture of the space. That makes discussions more homogeneous and less alive.
There is also an accountability problem. A human poster can be questioned: “Did you test this?”, “What do you mean?”, “Why do you believe that?” If the post was substantially produced by a model, the author may not actually understand or endorse the details. That breaks the normal feedback loop that makes forums useful. Discussion becomes interaction with text, not with a mind.
Another issue is unfairness. Forums often reward effort, expertise, and the ability to explain clearly. Using an LLM to mass-produce polished answers lets people simulate competence more cheaply than they earned it. That can crowd out genuine contributors, especially in smaller communities where trust and reciprocity matter more than raw output volume.
There is a long-term degradation effect too. If many people use LLMs to post, forums risk becoming recursive training sludge: models generating text based on earlier model-shaped text, with human experience increasingly diluted. The archive becomes less valuable over time because it contains fewer traces of actual lived reasoning and more synthetic consensus.
A defender might say that LLMs are just tools, no different from spellcheck or grammar correction. That is only partly true. Spellcheck changes wording; an LLM can generate claims, arguments, tone, and implied expertise. That is not mere assistance at the sentence level. It can replace the act the forum exists to elicit: one person thinking in public.
The strongest argument against LLM posting is therefore not that the outputs are always false. It is that forums depend on a social contract: people show up as themselves, say what they actually think, and take responsibility for it. LLM-mediated posting weakens that contract. Once that becomes normal, forums stop being conversations among people and become staging grounds for automated text production. At that point the form remains, but the value is gone.
A concise version:
People should not use LLMs to post on forums because forums are valuable precisely as records of human thought, experience, and accountability. LLM-generated posts weaken authenticity, lower the signal-to-noise ratio, flatten community-specific norms, and let users simulate understanding they may not possess. Even when the output is fluent, it often substitutes synthetic plausibility for real engagement. A forum full of such posts may still look active, but it becomes less trustworthy, less informative, and less human.

This wall of slop naturally registers as 100% AI. And if you actually go through the effort of reading all these versions, I personally feel that my original version, the machine-translated German version, and the AI-polished version all read well. If I disagreed, I’d feel like replying with a written post of my own. But the one entirely written by the LLM is, frankly, trash, and I would stop reading just a few sentences in. Hell, I generated the thing, and even I couldn’t be bothered to read through it all. Why should someone else bother to do so, let alone write a reply?

3 Likes

You’re absolutely right. This should be a PVP forum, not a PVE forum!

9 Likes

I feel seen.

Agreed. Excessive AI usage to spam threads with posts nobody wants to fully read isn’t the regular ‘unrelated to the topic’ spam, but with its vast volume of low-effort posts it effectively does the same thing.

4 Likes

Yes. And it’s usually the people who are making troll posts begging for high sec to be made completely safe.

1 Like

And I suppose it is just coincidence that Aiko’s name begins with AI? HMMMMMM?


6 Likes

Well yeah… She’s the one person no one would ever accuse of wanting High Sec to be safe.

Aiko absolutely wants highsec to experience Safety.

2 Likes

Cosigned.

What would be funny is the chaos that would bring this game crashing down if High Sec were removed. Jita would go away, the entrance gates for new players would be camped, no one could survive long enough to join a starting corp… it’d be a fun doomsday event.

I mean, Zarzakh kind of works. And the handful of FW trade hubs.
You can absolutely have trade hubs without CONCORD oppression.

I just wish that people would do more AI art and expand their minds instead of harping on one tiny, annoying subject such as a constantly safe EVE environment.

The poor attempts of folks trying to express thoughts through AI just fail miserably every single time. They actually think they’ll be okay and get the point across. That’s what I want to see gone for good.

I’m in the beginning stages of trying to do some art using AI. It’s pretty amazing, all the different aspects of a theme you can dig up and apply to a blank sheet.

We need more of that… not walls of gibberish.

:rose:

With strangers? Non-alliance members? I’d have to see that to believe it.

Check it out. I draw your attention in particular to the lines for Zarzakh, Huola, Alparena, and other low and null systems with more than ten units for sale. These are by no means big trade hubs, but trade hubs they are. Their use is simple: not all FW players can be bothered with hauling their own hulls out into the warzone, so other players will seed a handful of these staging systems with common hulls and fittings (at a premium, ofc). Now, lowsec and Zarzakh are uniquely suited for trade, as they have NPC stations, don’t allow bubbles, and have relatively low populations. You couldn’t transplant Jita into Pochven and expect it to remain prosperous, but removing highsec would by no means kill trade; it would just spread it out over dozens of lowsec systems (because any one system that gets too populous becomes a less enticing trade hub).

It would still kill trade for the small guys, though. There would be no way to safely get new ships, especially if Jita were turned into null sec along with the rest of High Sec. I still maintain that it would be an “end of the world” event :slight_smile:

I agree with the overall sentiment, but I have my doubts that a written rule of this kind would be in any way enforceable. We’d simply be left to the whims of poor ISD to make suppositions, which is more or less what we already have.

Using the forum’s ignore feature might really be the best option for reducing slop interactions.

ISD already is monitoring the forum and taking action where necessary.

Asking them to add the rule ‘no excessive AI usage’ to the list that already includes rules like ‘no spam’ makes it simpler for all. And yes, it would be up to the whims of ISD to make that decision, but the AI usage is often clear enough that most people see it, and I would think ISD would too.

The forum ignore feature helps if one person has a problem with another person, but it is not the right solution when the majority of the forum has a problem with one person’s AI spam. Other people would still be interacting with the perpetrator, derailing threads, even if you ignored the person yourself.

1 Like

ISD?

Imperial Star Destroyer?
Independent School District?

ahhhh… Interstellar Services Department

Got it.


100% guaranteed AI slop, and what High Sec would look like without the High Security being in place.

Also, some of my posts are more than a few paragraphs in length. And what about Frostpacker’s infamous word-salad posts? How would those register? Or would the system simply melt down to make it stop? I know I have read a few lines into a unique Frostpacker haiku and realized there was not enough brain or eye bleach that was ever going to make it go away…