Best EVE Players

:thinking: Research for ‘The Plan’ started in mid-2009, was then tested for 3 months, and was finally presented to the EVE Online community in February 2010… :wink:

4 Likes

Thank you. I will check out their vids… and add the name to the incomplete, flawed list.

1 Like

You should read threads instead of spamming them.

1 Like

CCP needs to adapt the forum to ChatGPT spammers and automate the detection of these GPT texts so they are flagged as AI and we don’t even have to bother opening this trash.

1 Like

@Rebelde_planet You could also not read and post, so I don’t have to see your trash.

2 Likes

@Meona Sherie

I think it has become increasingly evident over the last several months that the problem we are facing in the official forums is not only the presence of ChatGPT-generated posts but the sheer volume and frequency at which they appear, creating a situation in which the average user is forced to sift through an endless tide of low-quality, formulaic, repetitive, and ultimately unhelpful material. While one could make the argument that such contributions are “technically” on topic or at least tangentially related to ongoing discussions, the truth is that they are functionally indistinguishable from spam, since they neither provide new insights, nor foster meaningful dialogue, nor demonstrate any real understanding of the game or its community. In this sense, I would argue that we are no longer dealing with individual cases of bad posting but with a structural issue that requires a systemic solution on the part of CCP.

It is no longer realistic, for example, to expect community moderators or volunteer ISD staff to manually review every suspicious post one by one, because the scale of the problem is inherently larger than what human moderation can reasonably handle in an efficient manner. Instead, what is required is an automated filtering mechanism that can reliably detect, flag, and isolate AI-generated content before it ever reaches the broader forum audience. This is not an abstract demand or a technically impossible challenge; other online platforms, both large and small, have already implemented similar systems with varying degrees of success. The essential idea is that the forum software itself should be able to identify linguistic patterns, repetition rates, and structural markers that are highly correlated with machine-generated text. Once flagged, these posts could be automatically marked as “AI-generated” with a visible tag or relegated to a secondary section that most users can safely ignore.
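To make the “repetition rates” idea concrete, here is a toy sketch of what such a signal could look like. This is purely hypothetical illustration, not anything CCP or any forum software actually ships; the trigram approach, the threshold, and the function names are all my own assumptions:

```python
import re
from collections import Counter

# Hypothetical heuristic (not a real forum feature): score a post by how
# often it reuses the same 3-word phrases (trigrams). Machine-written filler
# tends to recycle stock phrasing more than casual human forum posts do.
def repetition_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) < 3:
        return 0.0
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)  # 0.0 = no repeats, 1.0 = all repeated

def flag_as_ai(text: str, threshold: float = 0.2) -> bool:
    # Posts scoring above the (assumed) threshold would get the visible
    # "AI-generated" tag or be moved to a secondary section.
    return repetition_score(text) > threshold
```

A real deployment would combine several such markers rather than relying on one crude score, but even this shows the flagging can be cheap and fully automatic.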

The benefits of such an approach are self-evident. First and foremost, it would drastically reduce the amount of wasted time spent by users clicking into threads that appear to be genuine discussions but turn out to be nothing more than generic, synthetic paragraphs with no actual player perspective. Second, it would protect the integrity of the forum as a space for human interaction, because at its core a community is supposed to be a place where people exchange experiences, insights, and disagreements rooted in lived engagement with the game, not a recycling bin for automated text blocks that could just as easily have been posted on any other gaming forum. Third, it would lessen the frustration and hostility that inevitably arise when users repeatedly encounter this kind of content, because instead of wondering whether the person they are replying to is real or not, they would immediately know whether a contribution is human-written or AI-generated.

Some may argue that AI-generated content is harmless or even occasionally informative, and while I concede that there may be isolated cases in which a ChatGPT-style post incidentally provides a correct answer to a basic question, the overwhelming reality is that the cost far outweighs the benefit. For every one semi-useful post, there are dozens that dilute the conversation, derail discussions, and bury valuable human contributions under layers of verbosity. The net effect is that the forum becomes less navigable, less readable, and ultimately less appealing for veteran players who might otherwise contribute thoughtful posts, but who now hesitate because they feel their effort will be drowned out by noise.

From a design perspective, the implementation of automated AI detection could be done in multiple ways. The simplest approach would be keyword and phrase pattern recognition, which, while imperfect, would already filter out a large portion of the most obvious offenders. A more advanced system could incorporate natural language processing models designed specifically to identify AI-like syntax, which tends to be overly generic, balanced, and lacking the stylistic irregularities of genuine human speech. An even more effective strategy might combine automated detection with a community reporting feature, where users can quickly flag suspected AI content, which would then be cross-checked against the detection model before final action is taken. This hybrid approach would minimize false positives while ensuring that genuinely disruptive AI spam is caught early.
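The hybrid approach described above could be sketched roughly like this. Everything here is an assumption for illustration (the `Post` shape, the stock-phrase stand-in for a real NLP classifier, the thresholds); the point is only the cross-check: neither community reports nor the detector alone is enough to quarantine a post, which is what keeps false positives down:

```python
from dataclasses import dataclass

# Hypothetical sketch of the hybrid report-plus-detector scheme.
@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0          # community "suspected AI" flags
    quarantined: bool = False

def detector_score(text: str) -> float:
    # Stand-in for a real NLP classifier: here, just a crude check for
    # stock AI-style phrases. Returns a score in [0.0, 1.0].
    stock_phrases = ("it is important to note", "in summary", "self-evident")
    hits = sum(p in text.lower() for p in stock_phrases)
    return min(1.0, hits / 2)

def review(post: Post, min_reports: int = 3, min_score: float = 0.5) -> Post:
    # Cross-check: act only when both signals agree, minimizing false
    # positives while still catching disruptive AI spam early.
    if post.reports >= min_reports and detector_score(post.text) >= min_score:
        post.quarantined = True
    return post
```

In practice the detector would be a trained model and the report threshold tuned per forum, but the gating logic stays the same.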

1 Like

Denied.

With easy regards
-James Fuchs

3 Likes

fight fire with fire, love it

Yeah, the forum could be spared a lot of low-quality posts. Would make things clean and tidy for sure. As an added bonus the ChatGPT slop spammers would also get all bent out of shape over the fact their junk gets removed. :face_with_hand_over_mouth:

:smirking_face:

Yes. Just imagine: they would be removed just as easily as people spam them. :popcorn:

The only efficient way. :wink:

You seem confused: expressing the same opinion as someone else is not against the rules and is not spam. This is a public forum. If anything, your off-topic ranting is against the rules. Maybe try calming down a bit? :thinking:

:blush:

:rofl: Yes. Aaalready then.

Damn, that wall of text is definitely one hell of a rant…

Nobody is forcing you to read or reply to Forum threads…

’Course, on the flip side, posting ChatGPT seems to generate some activity; at least the Forums don’t appear to be a Ghost Town…

3 Likes

While I appreciate the sentiment that “nobody is forcing” any individual user to read or reply to particular forum threads, I think that framing the issue in such a binary way overlooks the broader systemic consequences that unregulated AI-generated content has on a community platform. The problem is not merely whether or not I personally decide to open a given thread, but rather what happens when the cumulative weight of this content begins to reshape the entire environment in which discussions take place. A forum, by its very nature, is a shared space, and the quality of that space is determined not only by the choices of individual users but also by the overall ecosystem of contributions. To reduce the issue to individual responsibility (“just don’t click on it”) is to ignore the structural dimension of how communities thrive or decline.

The notion that the mere presence of ChatGPT-generated material could be considered beneficial simply because it generates “some activity” is also, in my view, problematic when examined more closely. Activity, in and of itself, is not inherently valuable; what matters is the quality, sustainability, and authenticity of that activity. For instance, one could flood the forum with hundreds of identical posts, and technically, this would constitute “activity.” But would anyone genuinely argue that such activity enriches the forum? The same logic applies to AI-generated text: while it may superficially inflate the number of posts, it does not foster genuine dialogue, nor does it encourage long-term participation, because users quickly recognize the artificial nature of the content and disengage. In other words, equating quantity with quality risks mistaking noise for vitality.

Furthermore, there is a subtle but important cultural dimension to this issue. Community spaces such as these forums are not only about exchanging information, but also about developing a collective identity and culture rooted in shared experiences of the game. When those spaces become saturated with generic AI responses that lack the personal flavor, history, and idiosyncrasies of real players, the cultural fabric begins to erode. Over time, the community risks transforming from a vibrant discourse among diverse individuals into something resembling a sterile database of machine-generated paragraphs. The sense of belonging, the recognition of familiar voices, and the nuanced debates that emerge from lived player experience are gradually displaced by uniformity and detachment.

In addition, I would suggest that relying on ChatGPT content as a stopgap solution to prevent the forums from appearing like a “ghost town” is counterproductive in the long run. While it may temporarily increase post counts or give the illusion of busyness, it simultaneously discourages genuine contributors from engaging. Many players, particularly those who have been part of the community for years, are unlikely to spend their limited free time posting detailed analyses, stories, or feedback if they feel those contributions will be drowned out by AI spam or dismissed as irrelevant in an environment dominated by artificial filler. Thus, while the short-term effect may be a slight uptick in “activity,” the long-term effect is more likely to be disengagement and decline, because people do not invest in spaces that feel inauthentic.

Another angle worth considering is the precedent this sets for moderation and community management. If the argument is simply that “nobody is forced to read,” then one could extend that logic to all forms of spam, trolling, or disruptive behavior. After all, nobody is forced to read offensive comments either, yet the community rightly expects moderators to intervene when such behavior undermines the atmosphere of the forums. The principle at stake is not about individual choice but about collective standards. Allowing unchecked AI spam under the justification that it at least generates activity effectively lowers the bar for what counts as acceptable contribution, which in turn normalizes a cycle of declining quality. Once that bar is lowered, it becomes increasingly difficult to raise it again, because the expectation of minimal standards has already been eroded.

Lastly, I would argue that the perception of the forums matters beyond their internal function. New or returning players who visit these spaces often use them as a proxy for gauging the health of the community at large. If their first impression is that the forums are populated by repetitive, robotic, and disengaged content, they may reasonably conclude that the player base itself is inactive or uninterested, which is not an accurate reflection of the reality within the game. Thus, while ChatGPT spam may superficially mask the problem of low activity, it may actually exacerbate the deeper issue by projecting an image of inauthenticity and detachment to outsiders.

In summary, the point is not whether I as an individual am compelled to read or reply to threads, but whether the collective environment is being preserved as a meaningful, authentic, and engaging community space. Activity generated by ChatGPT posts may stave off the appearance of emptiness in the short term, but it does so at the cost of long-term vitality, cultural authenticity, and community trust. The solution, therefore, is not to dismiss concerns with “nobody is forcing you,” but to acknowledge that forums, like any shared platform, require intentional stewardship to ensure that the activity they host is not merely activity for its own sake, but meaningful engagement that sustains the community over time.

Right? This forum is virtually dead 15 hours out of 24.

And it’s funny that it is only my 3rd thread, of which only one talks about ChatGPT, but I “spam” ChatGPT?

I think that those who complain are jealous that their name isn’t in the list.

1 Like

Well, after reading another one of your ‘Wall Of Text’ rants, and considering how it’s structured, I think you’re another ChatGPT poster… :rofl:

4 Likes

Exactly. I was curious how long it would take for you to notice. The irony of her post is strong. Hence she has proven, even by your own reaction, that less AI slop would actually be a benefit for the forum. :thinking:

:face_with_hand_over_mouth:

1 Like

And here I thought my posts sometimes drag on… But alrighty then.

1 Like

Ironic you would think that, but nope, my statement was purely a sarcastic jab…

Anyway, I have no problem with AI Chat being used to help formulate the original post of a thread, especially if it contains historical info and/or statistics…

However after that the poster should definitely post their own replies without any AI Chat help…

1 Like

I think it is important to underline a point that often gets overlooked in these discussions. Within the game itself, CCP has taken a clear and firm stance against the use of bots. Automated play, whether through input broadcasting, macros, or third-party software, is explicitly forbidden because it undermines the integrity of the sandbox, damages the economy, and erodes the value of authentic player effort. Nobody disputes this principle, because we all recognize that the unique appeal of EVE Online is tied to the reality that every action in the universe has weight precisely because it was carried out by a human being making choices, taking risks, and investing time.

If we accept that standard in the game itself, then it stands to reason that the same principle should apply to the official community platforms that surround the game, including these forums. After all, the purpose of the forums is not merely to generate “activity” for its own sake, but to provide a space where actual players can exchange knowledge, stories, and perspectives based on their own lived experiences in New Eden. When AI-generated text is introduced into that environment, it functions in much the same way as a bot in-game: it inflates metrics, produces artificial activity, and creates the illusion of engagement, but it does not actually contribute the human insight or perspective that makes the forum valuable.

The analogy here is not a stretch. Just as a mining bot in the game can strip asteroids without thought, context, or risk, an AI spamming the forum can produce endless paragraphs without context, personal history, or any evidence of lived gameplay. In both cases, the effect is the same: authentic human contributions are drowned out by automated repetition. In-game, this distorts the market and discourages players from participating in legitimate industry. On the forums, it distorts discussion and discourages players from contributing thoughtful posts, because they feel their voices will simply be lost in a sea of artificial text.

Now, to be absolutely clear, I am not saying that people cannot use AI privately, in the same way that some people might use a notepad, a grammar checker, or even a private coach to sharpen their writing. If someone wants to go to a dedicated AI website, have a conversation, ask for advice, or bounce ideas back and forth in a private setting, that is entirely their prerogative. What I object to is the outsourcing of public forum contributions to AI, because that changes the fundamental nature of the community space. Once those posts enter the forum, they are no longer a private tool, but part of the shared record of discourse, and at that point, the standard of authenticity matters.

To allow AI spam to persist on the forums while prohibiting bots in-game is an inconsistency that ultimately undermines both standards. Players rightly expect fairness, integrity, and authenticity in the sandbox; they should also expect it in the discussions that surround the sandbox. Otherwise, we end up in a situation where gameplay is human-driven but community conversation is automated, which erodes the sense of culture and identity that has always been central to EVE’s appeal.

For these reasons, I strongly believe that CCP should treat forum automation in the same spirit as in-game automation. If bots are banned from flying ships, they should also be banned from posting threads. The principle is the same: this is a game and a community built by humans, for humans, and if someone wants to interact with an AI, they are free to do so in private channels dedicated to that purpose, not here where the expectation is that we are hearing from actual capsuleers with genuine experience in New Eden.

I agree with this, but I personally prefer even the OP to be original.

The reason is:

  1. AI output often contains incorrect, factually wrong info, and
  2. it has a certain style that is annoying to read: overly lengthy compared to how concisely a human could convey the same information, and it often repeats the same details several times.

:thinking:

4 Likes

I nominate Zhilia Mann, not well known but a nice chap nonetheless.

2 Likes