How can CCP increase node performance?

RNG is perfect in this case. Want to improve your odds of not randomly losing a bunch of ships? Don’t bring more ships than you need, and don’t bring so many ships that you cause excessive lag. If you have 6000 ships on grid and the server needs to drop 10% of them to be able to perform, the numbers are high enough that the odds are it will be pretty balanced. Lag is already used as a weapon. It should have a chance to backfire.

It’s called the “Dunning-Kruger effect”. Check out my homepage for it.

  1. place a supporting citadel nearby
  2. place clones there
  3. change ships to shuttles
  4. go to the destination grid
  5. watch the enemy’s ships blow up your shuttles

Now try rethinking your idea with ship cost in mind.

There’s no advantage to bringing a bunch of shuttles. You won’t accomplish your objectives of either driving off the opponent or destroying the target. Bring a bunch of shuttles to defend a citadel, and you’re gonna lose the citadel. Bring a bunch of shuttles to attack a citadel, and you’re not gonna pop the citadel. Do it if you want, but it would make as much sense as it does now. The tactic would shift to holding back actually useful reinforcements until you can effectively deploy them. The best strategy becomes finding a sweet spot where win or lose, both sides don’t want to cause too much tidi.

It’s still probably a terrible idea, but I’m thinking it might just be an awesome terrible idea.

edit: what if the losses were load-based? Whoever’s causing the most load on the server is most likely to get popped?
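As a rough illustration of the load-based idea: a minimal Python sketch, assuming the server already had some per-ship load score (which is exactly the open question), of dropping ships with load-weighted randomness. The function name and the scores are hypothetical, not anything CCP has described.

```python
import random

def pick_ships_to_drop(ship_loads, drop_fraction=0.10):
    """Pick which ships to drop, weighted by the (hypothetical)
    server load each one causes. Heavier load = likelier victim."""
    n_drop = int(len(ship_loads) * drop_fraction)
    ships = list(ship_loads)
    dropped = []
    for _ in range(n_drop):
        # Weighted random pick without replacement.
        weights = [ship_loads[s] for s in ships]
        victim = random.choices(ships, weights=weights, k=1)[0]
        ships.remove(victim)
        dropped.append(victim)
    return dropped
```

The heaviest-load ships are the most likely to be culled, but any ship can get unlucky, which keeps the "lag can backfire" property from above.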

  1. come N hours before the timer
  2. send shuttles to the destination grid until half of the enemy fleet dies
    3.1 if your enemy is not smart - swap the shuttles for regular ships 15 minutes before the timer
    3.2 if your enemy is smart and uses Rifters - swap the shuttles for Caracals

“Blowing” the node cap is not a solution for TiDi; it’s just one more weapon.

Whoever’s causing the most load on the server is most likely to get popped?

How do you measure “most load”?


+1.
This idea is inherently bad, as it revolves around arbitrary notions like “responsibility” or “random is fair”.

Dude, I’m not saying that at all.

But what I’m saying is there should be two threads per ship: one for receiving damage, where other ships put their damage info into a queue for processing, ship repair and such, and another for outgoing damage and all other system work.

Yeah, that’s a shitload of threads, but something like a Phi coprocessor or two would make quick work of it.

That’s better than having 1 thread dealing with all the damage for everyone.

No.

Many things that seem obvious only look true because you have no idea of the real complexity of the problem.

And you don’t either, considering you haven’t actually seen the code. The fact of the matter is, in theory, my ideas would work. Your ideas are assuming the worst.

There are optimizations possible, of course, but there is no such thing as “infinitely scaling.” As to whether your idea is actually plausible, according to what CCP has previously said, no, it’s not. Claiming your idea is good because of your own ignorance is laughable. Go read a decade’s worth of dev blogs, watch a decade’s worth of fanfests, and then you can start to have a vague idea of the problems involved.

Where did I mention “infinitely scaling”? Also, their codebase has changed a lot. It seems as if they’re trying to move away from Stackless Python. That will open a lot of doors previously closed.

So there are three major problems.

  1. The legacy code needs an overhaul.
  2. The people who knew how to do it are gone.
  3. Client/server interaction.

The game uses two sets of code: C++ and Python. Python is where the optimizations need to be made, and by optimizations I mean the ability to use the new high-core-count processors to full capacity. This is where the problems start.

First, before people complain about Python: Python is a very powerful programming language, used for physics simulations in labs and universities across the States and internationally. If you can overhaul the last of the legacy code to use 10-12 cores on some of the new server processors, the game will change dramatically on all levels of performance.

This brings up the last point, client/server interaction. If you change the server code, you will need to overhaul the client code too. To do this would be a monumental task, and the only way to do it would be to put everything else on hold for quite some time.

My experience in software tells me that:

  1. however bad you assume the worst case to be, the reality is worse yet.
  2. somebody has already tried what you are imagining, and failed.
  3. what remains to be done costs so much that, at a reasonable price, we can only do a fraction of what we aim for.

In theory, your idea would NOT work. It has already been discussed, for several other programs. To you it seems obviously true in theory; to me it seems obviously false, by experience. As I said already, the management overhead negates the benefits of highly parallel computing.
THIS is a fact. Just because we use general-purpose GPU programming (GPGPU) does not mean it works in the general case.

It seems you’ve read just enough dev blogs to completely misunderstand what’s involved. Or maybe you didn’t get that from a devblog.

There are certain things Stackless Python is good for, and some things it’s not. Yes, they are moving a lot of things over, and there is a lot they are unloading from Dogma, but Dogma itself is probably going to stay Stackless Python for as long as EVE exists. And there are certain things that must remain single-core, not because of the language but because of the problem itself. Simple example: 2 + 2 = 4. Put the first 2 on one CPU and the second 2 on a different CPU. For either of them to get to 4, you need a scheduling program to decide which core gets to solve the problem and which one has the “correct” answer. The scheduler carries more load than the original problem, and in the end you’ve totally wasted your time by trying to farm the problem out between the two cores at all.
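The scheduling-overhead point is easy to demonstrate in ordinary CPython (plain threads here, not Stackless tasklets, so this is only an analogy): farm trivial additions out to a thread pool and the per-task bookkeeping dwarfs the work itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def add(a, b):
    return a + b

pairs = [(2, 2)] * 10_000

# One thread does all the trivial sums itself.
t0 = time.perf_counter()
inline_results = [add(a, b) for a, b in pairs]
inline_time = time.perf_counter() - t0

# Every trivial sum becomes a scheduled task in a pool;
# the futures machinery plays the role of the "scheduler" above.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    pooled_results = list(pool.map(add, *zip(*pairs)))
pooled_time = time.perf_counter() - t0
```

On a typical machine the pooled version comes out slower by a large factor despite its four workers: the work per task is so small that dispatch dominates, which is the 2 + 2 = 4 argument in miniature.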

There are optimizations. All the calculations that have nothing to do with 2 + 2 = 4 have been moved off Dogma: things like the market and inventory. BIAB means Dogma doesn’t have to figure out either “2” anymore; that’s done on a different server, so Dogma can focus on the actual problem. But in the end, once it’s reduced to just 2 + 2 = 4, only Dogma can do that. On one thread.


This is how I know you don’t know what you’re talking about. Yes, they use Python, but Stackless Python, which does not support multithreading at all. That’s why I’m saying it needs to be deprecated and replaced, so they can start using the right tools for the job.

Yes, and the optimization I’m proposing is: instead of having 2+2, 2+2, 2+2, etc. on the same thread for all ships, have a thread for each ship with its incoming damage queued. If you don’t think that would give a huge performance increase when used with a coprocessor that can run all those threads in parallel, I don’t know what to tell you. And yes, the main problem is that Stackless Python doesn’t allow this, which is why they need to deprecate it and move on to the correct tool.
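For what it’s worth, the proposed structure can be sketched in ordinary CPython (all names here are hypothetical, not EVE code): each ship owns a queue of incoming damage that its own worker thread drains. Note the sketch does nothing to order events across ships, which is where the real difficulty lives.

```python
import queue
import threading

class Ship:
    """Hypothetical per-ship damage processor: incoming damage
    is queued and applied by the ship's own worker thread."""
    def __init__(self, name, hp):
        self.name = name
        self.hp = hp
        self.incoming = queue.Queue()
        self.worker = threading.Thread(target=self._process, daemon=True)
        self.worker.start()

    def _process(self):
        # Drain the incoming-damage queue until the shutdown sentinel.
        while True:
            dmg = self.incoming.get()
            if dmg is None:
                break
            self.hp -= dmg

    def shoot(self, target, dmg):
        # Senders never touch target state directly; they just enqueue.
        target.incoming.put(dmg)

    def stop(self):
        self.incoming.put(None)
        self.worker.join()
```

For example, `a.shoot(b, 30)` enqueues 30 damage on `b`, and `b`’s worker applies it asynchronously. Nothing here decides whether `a` was already dead when it fired, which is the cross-ship ordering problem this design leaves unsolved.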

It is the right tool for the job. Multithreading is for smaller numbers of large, complex equations. Lots of simple calculations that must be processed in sequence are not a good fit for multithreading.

A thread for each ship results in desyncs and dead ships still shooting.

So ships that should already have been blown up still get to apply damage… Yeah…

That’s the most obvious flaw in your suggestion, something any decent programmer would have spotted right away. There are more, but you being such a super duper experienced programmer, I’m sure you’ll figure it out on your own. Or not.

I’m piggybacking on this to give you a more thorough answer (Linus is banned :<)

The short answer is: No.
Quantum computers do compute everything in parallel, but the problem is getting the results out. You can compute it all in parallel, but you have to go back to the classical computing world to read out the data, and you’d essentially read out only one result, at random.

Quantum computers can only provide exponential speedups in cases where there is some algebraic structure to exploit when condensing the exponentially many computations down to a single result. That’s why you can get an exponential speedup for factoring or the discrete logarithm, but not for symmetric cryptography, or really any other useful problem (except the somewhat circular problem of simulating quantum systems).

Just to keep this horrible thread alive, I want to point out some of the use cases of parallel processing, off the top of my head, and why they can function.

  • big amazon/netflix/CDN-type stuff. When an individual puts in a request, that single request doesn’t get split between machines, though it might get handed off between machines (i.e., the login server hands over to the storefront, which hands over to the shipping department). They have parallel routes so they can handle more customers simultaneously, but those customers don’t interact directly. With no interactions, any multithreading scheduler has almost zero load.
  • at-home graphics simulations. Graphics can be split up because, for example, anti-aliasing doesn’t affect where shadows are rendered, and even if conflicts and errors happen, who cares: it’s just graphics, and artifacts happen all the time. There are still some interactions between threads, which puts load on the scheduler, so you hit a point of diminishing returns (otherwise everyone would have a 40 lb graphics card that produced perfect graphics).
  • crypto mining. I’m not as informed here, so my understanding may be wrong, but basically this is infinitely scaling (or at least effectively so). It works by brute-forcing equations: you run huge numbers of individual calculations, most of them fail, and you’re just looking for the few that succeed. None of the calculation runs interact with each other; it’s just a matter of running them and returning the results to the central scheduler, which has very little to do.
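The “no interactions” structure of the mining bullet is easy to sketch: a toy nonce search over SHA-256, not real mining. (In CPython, threads won’t actually speed up this CPU-bound loop because of the GIL; the point is only that the chunks share no data.)

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def search_chunk(start, stop, prefix="00"):
    """Scan one nonce range independently; no chunk needs any
    other chunk's data, so there is nothing to coordinate."""
    hits = []
    for nonce in range(start, stop):
        digest = hashlib.sha256(str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            hits.append(nonce)
    return hits

# Split the search space into independent chunks and farm them out.
chunks = [(i, i + 5_000) for i in range(0, 20_000, 5_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: search_chunk(*c), chunks))
found = sorted(n for hits in results for n in hits)
```

Each chunk can run on any worker, in any order, with no coordination beyond collecting the results, which is why this class of problem scales out almost linearly on real hardware (GPUs and ASICs in practice).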

Where agents in the equations interact, a developer has to make sure they interact in the correct order so that the correct solution comes out. With complex equations and few interactions, the scheduler represents very little relative overhead; but when each equation is simple and has lots of interactions, the scheduler gets so busy that it becomes most of the overhead. In the case of EVE, the equations themselves are relatively simple, and the problem is the sheer number of interactions. Those interactions would add so much load to EVE’s hypothetical scheduler that the scheduler would carry more CPU load than the equations themselves, and it would be worse than self-defeating.

I’m not a software engineer, I don’t have enough knowledge to provide solutions. I haven’t invested enough effort into a particularly deep understanding, but as an interested party with a good memory and a quick uptake, I’m pretty sure my overview is close enough to work as a rough primer.
