The basic idea is that the damage from a gun/missile/etc. applies to the first object between you and your target.
This would imply a lot for fleet combat especially in engagements with high ship counts. Some immediate implications:
- Large fleets would need to spread out to apply damage effectively
- Small fleets would be able to ‘hide’ using enemy ships as cover, mitigating the damage output of larger battalions
My primary line of thinking is allowing clever piloting to mitigate damage (or even baiting a lazy opponent into firing on their own kin), giving gunners more room for strategy and adding a new layer to combat. Blob tactics would become incredibly risky, as half the fleet would be firing through their allies (conversely, center a blob on a small fleet and watch as your fleet can’t miss a target).
I am aware that the real challenge of this idea would be implementation, due to higher CPU cost. If this isn’t technologically feasible yet, how challenging is it, and how far away are we from being able to implement something like this? Otherwise, I think it would add a lot to EVE combat.
This would also enable deployables on the battlefield to provide cover (gatecamp not entrenched enough? Protect your logi with these!), or certain missiles being able to go around these obstacles if they’re far enough away. See also: those massive EM force fields in PvE sites.
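As a rough sketch of the proposed rule, assuming ships are modeled as simple spheres, the first obstruction along the firing line absorbs the hit (all names here are hypothetical, not anything from EVE’s actual code):

```python
import math

def segment_sphere_hit(p0, p1, center, radius):
    """Return the fraction t along the segment p0->p1 where it first
    enters the sphere, or None if it misses. Standard quadratic
    ray-sphere intersection test."""
    d = tuple(b - a for a, b in zip(p0, p1))       # segment direction
    f = tuple(a - c for a, c in zip(p0, center))   # shooter relative to sphere
    a = sum(x * x for x in d)
    b = 2 * sum(x * y for x, y in zip(f, d))
    c = sum(x * x for x in f) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)           # nearer of the two roots
    return t if 0.0 <= t <= 1.0 else None

def first_object_hit(shooter, target, ships):
    """Damage applies to the first ship between shooter and target."""
    hits = []
    for name, (center, radius) in ships.items():
        t = segment_sphere_hit(shooter, target, center, radius)
        if t is not None:
            hits.append((t, name))
    return min(hits)[1] if hits else None
```

A friendly ship sitting halfway along the firing line would eat the volley instead of the intended target, which is exactly the “fire through their allies” risk described above.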
… except there go the 5000-person fleet fights EVE is (in)famous for. You’d be lucky to get 100 in a fight with these mechanics in play, because line of sight suddenly matters and has to be calculated for and against everything. This kind of stuff wouldn’t be out of place in Legion or maybe Valkyrie.
I doubt it would stop large fleets; they would just have to separate into squads to keep from tripping over each other. But yeah, entrenchment could be tough, although on the bright side, incoming attackers would also have cover.
This isn’t a question of difficulty; this sort of thing is algorithmically a solved problem.
The problem is that it’s an exponential increase in the CPU required to process a gun-firing event compared to the present, never mind the mess of adding collision to missiles or drones.
Just to pass this fundamental barrier would require the rate at which these calculations can be done to increase by at least an order of magnitude, probably more, and that’s not going to happen any time soon. Even assuming Moore’s Law translates into a doubling of CPU speeds (it doesn’t), that would put such an increase at least six years out.
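For reference, the six-year figure follows from plain doubling arithmetic: an order-of-magnitude speedup needs about log2(10) ≈ 3.3 doublings, and at one doubling per roughly two-year cycle that works out to about 6.6 years:

```python
import math

speedup_needed = 10                       # "at least an order of magnitude"
doublings = math.log2(speedup_needed)     # ~3.32 doublings required
years_per_doubling = 2                    # one optimistic "Moore cycle"
years = doublings * years_per_doubling    # ~6.6 years: "at least six years out"
```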
On top of that something like this would require pretty massive and fundamental gameplay changes to collision mechanics, AI behavior, and ship control options for it to work as gameplay. All of that takes even more processing power as well as design time.
Assuming that CCP actually wants to do something like this, and there’s zero indication they do, then it would probably take at least a decade from now.
Hehe we should see if Chris Roberts wants to come work for CCP, and promise this kind of stuff.
One thing I would point out is that Moore’s Law isn’t quite relevant here unless they’ve got uneducated buffoons in their tech department. They’d have to scale out rather than scale up.
One possibility, for example, would be to offload the calculations to a GPU cluster… think bitcoin mining rigs x 100. Send the details off to a GPU rig and wait for an answer back. We may be on a single shard, but that doesn’t mean we’re on a single server.
That being said, I completely agree (as previously alluded to in my last post in this thread) that it’s computationally infeasible, even if CCP wanted to do it.
Moore’s Law (or, more accurately, the speed metrics it plays surrogate for) actually is the primary bottleneck, because one of the things that’s almost impossible to multithread efficiently is the action queue for an MMO. If you multithread things, you risk ending up with a race condition where someone shoots after they’re dead, or is both dead and alive at the same time because one thread applied reps and says “you live!” while another said “you got DD’d and died,” and then it’s a mess to un-$%#@ the whole thing.
You can get around that by doing some pre-analysis on each action, but that’s expensive too, and it still has to happen more or less in order since it’s context-sensitive; one of those actions could be a lock completing (for example) that changes the math.
TLDR: Action queues need to be largely single-threaded, so offloading to a GPU cluster doesn’t really work or help.
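The rep-versus-dead race described above can be shown with a toy game state: the same two events applied in opposite orders give different outcomes, which is why the queue has to stay serialized (the event model here is a made-up illustration, not EVE’s actual mechanics):

```python
def apply_events(hp, events):
    """Apply (kind, amount) events strictly in order. A dead ship is
    final and receives no further events (no posthumous reps)."""
    alive = True
    for kind, amount in events:
        if not alive:
            break                          # dead ships ignore later events
        if kind == "damage":
            hp -= amount
            alive = hp > 0
        elif kind == "rep":
            hp = min(hp + amount, 100)     # reps cap at full hull
    return hp, alive

# The same two events in opposite interleavings, exactly the race a
# multithreaded queue could produce:
rep_first = apply_events(30, [("rep", 40), ("damage", 60)])      # survives
damage_first = apply_events(30, [("damage", 60), ("rep", 40)])   # dies
```

If two threads each applied one of these events without coordination, whichever “won” would decide whether the ship lives, and untangling that after the fact is the mess described above.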
Okay… so you’ve got your “single queue” of events. Let’s say there are players shooting at each other, and each event is “one volley,” simply stated. Let’s also pick an arbitrary number of “calculations,” which you could simply count as FLOPs if you wanted to.
- Permissibility/sanity check farmed off to cluster 1, requiring 10 FLOPs
- Chance-to-hit calculation farmed off to cluster 2, requiring 15 FLOPs
- Damage calculation farmed off to cluster 3, requiring 20 FLOPs
- Collision calculation farmed off to cluster 4, requiring 50 FLOPs
All calculations complete, the results are passed back to whatever called “shot 1,” which then alters the state of the game and proceeds to shot 2.
The bottleneck is of course the largest calculation, not the sum of the calculations, because each can be performed independently with the expectation of a permissible outcome. If it turns out not to be a valid action (he’s lagging, and although his client thinks he’s in range, he’s actually not), then it simply discards the calculations.
As I said, there’s definitely a feasibility factor, but it’s that bottleneck calculation, not the linear sum of it all. You can farm things out and wait for results, just like now, whilst still scaling out horizontally.
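The fan-out described above could be sketched with a thread pool; the four calculation functions here are hypothetical placeholders for the real math, and the shot fields are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the four farmed-out calculations.
def sanity_check(shot):  return shot["range"] <= shot["max_range"]
def chance_to_hit(shot): return 0.75                 # placeholder tracking math
def damage(shot):        return shot["base_damage"]  # placeholder damage math
def collision(shot):     return None                 # id of a blocker, if any

def resolve_shot(pool, shot):
    """Fan the four independent calculations out, then join on all of
    them. Latency is set by the slowest calculation, not the sum of the
    four; if the sanity check fails, the other results are discarded."""
    futures = [pool.submit(f, shot)
               for f in (sanity_check, chance_to_hit, damage, collision)]
    ok, hit_chance, dmg, blocker = (f.result() for f in futures)
    if not ok:
        return None                      # invalid action: trash everything
    return {"chance": hit_chance, "damage": dmg, "blocker": blocker}

with ThreadPoolExecutor(max_workers=4) as pool:
    shot = {"range": 10, "max_range": 20, "base_damage": 50}
    result = resolve_shot(pool, shot)
    # Only here does the game state change, and only then do we move on
    # to shot 2, which is the serialized part of the scheme.
```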
Your real bottleneck is still the linear nature of the whole thing. Using your example, you’re saving at best 50% over just running things linearly, and with how that’s set up you’re going to have to deal with the fact that one of your threads will finish literally five times faster than another, which has the potential to create some nasty memory leaks and other fun (been there, done that, it wasn’t fun).
Plus there’s a certain amount of overhead inherent to farming things out like that.
Basically, assuming something roughly close to your numbers, you’re saving, at best, one Moore cycle’s worth of time off of this whole thing. Maybe two years, and that’s assuming things can even be parceled out how you’re talking about.
More realistically, you end up with something like the sanity check eating 20% of the task time, but it’s best to do that first anyway because you can trash the rest if it fails. Damage and hit are just popping a number off the RNGesus generator at 5% each, and the remaining 70% or so is the collision math, and that’s probably being generous.
The actual collision math can be farmed off and multithreaded, but when I say you’re an order of magnitude more expensive than just “I shoot Tom” like we have now, I’m assuming you’ve optimized the $4!# out of your collision checking and are running it as fast as possible. That’s probably something like O(n log n), because you’re not just checking collision, you’re tracing a ray and then checking collisions along it, so you’re going to be similar to a Cubesort.
The real winner here is if someone can find a way to quickly and easily allocate tasks to pools, similar to how an octree is used to simplify collision detection. Basically going “there’s no way for player X to affect Y, so anything related to X or Y but not subset Z can be split off.” The problem there is that you very quickly take longer to parse things than it takes to simply do the basic calculations in the first place, and making those determinations is a fantastic way for nasty bugs to end up in your code, and then you have a dead Titan DDing someone 30 seconds after it exploded.
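The pool-splitting idea could be sketched as a naive union-find over weapon range; everything here is illustrative (including the deliberately dumb O(n²) pairing), and the caveat above is exactly that a production version has to be cleverer than this or the partitioning costs more than it saves:

```python
def independent_pools(positions, max_range):
    """Group ships into pools such that no ship in one pool can affect
    any ship in another pool this tick. Union-find with path
    compression over an all-pairs distance check."""
    names = list(positions)
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path compression
            n = parent[n]
        return n

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if dist(positions[a], positions[b]) <= max_range:
                parent[find(a)] = find(b)   # union: these two can interact

    pools = {}
    for n in names:
        pools.setdefault(find(n), set()).add(n)
    return list(pools.values())
```

Two fleets farther apart than anything can shoot land in separate pools and could, in principle, be processed on separate hardware without risking the dead-Titan-DD bug.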
We seem destined to spend most of our time arguing with each other, don’t we? Lol.
Okay, amend my previous sequence by multiplying the FLOP counts by 1000. I never said scaling out would scale linearly, just that it would scale better horizontally than scaling up. It’s really not hard to multithread with mutexes; if there’s “other fun” happening, it’s the result of a coding error, not a design-methodology flaw. Believe me, I’ve been there too: I once used memory pointers to access the variables of other recursively called iterations of the same function. I honestly cannot remember why; in retrospect it was a completely stupid idea.
One Moore cycle’s worth of advancement may fix that… but that’s in the future; nothing says GPU farms won’t improve to meet or exceed that advancement in a year’s time as well.
I would imagine that the vast majority of received “client input” is sanity-checked by the client, and that it’s only edge cases that need to be discarded. For example, TiDi certainly screws with clients. Ultimately, while that sanity check certainly needs to happen, it’s better to assume the client already sanity-checked things and proceed with all of the calculations without waiting to find out if they’re valid, on the theory that “waiting” for a valid state is less efficient than cramming the calculations in.
In terms of the collision calculation, I completely agree that it will eat the lion’s share of the computation time… but that too can be farmed out. For example, each ship on grid (or some other optimized subset) could be individually calculated in parallel on different farms. While you’ve got 70% of the calcs in one “function,” you can scale that 70% out to 10 or 100 different clusters. If any one of them returns a hit, the damage calculation can be unchanged, and it simply alters the state of the game differently.
Ultimately, if it’s 10 computationally intensive items, sending each off to a dedicated platform will be faster. You just have to be sure you maintain that linear lockstep in your game-state alteration. I’m not going to say it scales linearly, just that it scales better.
Does this mean we could angle our hulls to ricochet incoming shots off our armor?
Like World of Tanks. Get my Apoc, line up at 20 degrees with my nose behind a structure, and be unkillable unless flanked?
I prefer to think of it as lively and engaged debate
When you get to the kind of scale CCP’s talking about, it becomes a coding problem just to get something like this working, because performance across both the CPU and memory is a concern. If you just sling things at threads and forget about them, you can have a pretty bad time pretty quickly, so you need tighter control, but that’s going to eat into your performance gains, and the whole thing still has the potential for errors…
My point wasn’t to poke holes in your FLOP counts; I was pretty specifically trying to avoid that, because I honestly have no idea how close to correct they are. My point was just that, even given your spitball estimate, you’re not actually saving that much time.
Also, yeah, I’ve done stupid things with multithreading and memory too. Pretty sure everyone who’s ever had access to both threads and pointers in the same app has, lol.
I honestly have no idea exactly how much the client sanity-checks before checking with the server. That particular interaction is definitely part of the problems TiDi runs into, but I’m not sure if the issue is the client waiting on a state change that hasn’t come yet but should have, or waiting on a sanity check to come back on an action that’s been sent. That the UI could potentially be interpreting both of those things into the common visual bugs (infinite cycling, unresponsive modules, etc.) is an issue.
In any case, the client won’t let you do something until it’s been sanity-checked by the server, so the server still needs to sanity-check everything. Probably more so in TiDi, because of the potential for weird client states.
You can’t just have one return a hit, though; it needs to be the first one along the ray, and you have to adjust the damage, because the thing you just hit will be running a different resist profile, have different transversal and distance, and generally just be a completely different target.
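That reduce step could be sketched like this, assuming each worker has already computed where (if at all) the firing line crosses its ship, as a distance t along the ray (all names here are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def check_ship(item):
    """Hypothetical per-ship worker: takes (name, t) where t is the
    precomputed intersection distance along the ray, or None if that
    ship is not in the line of fire."""
    name, t = item
    return (t, name) if t is not None else None

def first_hit_parallel(pool, intersections):
    """Farm each ship's check to a worker, then reduce. The winner is
    the nearest hit along the ray, not whichever worker answered
    first, which is why a bare 'any hit' return is not enough."""
    results = pool.map(check_ship, intersections.items())
    hits = [r for r in results if r is not None]
    return min(hits)[1] if hits else None

with ThreadPoolExecutor(max_workers=4) as pool:
    blocker = first_hit_parallel(
        pool, {"friendly": 0.45, "enemy": 0.95, "bystander": None})
```

Whoever comes back as `blocker` then needs its own resist, transversal, and range math applied, since it is simply a different target than the one the shooter locked.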
Plus you’re not just processing shoot commands; you’re also processing module events and other stuff, so you can’t have any of those jumping in and changing the positions of your ships while you’re calculating, and you can’t run ahead of anything behind your shot in the queue, because someone might be about to change course or MJD away, and that should be taken into account.
Like I said though, when I said order of magnitude I was still figuring on all of this optimization for the things that are easily optimized.
And this is why. Maintaining that lockstep is a massive pain and limits you a lot if you don’t want to end up with weird race conditions, which are really absurdly easy to hit when you’re talking physics and game state. Looking at it academically it seems super easy, until you actually look at what the different events do; you’re still pretty tied to that linear progression, and even if pieces of each command can potentially be farmed out, the actual efficiency gains end up being pretty minimal.
Oh, and there’s also the issue that if you have enough processing hardware to basically supercomputer the collisions for a big fleet fight, you’ve massively overbought on hardware. It’s just not going to get used 99% of the time, and there’s not a whole lot else something like that can be used for in a game. It’s unlikely to make the market run faster, or missions, or anything else. Plus you have to be careful about allocating too many things to it, as that could create thread delays and send the whole thing screaming off a cliff.
Unless I’m mistaken, Titan doomsdays already account for line of sight; it’s just acceptable due to the relatively small number of doomsdays. Additionally, EVE already tracks distance and angular speed, so some form of point-to-point understanding must already exist in the game? But yes, it would take exponential processing power. Thank you for the time ballpark!