EVE Evolved: More FPS for less CPU devblog feedback

Great news, and a good idea to look at refactoring Trinity to prioritise the GPU as the primary workhorse :slight_smile: good stuff.

My issue is that in a huge firefight, my bottleneck isn't my CPU, and it isn't even my GPU. My client performance can drop to single-digit FPS because, for some reason, the client's FPS is tied to server performance. So under heavy tidi (maxed at 10%, with module lag) both my CPU and GPU are twiddling their thumbs because the client is waiting for data from the server, for some inexplicable reason, before it goes ahead and renders frames.

This, in my experience, has always been the largest contributor to low client FPS in EVE Online fleet fights.

A modern GPU can render more frames with this approach – it’s simply faster due to the advances DirectX 12 (Windows) and Metal (macOS) offer with modern GPUs.

Has CCP fixed their trainwreck DX12 implementation yet? With all the issues regularly reported with DX12, hardly anyone is really using it for EVE.

Simply ensure that DirectX 12 is enabled in the launcher.

Sure, as soon as CCP’s DX12 implementation stops preventing me from playing EVE in the first place.

Besides, I am not interested in more FPS than the 60 I have limited my clients to. What I am interested in is not wasting too much power. How does this new GPU pipeline impact the power draw of the PC? Looking at CCP’s poor handling of station hangars, where turning them off reduces the computer’s power draw by up to 50 W, I don’t have high hopes that this new GPU pipeline won’t cause a core meltdown of my power bill.

What if there was an option to disable ship models? You can already disable drone models.

I have been using DX12 for months now, without a single issue.

1 Like

Ctrl+Shift+F9 disables all models in case you want to minimise the load on your PC.

So when is this expected to be released? March 12?

This has given me a huge boost of hope for EVE.
I saw the news post, logged in and saw 400+ FPS in station.
I even managed consistently over 60 FPS on a 1440p monitor with 100+ ships on grid.

I would love to see CCP adopt:

  • Vulkan, as it would give CCP more control over display APIs (even if they just use a wrapper and push it from DirectX to Vulkan)
  • Linux servers: they would regain hardware resources, load only the packages they need, get a better CPU scheduler and take control of the OS, including being able to rewrite the networking software to use the fastest route rather than the first available and to load-balance better between the servers.
  • The ability to stop rendering ships and replace them with elongated pyramids when zoomed out. I can’t see them, so there is no point in rendering them.
  • CTRL+SHIFT+F9 options to display very basic, translucent spheres that show warp disruption bubbles, bombs, smart bombs and the 0 km boundary around structures.

Is it smart to have crazy effects when jumping?

It seems like it takes almost 9 seconds. I wonder if it would be faster if the effects weren’t there.

"I wonder if the game could load faster if they simply showed the loading screen less long"

You’re misunderstanding the purpose of the jump animation and coming up with nonsensical suggestions based on your lack of knowledge, as usual.

It would help you and others if you approached the world with a desire to understand instead of a desire to fix what you do not yet understand.

Trinity is not currently running DX12 shaders; I see no evidence of CCP using any DX12 shaders.

To be clear - you can take DX8-era shaders and compile them under a DX12 runtime, but that does not and never will make the actual shaders ‘DX12’.

As an example, there is a file called “compile.fxh” located in /res/graphics/effect.

My best guess is that this header helps select between different legacy shader models in a single codebase (perhaps to support different GPU capabilities in the older engine). It was likely part of a larger system that automatically configured the compile profile for vertex and pixel shaders, set up constant registers for each pass, and toggled features (fog, shadows, normal mapping, etc.) based on the chosen Shader Model. It is not geared toward modern DirectX 12 or Shader Model 6.x usage.

The highest shader model referenced is SM 3.0, which is from the DX9 era (with partial support in DX10). It pre-dates the best practices and standard usage of modern DX12 (SM 5.x, SM 6.x, root signatures, wave intrinsics, etc.).

The code lumps them into ‘ifdef’ logic based on SM 3.0, as opposed to separate specialised techniques, reusable library code, or single-pass late binding.
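For flavour, a profile-selection header of that era usually looks something like this - a hypothetical reconstruction of the pattern, not CCP’s actual file:

```hlsl
// Hypothetical sketch of a legacy profile-selection header in the
// style of compile.fxh (not CCP's actual code).
#if defined(SM_3_0)
    #define VS_PROFILE      vs_3_0
    #define PS_PROFILE      ps_3_0
    #define SHADOWS_ENABLED 1
    #define FOG_ENABLED     1
#else
    #define VS_PROFILE      vs_2_0
    #define PS_PROFILE      ps_2_0
    #define SHADOWS_ENABLED 0
    #define FOG_ENABLED     0
#endif
// Every effect then compiles its passes through these macros, e.g.
//   VertexShader = compile VS_PROFILE MainVS();
// so all feature toggling hangs off one set of preprocessor switches.
```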

IF (and it’s a big IF) CCP were to rewrite their entire shader chain(s) and then compile for DX12, I would back off… but I strongly suspect they are going to optimise their existing shaders rather than update them to the current modern standards.

The modern standards are as follows:

Raster shaders

Vertex shader v6.6+
Pixel shader v6.0+
Hull shader v6.0+
Domain shader v6.0+
Geometry shader v6.0+
Compute shader v6.0+

Then you have the new mesh-shading pipeline

Amplification Shader v6.5+
Mesh Shader v6.5+ (can potentially replace the Vertex+Hull+Domain+Geometry pipeline for geometry; CCP’s mileage would vary, but as the old LOD system could be removed and you get highly advanced geometry manipulation, this is a big win)

Ray tracing

Ray tracing (DXR shaders, compiled as a lib_6_3+ library), broken down into:
raygeneration v6.3+
intersection v6.3+
anyhit v6.3+
closesthit v6.3+
miss v6.3+
callable v6.3+

ALL OF THESE SHADERS ARE PROGRAMMABLE!
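To give a flavour of that programmability, here is a minimal SM 6.5 mesh shader that emits a single triangle - a toy example, nothing to do with Trinity’s code:

```hlsl
// Minimal SM 6.5 mesh shader that emits one triangle.
// Compile with: dxc -T ms_6_5 -E MainMS triangle.hlsl
struct VertexOut
{
    float4 position : SV_Position;
    float3 colour   : COLOR0;
};

[outputtopology("triangle")]
[numthreads(3, 1, 1)]
void MainMS(uint tid : SV_GroupThreadID,
            out vertices VertexOut verts[3],
            out indices  uint3     tris[1])
{
    // Tell the pipeline how many vertices/primitives this group emits.
    SetMeshOutputCounts(3, 1);

    const float2 corners[3] = { float2(-0.5, -0.5),
                                float2( 0.0,  0.5),
                                float2( 0.5, -0.5) };
    verts[tid].position = float4(corners[tid], 0.0, 1.0);
    verts[tid].colour   = float3(tid == 0, tid == 1, tid == 2);

    // One thread writes the single primitive's indices.
    if (tid == 0)
        tris[0] = uint3(0, 1, 2);
}
```

No input assembler, no vertex buffers: the shader itself decides what geometry exists, which is why the whole LOD question moves onto the GPU.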

This means that, within just a few minutes inside JetBrains, the entire shader codebase can be evaluated for refactoring, such as:

- Removing the legacy constructs like technique/pass blocks and replacing them with standalone HLSL entry functions, or separate .hlsl files (see the sketch after this list).

- Introducing new root signatures, which means no longer relying on the old ‘FX’ framework for resource binding.

- Migrating fixed-function-style or simplified macros for shadows, fog and normal mapping (which is no longer needed when using mesh shaders) to a pure PBR approach, or at a minimum to open-coded, modern HLSL.

- REMOVING MULTI-PASS LOGIC: there is no need to carry SM2, SM3 and SM4 paths if you are trying to support a single, modern pipeline.

- The engine itself can (under DX12) use PSOs (Pipeline State Objects) that DIRECTLY reference the compiled shaders, replacing the DX9-era device-state juggling (ID3DXEffect); that requires the engine to have root signatures, descriptor heaps and, of course, PSOs.

- Removing the constant-register setup done through old, clunky framework calls, which requires moderate-to-difficult changes to how the engine feeds data to the shaders (lighting, textures, matrices).

- Anything referencing D3DXHANDLE, ID3DXEffect or the .fx parser needs to be removed or replaced. That means a direct compilation step with D3DCompile or DXC (dxc.exe / IDxcCompiler) in your build system, plus explicit resource binding at runtime.
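To make the first two items concrete, here is roughly what that migration looks like on the shader side - an illustrative sketch with made-up names (MainRS, PerObject, etc.), not Trinity code:

```hlsl
// BEFORE (DX9-era .fx, the FX framework owns state and binding):
//
//   technique Main
//   {
//       pass P0
//       {
//           VertexShader = compile vs_3_0 MainVS();
//           PixelShader  = compile ps_3_0 MainPS();
//       }
//   }

// AFTER (SM 6.x): standalone entry points plus an explicit root
// signature; the engine builds a PSO that references the compiled
// blobs directly.
#define MainRS \
    "RootFlags(ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT)," \
    "CBV(b0)," \
    "DescriptorTable(SRV(t0), visibility = SHADER_VISIBILITY_PIXEL)," \
    "StaticSampler(s0)"

cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj;
};

Texture2D    albedo : register(t0);
SamplerState samp   : register(s0);

struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

[RootSignature(MainRS)]
VSOut MainVS(float3 posL : POSITION, float2 uv : TEXCOORD0)
{
    VSOut o;
    o.pos = mul(float4(posL, 1.0), worldViewProj);
    o.uv  = uv;
    return o;
}

[RootSignature(MainRS)]
float4 MainPS(VSOut i) : SV_Target
{
    return albedo.Sample(samp, i.uv);
}
```

Compiled with dxc (-T vs_6_0 / -T ps_6_0), those blobs plug straight into a D3D12 pipeline-state-object description, which is exactly the PSO point made above.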

Given the size of the game and the amount of tired old code, to update the entire shader base and the graphics-rendering code I would estimate 2-6 months, depending on how good the project managers are and how large the team is.

The good news is that JetBrains IDEs, Visual Studio etc. all have refactoring and debugging tools; the bad news is that the current texture base and art pipeline use such an old system that the material system would need updating as well, and so would the actual models.

CCP simply do not have the money or resources to update the textures/materials and models; that is far too much work, even with retopology tools. The original models, I’m sure, are no better than they were in 2007, and I did check: most of the models from the 2007 “Trinity” release are using the same geometry.

As an aside, hard-surface modelling is commonly performed inside CAD tools (like Rhino) before being exported into a game engine (like Unreal), and on import the engine converts the NURBS model into a ‘tri’ model (well, you do that in Rhino). The reason you find this in hard-surface modelling is that, as time moves on and you want more polygons in your model, then boomshakalaka: you just import the CAD model again (which is a mathematically ‘pure’ model), but this time you use a higher polygon count when it is converted into a tri model.

In short, for CCP to update their SM 3.0-era shaders to modern DX12/SM 6.7+ (as an example) will require both rewriting the HLSL and changing the way the engine sets up render states and resources.

The long-term gain would future-proof New Eden, but the short-term resources required to do it are too high for the budget CCP could allocate to it.

CCP - you got my number, and although my NDA is now out of date, I’d be happy to sign up for another five years.

Ball, meet court.

2 Likes

There is nothing wrong with asking whether the crazy effects and animation make jumping take longer.

1 Like

I am using low shader settings and 90% of the planets look amazing. But 90% of the ships and stations look bad, including station interiors.

Will updating the shaders to DX12/SM6.7+ change this? Or what is the point of that?

Some quick screenshots for anyone who is curious.



Old shaders running on new GPUs perform badly, like an elastic band you pull apart only for it to snap back, introducing (for one thing) stuttering and unnecessary calls due to the pipeline being old.

New shaders running on old GPUs either do not run, or they run badly.

You always want new shaders.

Coding standards always move forward. As an example, Stackless Python has always moved in lock-step with CPython releases, yet I am unaware whether CCP have followed this or not, as that is more about the backbone than the graphics.

C++ gets feature-set updates through the constant evolution of the standard, and those coding in the language must update their code accordingly.

The shader language is really critical for CCP right now, and I can bet they are smart enough to realise the level of work involved.

The problem is not just the shaders; it’s just that updating the shaders is the low-hanging fruit.

As mentioned above, to update the graphics completely would require:

New Models
New Textures
New Material chains
and finally, New Shaders

The cheapest is the shaders; the most expensive is the modelling and texturing.

1 Like

It seems like new GPUs are able to handle the apparent lower efficiency of old shaders, but older GPUs cannot handle the newer shaders. So, all else being equal, that means you shouldn’t update them.

Old?

SM 6 came out 4 years ago!

Introducing SM 6 shaders now would be considered out of date!

You go, girl. Go update those shaders. Nobody is using older hardware anyway, and if they are, they can go play Albion.

1 Like

Are you supposed to leave and never come back?

This question also applies to past and future alts. :smirk:

1 Like

I just wanna know when we will get it.

The new system will be released on Tranquility on 18 March.

Found it