Trinity is not currently running DX12 shaders; I see no evidence of CCP using any DX12 shaders.
To be clear: you can take DX8-era shaders and compile them for DX12, but that does not, and never will, make the actual shaders 'DX12'.
As an example, there is a file called “compile.fxh” located in /res/graphics/effect.
My best guess is that this header helps select an array of different legacy shader models in a single codebase (perhaps to support different GPU capabilities in the older engine). It likely was part of a larger system that automatically configured the compile profile for vertex and pixel shaders, set up constant registers for each pass, and toggled features (fog, shadows, normal mapping, etc.) based on the chosen Shader Model. It is not geared toward modern DirectX 12 or Shader Model 6.x usage.
The highest shader model referenced is SM 3.0, which is from the DX9 era (with partial support in DX10). It predates the best practices and standard usage of modern DX12 (SM 5.x, SM 6.x, root signatures, wave intrinsics, etc.).
The code lumps everything into '#ifdef' logic keyed on SM 3.0, as opposed to separate specialised techniques, reusable library code, or single-pass late binding.
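To illustrate the style, here is a hypothetical sketch of what that kind of legacy header typically looks like. The macro and function names are my guesses for illustration, not CCP's actual compile.fxh:

```
// Hypothetical sketch of the legacy .fxh style described above.
// All names are illustrative guesses, not CCP's actual code.
#ifdef USE_SM_3_0
    #define VS_PROFILE vs_3_0
    #define PS_PROFILE ps_3_0
    #define ENABLE_SHADOWS 1
#else
    #define VS_PROFILE vs_2_0
    #define PS_PROFILE ps_2_0
    #define ENABLE_SHADOWS 0
#endif

float4x4 WorldViewProj : register(c0);   // DX9-era loose constant register

float4 MainVS(float4 pos : POSITION) : POSITION
{
    return mul(pos, WorldViewProj);
}

float4 MainPS() : COLOR
{
#if ENABLE_SHADOWS
    // shadow sampling path would be toggled in here
#endif
    return float4(1.0, 1.0, 1.0, 1.0);
}

technique Main
{
    pass P0
    {
        VertexShader = compile VS_PROFILE MainVS();
        PixelShader  = compile PS_PROFILE MainPS();
    }
}
```

Everything rides on the preprocessor and the FX technique/pass framework, which is exactly the DX9-era pattern.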
IF (and it's a big IF) CCP were to rewrite their entire shader chain(s) and then compile for DX12, I would back off… but I strongly suspect they are going to optimise their existing shaders rather than update them to current, modern standards.
The modern standards are as follows:
Raster and compute shaders:
- Vertex shader v6.6+
- Pixel shader v6.0+
- Hull shader v6.0+
- Domain shader v6.0+
- Geometry shader v6.0+
- Compute shader v6.0+
Then you have the new mesh shading pipeline:
- Amplification shader v6.5+
- Mesh shader v6.5+ (can potentially replace the Vertex+Hull+Domain+Geometry pipeline for geometry; CCP's mileage would vary, but since the old LOD system could be removed and you gain highly advanced geometry manipulation, this is a big win; see the mesh shader sketch after this list)
Ray tracing (DXR shaders; see the ray generation sketch after this list), broken down into:
- raygeneration v6.3+
- intersection v6.3+
- anyhit v6.3+
- closesthit v6.3+
- miss v6.3+
- callable v6.3+
ALL OF THESE SHADERS ARE PROGRAMMABLE!
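Since mesh shaders may be unfamiliar, here is a minimal SM 6.5 sketch (compiled with dxc -T ms_6_5) that emits a single triangle. Every name in it is my own illustration, nothing to do with Trinity:

```
// Minimal SM 6.5 mesh shader sketch that emits one triangle.
// All names are illustrative assumptions, not Trinity code.
struct MeshVertex
{
    float4 position : SV_Position;
    float3 colour   : COLOR0;
};

[outputtopology("triangle")]
[numthreads(3, 1, 1)]
void MainMS(uint gtid : SV_GroupThreadID,
            out vertices MeshVertex verts[3],
            out indices  uint3      tris[1])
{
    // Declare how many vertices/primitives this threadgroup emits.
    SetMeshOutputCounts(3, 1);

    const float2 corners[3] =
    {
        float2(-0.5, -0.5), float2(0.0, 0.5), float2(0.5, -0.5)
    };

    verts[gtid].position = float4(corners[gtid], 0.0, 1.0);
    verts[gtid].colour   = float3(gtid == 0, gtid == 1, gtid == 2);

    // One thread writes the single triangle's indices.
    if (gtid == 0)
        tris[0] = uint3(0, 1, 2);
}
```

A real implementation would pull meshlets from buffers and cull them per threadgroup, which is where the LOD-style wins come from.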
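And for the DXR side, a minimal ray generation + miss pair (lib_6_3 target). Again, purely an illustrative sketch; the resource names and dispatch setup are my own assumptions, not anything from CCP:

```
// Minimal DXR sketch (lib_6_3+). Illustrative names only.
RaytracingAccelerationStructure SceneBVH : register(t0);
RWTexture2D<float4>             Output   : register(u0);

struct RayPayload
{
    float3 colour;
};

[shader("raygeneration")]
void MainRGS()
{
    uint2  pixel = DispatchRaysIndex().xy;
    float2 dims  = float2(DispatchRaysDimensions().xy);

    // Map the pixel to [-1, 1] and fire a ray straight down +Z.
    float2 ndc = (float2(pixel) + 0.5) / dims * 2.0 - 1.0;

    RayDesc ray;
    ray.Origin    = float3(ndc, -1.0);
    ray.Direction = float3(0.0, 0.0, 1.0);
    ray.TMin      = 0.001;
    ray.TMax      = 10000.0;

    RayPayload payload = { float3(0.0, 0.0, 0.0) };
    TraceRay(SceneBVH, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

    Output[pixel] = float4(payload.colour, 1.0);
}

[shader("miss")]
void MainMiss(inout RayPayload payload)
{
    payload.colour = float3(0.0, 0.0, 0.1); // space is mostly black
}
```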
This means that within just a few minutes inside a JetBrains IDE, the entire shader codebase can be evaluated for refactoring, such as:
- Removing legacy constructs like technique/pass blocks and replacing them with standalone HLSL entry functions in separate .hlsl files (see the sketch after this list).
- Introducing root signatures, which means no longer relying on the old 'FX' framework for resource binding.
- Migrating the fixed-function-like, simplified macros for shadows, fog and normal mapping (which is no longer needed when using mesh shaders) to a pure PBR approach, or at a minimum to open-coded, modern HLSL.
- REMOVING MULTI-PASS LOGIC: there is no need to carry SM2/SM3/SM4 variants when you are targeting a single, modern pipeline.
- Moving the engine (under DX12) to Pipeline State Objects (PSOs) that DIRECTLY reference the compiled shaders, replacing DX9-era device-state management (ID3DXEffect); this requires the engine to have root signatures, descriptor heaps and, of course, PSOs.
- Removing constant registers and the old, clunky framework calls that set them, which requires moderate-to-difficult changes to how the engine feeds data (lighting, textures, matrices) to the shaders; the constant buffer in the sketch after this list is the modern replacement.
- Anything referencing D3DXHANDLE, LPD3DXEFFECT/ID3DXEffect, or .fx parser calls needs to be removed or replaced. That means a direct compilation step with D3DCompile or DXC (dxc.exe / IDxcCompiler) in your build system, plus resource binding at runtime.
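To make those bullets concrete, here is a minimal sketch of the modern shape: standalone SM 6.x entry points, a root signature declared in HLSL instead of FX-framework binding, and a constant buffer instead of loose c-registers. All names are my own illustrative assumptions, not Trinity code:

```
// Minimal sketch of a standalone SM 6.x shader pair. Illustrative only.
// Built with e.g.:  dxc -T vs_6_6 -E MainVS shader.hlsl
//                   dxc -T ps_6_0 -E MainPS shader.hlsl

// Root signature declared in HLSL, replacing FX-framework binding.
#define MyRS "RootFlags(ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT), " \
             "CBV(b0), " \
             "DescriptorTable(SRV(t0)), " \
             "StaticSampler(s0)"

// Constant buffer replacing DX9-era loose constant registers (c0, c4, ...).
cbuffer PerObject : register(b0)
{
    float4x4 WorldViewProj;
    float4   TintColour;
};

Texture2D    Albedo  : register(t0);
SamplerState Sampler : register(s0);

struct VSIn  { float3 pos : POSITION;    float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

// Standalone entry points; no technique/pass blocks anywhere.
[RootSignature(MyRS)]
VSOut MainVS(VSIn input)
{
    VSOut o;
    o.pos = mul(float4(input.pos, 1.0), WorldViewProj);
    o.uv  = input.uv;
    return o;
}

[RootSignature(MyRS)]
float4 MainPS(VSOut input) : SV_Target
{
    return Albedo.Sample(Sampler, input.uv) * TintColour;
}
```

On the engine side, the compiled blobs from dxc then plug directly into a PSO, which is the DX12 replacement for the old ID3DXEffect device-state juggling.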
Given the size of the game and the amount of tired old code, I would estimate 2-6 months to update the entire shader base and graphics rendering code, depending on how good the project managers are and how large the team is.
The good news is that JetBrains, Visual Studio, etc. all have refactoring and debugging tools; the bad news is that the current texture base and art pipeline use such an old system that the material system would need updating as well, and so would the actual models.
CCP simply do not have the money or resources to update the textures/materials and models; that is far too much work, even when you consider retopology tools. The original models, I'm sure, are no better than they were in 2007, and I did check: most of the models from the 2007 "Trinity" expansion are using the same geometry.
As an aside, hard-surface modelling is commonly performed in CAD tools (like Rhino) before being exported into a game engine (like Unreal); on import, the NURBS model is converted into a 'tri' (polygon) model (well, in practice you do that conversion in Rhino). The reason hard-surface work is done this way is that, as time moves on and you want more polygons in your model, then boomshakalaka, you just import the CAD model again (which is a mathematically 'pure' model), but this time with a higher polygon count for the tri conversion.
In short, updating CCP's SM 3.0-era shaders to modern DX12/SM 6.7+ (as an example) will require both rewriting the HLSL and changing the way the engine sets up render states and resources.
The long-term gain would future-proof New Eden, but the short-term resources required are higher than any budget CCP could allocate to it.
CCP - you got my number, and although my NDA is now out of date, I'd be happy to sign up for another five years.
Ball, meet court.