
Slang

Slang is a shading language that makes it easier to build and maintain large shader codebases in a modular and extensible fashion, while also maintaining the highest possible performance on modern GPUs and graphics APIs. Slang is based on years of collaboration between researchers at NVIDIA, Carnegie Mellon University, Stanford, MIT, UCSD and the University of Washington.

Key Features

The Slang system is designed to provide developers of real-time graphics applications with the services they need when working with shader code.

  • Slang is backwards-compatible with most existing HLSL code. It is possible to start taking advantage of Slang's benefits without rewriting or porting your shader codebase.

  • The Slang compiler can generate code for a wide variety of targets and APIs: D3D12, Vulkan, D3D11, OpenGL, CUDA, and CPU. Slang code can be broadly portable, but still take advantage of the unique features of each platform.

  • Automatic differentiation as a first-class language feature. Slang can automatically generate both forward and backward derivative propagation code for complex functions that involve arbitrary control flow and dynamic dispatch. This allows users to easily make existing rendering codebases differentiable, or to use Slang as the kernel language in a PyTorch driven machine learning framework via slangtorch.

  • Generics and interfaces allow shader specialization to be expressed cleanly without resort to preprocessor techniques or string-pasting. Unlike C++ templates, Slang's generics are checked ahead of time and don't produce cascading error messages that are difficult to diagnose. The same generic shader can be specialized for a variety of different types to produce specialized code ahead of time, or on the fly, completely under application control.

  • Slang provides a module system that can be used to logically organize code and benefit from separate compilation. Slang modules can be compiled offline to a custom IR (with optional obfuscation) and then linked at runtime to generate DXIL, SPIR-V etc.

  • Parameter blocks (exposed as ParameterBlock<T>) provide a first-class language feature for grouping related shader parameters and specifying that they should be passed to the GPU as a coherent block. Parameter blocks make it easy for applications to use the most efficient parameter-binding model of each API, such as descriptor tables/sets in D3D12/Vulkan.

  • Rather than requiring tedious explicit register and layout specifications on each shader parameter, Slang supports completely automatic and deterministic assignment of binding locations to parameters. You can write simple and clean code and still get the deterministic layout your application wants.

  • For applications that want it, Slang provides full reflection information about the parameters of your shader code, with a consistent API across all target platforms and graphics APIs. Unlike some other compilers, Slang does not reorder or drop shader parameters based on how they are used, so you can always see the full picture.

  • Full intellisense features in Visual Studio Code and Visual Studio through the Language Server Protocol.

  • Full debugging experience with SPIRV and RenderDoc.
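As a hedged sketch of the automatic-differentiation feature described above (based on Slang's documented autodiff support; treat exact spellings as approximate, and the function name is invented for illustration):

```hlsl
// Mark a function as differentiable and ask the compiler for its
// forward-mode derivative.
[Differentiable]
float square(float x)
{
    return x * x;
}

// fwd_diff(square) takes and returns differential pairs; propagating
// (x = 3, dx = 1) forward yields value 9 and derivative 6.
DifferentialPair<float> result = fwd_diff(square)(diffPair(3.0, 1.0));
```

bwd_diff can similarly be used to generate backward (reverse-mode) derivative propagation code.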
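The generics and interfaces feature above can be sketched as follows (interface, struct, and function names are invented for illustration):

```hlsl
// An interface describes the operations a generic function relies on.
interface ILight
{
    float3 illuminate(float3 worldPos);
}

// A concrete type conforming to the interface.
struct PointLight : ILight
{
    float3 position;
    float3 intensity;
    float3 illuminate(float3 worldPos)
    {
        float3 d = position - worldPos;
        return intensity / dot(d, d);
    }
};

// Checked ahead of time, then specialized per concrete light type,
// either offline or on the fly under application control.
float3 shade<L : ILight>(L light, float3 worldPos)
{
    return light.illuminate(worldPos);
}
```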
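A minimal sketch of the parameter-block feature above (type and field names are invented for illustration):

```hlsl
// Related parameters grouped into one block; the whole block can map to
// a single descriptor set (Vulkan) or descriptor table (D3D12).
struct MaterialParams
{
    float4 baseColor;
    Texture2D albedoMap;
    SamplerState albedoSampler;
};

ParameterBlock<MaterialParams> gMaterial;
```

Bindings inside the block are assigned automatically and deterministically, so no explicit register or layout annotations are needed.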

Getting Started

If you want to try out the Slang language without installing anything, a fast and simple way is to use the Shader Playground.

The fastest way to get started using Slang in your own development is to use a pre-built binary package, available through GitHub releases. There are packages built for 32- and 64-bit Windows, as well as 64-bit Ubuntu. Each binary release includes the command-line slangc compiler, a shared library for the compiler, and the slang.h header.

If you would like to build Slang from source, please consult the build instructions.

Documentation

The Slang project provides a variety of different documentation, but most users would be well served starting with the User's Guide.

We also provide a few examples of how to integrate Slang into a rendering application.

These examples use a graphics layer that we include with Slang called "GFX", which is an abstraction library over various graphics APIs (D3D11, D3D12, OpenGL, Vulkan, CUDA, and the CPU) to support cross-platform applications using GPU graphics and compute capabilities. If you'd like to learn more about GFX, see the GFX User Guide.

Additionally, we recommend checking out Vulkan Mini Examples for more examples of using Slang's language features available on Vulkan, such as pointers and the ray tracing intrinsics.

Contributing

If you'd like to contribute to the project, we are excited to have your input. The following guidelines should be observed by contributors:

  • Please follow the contributor Code of Conduct.
  • Bug reports and feature requests should go through the GitHub issue tracker.
  • Changes should ideally come in as small pull requests on top of master, from your own personal fork of the project.
  • Large features that will involve multiple contributors or a long development time should be discussed in issues, and broken down into smaller pieces that can be implemented and checked in in stages.

The contribution guide describes the workflow for contributors in more detail.

Limitations and Support

Platform support

  • Windows: supported
  • Linux: supported
  • macOS: unofficial

Target support

  • Direct3D 11: HLSL
  • Direct3D 12: HLSL
  • Vulkan: GLSL & SPIR-V
  • CUDA: C++ (compute-only)
  • OptiX: C++ (WIP)
  • CPU Compute: C++ (compute-only)

For greater detail, see the Supported Compilation Targets section of the User Guide.

The Slang project has been used for production applications and large shader codebases, but it is still under active development. Support is currently focused on the platforms (Windows, Linux) and target APIs (Direct3D 12, Vulkan) where Slang is used most heavily. Users who are looking for support on other platforms or APIs should coordinate with the development team via the issue tracker to make sure that their use case(s) can be supported.

License

The Slang code itself is under the MIT license (see LICENSE).

Builds of the core Slang tools depend on the following projects, either automatically or optionally, which may have their own licenses:

Slang releases may include slang-llvm which includes LLVM under the license:

  • llvm (Apache 2.0 License with LLVM exceptions)

Some of the tests and example programs that build with Slang use the following projects, which may have their own licenses:

  • glm (MIT)
  • stb_image and stb_image_write from the stb collection of single-file libraries (Public Domain)
  • tinyobjloader (MIT)


Issues

Find a solution for VS_OUT

This is here just so I won't forget about it.
In order for a define-based solution to work for GLSL, I need to know when I'm compiling a Slang shader (so I can define the struct) vs when I'm compiling GLSL (so that I can remove the struct).
We may also need a SHADER_STAGE macro to know how to use in and out

Vulkan GLSL: uniform block with `push_constant` allocation

Both Vulkan and D3D12 share the idea that "push constants" ("root constants") are exposed to the shading language as ordinary uniform blocks (cbuffers). In D3D12 the actual assignment of a buffer to root constant resources is handled at the API level, while in Vulkan all of the binding to API-level resources is handled in layout modifiers.

A GLSL uniform block decorated with layout(push_constant):

layout(push_constant) uniform MyStuff { ... };

consumes different resources from an ordinary uniform block. This needs to be accounted for in a few places:

  • We need to parse the push_constant layout modifier into a form visible to semantics
  • The ParameterBinding.cpp and/or TypeLayout.cpp logic needs to allocate resources differently for these blocks
    • Don't allocate a descriptor table slot for the block
    • Allocate a new resource kind (PushConstant) for members of the block

The existing reflection API should be able to handle this case.

Actually, one parting shot: it might be worthwhile to go ahead and detect this modifier during parsing, so that we can break out a distinct type for the push_constant block like what we do for in and out blocks already. That would probably simplify a lot of the logic because the block would no longer surface as an "ordinary" uniform block.

GLSL: Array sizes that use expressions on specialization constants

Vulkan GLSL allows for the size of an array to depend non-trivially on a specialization constant:

layout(constant_id = 0) const int N = 5;

float foo[N]; // trivial dependence

float bar[ (N+ 15) / 4 ]; // non-trivial dependence

The GLSL spec language around this stuff is messy, and gives the impression that somebody worked backwards from the implementation in glslang. The basic behavior in glslang (which is now the "correct" behavior via spec) is:

  1. Two array types with equivalent element types that use specialization constants in their sizes are equivalent if and only if their sizes are specified using the exact same specialization constant (no math on the constant is allowed, not even N + 1). All other cases are deemed not equivalent (even when syntactically identical).

  2. Computed layout information as exposed in, e.g., SPIR-V or a reflection API will always be based on the "default" value for the specialization constant, and it is up to the user to carefully avoid cases where this will lead to incorrect behavior (e.g., don't put an array with specialization-constant-based size in the middle of a structure).

Item (1) is unfortunate, but understandable. Handling such things "right" means building a more complete dependent type system, and most people are going to back away from that. I'd like Slang to eventually do a better job, and at least incorporate a very basic "solver" to check for obvious algebraic identity. This would mean going against the GLSL spec, but compatibility with GLSL is a non-goal for Slang.

Item (2) is just plain dangerous. At some point it would be good to at least emit a warning in cases where the user has done something that could cause problems. A good policy would be to treat an array that uses a specialization constant a bit like an array with no statically-specified size (float foo[]), and only allow it at the end of a structure, etc.

I'd like to ensure that the Slang reflection API never lies to people.

For now, I'm considering all of this mess out of scope, so this issue only exists to provide a backlog item.

HLSL's StructuredBuffer to GLSL conversion issues

Unrelated to the release, just documenting some issues I encountered.

Our initial thought was to use variable-size arrays. That doesn't map well to HLSL - and by extension to Falcor. The problem is how to reflect things correctly.
For example

struct Foo {float3 bar;};
StructuredBuffer<Foo> gFoo;

(1) Falcor looks for the buffer using gFoo.
(2) Falcor is not aware of the Foo. It gets the struct size from gFoo.
(3) Reading/writing variables from the host side is done using the fields on Foo directly - mpFoo[0]["bar"] (the first index is the struct index).

I don't know how to translate that to GLSL. We can do something like

struct Foo {float3 bar;};
layout(set = 1, binding = 4) buffer gFoo
{
    Foo foo[];
};

HLSL (1) will work just fine.
HLSL (2) doesn't work; we would need to get the size of gFoo.foo.
HLSL (3) doesn't work either; foo gets in the way.

This isn't necessarily just a cross-compilation issue. Even for hand-written code, I have no idea how to make Falcor's DX abstractions work with GLSL.

Add some kind of compile-time loop/unroll construct

This is needed for code that relied on HLSL [unroll] for semantic validity, since glslang doesn't unroll as part of doing semantic checks.

For now this could be something really hacky like a special-case statement construct __unroll_foreach(i,N) { ... }. Longer term I'd like to move towards arbitrary compile-time computation (we just aren't in a position to implement the latter right now).
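For context, the HLSL attribute that such a construct would stand in for might be used like this (resource names are invented for illustration):

```hlsl
// HLSL: [unroll] forces compile-time expansion of the loop, which some
// constructs (e.g., indexing into a resource array with the loop
// counter) need for semantic validity:
[unroll]
for (int i = 0; i < 4; ++i)
{
    color += gTextures[i].Sample(gSampler, uv);
}
```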

Shader with missing bracket causes the rewriter to loop endlessly

struct LightCB
{
    float3 vec3Val; // We're using 2 values. [0]: worldDir [1]: intensity
};

StructuredBuffer<LightCB> gLightIn;
AppendStructuredBuffer<LightCB> gLightOut;

[numthreads(1, 1, 1)]
void main()
{
    uint numLights = 0;
    uint stride;
    gLightIn.GetDimensions(numLights, stride);

    for (uint i = 0; i < numLights; i++)
    // MISSING BRACKET
        gLightOut.Append(gLightIn[i]);
}

}

Add support for `image` memory qualifiers

coherent, volatile, restrict, readonly, writeonly

Slang reports an error if I use any of them.
Simple compute-shader to reproduce the problem:

layout(set = 0, binding = 1, rgba32f) uniform writeonly image2D gOutput;
layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
void main()
{
    imageStore(gOutput, ivec2(gl_LocalInvocationID.xy), vec4(1,1,1,1));
}

Reflection: validate information for varying input/output

We do a basic job of enumerating varying inputs/outputs for GLSL, because the global-scope block declarations are just sitting there and it is hard to ignore them. I'm pretty sure we don't deal with declarations using double and related types correctly, so there is still work to do there.

We also walk through the varying inputs/outputs for HLSL entry points, so that we can find fragment-shader outputs and properly account for them, but I don't think we currently add any actual TypeLayout information for reflection to use.

A further issue is that we probably need to distinguish between general varying input/output and the specific cases of "vertex input" and "fragment output." The API here is a bit messy right now, and conventions need to be defined.

Falcor: design and implement approach for specialization

This is strongly related to #16.

Given a shader with some set of "top-level" parameters, such as a Material, we need a way to inform the compiler that for a given variant to be generated, the implementation of that parameter's type should be specialized in a particular way (based on an actual run-time C++ object of type Falcor::Material in the Falcor case).

The ideal API interface is then relatively simple:

  • A shader entry point may include parameters (possibly declared at global scope) that belong to a high-level "module" type like Material or Light

  • When low-level code generation is requested for an entry point, we can specify a concrete type to substitute in for the parameter (any parameter where we don't specify this will ideally be left as a "generalized" parameter.

This seems to require a few things not present in the code today:

  • We need to flesh out the system for interface declarations (currently using the __trait keyword), and allow interface implementations and interface-typed parameters.

  • We need to split the code-generation API up (or at least support splitting it) into two phases:

    1. An AST generation and checking phase, that produces a representation suitable for reflection
    2. A low-level code generation phase, where the user requests compilation of one or more entry points into low-level code.

    Actually, there might be a step (1.5) in that flow, which is where one creates target-specific "layout" objects from the raw AST objects to represent how parameters would be bound for a target.

Cross-compilation: survey builtin HLSL functions

I added a hand-written translation in an __intrinsic attribute for saturate(), but the same needs to be done for many other functions.

The ideal case is to do a systematic survey of the HLSL "standard library" and attach GLSL equivalents to all the functions that we can handle easily.

Notes:

  • We don't currently have good support for remapping types. That might be worth adding.
  • The current approach to remapping with __intrinsic doesn't account for member function calls. That should get fixed.

Flag to control emission of line directives

Emitting line directives is probably the right default in "rewriter" mode (since we need to see error messages from the downstream compiler), and it is arguably the right default when you want debug output. It isn't the right thing when people want to look at the output and know what is going on.

Realistically, we probably want:

  • A direct flag (e.g., -no-line-directives and -line-directives) to control whether we output line directives at all

  • Make line directives the default for "rewriter" files and when debugging is desired (e.g., when a -g flag is specified), but otherwise skip it

GLSL constant-buffer reflection alignment issue

Slang reports the wrong offset for scale in the following CB:

layout(set = 0, binding = 0) uniform PerFrameCB
{
vec2 offset; // This is reported as 0, which is correct
vec2 scale; // This one is reported as 16, but it should be 8 since it can be packed together with offset into a single vec4
};
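The offsets the reporter expects follow from the std140 base-alignment rules (scalars align to N, vec2 to 2N, vec3 and vec4 to 4N). A minimal sketch of that computation, covering scalar and vector members only (the real rules also cover arrays, matrices, and nested structs):

```python
# Minimal std140 offset calculator for scalar/vector members only.

def std140_base_alignment(components, scalar_size=4):
    # vec3 is padded up to vec4 alignment under std140
    n = {1: 1, 2: 2, 3: 4, 4: 4}[components]
    return n * scalar_size

def std140_offsets(members):
    """members: list of (name, component_count); returns {name: offset}."""
    offsets, cursor = {}, 0
    for name, comps in members:
        align = std140_base_alignment(comps)
        cursor = (cursor + align - 1) // align * align  # round up to alignment
        offsets[name] = cursor
        cursor += comps * 4
    return offsets

# The PerFrameCB from the report: two vec2 members pack into one vec4,
# so scale should land at offset 8, not 16.
print(std140_offsets([("offset", 2), ("scale", 2)]))
```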

Reflecting an array of structured buffers reports the buffer size as 0

struct LightCB
{
    float3 vec3Val; // We're using 2 values. [0]: worldDir [1]: intensity
};

RWStructuredBuffer<LightCB> gRWBuffer[5]; // Only UAV counter used

I get the correct array size, but the size is 0. If I'm not using an array, the size is reported correctly.
Might be user error, in both cases I'm calling (uint32_t)pSlangElementType->getSize()

Matrix-float multiplication cross-compilation fails

float f;
mat4 a;
mat4 b = a * f; // This is legal in GLSL but fails compilation inside Slang

Looks like it's a Slang bug, since the GLSL compiler is never invoked.

We are using it for vertex blending: ShaderCommon.slang:59, getBlendedWorldMat.

Direct generation of binary formats (DXBC, SPIR-V)

There is code in place for allowing the API user to directly ask for binary formats like DX bytecode as output from a compilation, rather than strings, but the actual function for querying output assumes it can return a null-terminated string (no length is passed) and the internal implementation uses the String type which isn't really suitable for a byte buffer.

Naming convention

A lot of the existing code from "Spire" uses a naming convention where type members are in UpperCamelCase. The new convention is that type declarations are UpperCamelCase while value declarations are in lowerCamelCase (with the exception of enum tags for now, which use upper when scoped, and a k prefix when unscoped).

We need to make a pass over the codebase and regularize the convention at some point.

(Consider trying to set up clang-format to help catch issues).

Incorrect definition of HLSL patch types

Running slangc on HLSL input like:

void main(InputPatch<FOO, 3> foo, ...) { ... }

leads to an error: too many arguments to call (got 2, expected 1).

This is because the definitions of InputPatch and OutputPatch only list the type parameter, and not the count parameter.

Make source location lightweight

The current representation of source locations is large and also expensive to copy (it contains a reference-counted string).

We should migrate to a lightweight location representation where a location is just a pointer-sized integer that stores an absolute index of a byte processed during the compilation session.

(This complicates debugging, so some time needs to be spent on making sure there is still a reasonable experience for tracing back to where an error came from)

Don't emit functions not fit for target stage in GLSL

GLSL has restrictions that mean you can't have a function containing a discard in the translation unit if you aren't targeting the fragment-shader stage. This is true even if you never call the function.

The current lowering strategy emits all symbols in an imported module if it detects we are in "rewriter" mode, so this causes problems.

The simplest fix is just to not do that, and only emit declarations that were referenced. That strategy should work for any case where we were able to semantically resolve all the user's code.

If we need something more fail-safe than that, we could try to ensure that when we see an un-checked name expression, or one that resolved to an overload group, we go ahead and emit every matching declaration, just in case. That could cause problems if one of those overloads is invalid for the target, of course.

We might also want to add some logic to detect stage-limited functions, and skip lowering them for targets where they aren't allowed. That seems like something we might need in the long term anyway.

Cross-compilation of entry points

The initial cross-compilation goals for Slang only apply to "library" code: files full of types, functions, and maybe some global shader parameters. This is an intentional prioritization choice.

Longer term we'd like to support more complete cross-compilation, in which the user can just throw an HLSL or Slang file at the compiler and get back GLSL for each entry point. That is a crowded field, though, so there would be little reason for somebody to pick our tool over another. Thus it makes sense to put this on the backlog until we've got a more interesting feature set to win over users.

Handle PS `uint` inputs

We have the following declaration in SceneEditorCommon.slang.h: uint drawID : DRAW_ID;

HLSL implicitly assigns the nointerpolation qualifier to it, but GLSL requires the user to declare it with the flat qualifier.
I tried adding nointerpolation, but Slang didn't replace it with flat.

The simple solution for now would be to just replace nointerpolation with flat, but the long-term solution would be for Slang to detect integer varyings and assign the required qualifier automatically.
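For reference, the GLSL declaration the conversion needs to produce looks something like this (the location is chosen arbitrarily for illustration):

```glsl
// GLSL requires integer fragment inputs to carry the flat qualifier:
layout(location = 0) flat in uint drawID;
```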

Report conflicting explicit bindings

Not Slang's fault, but it would be great if it reported conflicts in explicit bind locations.
For example:

layout(set = 0, binding = 0) uniform texture2D gTexture;
layout(set = 0, binding = 0) uniform sampler gSampler;

The shader compiles successfully. I don't know if it's a valid GLSL shader or not, perhaps the correct place for this issue is glslang.

Compute correct layout for arrays of opaque types in Vulkan

HLSL and OpenGL GLSL are consistent in that a declaration like:

Texture2D t[4]; // HLSL
sampler2D t[4]; // OpenGL GLSL

consumes 4 registers/bindings: one for each array element.

Vulkan seems to follow a different rule, where the entire array gets a single binding. Right now we do not implement this behavior correctly.

NormalMapFiltering.ps.glsl not compiling

Effect/NormalMapFiltering sample (default settings)

The error I'm getting is from glslang: the definition of perturbNormal is missing. The definition is actually missing from the GLSL string we send, even though _MS_USER_NORMAL_MAPPING is defined.

GLSL layout rules for `uniform` and `buffer` blocks

There is code in TypeLayout.cpp that tries to implement the std140 and std430 rules, but I have little confidence that it is being invoked correctly.

Tasks:

  • Set up some reflection-generation tests for GLSL, so we can see how layouts are being computed

    • Need to be careful when defining the expected output here; should probably run the same input through glslang when generating baselines to double-check offsets
  • Ensure that we are picking up the rules specified as a layout attribute and applying them correctly.

  • Ensure that given GLSL source we are picking appropriate rules by default when nothing is specified (e.g., std140 for all uniform blocks, and std430 for all buffer blocks)

  • Make sure to emit downstream code that reflects the layout choices we make, either by applying a layout attribute to the block, or by applying layout(offset=...) to each member. We should be conservative and try not to require too many extended features that could make it harder to output portable OpenGL GLSL later.

GLSL `std430` rules are not implemented correctly

Right now the std430 rules inherit the constant-buffer restrictions on struct and array alignment (which are the specific restrictions they are exempted from), but do not inherit the vec3 layout rules that pad vec3 out to be vec4-aligned (which they should use).

Adding this as an issue so that I can reference it in a test case.

`slangc` should allow specifying output file(s)

Right now slangc always just writes its output to the console, which is convenient for testing (and mirrors what fxc does by default) but it doesn't have any provision right now for specifying an output file.

In simple cases, we should be able to do something like:

slangc -o foo.dxbc -profile vs_5_0 -entry vsMain foo.hlsl

and get the result we expect (DXBC for the vsMain() entry point function output to the file foo.dxbc).

Ideally, the front-end driver should be able to infer the desired output format based on the file extension provided, so that you could change that command line to use -o foo.spv or -o foo.dxil and it would Do What I Mean.

(Cross-compilation from HLSL/Slang to GLSL should probably be triggered in a similar fashion, e.g.:

slangc foo.hlsl -o foo.glsl

That seems like a slightly more complex feature than what this issue is trying to get at.)

As a simple starting point, this should only be allowed for compilations that involve a single entry point, and output to one of the existing formats that is designed for single entry points: DXBC, DXIL, or SPIR-V.

Longer term, we should define a container format that can hold output for multiple entry points (in any format), but that is a larger change.

Namespace abuse

The existing "Spire" code made heavy use of namespaces, but that just complicates what we are trying to accomplish in a lean-and-mean codebase.

I'm not 100% decided on what an ideal convention should be, but my first stab would be:

  • All user-face C++ interfaces (currently just inline wrapper stuff around the C API) will reside in the slang namespace

  • All implementation stuff will reside in the Slang namespace (notice capitalization) with no nesting.

  • If we really need multiple namespaces (e.g., multiple back-ends need types with similar names), then it probably makes sense to give each a distinct top-level namespace with a Slang prefix, just to keep things flat.

I'm not enamored of having a distinct namespace for public API vs. private implementation. I might advocate for finding a different way to expose the API that avoids the need for the opaque wrapper types, so that we can actually expose the same namespace and type names (potentially making Slang more debuggable for client code).

Convert Single-Pass Stereo from HLSL to GLSL

The Vulkan extension is called VK_NVX_multiview_per_view_attributes
The extension spec is here

DefaultVS.slang outputs to NV_X_RIGHT and NV_VIEWPORT_MASK, which should be converted to the new GLSL builtins.
When we detect that SPS is used, we need to output to gl_PositionPerViewNV.
I haven't read the spec yet; the extension is more permissive than what SPS requires. For now we only need to support what's required for SPS.

Generate a declaration for `gl_PerVertex` when cross-compiling

If we don't generate an out gl_PerVertex block when generating GLSL, then glslang will generate one behind our backs, and it seems to always give it layout(location = 0) which is not helpful.

To avoid this, we need to automatically generate an appropriate gl_PerVertex declaration, and ensure that it gets a location after any user vertex outputs.

Falcor: design for exposing "modules" to user code

We need a POR for how a conceptual module like Material will be exposed to user code in both HLSL and GLSL in a way that works with:

  • The limitations of both languages and the compilers that will be used for each (e.g., the non-overlapping bugs in both fxc and dxc)

  • The constraints of what we are willing to let the "rewriter" mode in Slang do (e.g., we currently don't let it rewrite anything in function bodies)

Of course we also want a model that is as usable and natural for users as possible.

Going back to the "rosetta stone" sketch that outlined the "rewriter" architecture, there are two key challenges:

  1. What to do when a module conceptually contains both "ordinary" uniform parameters (e.g., a float3) and "resource" parameters (e.g., a Texture2D). The various languages/compilers we need to work with have varying levels of support for types that mix up resource and ordinary parameters.

  2. What to do when there is a single conceptual type Material, but there might be specialized versions of it used in specific cases (but not all)?

and the POR for how to solve them is something like:

  1. This is officially Falcor's problem, not Slang's, since a complete solution to the mixed-type-struct problem is out of scope for what the "rewriter" is supposed to do. That said, I expect that if we can find a way to solve the problem using Slang, nobody is actually going to complain.

  2. The intention is to expose a syntax that the user sees as a macro invocation, e.g., PARAMETER(Material, m);, but that is actually custom syntax that can be expanded to a different type based on how a parameter will be specialized.

Item (2) obviously requires a complete design and has API implications, so it is the important bit to focus on first. Item (1) is actually conceptually straightforward, even if there is a lot of detailed work in implementing it.

Allow calling out to `dxc` for "pass through" and DXIL generation

The front can currently call out to fxc and glslang both for use as a "pass-through" compiler (bypassing the Slang compiler almost completely) and as a means to generate DX bytecode ("DXBC") and SPIR-V.

We should add support for calling out to the new HLSL compiler dxc as an HLSL-to-DXIL (and eventually HLSL-to-SPIR-V) compiler.

I would not advocate for putting dxc into our build anywhere, since CMake, LLVM, and clang make for a pretty heavy footprint on what is currently a lightweight project. Instead, we should follow a similar approach to what we do when interacting with d3dcompiler_47.dll for fxc, and assume that clients who want to use dxc will either ensure it is installed on end-user systems, or incorporate it into their build.

Cross-compilation of SkyBox.ps.slang fails

The last statement in the Slang file is:

return gTexture.Sample(gSampler, normalize(dir), 0);

which is converted to the following illegal statement in GLSL, with the integer literal `0` wrongly emitted as `float(0)`:

gTexture.Sample(gSampler, normalize(dir), float(0));

To reproduce, run the Effects/EnvMap sample.

Lowering functions with default parameter values

GLSL doesn't support default values on function parameters, so we need to eliminate their use during lowering.

A simple strategy would be to lower the original function, and then generate a bunch of helpers that take fewer parameters and call the original with the default values plugged in.

The main downside of that approach is that it could cause problems if the signature of a generated helper matches an existing function of the same name (although any such function should already have created an overload ambiguity in the original code).

Another alternative is to lower the call sites differently, by plugging in the default-value expression for missing arguments. That would handle many simple cases, but would create problems if the default-value expression ever relied on the lexical context of the original function.

I think we should go with the first option, and rely on a more general renaming pass to solve the collisions if they ever arise.
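As a sketch of the first option, here is the C++ analogue of the proposed lowering (the `shade` function and the `0.5` default are made up for illustration): the original function is emitted with its full parameter list, and a generated helper overload plugs the default value in.

```cpp
// Original function, lowered with its full parameter list intact.
float shade(float n, float bias) { return n + bias; }

// Generated helper: one fewer parameter, forwards the default value.
// A renaming pass would only need to step in if this signature ever
// collided with a user-written overload.
float shade(float n) { return shade(n, 0.5f); }
```

A call site written as `shade(x)` in the source then lowers to a call to the one-argument helper, which behaves identically to `shade(x, 0.5)`.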

Introduce a distinct `Name` type for identifiers

We currently use `String` far too much in the code; in particular, we do string comparisons for identifiers, and string-based lookups when resolving names during semantic checking.

A cleaner approach would be to have the lexer intern identifiers as it encounters them, storing a pointer to a uniqued `Name` in each identifier token. Later stages can then use the names directly, and most dictionaries can be keyed on these pointers instead.
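A minimal sketch of such an interning table (the `NamePool`/`getName` names here are hypothetical, not Slang's actual API): the pool owns exactly one `Name` per distinct spelling, so pointer equality substitutes for string comparison.

```cpp
#include <string>
#include <unordered_map>

// A uniqued identifier; tokens would store a Name* instead of a String.
struct Name { std::string text; };

// Owns one Name per distinct spelling. unordered_map never invalidates
// pointers to its elements on rehash, so the returned Name* stays stable.
struct NamePool {
    std::unordered_map<std::string, Name> names;
    Name* getName(std::string const& text) {
        return &names.emplace(text, Name{text}).first->second;
    }
};
```

Dictionaries in later stages can then hash the `Name*` itself rather than re-hashing the string contents.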

HLSL: Parse and reflect user-specified root signature

HLSL allows a shader entry point to specify the root signature to use as an attribute:

[RootSignature("...")] void main(...) { ... }

The string literal there is expected to be provided via a `#define`, and in fact `fxc` seems to have special support for compiling a `#define`d string into a root signature (so it is a `#define` that gets linkage? What happens if I `#undef` it, or have multiple definitions over the course of one file?).

The string literal that specifies a root signature encloses some syntax that looks more or less like a bunch of constructor calls with a bit of key = value argument-passing sugar.

See [this page][hlsl-root-sig] for the MSDN documentation of the feature.

[hlsl-root-sig]: https://msdn.microsoft.com/en-us/library/windows/desktop/dn913202(v=vs.85).aspx

Slang's current approach is to pass through attributes it doesn't understand, so shaders using this feature shouldn't be rejected today (modulo support for implicit concatenation of string literals, which we need to add).

A more complete implementation would require:

  1. Add the RootSignature attribute as a known attribute we actually parse/check
  2. Add a dedicated sub-parser (we can probably leverage the existing lexer and related infrastructure) to handle root signatures and parse them into some kind of AST.
  3. Figure out an appropriate reflection interface for querying this data (maybe a satellite library that just returns the raw D3D data structures)
  4. Maybe consider a feature to allow attaching root signature names to a compile request, without need for any entry points
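For reference, a root signature of the kind the sub-parser would need to handle looks something like this (adapted from the documented syntax; the particular bindings are illustrative, not taken from any real shader):

```hlsl
#define MyRS "RootFlags(ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT), " \
             "CBV(b0), " \
             "DescriptorTable(SRV(t0, numDescriptors = 2), UAV(u0)), " \
             "StaticSampler(s0)"

[RootSignature(MyRS)]
float4 main(float4 pos : SV_Position) : SV_Target { /* ... */ }
```

Note that this idiom depends on the implicit concatenation of adjacent string literals, which is exactly the gap in the current pass-through path.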

Properly handle errors from downstream compiler

Right now I just `exit(1)` if either glslang or `D3DCompile` fails on the lowered input. This needs to be changed so that these failures come across as ordinary diagnostics from the perspective of the user.

Error when compiling compute shaders

Slang strips all declarations from the GLSL before sending it to glslang.

I send:

#extension GL_ARB_compute_shader : require

layout(set = 0, binding = 1, rgba32) uniform image2D gOutput;
layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in;

void main()
{
    imageStore(gOutput, ivec2(0,0), float4(1,1,1,1));
}

Slang sends:

#version 420
#extension GL_ARB_compute_shader : require

#line 33 0
void main_(){
{
    imageStore(gOutput, ivec2(0,0), float4(1,1,1,1));}
}
void main(){
main_();
}
