
Thread: Quick Maya CGFX tutorial.

  1. Vailias

    Quick Maya CGFX tutorial.

    Not sure if this is the best place, but I didn't see a better forum.
    Since it's been requested, here's a quick tutorial of what does what with Maya's CgFX plugin.

    First off, realize that any Cg shader is made up of two parts: the vertex program (or shader) and the pixel (or fragment) shader. The pixel shader does the actual shading of the model, even if it just passes on the info from the vertex shader. Why? Because that's how the hardware is set up: the vertex processor processes vertices, and the fragment processor handles pixel info.

    I'll recommend downloading the Cg Toolkit from NVIDIA http://developer.nvidia.com/object/cg_toolkit.html and looking in your maya/presets/CGFX/examples/ folder for the included example shaders. Most of them are semi-useful, but none of them make for a perfect starting point. At least they can show you structure and syntax.

    The pixel shader and vertex shader components are much like a regular function or method in your programming language of choice.

    return_type FunctionName (parameters)

    Now a specific difference is that Cg (and HLSL for that matter) has certain keywords that denote preset memory locations/types in the GPU architecture.
    These are
    NORMAL
    POSITION
    TEXCOORD
    COLOR
    and there may be a few others, but those are the important ones for the inputs and outputs of the vertex and pixel shaders.

    Some things to know specific to Maya: most Cg shaders you'll see out there specify some of their input parameters using the keyword uniform. For Maya this is pretty much unnecessary and actually complicates things. Normally these parameters are passed from the host program to the shader, but since Maya exposes the shader's global variables directly in its interface, the uniform qualifier is redundant.
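    To illustrate (a minimal sketch; tintColor is just a made-up name):

    // many Cg samples do this and pass the value in from the host application:
    // uniform float4 tintColor;
    // for Maya's CgFX plugin a plain global is enough, and it shows up
    // as an attribute on the cgfxShader node:
    float4 tintColor = {1.0, 1.0, 1.0, 1.0};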

    So about these variables.

    Anything you want to be able to tweak in the shader through Maya's interface MUST be declared as a variable outside the shaders.

    You can use any of the normally supported Cg variable types (read the user's manual in the toolkit; there are too many to go over in any detail here), and they will be represented in Maya's interface in a number of ways.

    the basic syntax is

    variableType VariableName : REGISTERTYPE
    <
    Parameters
    > = defaultValue;

    The register type is optional. You can choose from the list above (plus a few Maya-specific keywords covered below). If you declare a variable's register type as POSITION, it will be represented as a right-clickable menu in Maya that you can put objects into. This is what the light and camera positions are declared as in my shader.

    If you declare its register type as Diffuse, it will be a color slider with an alpha slider below it.
    If you declare a float4 (or 3 or 2) type variable without the Diffuse or POSITION types, it will simply be a linked set of floating-point boxes, like an object's position in the Attribute Editor.
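    A couple of quick sketches of what that looks like (the names are made up, but the declarations mirror the ones used later in this thread):

    float4 light_Pos : POSITION //right-clickable object menu in Maya
    <
    string Space = "World";
    > = {0.0, 0.0, 100.0, 1.0};

    float4 base_Color : Diffuse //color swatch with an alpha slider
    <
    string UIWidget = "Color";
    > = {0.5, 0.5, 0.5, 1.0};

    float3 some_Offset = {0.0, 0.0, 0.0}; //no register type: just a set of linked float boxes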

    Parameters are interface parameters available through Cg. Many of them do not do a damn thing in maya. Some that do are
    string UIWidget = "Slider";
    float UIMin
    float UIMax
    Using the UIWidget = "Slider" will make a float type variable a slider that you can change values on easily. UIMin and UIMax are what control the slider's range.
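    For example (a small sketch; the name and range are just placeholders):

    float bump_Amount
    <
    string UIWidget = "Slider";
    float UIMin = 0.0;
    float UIMax = 2.0;
    > = 1.0;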

    Bool type variables are ALWAYS checkboxes. Always.

    Back to the shader types.

    Vertex shaders feed into pixel shaders. This means they must output data usable by a pixel shader. To this end you must declare structures. The concept is similar to struct declarations in C/C++.

    syntax is

    struct structName
    {

    dataType Name : REGISTERTYPE;
    //as many as you need here
    };

    Your vertex program's return type will be of this struct type, i.e.
    structName VertexShader (parameters)
    {
    structName OUT;
    //do stuff
    OUT.Name = compatibleDataType;
    return OUT;
    }

    The pixel shader must be of type float4 so it can return a color, that is, an RGBA value.
    Oh right, important point here: colors are not 0-255, they are 0-1 (*or more, but that's another topic) in floating-point data. I.e. 0,0,0 is black, 1,1,1 is white, 0.5,0.5,0.5 is medium gray.
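    In code that looks like this (just an illustration):

    float4 black = {0.0, 0.0, 0.0, 1.0}; //black, fully opaque
    float4 white = {1.0, 1.0, 1.0, 1.0}; //white
    float4 gray = {0.5, 0.5, 0.5, 1.0}; //medium gray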

    In addition, the return value of the pixel shader must be bound to the COLOR register type.
    This can be done either like this:

    float4 shaderName (parameters) : COLOR
    {
    float4 rCol;
    //do stuff
    return rCol;
    }
    or, using an out parameter instead of a return value:
    void shaderName (parameters, out float4 rCol : COLOR)
    {
    //do stuff, writing the result into rCol
    }

    Either way, the output must be bound to the COLOR register for it to end up as the pixel's color.
    (yes you can write pixel shaders that alter things other than color, but thats outside the scope of this)

    Last but not least, the vertex shader needs some info from the application, such as which vertex it is, where it is, etc. This is the role of the appdata struct:
    struct appdata
    {
    float3 Position : POSITION;
    float4 Normal : NORMAL;
    float2 texcoord0: TEXCOORD0;
    float3 Tangent : TEXCOORD1;
    float3 Binormal : TEXCOORD2;
    };

    Just use that one for Maya shaders. It works fine, and you can't really do much more (i.e. second UV channels etc.) with Maya's implementation of Cg.

    Coming next: how to write a simple diffuse and specular Blinn shader in vertex and pixel flavors.

  2. Marcus Dublin
    Thanks for posting this information as always, Vailias. I'm sure this will be extremely beneficial to the 3D community, and I've moved it to the appropriate section. Thanks again!

    Marcus

  3. MM
    thanks a lot :thumb: this should be quite helpful.

  4. Vailias
    Ahh thanks for moving it.
    I'm going to wind up editing that post tomorrow for better structure and to add some information I just looked up in Maya's API.

    And as promised, here's a simple Blinn vertex shader, step by step. Feel free to follow along, copy the code into a text file, and save it as a .cgfx file to test it as we go along.

    First off, some information about the Blinn specular model. It's an approximation of the Phong model that calculates faster.
    The Blinn model calculates a vector halfway between the light vector and the camera vector (the direction to the light and the direction to the camera from the surface). This half vector is then compared to the surface normal; the closer they are together, the brighter the highlight. The reason this works is that as the half vector approaches the surface normal, the camera vector approaches the reflection vector of the light. It's just a faster approximation than doing a number of other vector calculations.
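    In code it boils down to something like this (just a preview sketch; the real variables get set up step by step below):

    float4 halfVec = normalize(lightVec + eyeVec); //halfway between the light and view directions
    float NdotH = saturate(dot(Normal, halfVec)); //closer to the normal = brighter
    float spec = pow(NdotH, specPower); //specPower controls how tight the highlight is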

    So that said we'll need a few things.
    First we need the World View Projection transformation matrix and the World transformation matrix.
    float4x4 World : World;
    float4x4 WVP : WorldViewProjection;
    World and WorldViewProjection are keywords parsed by Maya's CgFX plugin that supply data of the specific type.

    next we'll need some information regarding the light and camera locations for doing the specular calculations later.

    float4 camera_Position : Position
    <
    string Space = "World";
    > = { 900.0, 900.0, 0.0, 1.0};

    float4 light_Position : Position
    <
    string Space = "World";
    > = {0.0f, 0.0f, 1500000.0f, 0.0f}; //assuming a Z-up world this is basically a distant light.
    Again, the Position semantic and the Space = "World" annotation are parsed by the plugin to supply the data in the right format.

    we need one more thing to start, and that's the diffuse color of the object
    float4 diffuse_Color : Diffuse
    <
    string UIWidget = "Color";
    > = {0.5, 0.5, 0.5, 1.0};
    Note that this is declared with the Diffuse register type; this is, again, a Maya plugin keyword, not a Cg register type. A full list of keywords is coming soon.

    So now we can get on to calculations. First we need two structs. One to get the data from maya, and one for the vertex program to pass to the fragment program.
    struct appdata
    {
    float3 Position : POSITION;
    float4 Normal : NORMAL;
    float2 texcoord0: TEXCOORD0;
    float3 Tangent : TEXCOORD1;
    float3 Binormal : TEXCOORD2;
    };

    struct VS_vertexOut
    {
    float4 hPosition : Position;
    float4 vertColor : Color;

    };
    Note that you could do with less in the appdata struct, but for where this goes eventually, just leave it be.
    Also notice the hPosition member variable in the VS_vertexOut struct. This is REQUIRED for anything to render. The pixel shader needs to know where things are in homogeneous clip space. (More on that in a second.)

    Now that we have the structs defined we can write the vertex shader.
    VS_vertexOut blinnVert (appdata IN)
    {
    First two lines: this declares the struct type VS_vertexOut as the return type, gives the vertex program the name blinnVert, and takes in an appdata-type struct named IN.

    we're going to go piece by piece here, and just start with diffuse.
    VS_vertexOut OUT;

    float4 PosWorld = float4(IN.Position, 1.0);
    OUT.hPosition = mul(WVP, PosWorld);

    float4 lightVec = normalize(light_Position - PosWorld);


    float4 diffuse = diffuse_Color * saturate(dot(lightVec, IN.Normal));

    float4 outColor;
    outColor = diffuse;
    OUT.vertColor = outColor;

    return OUT;

    }
    Line by line:
    First we need a temporary struct of the program's return type to return at the end.
    That's the VS_vertexOut OUT; part.

    Next, the appdata position comes in as a float3 (3-component vector), but we'll need it as a float4 to do anything with it.
    float4 PosWorld = float4(IN.Position, 1.0); creates a 4-component vector named PosWorld, then uses the float4 constructor to set the first xyz elements to IN.Position and the w element to 1. Must be one. Do not use zero. With w set to zero the vector is treated as a direction, so the translation part of any matrix you multiply it by is ignored, and you will get incorrect data.

    Next we get the homogeneous position of the vertex: multiplying the World View Projection matrix by the position of the vertex yields the vertex position in projection space.
    Notice the function mul. mul is used to multiply a variety of mathematical types, but it's most often used for matrix-to-matrix and matrix-to-vector multiplication.

    The next thing we need is the light vector: the direction from the vertex to the light. So we subtract the vertex's position from the light's world position, leaving the offset of the light from the vertex, or its vector. This is then normalized so it calculates properly.
    For those who don't know, normalizing a vector scales it to length 1, preserving only the directional information, and for these calculations direction is all we're interested in.
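    If you're curious, there's nothing magic about normalize; a one-line sketch with a made-up vector someVec:

    float4 unitDir = someVec / length(someVec); //same result as normalize(someVec): direction kept, length forced to 1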

    So now it's time to light the object.
    float4 diffuse = diffuse_Color * saturate(dot(lightVec, IN.Normal));
    This takes the diffuse color of the object and multiplies it by the dot product of the normalized light vector and the vertex normal. The dot product is related to the angle between two vectors. Since the vectors are normalized, the dot product lies between -1 and 1: it is 1 when the vectors are parallel, 0 when they are perpendicular, and negative when they are more than 90 degrees apart. The saturate function clamps the value between 0 and 1. This is important for when we add more lighting functions later, as we don't want subtractive lighting behavior.
    Next we create another float4 for the return color, set it equal to the diffuse term, and return it as the structure's vertColor member.
    The reason this is done, rather than simply returning the diffuse, is so that when the shader gets more components we can calculate each component by itself, then add them together to get the final pixel color.

    Now it's time for the pixel shader.
    float4 VS_Color (VS_vertexOut IN) : COLOR
    {
    return IN.vertColor;

    }
    simple as can be. Declares a float4 return type, gives it a name, takes in the output of the vertex program, and assigns the output to the COLOR register of the GPU fragment processor.
    All this does is return the vertex color information as previously calculated. The linear interpolation between vertices is handled for us.

    The LAST thing you'll need, so you can actually use your shader in Maya, is a technique declaration.

    technique Vertex_Shaded_Blinn
    {
    pass
    {
    DepthFunc = LEqual;
    DepthTestEnable = true;
    VertexProgram = compile arbvp1 blinnVert();
    FragmentProgram = compile arbfp1 VS_Color();

    }

    }
    All technique declarations start with the technique reserved word, followed by the technique name. No spaces allowed, sorry.
    The depth function and depth-test enable are needed to render properly. LEqual means a pixel passes the depth test (and is drawn) when its depth is less than or equal to the value already in the depth buffer; enabling the depth test turns this comparison on.

    VertexProgram = compile arbvp1 blinnVert();
    FragmentProgram = compile arbfp1 VS_Color();

    This is the meat of the program. VertexProgram and FragmentProgram are obviously the vertex and pixel shaders you want to use for this technique. The next bit may look a bit odd. It instructs the CgFX plugin to compile the specified program to a certain profile. The profiles are for OpenGL or DirectX.
    arbvp1 and arbfp1 are the ARB vertex and fragment profiles for OpenGL. Other profiles are:
    vp30 and fp30 (NVIDIA-specific OpenGL profiles)
    vs_3_0 and ps_3_0 (DirectX Shader Model 3)
    vs_2_x and ps_2_x (DirectX Shader Model 2.x; replace x with the proper number, 0 works fine)
    vs_1_1 and ps_1_1 (DirectX Shader Model 1.1, legacy support)

    For CgFX shaders in Maya, stick with arbvp1 and arbfp1; they're a little more sure to work. For the ASHLI shader plugin the DirectX models are fine, but that's a whole other topic.

    OK, so now if you've copied all the code sections into a file and saved it as a .cgfx file, you can load up Maya and make a new cgfxShader node to load your .cgfx file into. The results should look something like this:

    So congratulations:
    you've just replicated the basic lighting calculations of Maya's viewport.
    Next up: Adding Specularity, an Ambient Light, and Textures!
    After that: Moving all this to a pixel shader.

  5. Vailias
    OK, now for the specularity.

    A few things go into making a specular highlight, one of which we need a controllable variable for: the specular power, or shininess, of the object. While we're at it we may as well add in a specular color option too, since it's easy and makes for nice effects. (Add these in the first section of variable declarations.)
    float4 specular_Color : Diffuse
    <
    string UIWidget = "Color";
    > = {1.0, 1.0, 1.0, 1.0};

    float specular_Power
    <
    string UIWidget = "Slider";
    float UIMin = 1.0;
    float UIMax = 200.0;
    > = 70.0f;
    Now go back to the vertex shader; we need to add in a few lines to make the specular happen.
    First and foremost we need to calculate that half vector mentioned before, so we can do the rest of the Blinn calculations (add these after lightVec in the vertex program):
    float4 eyeVec = normalize(camera_Position - PosWorld);
    float4 halfVector = normalize(eyeVec + lightVec);
    float NDotH = saturate(dot(IN.Normal, halfVector));
    eyeVec is computed the same way as the light vector: the normalized direction from the vertex to the camera.
    The half vector is simply the eye vector plus the light vector, normalized. This works because the light and eye vectors are both relative to the position of the vertex, so adding them and normalizing gives the direction halfway between the two.
    Next is the comparison of the half vector to the surface normal, called NDotH. That is Normal dot Half vector.
    We then multiply the specular color by NDotH raised to the specular power.
    (add this after diffuse in the vertex shader)
    float4 specular = specular_Color * pow(NDotH, specular_Power);
    If you remember any of the algebra you took in high school, you may remember graphing power functions. As the exponent rises, the curve over the 0-1 range hugs zero for longer and then shoots up toward 1.0, approaching a square corner. This is the same idea, but illustrated by the tightness of the specular highlight: a higher specular power gives a smaller, sharper highlight.

    Now we add the specular component to the diffuse in outColor, save, and refresh the shader in Maya.
    outColor = diffuse + specular;

    So how about an ambient light, so you can easily fill in those harsh shadows? Ambient lighting is really easy, as it's just the material color multiplied by some scalar or color value.
    (add this to the variable section)
    float4 ambient_Color : Diffuse
    <
    string UIWidget = "Color";
    > = {1.0, 1.0, 1.0, 0.2};
    (add this to the vertex shader after specular)
    float3 ambcol = diffuse_Color.rgb * ambient_Color.rgb * ambient_Color.a;
    float4 ambient = float4(ambcol, 1.0);
    Notice ambcol. This is the first use of the swizzle operator in this shader: the period ( . ) before rgb on the end of diffuse_Color. The swizzle operator is a sort of vector component selection mask. What I'm using it for here is multiplying the diffuse color's rgb components by the ambient color's rgb, but leaving the alpha channel alone. The ambient color's alpha value is used as an intensity slider rather than a transparency control.
    Then we pack the rgb components into a new color vector with its alpha value set to 1.0. Normally you'll want to use your diffuse texture's alpha value so transparency stays transparent, but that's for later.
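    A few quick swizzle examples (made-up values, just to show the idea):

    float4 c = float4(1.0, 0.5, 0.25, 1.0);
    float3 rgb = c.rgb; //grab the first three components
    float a = c.a; //grab just the alpha
    float3 bgr = c.bgr; //components can even be reordered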
    Now we just add the ambient component into outColor.
    outColor = diffuse + specular + ambient;

    And voilà, there's the full shader.

    Now I’m sure you’re all thinking, hey useful, but what about textures man?
    Well just adding a diffuse texture is terribly simple. First you’ll need 2 global variables, a texture and an associated texture sampler. So add this in your variable section of your cgfx file.
    texture DiffuseTexture : Diffuse
    <
    string ResourceName = "";
    string ResourceType = "2D";

    > ;
    sampler2D DiffuseSampler = sampler_state
    {
    Texture = <DiffuseTexture>;
    minFilter = LinearMipMapLinear;
    magFilter = Linear;
    };
    ResourceName is the file name of the default texture you want loaded in the shader. ResourceType sets the texture to 1D, 2D, or 3D.
    sampler2D is a logical device that, well, samples the texture at each pixel. minFilter is the minification filter, used when the texture has more texels than the screen pixels it covers (the texture is being shrunk); LinearMipMapLinear is the normal way of using mipmaps for smaller texture detail.
    magFilter is the magnification filter, used when the texture is being enlarged (fewer texels than the screen pixels it covers). It is limited to Linear and Nearest for Maya.
    Now we need to alter the vertex shader slightly, so it can pass along UV coords to the pixel shader.
    Add
    float2 texcoord0 : TEXCOORD0;
    to the vertex output struct (VS_vertexOut).
    And add
    OUT.texcoord0 = IN.texcoord0;
    to the vertex shader just before the return statement.
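    Assembled, the output struct and the tail of the vertex shader now look like this (just the pieces above dropped into place):

    struct VS_vertexOut
    {
    float4 hPosition : Position;
    float4 vertColor : Color;
    float2 texcoord0 : TEXCOORD0; //new: carries the UVs to the pixel shader
    };

    //...and at the very end of blinnVert:
    OUT.texcoord0 = IN.texcoord0; //pass the UVs straight through
    return OUT;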

    Now of course we need to alter the pixel shader so that it can get the texture information
    float4 diffTex = tex2D(DiffuseSampler, IN.texcoord0.xy);
    return IN.vertColor * diffTex;
    The first line fetches the color of the texture at this pixel. It gets its information via the tex2D function (the capital D is important). It takes a sampler state and a texture coordinate as its arguments: DiffuseSampler is the sampler state, and the UVs have been passed in from the vertex shader.
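    So the full pixel shader now reads (the two lines above dropped into the existing function):

    float4 VS_Color (VS_vertexOut IN) : COLOR
    {
    float4 diffTex = tex2D(DiffuseSampler, IN.texcoord0.xy); //sample the diffuse texture at this pixel's UVs
    return IN.vertColor * diffTex; //tint the vertex lighting by the texture color
    }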

    Now if you test this you’ll notice that the texture appears a bit dark. This is thanks to the multiplication in the previous example. So we can solve this by modulating the output by a scaling value.
    Add
    float modulation
    <
    string desc = "Texture multiplier";
    string UIWidget = "Slider";
    float UIMin = 1.0;
    float UIMax = 10.0;
    > = 1.0f;
    To the variables section and change the pixel shader’s return statement to look like
    return IN.vertColor * diffTex * modulation;

    This allows you to scale the brightness of the object with a slider.
    So there ya go: a textured Blinn shader with vertex lighting, just like Maya's viewport renderer.

  6. zyphrwind
    Hi, I'm new here. I just downloaded this CgFX shader and I really liked it, but I'm having problems adding the specular map. Is this shader compatible with Maya 2008 Ext 2? The problem is that when I enable the specular map I get this dark, shadow-like color on the model. I tried to use the apply color but it doesn't work. Here are the images...
    without specular

    with specular
