Implement particle based water simulation in Unity (II: start shading)

Recently I have been studying Unity's scriptable render pipeline (see this article for details); later projects will be built on top of that pipeline.
This post is also based on this article, and the Unity implementation here is only meant as a reference.
Because simulating particles this way produces terrible overdraw, running it in real time in a game is honestly a little extravagant. But if it is only used as a fixed fluid effect whose scene never changes, a customized version can still achieve a good result.

That said, this is still a real-time simulation, and some details are not handled well, such as what happens after the liquid collides with an object; simulating more physically accurate particles would be too expensive, so only a simple version is implemented. If you want a better result, customize it yourself.

Also, the data refresh was moved from the physics frame to the per-frame update, which makes the liquid spray more continuous. The initial loop was removed as well: instead of looping, the code simply waits until the data can be refreshed, because that loop made the frame rate very unstable. The updated effect looks like this:

Spray effect implemented in the custom render pipeline

1, Generating vertices

Particles are generated from the vertices produced by tessellation, i.e. in the format described in my earlier article on particle generation. This time, all of the vertex-generation code lives in its own file, which makes the whole thing more modular, instead of putting the entire process into a single file as I did at the beginning.

First, all the data we pass in goes through the tessellation structures. Two structures are defined to distinguish input from output; in fact their fields are identical, because the data has to be handed on to the geometry shader for further computation.

A side note: I had not read the API carefully before and assumed Unity only provides UV coordinates as float2, but meshes can actually carry float4 UVs. If necessary, change the way the mesh data is set so that every channel is fully used.

struct TessVertex_All{
    float4 vertex : POSITION;
    float4 color : COLOR;
    float3 normal : NORMAL;
    float4 tangent : TANGENT;
    float2 uv0 : TEXCOORD0;
    float2 uv1 : TEXCOORD1;
    float2 uv2 : TEXCOORD2;
    float2 uv3 : TEXCOORD3;
    float2 uv4 : TEXCOORD4;
    float2 uv5 : TEXCOORD5;
    float2 uv6 : TEXCOORD6;
};

struct TessOutput_All{
    float4 vertex : Var_POSITION;
    float4 color : Var_COLOR;
    float3 normal : Var_NORMAL;
    float4 tangent : Var_TANGENT;
    float2 uv0 : Var_TEXCOORD0;
    float2 uv1 : Var_TEXCOORD1;
    float2 uv2 : Var_TEXCOORD2;
    float2 uv3 : Var_TEXCOORD3;
    float2 uv4 : Var_TEXCOORD4;
    float2 uv5 : Var_TEXCOORD5;
    float2 uv6 : Var_TEXCOORD6;
};

Then comes the standard tessellation boilerplate, the same process as before. One note: the capability-detection macros for tessellation would have to be defined by hand under the SRP, so for convenience I simply removed them; most machines support tessellation anyway.

//Vertex-stage input; passed through unchanged
void tessVertAll (inout TessVertex_All v){}
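For context, the stages in this file are wired together in the pass with #pragma directives roughly like the following sketch. The entry names vertPassThrough and frag are illustrative (not from this article), and a return-style pass-through vertex function is shown because the real work happens in the hull, domain, and geometry stages:

```hlsl
//Illustrative wiring of the stages in this file; shader model 4.6 is the minimum for tessellation in Unity
#pragma vertex vertPassThrough
#pragma hull hull
#pragma domain domain_All
#pragma geometry geom
#pragma fragment frag
#pragma target 4.6

//Pass-through vertex stage (vertPassThrough is an assumed name):
//tessellation happens later, so just forward the data unchanged
TessVertex_All vertPassThrough (TessVertex_All v){
    return v;
}
```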

//Patch-constant function; prepares the per-patch tessellation factors
OutputPatchConstant hullconst(InputPatch<TessVertex_All, 3>v){
    OutputPatchConstant o = (OutputPatchConstant)0;
    float size = _TessDegree;
    //Tessellation factors for the three edges plus the interior
    float4 ts = float4(size, size, size, size);
    //The assignments below control how finely the triangle's three edges and its interior are subdivided
    //The value is essentially an int: 0 means no subdivision, and each +1 adds one more level
    //The edge factors are not used by us directly; they are consumed by the tessellator
    o.edge[0] = ts.x;
    o.edge[1] = ts.y;
    o.edge[2] = ts.z;
    //Interior tessellation factor
    o.inside = ts.w;
    return o;
}
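The factor above is a constant (_TessDegree). If needed, it could be made distance-based so that far-away emitters are tessellated less; a minimal sketch, where minDist and maxDist are illustrative parameters rather than properties from this project:

```hlsl
//Illustrative distance-based tessellation factor; the min/max distances are assumed parameters
float DistanceTessFactor (float4 vertex, float minDist, float maxDist, float tess){
    //Vertex position in world space
    float3 worldPos = mul(unity_ObjectToWorld, vertex).xyz;
    float dist = distance(worldPos, _WorldSpaceCameraPos);
    //Full tessellation up close, fading toward (almost) none by maxDist
    float factor = clamp(1.0 - (dist - minDist) / (maxDist - minDist), 0.01, 1.0);
    return factor * tess;
}
```

hullconst would then call this per control point instead of using the constant size.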

[domain("tri")]    //The input patch is a triangle
[partitioning("integer")]    //How to split; the comments above treat the factor as an int, so integer partitioning is assumed here (the original attribute was lost)
[outputtopology("triangle_cw")]    //Winding order of the generated triangles
[patchconstantfunc("hullconst")]    //The patch-constant function defined above; its return values drive the tessellator
[outputcontrolpoints(3)]    //The output patch is also a triangle, matching the input
TessOutput_All hull (InputPatch<TessVertex_All, 3> v, uint id : SV_OutputControlPointID){
    return v[id];
}
[domain("tri")]    //Required on the domain shader as well (the attribute was lost in the original listing)
TessOutput_All domain_All (OutputPatchConstant tessFactors, const OutputPatch<TessOutput_All, 3> vi, float3 bary : SV_DomainLocation){
    TessOutput_All v = (TessOutput_All)0;
    v.vertex = vi[0].vertex * bary.x + vi[1].vertex*bary.y + vi[2].vertex * bary.z;
    v.normal = vi[0].normal * bary.x + vi[1].normal*bary.y + vi[2].normal * bary.z;
    v.tangent = vi[0].tangent * bary.x + vi[1].tangent*bary.y + vi[2].tangent * bary.z;
    v.color = vi[0].color * bary.x + vi[1].color*bary.y + vi[2].color * bary.z;
    v.uv0 = vi[0].uv0 * bary.x + vi[1].uv0*bary.y + vi[2].uv0 * bary.z;
    v.uv1 = vi[0].uv1 * bary.x + vi[1].uv1*bary.y + vi[2].uv1 * bary.z;
    v.uv2 = vi[0].uv2 * bary.x + vi[1].uv2*bary.y + vi[2].uv2 * bary.z;
    v.uv3 = vi[0].uv3 * bary.x + vi[1].uv3*bary.y + vi[2].uv3 * bary.z;
    v.uv4 = vi[0].uv4 * bary.x + vi[1].uv4*bary.y + vi[2].uv4 * bary.z;
    v.uv5 = vi[0].uv5 * bary.x + vi[1].uv5*bary.y + vi[2].uv5 * bary.z;
    v.uv6 = vi[0].uv6 * bary.x + vi[1].uv6*bary.y + vi[2].uv6 * bary.z;
    return v;
}

2, Motion simulation

1. Receive vertices output from the tessellation shader

The code is as follows (example):

[maxvertexcount(12)]    //Required on a geometry shader; the real limit depends on how many vertices LoadWater emits, so 12 is an assumption (the attribute and braces were lost in the original listing)
void geom(triangle TessOutput_All IN[3], inout TriangleStream<FragInput> tristream){
    LoadWater(IN[0], tristream);
    LoadWater(IN[1], tristream);
    LoadWater(IN[2], tristream);
}
The LoadWater function determines the vertex's state, prepares the data, and calls the corresponding processing function.

2. Call the corresponding processing method according to the data

LoadWater determines whether the vertex is in its pre-collision motion phase (the curve phase) or its post-collision phase (the user-defined offset phase). The two phases store and process their data differently, so each gets its own handler.

First, determine the vertex's start time and phase. Particles in the same batch (one triangle) are not all emitted at once; they are spread across an emission window, so we must check whether the vertex has reached its emission time yet.

//Start time of this batch of particles; uv6.x stores the movement time
float beginTime = IN.tangent.w - _OutTime - _OffsetTime - IN.uv6.x;
//Emission time of this vertex; ramdom (spelling from the original code) is a per-vertex random float4
float outTime = _OutTime * ramdom.w + beginTime;
//The vertex has not reached its emission time yet, so emit nothing
if(_Time.y < outTime) return;

Then, based on the data, determine which phase the vertex is in and call the corresponding handler.

 //Offset phase, i.e. the period after the collision
 if(partiTime >= 1){
     float3 end = (float3)0;
     if(step(0.5, IN.color.x))
         end = float3(IN.uv3.xy, IN.uv4.x);
     else
         end =;    //Right-hand side lost in the original listing (the second-collision endpoint)

     Offset(IN, (_Time.y - outTime - IN.uv6.x) / _OffsetTime,
     	 tristream, ramdom, end );
 }
 else if(step(0.5, IN.color.x)){    //First ray hit
     OnePointEnd(IN, partiTime, tristream);
 }
 else{    //Second ray hit
     TwoPointEnd(IN, partiTime, tristream);
 }

3. Curve fitting

The vertex's position is obtained from the packed data: its world position at the current time is evaluated along a Bezier curve. After fitting, the vertex is handed to the function that generates and outputs the particle plane (Move_outOnePoint).

//Handler run when the first ray hits an object
void OnePointEnd(TessOutput_All IN, float moveTime, inout TriangleStream<FragInput> tristream){
    float3 begin = float3(IN.uv0.xy, IN.uv1.x);
    float3 center = float3(IN.uv2.xy, IN.uv1.y);
    float3 end = float3(IN.uv3.xy, IN.uv4.x);
    IN.vertex.xyz = Get3PointBezier(begin, center, end, moveTime);    //The left-hand side was lost in the original listing; writing back into IN.vertex is an assumption
    Move_outOnePoint(tristream, IN, moveTime, end);
}

//Handler for the second ray collision
void TwoPointEnd(TessOutput_All IN, float moveTime, inout TriangleStream<FragInput> tristream){
    float3 begin = float3(IN.uv0.xy, IN.uv1.x);
    float3 center = float3(IN.uv2.xy, IN.uv1.y);
    float3 end = float3(IN.uv3.xy, IN.uv4.x);

    float3 target1 = Get3PointBezier(begin, center, end, moveTime);

    begin = end;
    center = float3(IN.uv5.xy, IN.uv4.y);
    end =;    //Right-hand side lost in the original listing (the second curve's endpoint)
    float3 target2 = Get3PointBezier(begin, center, end, moveTime);
    IN.vertex.xyz = target1 + (target2 - target1) * moveTime;    //The left-hand side was lost in the original listing; IN.vertex is an assumption
    Move_outOnePoint(tristream, IN, moveTime, end);
}
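Get3PointBezier is used throughout but its body is not listed. Given the begin/center/end arguments, it is presumably a standard quadratic (three-point) Bezier evaluation; a minimal sketch:

```hlsl
//Quadratic Bezier: B(t) = (1-t)^2 * begin + 2t(1-t) * center + t^2 * end
//A sketch of what Get3PointBezier presumably computes; the real body is not shown in this article
float3 Get3PointBezier (float3 begin, float3 center, float3 end, float t){
    float s = 1.0 - t;
    return s * s * begin + 2.0 * s * t * center + t * t * end;
}
```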

4. Simulation after collision

The post-collision simulation is currently implemented rather casually; the movement mode taken from this article has little to it, so this part really has to be redone to suit each project.

//Offset phase
void Offset(TessOutput_All IN, float offsetTime, inout TriangleStream<FragInput> tristream, float4 random, float3 begin){

    if(offsetTime > 1 || offsetTime < 0) return;

    float3 dir0 = lerp(_VerticalStart, _VerticalEnd,;    //The third argument (the interpolation weight, presumably a random component) was lost in the original listing
    float3 normal = IN.normal;
    //Determine rotation matrix
    float cosVal = dot(normalize( normal ), float3(0, 1, 0));
    float sinVal = sqrt(1 - cosVal * cosVal);

    float3x3 xyMatrix = float3x3(-cosVal, sinVal, 0,
                                -sinVal, -cosVal, 0,
                                0, 0, 1);

    float3x3 yzMatrix = float3x3(1, 0, 0,
                                0, -cosVal, sinVal,
                                0, -sinVal, -cosVal);

    float3x3 xzMatrix = float3x3(-cosVal, 0, -sinVal,
                                0, 1, 0,
                                sinVal, 0, -cosVal);

    float3 targetDir = mul(xzMatrix, mul( yzMatrix, mul(xyMatrix, dir0) ));
    IN.vertex.xyz = begin + targetDir * offsetTime;    //The left-hand side was lost in the original listing; writing into IN.vertex is an assumption
    Offset_outOnePoint(tristream, IN, offsetTime);
}

3, Shading

1. Fragment shader input

First, the input structure of the fragment shader. The way this output is processed is the same as in this article, so only the input data is described.

struct FragInput{
    //UV coordinates of the particle plane, laid out the same way as a particle system's particle UVs
    float2 uv : TEXCOORD0;
    float4 pos : SV_POSITION;
    //Total lifetime of the particle (not otherwise used at present)
    float time : TEXCOORD2;
    //x = 0 during the curve phase, 1 after the collision
    //yzw stores the sphere center of this particle
    float4 otherDate : TEXCOORD3;
    //World-space position
    float3 worldPos : TEXCOORD4;
    float4 otherDate2 : TEXCOORD5;  //Reserved
};

2. Generate thickness data

Generating the thickness data is very simple: just return a value derived from the texture. The texture I use is a circular sprite with an alpha channel; after sampling it, multiply by the alpha curve of the current phase.

    float4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv) * _Color;

    #ifdef _CURVE_ALPHA
        if(i.otherDate.x < 0.1){
            col *= saturate( LoadCurveTime( i.time.x, _MoveAlphaPointCount, _MoveAlphaPointArray ) );
        }
        else {
            col *= saturate( LoadCurveTime( i.time.x, _OffsetAlphaPointCount, _OffsetAlphaPointArray ) );
        }
    #endif

The most important part of the thickness pass, though, is the blend mode. Because the values must accumulate, the blend mode to use is One One (additive). For flexibility it can also be exposed as material options:

[Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend ("Src Blend", Float) = 1
[Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Blend", Float) = 0
[Enum(Off, 0, On, 1)] _ZWrite ("Z Write", Float) = 1
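These properties only take effect if the pass references them; the usual pattern, sketched here with the program body omitted, is:

```shaderlab
// Render-state block of the thickness pass referencing the properties above (a sketch)
Pass {
    Blend [_SrcBlend] [_DstBlend]   // set to One One on the material for additive accumulation
    ZWrite [_ZWrite]
    // ... HLSL program ...
}
```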

Thickness map rendered with additive blending:

3. Generate normal and depth data

Since depth has to be written anyway when rendering the depth target, I write the normal data in the same pass.
Generating a normal for a particle is very simple. First we need the center of the sphere that the particle stands in for, which is found by stepping back from the vertex along the camera's view direction by the sphere's radius.

//worldVer is the vertex's world-space position; paritcleLen (spelling from the original code) is the sphere radius
float3 sphereCenter = worldVer - UNITY_MATRIX_V[2].xyz * paritcleLen;

The normal is then the normalized difference between the pixel's world position and the sphere center. Pixels outside the circle also produce data (the particle is really a flat plane), so they must be culled based on alpha.

float4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.uv);
if(i.otherDate.x < 0.1){
    col *= saturate( LoadCurveTime( i.time.x, _MoveAlphaPointCount, _MoveAlphaPointArray ) );
}
else{
    col *= saturate( LoadCurveTime( i.time.x, _OffsetAlphaPointCount, _OffsetAlphaPointArray ) );
}
clip(col.a - 0.5);       //Cull pixels outside the circle

float3 normal = normalize(i.worldPos - i.otherDate.yzw);
//Textures cannot store negative values, so remap the normal from [-1,1] to [0,1]
return float4(normal * 0.5 + 0.5, 1);
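Whatever later pass samples this texture has to undo that remapping. A minimal sketch, where _WaterNormalTex is an illustrative name rather than one from this project:

```hlsl
//Decoding the [0,1]-packed normal back to [-1,1]; _WaterNormalTex is an assumed name
float3 enc = SAMPLE_TEXTURE2D(_WaterNormalTex, sampler_WaterNormalTex, i.uv).xyz;
float3 worldNormal = normalize(enc * 2.0 - 1.0);
```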

The generated normal and depth maps are not shown here; because of precision they look almost entirely black anyway.


1. A small problem

So far, the rendering of the three textures is complete. It should be explained that these draws are issued through the scriptable render pipeline, because it lets me render the data to exactly the targets I want, which cannot be done in the built-in pipeline.
In the built-in pipeline you can only render by assigning a Camera's targetTexture, so the two color maps could be produced with two dedicated cameras. There is still a small catch, though: I have not found a way to hand over a render texture's default depth data in Unity, which is why I initially used a separate camera just to render depth.

2. Supplement

In fact there is a bigger problem with this particle simulation: the two phases create intersection artifacts. For example, the edge normals replace the curve normals even though the water at the edges is not actually very thick. So I do not think this mode of particle simulation is a good fit for spray; simulating a flowing stream with particles, however, should look much better and cost less.

Also, since the water carries a thickness value, a BTDF effect would work very well for simulating something like milk.

Tags: Unity

Posted by el_quijote on Sat, 02 Jul 2022 01:44:37 +0930