
Auto Normals look broken/worse than automatic normal generation in Unity #16

hybridherbst opened this issue Jul 7, 2019 · 17 comments


@hybridherbst

hybridherbst commented Jul 7, 2019

[screenshot: sphere with Deform's Auto Normals next to the same sphere with importer-generated normals]

Regular "automatic normal generation" in Unity (mesh.RecalculateNormals) does look better. A sphere didn't get hard corners with automatic normal generation.

@keenanwoodall
Owner

Are you sure the mesh on the right's normals are generated from mesh.RecalculateNormals()? In my tests my normal recalculation results in the same normals as Unity's runtime method. The only difference is that my version is multithreaded.

I have a hunch that you are assuming mesh.RecalculateNormals() generates the same normals as the ones generated from the mesh asset import settings, but let me know if I'm wrong.

Normals generated from the import settings are calculated differently from normals calculated via the RecalculateNormals method. The import-time normals can be much higher quality because they only have to be calculated once, not every time the mesh changes at runtime. My version and mesh.RecalculateNormals() result in seams where adjacent triangles don't share vertices. I could have the generated normals be smooth, but it would be very, very slow to calculate. Check out this gist if you're curious how someone else did it.
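(For anyone curious, the core idea behind a seam-free recalculation is to weld vertices that share a position before averaging the face normals. Below is a minimal single-threaded sketch of that idea; it is not the package's code or the linked gist, and it ignores smoothing angles entirely.)

using System.Collections.Generic;
using UnityEngine;

public static class SmoothNormalsSketch
{
	// Averages area-weighted face normals across all vertices that share an exact
	// position, so seams between split vertices disappear (everything becomes smooth).
	public static void RecalculateSmooth (Mesh mesh)
	{
		var vertices = mesh.vertices;
		var triangles = mesh.triangles;
		var normals = new Vector3[vertices.Length];

		// Group vertex indices by position so split vertices are welded together.
		var groups = new Dictionary<Vector3, List<int>> ();
		for (int i = 0; i < vertices.Length; i++)
		{
			if (!groups.TryGetValue (vertices[i], out var list))
				groups[vertices[i]] = list = new List<int> ();
			list.Add (i);
		}

		// Accumulate each triangle's (area-weighted) face normal onto its three vertices.
		for (int t = 0; t < triangles.Length; t += 3)
		{
			int a = triangles[t], b = triangles[t + 1], c = triangles[t + 2];
			var faceNormal = Vector3.Cross (vertices[b] - vertices[a], vertices[c] - vertices[a]);
			normals[a] += faceNormal;
			normals[b] += faceNormal;
			normals[c] += faceNormal;
		}

		// Sum the accumulated normals across each welded group and normalize.
		foreach (var group in groups.Values)
		{
			var sum = Vector3.zero;
			foreach (var i in group) sum += normals[i];
			sum.Normalize ();
			foreach (var i in group) normals[i] = sum;
		}

		mesh.normals = normals;
	}
}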

Unfortunately, the best way to remove those seams is to open your DCC tool of choice and merge the vertices together.

@hybridherbst
Author

hybridherbst commented Jul 8, 2019

You're right that the image on the right side is from the asset importer.

That being said, and understanding that "proper" normal recalculation has a performance cost, you actually have a much better option here:
since most (or all) of your Deformers are mathematically known / have known derivatives, you should be able to deform the Normals along with the Vertices instead of having to recalculate them at the end.

Right now, deforming an originally smooth mesh leaves only two options: a) living with the seams, or b) keeping the original normals and thus having wrong lighting.
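To illustrate with a made-up deformer (a simple linear taper, not one of Deform's actual deformers): when the deformation and its Jacobian are known analytically, the normal can be transformed by the inverse transpose of that Jacobian in the same pass that moves the vertex. A rough sketch:

using Unity.Mathematics;
using static Unity.Mathematics.math;

public static class AnalyticNormalSketch
{
	// Hypothetical linear taper: scale x/z by (1 + a * y).
	public static float3 Deform (float3 p, float a)
		=> float3 ((1f + a * p.y) * p.x, p.y, (1f + a * p.y) * p.z);

	// Analytic Jacobian of the taper above (rows are the partial derivatives of fx, fy, fz).
	public static float3x3 Jacobian (float3 p, float a)
		=> float3x3 (
			1f + a * p.y, a * p.x, 0f,
			0f,           1f,      0f,
			0f,           a * p.z, 1f + a * p.y);

	// Normals transform by the inverse transpose of the Jacobian, not by the Jacobian itself.
	public static float3 DeformNormal (float3 p, float3 n, float a)
		=> normalize (mul (transpose (inverse (Jacobian (p, a))), n));
}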

@keenanwoodall
Owner

Calculating the derivative would be the ideal solution, but there are a couple of reasons why I haven't:

  1. It will be much more expensive. From what I understand, each deformer will take roughly 3x as long to calculate if I want to find the derivative. It's much faster to do the generic normal recalculation once per mesh rather than finding the derivative per deformer per mesh.
  2. Calculating the derivative for some deformers will be very hard if not impossible. If I can't have every deformer calculate their derivative I'm not sure I want to do it at all.

That being said, I may create a new branch and start messing around with calculating the normals from the derivative. If I can support a significant number of deformers and the performance overhead isn't ridiculous I may merge the changes into master.

For now all you have to do is join any split vertices where you want a smooth transition between triangles and split joined vertices where you want a hard edge. It's not ideal but from what I've seen, the same limitation exists in other deformation packages like Megafiers.

@keenanwoodall
Owner

keenanwoodall commented Jul 8, 2019

In case anyone wants to take a stab at derivative-based normals here's a tutorial on how to modify the normals on any previously created deformers:

  1. Open your deformer of choice in your IDE of choice. I'll be modifying the Bend deformer.
  2. Find the overridden DataFlags property. It's likely above the Process method.
    It looks like this on the Bend deformer:
public override DataFlags DataFlags => DataFlags.Vertices;
  3. If it isn't already there, add DataFlags.Normals to the enum. This tells Deformables that you will be changing the normals. Deformables keep track of what buffers have been changed so that they only have to copy the modified ones back to the mesh.
public override DataFlags DataFlags => DataFlags.Vertices | DataFlags.Normals;
  4. A lot of deformers schedule different jobs depending on their properties. The most common cause for having different jobs is having a BoundsMode. If their mode is Limited they'll schedule a limited version of the job, and if it's Unlimited they'll schedule an unlimited version. The Bend deformer is no exception. If you look at the Process method you can see it is switching on the mode and scheduling different jobs based on that.
switch (mode)
{
	default:
	case BoundsMode.Unlimited: // Unlimited job scheduled here!
		return new UnlimitedBendJob
		{
			angle = totalAngle,
			top = Top,
			bottom = Bottom,
			meshToAxis = meshToAxis,
			axisToMesh = meshToAxis.inverse,
			vertices = data.DynamicNative.VertexBuffer
		}.Schedule (data.Length, DEFAULT_BATCH_COUNT, dependency);
	case BoundsMode.Limited: // Limited job scheduled here!
		return new LimitedBendJob
		{
			angle = totalAngle,
			top = Top,
			bottom = Bottom,
			meshToAxis = meshToAxis,
			axisToMesh = meshToAxis.inverse,
			vertices = data.DynamicNative.VertexBuffer
		}.Schedule (data.Length, DEFAULT_BATCH_COUNT, dependency);
}
  5. Find all the jobs and add a native array of float3s called "normals" to all of them.
    The UnlimitedBendJob becomes:
[BurstCompile (CompileSynchronously = COMPILE_SYNCHRONOUSLY)]
public struct UnlimitedBendJob : IJobParallelFor
{
	public float angle;
	public float top;
	public float bottom;
	public float4x4 meshToAxis;
	public float4x4 axisToMesh;
	public NativeArray<float3> vertices;
	public NativeArray<float3> normals; // add this line

and the LimitedBendJob becomes:

[BurstCompile (CompileSynchronously = COMPILE_SYNCHRONOUSLY)]
public struct LimitedBendJob : IJobParallelFor
{
	public float angle;
	public float top;
	public float bottom;
	public float4x4 meshToAxis;
	public float4x4 axisToMesh;
	public NativeArray<float3> vertices;
	public NativeArray<float3> normals; // add this line
  6. Now go back to the Process method and set the job's normals array to the one stored in the MeshData class. The Bend deformer's job scheduling now looks like this:
switch (mode)
{
	default:
	case BoundsMode.Unlimited:
		return new UnlimitedBendJob
		{
			angle = totalAngle,
			top = Top,
			bottom = Bottom,
			meshToAxis = meshToAxis,
			axisToMesh = meshToAxis.inverse,
			vertices = data.DynamicNative.VertexBuffer,
			normals = data.DynamicNative.NormalBuffer // added this line
		}.Schedule (data.Length, DEFAULT_BATCH_COUNT, dependency);
	case BoundsMode.Limited:
		return new LimitedBendJob
		{
			angle = totalAngle,
			top = Top,
			bottom = Bottom,
			meshToAxis = meshToAxis,
			axisToMesh = meshToAxis.inverse,
			vertices = data.DynamicNative.VertexBuffer,
			normals = data.DynamicNative.NormalBuffer // added this line
		}.Schedule (data.Length, DEFAULT_BATCH_COUNT, dependency);
}
  7. Now you can modify the normals however you want from the job's Execute method. For instance, you can add normals[index] = up() to make all the normals face up (see the sketch after the note below).

REMEMBER: Deformables with their "Normals" property set to Auto will override any changes made to the normals. Set the normals setting to None so that your changes to the normals are actually sent to the mesh.
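For completeness, a rough sketch of what step 7 could look like inside one of the job structs; the job's existing vertex math stays wherever it already is, and the bare up () assumes the file has a static import of Unity.Mathematics.math (which the up() call in step 7 implies):

public void Execute (int index)
{
	// ...the job's existing vertex deformation math stays as-is...

	// Naive example from step 7: point every normal straight up.
	// A real implementation would derive the normal from the deformation instead.
	normals[index] = up ();
}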

@hybridherbst
Author

hybridherbst commented Jul 8, 2019

I have another idea for the normals: instead of running calculations on vertex positions (vec3), run them on a full mat4x4, so the result keeps rotational information. The result after going through all deformers would be a final transformation matrix that can be applied to both the vertex and the normal.

A reduced but similar approach would be, instead of transforming normals, to transform the up rotation (a quaternion) and then apply the final rotation to the original normal.

These might not be mathematically perfect but should still give much more stable results than Auto and None.

Also note that mathematically, the derivative formula shouldn't be something you have to find at runtime (you could, similar to DDX/DDY); ideally it would be worked out beforehand (for each deformer formula, find the derivative formula where possible) and then simply applied, making it the same order of complexity as the vertex calculation.
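A rough sketch of the reduced quaternion variant described above; how each deformer produces the post-deform up direction per vertex is left open here (it could come from the analytic derivative or a finite difference), and the names are made up:

using UnityEngine;

public static class UpRotationSketch
{
	// Reduced variant: instead of transforming the normal directly, track how the
	// deformation rotates a reference "up" direction at the vertex, then apply that
	// rotation to the original (import-time) normal at the end.
	public static Vector3 DeformNormal (Vector3 originalNormal, Vector3 upBefore, Vector3 upAfter)
	{
		var carried = Quaternion.FromToRotation (upBefore, upAfter);
		return (carried * originalNormal).normalized;
	}
}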

@keenanwoodall
Owner

keenanwoodall commented Jul 8, 2019

These are good ideas. However, changing all the calculations to modify matrices instead of vectors, or to involve rotation, would be a very large undertaking. All of the more complicated deformers were created by fiddling with math for days until it worked. I am by no means a math wizard and would have to reengineer all of the deformers from scratch, which could take months.

As for the derivative formula, maybe it could be found for the simplest deformers, but I personally couldn't figure it out for any of the complicated ones. Again, I'm not a math wizard and doing stuff with derivatives is stepping a bit beyond my experience, so I'm mostly talking out of my ass. I really don't have any knowledge of derivatives beyond the basic concept. Maybe it isn't hard to find, but some of the deformers are massive and I can't imagine the formula for the derivative would be any simpler.

@HitCache

HitCache commented Aug 8, 2020

First of all @keenanwoodall thank you for the instructions, they were really helpful.

I went through every deformable and tried to "port" it by applying the exact same transformations to the input normal as to the vertices, hoping to keep the original mesh normals intact. (This would be the ideal scenario IMO, since we wouldn't have to choose between no smoothing, full smoothing, or an angle-based middle ground, and could maintain the original mix of smooth/hard normals.)

It seemed like it kind of worked at first on simple deformers, but after more testing there were really weird, unnatural things happening with the lighting on the objects, especially when using multiple deformers. That makes me think I may not have thought this through, and the normals might need recalculation instead of just having the same operations applied?

If that's the case, a workaround might be to check whether each normal is smoothed or not, flag that, then recalculate a smoothed normal based on the flag as needed?

Slightly out of my depth here for sure, but hopefully someone can help. One of the attractive things about this package was the beautifully deformed, smooth-looking cubes, and it'd be wonderful to have that in the final result on lower-poly stuff, even if it comes at a processing cost (which seemed really acceptable in a lot of cases in my initial tests, by the way).

I wouldn't mind doing the work of porting all the deformables if that's what is needed (maybe it can be done in a central final step like discussed above), but it would be helpful if someone could provide more guidance or a template for what that should look like so I can apply it to the rest. I tried doing it as a final step, but I'm really a novice with Unity's job system so I had no luck there.

@HitCache

HitCache commented Aug 8, 2020

Also, not sure if it's fully relevant, but this thread was interesting, specifically this:

"No. Smoothing is never recalculated. Smoothing algorithm simply determines which faces share the same vertex (and normal) and that information is used to compute vertex normal. Once the normal is calculated - by the mesh exporter - it is stored within the vertex and is NEVER recalculated, no matter what you do, even if you're running code on an ancient GPU (Dx9 and Pre-DX9) without Hardware T&L support (meaning vertex transformation is done on CPU).

Likewise even in case of Skinned Meshes, and even in case said mesh is calculated on CPU (Unity actualy uses CPU skinning by default), normal is never recalculated using smoothing information. It can be transformed/normalized, but it is never recalculated."

from: https://forum.unity.com/threads/smooth-vs-flat-performance-which-answer-is-right.487865/#post-3181802

Transforming like that is basically the goal I failed to achieve. 😊

@HitCache

HitCache commented Aug 21, 2020

Minor update:

For now all you have to do is join any split vertices where you want a smooth transition between triangles and split joined vertices where you want a hard edge. It's not ideal but from what I've seen, the same limitation exists in other deformation packages like Megafiers.

I looked at the Megafiers package, and it seems that (in the latest version at least) he handles normals by modifying matrices.

I'm 100% a math idiot, but I've been looking at matrix transformations more and more lately to try to fix this issue. It's not as daunting as it looked at first; hopefully I can start converting all of these to use matrices soon and finally have good-looking normals. 😬 😬 😓

This will impact maintainability in the future to some degree though. In theory the two approaches can co-exist, so people who want to add their own deformers don't have to use a matrix.

Maybe if I get a little smarter soon I can write an "upgrade guide", show an example of what that process looked like, and add some links to resources.

@keenanwoodall
Owner

I'm trying to wrap my head around how matrices solve the problem. Here's an example scenario:

The Sine deformer moves vertices up and down along a sine wave. As is, this breaks the normals, which is why we have the normal recalculation pass at the end. If I were to reimplement the deformer to use matrices instead of points, I would still only translate the matrices up and down. The orientation would not automagically align with the curvature of the wave.

[illustration: matrices translated along the sine wave keep their original orientation]

Either way you'd still need to know the derivative/slope of your deformation function. Maybe I'm misunderstanding how it'd work, though.
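For the sine case specifically, the slope is exactly the derivative A·k·cos(k·x) of the displacement, and the normal fix is the usual inverse transpose of the (analytic) Jacobian. A sketch with made-up parameter names, not the actual Sine deformer code:

using Unity.Mathematics;
using static Unity.Mathematics.math;

public static class SineNormalSketch
{
	// Hypothetical 1D sine displacement along y: f(p) = p + (0, A * sin(k * p.x), 0).
	// The slope A * k * cos(k * p.x) is exactly the derivative the deformer would need.
	public static float3 DeformNormal (float3 p, float3 n, float amplitude, float k)
	{
		float slope = amplitude * k * cos (k * p.x);

		// Jacobian of the displacement: identity plus the slope term (dfy/dx).
		var jacobian = float3x3 (
			1f,    0f, 0f,
			slope, 1f, 0f,
			0f,    0f, 1f);

		// Normals transform by the inverse transpose of the Jacobian;
		// for a flat ground plane this tilts (0, 1, 0) to normalize(-slope, 1, 0).
		return normalize (mul (transpose (inverse (jacobian)), n));
	}
}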

@HitCache

I'm still learning and don't have all the answers, but this is one of the sources I was reading, and maybe it will help:

You may ask then why not simply considering normals as vectors. Why do we take the pain of differentiating them? In the previous chapters, we have learned to use matrix multiplication to transform points and vectors. The problem with normals, is that we tend to assume that transforming them in the same way we transform points and vectors will work. In fact, this is sometimes the case, for example when the matrix scales the normal uniformly (that is when the values of the matrix along the diagonal, which we have learned encode the scale values applied to the transformed point or vector are all the same). But lets now consider the case where a non-uniform scale is applied to an object.

In fact, the solution to transforming normals, is not to multiply them by the same matrix used for transforming points and vectors, but to multiply them by the transpose of the inverse of that matrix:

https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/geometry/transforming-normals
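In Unity API terms, the quoted rule looks roughly like this (a sketch, not Deform code):

using UnityEngine;

public static class NormalMatrixSketch
{
	// Under a non-uniform scale, multiplying the normal by the same matrix as the
	// point skews it; the transpose of the inverse keeps it perpendicular to the surface.
	public static Vector3 TransformNormal (Matrix4x4 pointMatrix, Vector3 normal)
	{
		Matrix4x4 normalMatrix = pointMatrix.inverse.transpose;
		return normalMatrix.MultiplyVector (normal).normalized;
	}
}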

@keenanwoodall
Owner

keenanwoodall commented Jan 13, 2022

I took another stab at smooth normals. Have literally only tested on a cube and sphere, but I've got smoothing angle support working here https://github.com/keenanwoodall/Deform.git#feature/smooth-normals
[gif: Unity_CzufGKlt6B]
edit: probably worth mentioning that Deform still can't add/remove geometry, so it won't split a vert to make a hard edge, i.e. setting the smoothing angle to 0.0 won't make every triangle faceted.
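(For anyone wondering what the smoothing angle means in practice, the usual test is roughly the sketch below; this is the general technique, not necessarily how the branch implements it.)

using Unity.Mathematics;
using static Unity.Mathematics.math;

public static class SmoothingAngleSketch
{
	// Two faces sharing a vertex are smoothed together only if the angle between
	// their face normals is below the smoothing angle threshold.
	public static bool ShouldSmooth (float3 faceNormalA, float3 faceNormalB, float smoothingAngleDegrees)
	{
		float cosThreshold = cos (radians (smoothingAngleDegrees));
		return dot (normalize (faceNormalA), normalize (faceNormalB)) >= cosThreshold;
	}
}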

@bitinn

bitinn commented Mar 3, 2022

I took another stab at smooth normals. Have literally only tested on a cube and sphere

Bad news: I tested the smooth normals branch on a subdivided watertight cube from Blender and some standard models from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/ (say, the Spot cow). All of them cause NormalUtils.cs line #136 to throw an "index XXX is out of range of '0' Length" error.

I will look into why...

@bitinn

bitinn commented Mar 3, 2022

Good news: the error doesn't affect the Deform package directly, and smooth normals still work.

It was thrown by incorrect AsDeferredJobArray code in a very recent release of Unity and the Collections package... a fix is on the way.

(The error is thrown every time you view the Deformable inspector and Deform updates.)

@novavision

@keenanwoodall I tried the new smooth normals branch, but got the same error spam in the console that @bitinn mentioned, even with a standard Sphere model:

IndexOutOfRangeException: Index 1728 is out of range of '0' Length.
Unity.Collections.NativeArray`1[T].FailOutOfRangeError (System.Int32 index) (at <8b27195d2ee14da7b6fd1e5435850f80>:0)
Unity.Collections.NativeArray`1[T].CheckElementReadAccess (System.Int32 index) (at <8b27195d2ee14da7b6fd1e5435850f80>:0)
Unity.Collections.NativeArray`1[T].get_Item (System.Int32 index) (at <8b27195d2ee14da7b6fd1e5435850f80>:0)
Deform.MeshUtils+CalculateNormalsJob.Execute (System.Int32 index) (at Library/PackageCache/com.beans.deform@3802324a6b/Code/Runtime/Mesh/Utility/NormalUtils.cs:136)
Unity.Jobs.IJobParallelForDeferExtensions+JobParallelForDeferProducer`1[T].Execute (T& jobData, System.IntPtr additionalPtr, System.IntPtr bufferRangePatchData, Unity.Jobs.LowLevel.Unsafe.JobRanges& ranges, System.Int32 jobIndex) (at Library/PackageCache/com.unity.jobs@0.11.0-preview.6/Unity.Jobs/IJobParallelForDefer.cs:57)

@bitinn

bitinn commented Mar 29, 2022

@novavision read this thread; either use a newer version or patch the Collections package yourself:

https://forum.unity.com/threads/2021-2-8f1-broke-asdeferredjobarray-and-deferred-jobs.1225635/#post-7815393

@novavision

@bitinn thanks, I updated Collections to 1.2.3 and the errors are gone.
