Enhancement Ideas 2 #1557

Open
mikedh opened this issue Apr 7, 2022 · 3 comments
Comments

mikedh (Owner) commented Apr 7, 2022

PRs are always super welcome! This is a relatively small open source project and it really benefits from the bugfixes, features, and other stuff the 100+ contributors have PR'd, so thanks!

The previous issue (#199) got pretty out of date and a lot of the stuff on it was done so here's a new thread. Feel free to suggest things!

  • Implement an actually correct and maybe even faster OBB algorithm (trimesh.bounds.oriented_bounds(mesh) documentation #1544)
  • Finally get the embree4 stuff over the line to be merged
    • The mikedh/embreex fork now has wheels for Mac/Linux/Win Python 3.6-3.11
    • A number of people have worked on this ray branch [WIP] Ray Refactor #1108
  • Add texture preservation to mesh.slice_plane Slicing a mesh and preserving the textured visuals #1920
  • Wishlist: find some way to make scene.save_image always work. We get a ton of issues about this and the answer is always pretty much "sorry your graphics driver hates you," or "try the docker image." I guess it's just a hard problem that depends a lot on platform and I'm not sure there's any way to solve it from trimesh
    • Apparently vulkan treats "screen vs offscreen" basically the same. Maybe we just need an additional viewer window that uses a vulkan backend, wgpu-native seems super interesting: https://github.com/pygfx/wgpu-py . BGFX also probably does this.
  • Polish the wart where OBJ sometimes returns a Scene. Maybe now that multi-material single meshes were PR'd in Merge primitive materials #1246 we could have OBJ always return a Trimesh.
  • Fix trimesh.remesh.subdivide / subdivide_to_size so that if you specify a subset of faces, it splits their neighbors in a way that maintains watertightness.
  • Building on the relative success (measured by the lack of an immediate cavalcade of issues) of the deprecation procedure used in Release: Refactor Hashes #1693, refactor the trimesh.path.Path API (see the deprecation-shim sketch after this list). Specifically:
    • path.polygons_closed -> path.linestrings
    • path.polygons_full -> path.polygons
    • Maybe additional logic so it isn't quite so "closed curves" specific and matches the Shapely data model a little better. Not sure about path.path and path.discrete.
  • Make pyright trimesh pass
    • trimesh predates both pyright/mypy and type hints. Fixing a few of these at a time would be a great place to start contributing, e.g. pyright trimesh/graph.py (a small typing sketch also follows this list).
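
For the Path rename item above, a minimal sketch of what the deprecation shim could look like, roughly following the pattern used in #1693: the new property holds the implementation and the old name warns and forwards. Note that _compute_polygons is a hypothetical stand-in for whatever currently backs polygons_full.

import warnings


class Path:
    # ... existing Path implementation ...

    @property
    def polygons(self):
        # new canonical name: closed regions as shapely polygons
        return self._compute_polygons()

    @property
    def polygons_full(self):
        # deprecated alias kept around for a release cycle, forwarding to the new name
        warnings.warn(
            "`Path.polygons_full` is deprecated, use `Path.polygons`",
            DeprecationWarning,
            stacklevel=2)
        return self.polygons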
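
And for the typing item, a small sketch of the kind of incremental annotation that would help pyright pass on a module; the function below is a made-up example using numpy.typing, not an actual trimesh.graph signature.

import numpy as np
from numpy.typing import NDArray


def shared_edge_count(faces: NDArray[np.int64]) -> int:
    # count undirected edges that appear in more than one face of an (n, 3) array
    edges = np.vstack((faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]))
    edges.sort(axis=1)
    _, counts = np.unique(edges, axis=0, return_counts=True)
    return int((counts > 1).sum())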

leonmkim commented Jul 30, 2023

Firstly, I want to say thank you @mikedh for the simply amazing library! It's incredibly impressive how much functionality trimesh contains.

Just wanted to suggest that it'd be nice to see some of the functionality, particularly under proximity and collision, leverage primitives when possible rather than treating everything as an arbitrary mesh. I've been using the primitive generation trimesh provides, but as far as I know, things like signed distance queries and collision queries don't take advantage of the metadata associated with the primitive geometries. Please correct me if I'm wrong! I've written my own set of SDF queries for the primitives I'm working with right now (boxes, cylinders, spheres) and found the speedup for my use case considerable (just adapted the very comprehensive set of examples from Inigo Quilez's wonderful site).

import numpy as np
import trimesh as tm


def invert_transform(matrix):
    # invert a 4x4 homogeneous transformation matrix
    return np.linalg.inv(matrix)


def sdf_box(locations, world_tf_box, box_half_dims):
    # locations is an (N, 3) array of query points
    # world_tf_box is a 4x4 transformation matrix
    # box_half_dims is a (3,) array of half-extents: [length, width, height] / 2
    # returns an (N,) array of sdf values
    # get the inverse transformation matrix
    box_tf_world = invert_transform(world_tf_box)
    # transform the locations to the box frame
    locations_box_frame = tm.transformations.transform_points(locations, box_tf_world)
    # get the sdf values (2 norm)
    diff = np.abs(locations_box_frame) - box_half_dims #broadcasting to Nx3
    # first term handles the case where the point is outside the box
    # second term handles the case where the point is inside the box
    sdf_values = np.linalg.norm(np.maximum(diff, 0.), axis=1) + np.minimum(np.max(diff, axis=1), 0.)
    return sdf_values

def sdf_cylinder(locations, world_tf_cylinder, cylinder_dims):
    # locations is an (N, 3) array of query points
    # world_tf_cylinder is a 4x4 transformation matrix
    # cylinder_dims is a (2,) array of [radius, total_height / 2]
    # note the second entry is half the total height
    # returns an (N,) array of sdf values
    # get the inverse transformation matrix
    cylinder_tf_world = invert_transform(world_tf_cylinder)
    # transform the locations to the cylinder frame
    locations_cylinder_frame = tm.transformations.transform_points(locations, cylinder_tf_world)
    # get the sdf values in euclidean distance
    # r,z difference
    r_diff = np.linalg.norm(locations_cylinder_frame[:, :2], axis=1) - cylinder_dims[0]
    z_diff = np.abs(locations_cylinder_frame[:, 2]) - cylinder_dims[1]
    # get the sdf values
    # first term handles the case where the point is inside the cylinder
    # second term handles the case where the point is outside the cylinder
    rz_diff = np.stack([r_diff, z_diff], axis=1)
    sdf_values = np.minimum(np.max(rz_diff, axis=1), 0.) + \
        np.linalg.norm(np.maximum(rz_diff, 0.), axis=1)
    return sdf_values

def sdf_sphere(locations, world_tf_sphere, sphere_radius):
    # locations is an (N, 3) array of query points
    # world_tf_sphere is a 4x4 transformation matrix
    # sphere_radius is the scalar sphere radius
    # returns an (N,) array of sdf values
    # get the inverse transformation matrix
    sphere_tf_world = invert_transform(world_tf_sphere)
    # transform the locations to the sphere frame
    locations_sphere_frame = tm.transformations.transform_points(locations, sphere_tf_world)
    # get the sdf values
    sdf_values = np.linalg.norm(locations_sphere_frame, axis=1) - sphere_radius
    return sdf_values
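
For reference, a quick usage sketch against one of trimesh's primitives (assuming the helpers above; a Box stores its pose in primitive.transform and its full extents in primitive.extents, so the half-extents are just extents / 2):

points = np.random.uniform(-2.0, 2.0, size=(1000, 3))
box = tm.primitives.Box(extents=[1.0, 2.0, 3.0])
sdf = sdf_box(points, box.primitive.transform, np.asarray(box.primitive.extents) / 2.0)
# negative values are inside the box, positive outside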

I imagine it should also be possible to leverage primitives for collision queries, as FCL already has objects for common primitives...
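
A rough sketch of what that could look like with the python-fcl bindings (shape classes and call signatures as I understand them; the values here are arbitrary):

import numpy as np
import fcl

# primitive collision geometries, no mesh discretization needed
box = fcl.Box(1.0, 2.0, 3.0)        # full extents
sphere = fcl.Sphere(0.5)            # radius
cylinder = fcl.Cylinder(0.25, 1.0)  # radius, height

# wrap geometry + pose into collision objects
box_obj = fcl.CollisionObject(box, fcl.Transform())
sphere_obj = fcl.CollisionObject(
    sphere, fcl.Transform(np.eye(3), np.array([0.0, 0.0, 2.0])))

# narrow-phase collision query
request = fcl.CollisionRequest()
result = fcl.CollisionResult()
n_contacts = fcl.collide(box_obj, sphere_obj, request, result)

# minimum distance query
dist_request = fcl.DistanceRequest()
dist_result = fcl.DistanceResult()
distance = fcl.distance(box_obj, sphere_obj, dist_request, dist_result)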

But perhaps it's simple enough for users like me just to add this functionality on an as-needed basis for themselves rather than offer full support.

villares commented Feb 14, 2024

Dear @mikedh, trimesh is awesome, thank you!

As an educator, my feature request would be for the community of trimesh users to have somewhere to discuss and publish other forms of documentation. We have a nice reference and a few examples, but according to Laing's chart, which I copy below, we could also have tutorials, how-to guides, and explanations.

[image: chart of documentation types - tutorials, how-to guides, reference, and explanations]

Maybe turning on the discussions feature of the repo could be a way in that direction... We could build a gallery of user examples, tutorials, etc.

RealDanTheMan commented

Hi @mikedh

Very impressed with this library: it's fast and supports a wide variety of exchange formats. I have been having a blast using it so far - thank you for your work here and for keeping it under what looks like active development.

I wonder how you feel about the current state of mesh visual data representation. So far I am under the impression that Trimesh objects can only represent either vertex color data via ColorVisuals or texture coordinates via TextureVisuals, but never both. It also seems that texture coordinates will only be available if a valid material is present, at least via the GLTF exchange implementation.
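
For illustration, a sketch of the either/or I mean, using the trimesh.visual classes as I understand them (the exact constructor arguments may be slightly off):

import numpy as np
import trimesh

mesh = trimesh.creation.box()

# option A: per-vertex colors only
mesh.visual = trimesh.visual.ColorVisuals(
    mesh=mesh,
    vertex_colors=np.tile([255, 0, 0, 255], (len(mesh.vertices), 1)))

# option B: texture coordinates (tied to a material) only
uv = np.zeros((len(mesh.vertices), 2))
mesh.visual = trimesh.visual.TextureVisuals(
    uv=uv, material=trimesh.visual.material.SimpleMaterial())

# mesh.visual is one object or the other; there is no slot for both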

This presents a serious challenge: it's common in real-time graphics to have both - this allows colors sampled from textures to be tinted, overlaid, or modified by vertex color values further down the shader pipeline. It is also common to encode a variety of useful mesh data in both vertex color and texture coordinate buffers without the need for materials or textures.

What are your thoughts on this? Are you open to changes in this area, or have I perhaps got the wrong impression here?
