Projected/Perspective UV Mapper tool mode

I would love a mode in the UV Mapper tool for projected UVs so that you can take a real-world photograph and easily map it onto a 3D model of that object. The mode would be similar to the tool's current "Flat" mode, but instead of projecting the UVs orthographically from a plane, they would be projected from a point in perspective, with an adjustable field of view, the way a camera sees the world. This would make it much easier to map real-world photos onto objects.
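Roughly speaking, the math such a mode would need is just a pinhole projection. Here is a minimal sketch of the idea (in Python, purely for illustration; the function and the simple camera model are my own and not part of Cheetah's scripting API):

```python
import math

def perspective_uv(vertex, cam_pos, cam_right, cam_up, cam_fwd, fov_deg, aspect):
    """Project a world-space vertex to a UV coordinate as seen by a pinhole
    camera. Hypothetical helper for illustration, not Cheetah3D's API.

    vertex, cam_pos, cam_right, cam_up, cam_fwd: (x, y, z) tuples; the camera
    basis vectors are assumed to be orthonormal, cam_fwd points at the scene.
    fov_deg: vertical field of view in degrees. aspect: image width / height.
    Returns (u, v) in [0, 1] for points inside the camera frustum.
    """
    # Vector from camera to vertex, expressed in camera space.
    d = tuple(v - c for v, c in zip(vertex, cam_pos))
    x = sum(a * b for a, b in zip(d, cam_right))
    y = sum(a * b for a, b in zip(d, cam_up))
    z = sum(a * b for a, b in zip(d, cam_fwd))   # depth along the view direction
    if z <= 1e-9:
        return None  # behind the camera; no valid projection

    # Perspective divide: this is the only real difference from a "Flat"
    # (orthographic) projection, which would simply ignore the depth.
    t = math.tan(math.radians(fov_deg) / 2.0)
    u = 0.5 + (x / (z * t * aspect)) * 0.5
    v = 0.5 + (y / (z * t)) * 0.5
    return u, v

# Example: camera 5 units back on +Z, looking toward the origin, 60° FOV.
uv = perspective_uv((1.0, 0.5, 0.0),
                    cam_pos=(0.0, 0.0, 5.0),
                    cam_right=(1.0, 0.0, 0.0),
                    cam_up=(0.0, 1.0, 0.0),
                    cam_fwd=(0.0, 0.0, -1.0),
                    fov_deg=60.0, aspect=16 / 9)
```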
 
I wanted to bump this request. Here is a tutorial on how to do this with Blender, and it is way too complicated.

I would love a simple tool similar to the Flat UV mapper tool from Cheetah for projecting a perspective photo onto a mesh's UVs.
 
Seems I don't get the purpose of that, because wouldn't it be much easier to align an image to a photo via a perspective filter in an image editor? Why all the fuss with UV mapping at all? In the end you'll need correct proportions for the alignment; otherwise it won't work.
 
For the simple case of the box, what you say is true. In my case I am trying to project a photograph onto a very complicated Lidar-mapped surface. I have a great mesh created with an iPhone Lidar scanner called Scaniverse, but Scaniverse's texturing is a real mess. It is extremely broken up. I also have a nice high-resolution photo of the surface. I am trying to map my nice clean photo onto this complex surface. I am going the Blender fSpy route described in this tutorial at the moment, and it is working, but it is more complicated than it needs to be, and I much prefer working in Cheetah.
 
I don't have the resources you have, so I can't check your case inside Cheetah3D. Maybe you can make a low-res version available (ca. 500 KB max for this forum), so we can mess around and try to find a way - or not. ;-) And from my knowledge this will only work for a still.
 
Not easy to make a low-resolution version to mess with, but if you are thinking of taking up the challenge of writing a script or plugin that will do this, mapping a perspective photo of a box or square building onto a cube, as shown in this tutorial, would be a good test case. In my case, this is what my model looks like, along with its texture map as scanned.
 

Attachments

  • Screen Shot 2021-07-09 at 12.23.21 PM.jpg (104.1 KB)
  • Screen Shot 2021-07-09 at 12.24.01 PM.jpg (138.4 KB)
This is the photograph of the space, and my photograph (almost, but not quite, lined up yet) mapped onto the surface using the Blender technique described in the video:
 

Attachments

  • Screen Shot 2021-07-09 at 12.24.11 PM.jpg (255.8 KB)
  • Screen Shot 2021-07-09 at 12.38.30 PM.jpg (154.9 KB)
For sure this texture isn't usable inside C3D, and I don't see a 3D mesh. This looks more like a photogrammetry application, and I still don't understand the purpose and don't have 20 min for that video - sorry. Good luck with your project. (I'm used to using SketchUp to line up photos to build meshes, and it has tools for that case.)
 
Actually, camera mapping as shown in the video doesn't have much to do with photogrammetry. (And it is kind of thick to link to such an overly long video, especially as it doesn't explain the mapping part very well and shows more of the workflow, even including a part that shows how it isn't done anymore. And no, you don't necessarily need "other pictures" for the sides not seen from this angle; it would suffice to move the UV islands so they overlap. Just an aside.) With photogrammetry you actually should get a mapped mesh, and creating a lower-res mesh with a fitting texture from that is something completely different, but at least there should be an undistorted texture to fit your new UVs onto.

"Campera mapping", also called "Projection mapping" or "View projection", is for something completely different. It's like "Front projection" in some apps, but actually creates an UV Map from the point of view of the chosen camera. Of course, that's heavily distorted, but if you model from a photo and align your mesh to that camera angle, you can use the photo as a texture. This you then bake down to an undistorted "usual" uv map and get from that in theory a usable texture because the app computes the right "distortion" to the texture so it actually looks undistorted. In theory. In the example above it works quite well, but missing information isn't magically generated.

That said, it's actually faster than creating an undistorted version of the image with an image filter, as most of this happens more or less automatically: the "camera projection" happens automatically, you align the photo to the mesh anyway when you model from a photo, and you'd need a UV map anyway.

But all in all, it has a rather narrow use case that not many users of Cheetah could profit from. There are literally dozens or even hundreds of things that would be more important and make more sense to include in the next versions of Cheetah.

For the photogrammetry problem above, the solution would be to use an app that creates usable UV mapping for a scanned object.
 
Yeah, it's a long video. My apologies. The most relevant portion for my purposes was roughly minutes 4 to 8. I did manage to do this in Blender after studying that video. As I mentioned in my original posts, Cheetah already has a very nice, easy-to-use tool that does a similar thing: the "Flat" mode of its UV mapper tool easily bakes an image into the UV map. I have used that a lot to map images onto meshes. I hoped it might not be too difficult to add a perspective camera mapping mode as an extension of that tool. It's a very nice and easy way to UV map a mesh that you only need to see from one perspective.
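For what it's worth, the two modes seem mathematically very close. In a sketch (again just illustrative Python, assuming the vertex is already expressed in the projector's local space, with x right, y up and z the depth toward the vertex), the only difference is the divide by depth and the field-of-view scale:

```python
import math

def flat_uv(x, y, z, width, height):
    # "Flat" mode: parallel projection from a plane; depth is ignored.
    return 0.5 + x / width, 0.5 + y / height

def perspective_mode_uv(x, y, z, fov_deg, aspect):
    # Proposed perspective mode: the same idea, but the coordinates are
    # divided by depth and scaled by the field of view, like a camera lens.
    t = math.tan(math.radians(fov_deg) / 2.0)
    return 0.5 + x / (z * t * aspect) * 0.5, 0.5 + y / (z * t) * 0.5
```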
 
Back in the day, Canoma, from MetaCreations, was state of the art for making 3D models from photos. You dropped in a primitive shape, aligned it to the photo, and bam! You had a simple, textured model. Adobe bought it, and that was that.

Canoma Resources
 
It's certainly nice to have, but — in my opinion — there aren't that many people who actually would use it. That's why you don't find that many tutorials about it.

If you model from a photograph (or a blueprint) (highly recommended, by the way), you usually go out of your way to find the right reference photos, or create them yourself, so you really have side, front, top and back views (or at least two or three of them), or you draw something, whatever; most people seldom use just a single ordinary photograph to create a model from. And that photo has to be from the right angle, like in the video above, where the whole thing wouldn't work if the angle were just a bit lower, as the top texture wouldn't be usable anymore. Etc. I have very seldom seen examples of its use and have tried it myself just once, only to ditch the whole workflow and do it differently (with better quality in the end; all this obviously in another app, but that's why I knew what you were talking about).

Cheetah can't compete with the functionality and the number of tools in Blender (even other 3D apps can't anymore), as it's just a one-man show. Martin, the developer, has to concentrate on the really important stuff and on easy-to-include things that will then be used by a majority of Cheetah users. Materials especially are way behind (no SSS, no volumetrics, no bump and normal at the same time, a better displacement included in the material rather than handled somewhere else), even if it at least got a PBR workflow; quite a lot of work to do, if you think about it (Martin would be much further along if Apple hadn't created one hindrance after another, one of them ditching the whole OpenGL). And UV mapping in itself is quite good at the moment, but at the same time very basic. Straighten a UV island with just one click? No; so much has to be done by hand, which is rather cumbersome, especially with a high-poly mesh. And then there is the modeler to modernize, etc. So, no, I don't think such a camera projection will find its way into Cheetah any time soon.

All in all, none of today's DCC apps can fulfill everybody's needs, not even Blender, which at the moment is the most versatile of them all. That's why plugin developers and specialized apps still thrive (ZBrush, Substance, RizomUV, etc.). So it's not that bad for Cheetah users to handle a special use case like this in another app.
 