I scan rocks. Lots of rocks. Rocks are good. Rocks don’t move, aren’t squishy, aren’t affected by wind, and stay rooted in place if you have to leave the scene and return. Rocks just make great scan subjects. Humans are different for obvious reasons: a stifled smile, a muscle twitch, a weight shift or a rumple in clothing caused by a draft is enough to throw off a 3D scan when using photogrammetry. It’d be great if the same cheap 3D scanning technique that works so well for large static objects worked for humans, but unfortunately it doesn’t. The best human scans come from expensive multi-camera rigs like the ones folks such as James Busby use to get scans like these. Maybe a detailed mannequin or waxwork would do the job, but those are hard to access. People are everywhere, but it really takes a patient subject to be a good model for single-camera photogrammetry. They’re gonna have to sit or stand still while you orbit around them clicking away with the camera, fix their gaze at a point in space, control their breathing, and be prepared to start the shoot from scratch if they have to move for whatever reason. Then they have to be OK with you using their likeness. It’s all a big ask if you’re not paying someone, and that’s where family and friends come in!
For the past few weekends my kids and I have been taking a swing at handheld human scanning. First we tried my eldest son as the model with me taking the photographs, but after a few tries he decided he just couldn’t hold in the laughing and quit on me! So we swapped roles: I set the camera up on a tripod, gave him a remote shutter release, showed him how to frame and focus, and took a seat in the garden. I closed my eyes to make it easier to hold still; scans like this often have trouble resolving reflections in the eyes and can leave that area very messy. My son took about 22 photographs in a 180-degree orbit, in a single row from right to left and then back again. When I took them back to the PC for raw processing I saw that fewer than 14 images were in proper focus, and by then the daylight had faded and it was too late to reshoot. We had tried hard, and rather than see the session wasted I quickly masked the images and threw them into Photoscan anyway. Now, “garbage in, garbage out”, as the saying goes in VFX, and here the results were predictably less than stellar, but actually not a bad starting point: definitely salvageable with a bit of patch-up work and some sculpting in Mudbox/Zbrush. It’s all practice for me right now anyway. This is what the raw mesh looked like out of the box, rendered here in V-Ray using the Fast SSS shader:
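If I’d had a quick way to score the frames on the spot, the soft ones would have been caught before the light faded. A common cheap trick for this is variance-of-Laplacian sharpness scoring: sharp images have lots of high-frequency detail, so the Laplacian response has high variance. This isn’t part of my actual workflow, just a pure-Python sketch of the idea (real tools like OpenCV do the same thing much faster on actual image files):

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian of a 2D grayscale image
    (list of rows of pixel values). Higher means more high-frequency
    detail, i.e. better focus."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def flag_soft_frames(images, threshold=50.0):
    """Return indices of frames whose sharpness score falls below
    threshold (the threshold is scene-dependent; 50.0 is a placeholder)."""
    return [i for i, img in enumerate(images)
            if laplacian_variance(img) < threshold]
```

Run over a shoot, `flag_soft_frames` would tell you which shots to retake while the model is still sitting there.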
There was more daylight on one side of the head, so detail on the right was resolved more accurately. On the side with less light you’ll notice the ear wasn’t captured, and there is more noise present around the beard and nose; this is also due to involuntary movement. The noise is more visible from the surface normals (click the image for a higher-res version):
So, although some micro detail was captured, like the forehead wrinkles and some skin blemishes, the beard and hair are a mess of wobbly noise. This should be fixable in Zbrush (or with, you know, a comb in real life), and I’ll also mirror the good ear to the other side and try to patch that up. I managed to get onto the beta of Marmoset Toolbag 3 a while ago; here’s a very quick low-poly normal-mapped version using the skin shader and some secret sauce:
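Mirroring an ear across is conceptually simple, but there’s one gotcha worth spelling out: flipping geometry across a plane reverses its handedness, so every face’s winding order has to be reversed too or the normals end up pointing inwards. A minimal sketch of the idea (this is illustrative, not what Zbrush does internally):

```python
def mirror_mesh(vertices, faces, axis=0):
    """Mirror a triangle mesh across the plane where the given axis = 0.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index
    triples. Negating one coordinate flips the mesh, and reversing each
    face's vertex order restores outward-facing normals."""
    mirrored_verts = [tuple(-c if i == axis else c for i, c in enumerate(v))
                      for v in vertices]
    mirrored_faces = [(k, j, i) for (i, j, k) in faces]
    return mirrored_verts, mirrored_faces
```

In practice you’d mirror just the patch of mesh around the good ear, then blend the seam by hand in the sculpting package.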
And lastly for now, a cover for my forthcoming* album made from a decimated low poly copy:
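“Decimation” just means reducing the polygon count while keeping the overall shape, which is why the album-cover version reads as faceted. Production tools use clever error metrics, but the crudest version of the idea, vertex clustering, fits in a few lines. This sketch is purely illustrative, not the algorithm any particular package uses:

```python
def decimate_by_clustering(vertices, faces, cell=0.1):
    """Crude decimation by vertex clustering: snap vertices to a grid of
    the given cell size, merge vertices sharing a cell, and drop any
    triangle that collapses when its corners merge."""
    cluster_of = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_verts = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_faces = []
    for i, j, k in faces:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:   # skip degenerate triangles
            new_faces.append((a, b, c))
    return new_verts, new_faces
```

A bigger `cell` gives a chunkier, lower-poly result; real decimators preserve silhouettes far better than this does.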
In the next post, I’ll cover some of the cleanup work. Thanks for reading!