A header II…

October 18, 2016

Some additional cleanup work in ZBrush here: I smoothed out some of the noise, added more definition to the skin creases and created a displacement map from the original scanned diffuse texture for micro detail such as pores and stubble. It's getting closer to something usable, but there's still more work to do on the model and on the diffuse and displacement maps.
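For anyone wondering what "a displacement map from the diffuse" means in practice: the detail map I'm after is basically the high-frequency luminance of the scanned colour texture, recentred around mid-grey. The actual map was authored in ZBrush, but here's a minimal sketch of the idea in Python (Pillow + NumPy, with purely hypothetical file names):

```python
# Sketch only: derive a micro-detail displacement candidate from a diffuse
# texture by isolating its high-frequency luminance. File names are
# placeholders; the real map was made in ZBrush.
import numpy as np
from PIL import Image, ImageFilter

diffuse = Image.open("head_diffuse.png").convert("L")         # luminance only
blurred = diffuse.filter(ImageFilter.GaussianBlur(radius=8))  # low-frequency base

# High-pass detail, recentred around mid-grey so it can push the surface
# both in and out, then clamped to the 8-bit range.
hi = np.asarray(diffuse, dtype=np.float32) - np.asarray(blurred, dtype=np.float32)
disp = np.clip(hi + 128.0, 0, 255).astype(np.uint8)

Image.fromarray(disp).save("head_micro_disp.png")
```

The idea is to keep the strength dialled right down so it only contributes pore and stubble level detail on top of the sculpted forms.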



A header…

October 4, 2016

I scan rocks. Lots of rocks. Rocks are good. Rocks don't move, aren't squishy, aren't affected by wind and stay rooted in place if you have to leave the scene and come back; rocks just make great scan subjects. Humans are different for obvious reasons: a stifled smile, a muscle twitch, a weight shift or a rumple in clothing caused by a draft are all enough to throw off a 3D scan when using photogrammetry. It'd be great if the same cheap 3D scanning technique that works so well for large static objects worked for humans, but unfortunately it doesn't; the best human scans come from expensive multi-camera rigs that folks like James Busby use to get scans like these. Maybe a detailed mannequin or waxwork would do the job, but those are hard to access. People are everywhere, but it really takes a patient subject to be a good model for single-camera photogrammetry. They're gonna have to sit or stand still while you orbit around them clicking away with the camera, fix their gaze at a point in space, control their breathing and be prepared to start the shoot from scratch if they have to move for whatever reason. Then they have to be OK with you using their likeness. It's all a big ask if you're not paying someone, and that's where family and friends come in!

For the past few weekends my kids and I have been taking a swing at handheld human scanning. First we tried my eldest son as the model with me taking the photographs, but after a few tries he decided he just couldn't hold in the laughing and quit on me! So we swapped roles: I set the camera up on a tripod, gave him a remote shutter release, showed him how to frame and focus, and took a seat in the garden. I closed my eyes to make it easier to hold still; scans like this often have trouble resolving reflections in the eyes and can leave the area very messy. My son took about 22 photographs in a 180 degree orbit, in a single row from right to left then back again. When I took them back to the PC for raw processing I saw that fewer than 14 images were in proper focus, and the daylight had faded by then so it was too late to reshoot. We had tried hard, and rather than see the session wasted I quickly masked the images and threw them into PhotoScan anyway. Now, "garbage in, garbage out", as the saying goes in VFX, and here the results were predictably less than stellar, but they're actually not a bad starting point and definitely salvageable with a bit of patch-up work and some sculpting in Mudbox/ZBrush. It's all practice for me right now anyway. This is what the raw mesh looked like out of the box, rendered here in V-Ray using the Fast SSS shader:
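(As an aside before the renders: if you're on the Pro edition, the "mask them and throw them into PhotoScan" step can also be scripted from the Python console. The sketch below is only indicative; the exact enum and keyword names vary between PhotoScan releases, and the paths are placeholders.)

```python
# Rough PhotoScan Pro console sketch of the align -> dense cloud -> mesh pass.
# Paths are placeholders and enum/argument names differ between releases.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["photos/head_01.jpg", "photos/head_02.jpg"])  # etc.

# Roughly the same knobs as the GUI workflow.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save("head_scan.psz")
```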


There was more daylight on one side of the head, so detail on the right was more accurately resolved. On the side with less light you'll notice the ear wasn't captured and there is more noise around the beard and nose; this is also due to involuntary movement. The noise is more visible from the surface normals (click the image for a higher-res version):


So, although some micro detail was captured, like the forehead wrinkles and some skin blemishes, the beard and hair are a mess of wobbly noise. This should be fixable in ZBrush (or with, you know, a comb in real life), and I'll also mirror the ear to the other side and try to patch that up. I managed to get on the beta of Marmoset Toolbag 3 a while ago; here's a very quick low-poly normal-mapped version using the skin shader and some secret sauce:


And lastly for now, a cover for my forthcoming* album made from a decimated low poly copy:


In the next post, I’ll cover some of the cleanup work. Thanks for reading!


*Not forthcoming.



Quigley’s Rock…

September 1, 2016

Recently I spent a week in Inishowen, Donegal, where I took a tripod, monopod, laser measuring tape, Macbeth card and a Nikon DSLR with the express purpose of taking a holiday but also doing some proper 3D scans when I could 🙂 Long photogrammetry sessions were impossible on some coastal trips up there, but I did manage to get quite a few partial scans, three or four decent ones, and I learnt a truckload during the exercise. Right now I'm in the process of meshing the scans and breaking them up into usable modular assets as part of a full "Irish Coast" biome, including LODs at 16k, 8k, 4k and 2k tris.
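The LOD chain itself is conceptually simple: take the cleaned-up scan mesh and decimate it down to each triangle budget in turn. For illustration only (this isn't the actual pipeline), here's the idea sketched with Open3D's quadric decimation; the file names are placeholders:

```python
# Sketch of the LOD chain idea: quadric decimation of the cleaned scan mesh
# down to each target triangle budget. Open3D and the file names are just
# for illustration, not the actual pipeline.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("irish_coast_rock_hi.obj")
mesh.compute_vertex_normals()

for target in (16000, 8000, 4000, 2000):
    lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    lod.compute_vertex_normals()
    o3d.io.write_triangle_mesh(f"irish_coast_rock_lod_{target // 1000}k.obj", lod)
```

Each pass halves the budget, matching the 16k/8k/4k/2k targets above.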

I've refined my pipeline a bit too, incorporating Substance Painter 2 into the mix (its simultaneous multi-texture painting has been a huge time saver in the mesh cleanup process), and I've had a look at Megascans, which promises to be an essential part of working with PBR. Here is one of the scans that required the least cleanup; click for large versions.






Here is a comparison of the LODs, from 16k tris (nearest) down to 2k at the back:




Finally, here is a link to a Marmoset Viewer version that you can navigate in real time on my Artstation page. (You can move the light by holding Shift and mousing around.)





Lookdev experiments II…

August 9, 2016

As you'll have noticed, I've been running a lot of art through Prisma lately. Don't worry, it'll stop soon, but not before I post some of my experiments in non-photorealistic rendering!

While looking through Google Photos the other day I searched for "rocks"; I was sure I had taken a bunch of landscape pictures ages ago for scanning that I hadn't used, and the search turned up a few. These were taken about three years ago, on a smartphone, while pushing a buggy on a broken path, so the quality wasn't particularly good. I figured if they wouldn't cut the mustard for a realistic prop, I'd try something non-photoreal with them, maybe something that might feel at home in Okami or Child of Light.

Although Agisoft PhotoScan processed them into a 3D model quite well, there were some large holes which required fixing in Mudbox, and also missing texture information which I'm currently patching up. I made a low-poly version and unwrapped it, baking out normals, ambient occlusion and albedo from the high-res scan. I took the generated albedo texture and painted over it in Photoshop using some custom brushes and standard filters to see if I could get a hand-painted feel (the Prisma servers were down, which happens a lot!). I then applied similar techniques to the normal map, removing high-frequency detail and generally softening most areas. I also added some geometry for a cel-shaded outline effect. Here are the results of this in real time:
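(A quick technical aside: the normal map softening is nothing fancy, just blur away the high-frequency detail and renormalise so the texels stay unit-length vectors. The real pass was done by hand in Photoshop; this is only a sketch of the idea, with placeholder file names.)

```python
# Sketch of the normal map softening: blur out high-frequency detail, then
# renormalise so the encoded vectors stay unit length. The actual pass was
# done by hand in Photoshop; file names here are placeholders.
import numpy as np
from PIL import Image, ImageFilter

src = Image.open("rock_normal.png").convert("RGB")
soft = src.filter(ImageFilter.GaussianBlur(radius=4))

# Map 0..255 back to -1..1, renormalise each texel, then back to 0..255.
n = np.asarray(soft, dtype=np.float32) / 127.5 - 1.0
n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-6
out = ((n + 1.0) * 127.5).clip(0, 255).astype(np.uint8)

Image.fromarray(out).save("rock_normal_soft.png")
```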

Prisma came back up last night and I ran the albedo through some of the filters that wouldn’t change the hues too much. Here is where I started:


Here are some of the filtered textures, rendered in Marmoset Toolbag 2:







Lookdev experiments…

August 2, 2016

I've blogged before about sculpting some Batman character busts for 3D printing; there are now more than a few littering my desk at home, some finished, others abandoned. I was looking for something to run through the Prisma app on my phone and picked some Joker and Batman prints that I made last year. I ran each image through a number of filters and saved out the results, then hand-painted some colour layers and lighting enhancements and composited them together to achieve the looks below.









July 29, 2016

I lashed a few images into Prisma to see what the consistency would be like between frames; not bad, as it turns out, but the workflow is pretty clunky on a phone. I did some optical flow frame blending in After Effects to pad out the animation a little. What do you reckon?
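(For the curious: the blending was done with After Effects' optical flow tools, but the underlying idea, estimate dense flow between neighbouring frames and warp part-way along it to synthesise in-betweens, looks roughly like this in OpenCV. A sketch only, with placeholder file names, not what I actually ran.)

```python
# Sketch of optical-flow frame blending: estimate dense flow between two
# Prisma frames and warp halfway along it to synthesise an in-between frame.
# The real blending was done in After Effects; this just illustrates the idea.
import cv2
import numpy as np

a = cv2.imread("frame_01.png")
b = cv2.imread("frame_02.png")
gray_a = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
# Backward-warp frame A half a step along the estimated motion to
# approximate the in-between frame.
map_x = (grid_x - flow[..., 0] * 0.5).astype(np.float32)
map_y = (grid_y - flow[..., 1] * 0.5).astype(np.float32)
mid = cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame_01_5.png", mid)
```

Doing this for each neighbouring pair gives you the extra frames needed to pad the animation out.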


This is what an input frame looked like:


July 27, 2016

I jumped on the Prisma app bandwagon once it launched on Android. I'm sure we'll all quickly tire of it, but right now I love messing about with the filters! I ran a simple 3D rendered image of my Joker character (no textures, simple lighting) through it and here are the results!

Click to enlarge.



Prisma output (9 variations):


Alternative view:



A couple more: