Introduction to macroscanning

Macroscans started life as a personal project during the first Covid lockdown in early 2020. Prior to this I had been developing collaborative VR sessions for the construction industry, building tools for supercharging meetings and design development sessions featuring scan data. The result was a prototype VR platform for 6 users sharing the same space, built on Epic Games' then newly released multi-user VR template. (Fig 1)

Fig 1  Collaborative VR session for inspecting 3d scans as scale models

The high realism of photogrammetry comes across well on a VR headset, empowering users to inspect and explore 3d scans of buildings and urban environments in minute detail. This only works, however, if the 3d scans are scaled down in size, like a doll's house. At this size the model detail is sufficiently high, providing a unique level of quality and fidelity for a VR application. (Fig 2)

The quality threshold required to qualify as a macroscan is a minimum of 1:1 texture fidelity at a distance of 1 metre in VR. In other words, when viewing a surface from a distance of 1 metre in VR, the texture should be crisp, with no visible blur.
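
To make that threshold concrete, here is a rough back-of-envelope calculation. The headset resolution figure is my own assumption (roughly a Quest 2 class display at ~20 pixels per degree), not a spec from the project, and '1:1' is read as at least one texel per display pixel:

```python
import math

# Back-of-envelope: how small must a texel be to look crisp at 1 metre?
# Assumption (illustrative): headset angular resolution of ~20 pixels
# per degree, roughly a Quest 2 class display.
pixels_per_degree = 20.0
viewing_distance_m = 1.0

# Angle subtended by one display pixel, in radians.
pixel_angle_rad = math.radians(1.0 / pixels_per_degree)

# Footprint of one display pixel on a surface 1 metre away.
# For 1:1 fidelity we want at least one texel in that footprint.
max_texel_size_m = viewing_distance_m * math.tan(pixel_angle_rad)
print(f"max texel size: {max_texel_size_m * 1000:.2f} mm")       # ~0.87 mm

# Surface span covered by a single 8K (8192 px) texture at that density.
print(f"one 8K texture spans ~{8192 * max_texel_size_m:.1f} m")  # ~7.1 m
```

On those assumptions a texel needs to be under about 0.9 mm on the surface, so a single 8K texture can cover roughly 7 metres of wall before dropping below the threshold.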

Fig 2  3d scan presented in VR as a scale model.

Whilst the vast majority of 3d scan data is viewed on 2d screens, nothing compares to the intimacy and agility of the VR viewing experience. Using VR is a key aspect of what Macroscans is doing, and why texture fidelity is of utmost importance. (Fig 3)

I created a reference scan of a small studio apartment to demonstrate the quality threshold (Fig 3). The final scan contained 45 8K textures. Unreal Engine was used to view the scan in VR, but it was necessary to disable all lighting, tone mapping and post processing. This ensured the subtle colours and tones of the original photographs were preserved in the displayed model as faithfully as possible. Standing inside the scan at life-size in VR, the feeling is remarkably similar to being in the real apartment.
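
As an illustration of that last step (the exact settings used are not listed here, so treat this as a sketch), Unreal's standard show flags can switch off lighting and tone mapping, for example via the editor's Python plugin:

```python
import unreal

# Sketch: disable lighting, tone mapping and post processing so the
# displayed textures stay as close as possible to the source photos.
# Assumes the Python Editor Script Plugin is enabled; the show flags
# are standard Unreal console commands.
for cmd in (
    "ShowFlag.Lighting 0",        # view the scene unlit
    "ShowFlag.Tonemapper 0",      # bypass filmic tone mapping
    "ShowFlag.PostProcessing 0",  # drop bloom, auto exposure, etc.
):
    unreal.SystemLibrary.execute_console_command(None, cmd)
```

The same result can be achieved per asset by feeding the scan textures into the emissive input of an Unlit material.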

Fig 3  Exploring the reference environment scan in VR

If we can deliver a standard of 1:1 at 1 metre, why can't we go further and exceed the fidelity of the real world with 3d scans? To do this I started with a focus-stacking system so that the extremely shallow depth of field of macro lenses could be overcome. Anything captured with a macro lens (1x magnification or greater) results in a texel density much higher than 1:1, allowing us to increase the model size significantly. In fact, I increased the size of our first insect scan (captured at 2x magnification) by almost 100x whilst maintaining pin-sharp textures in VR. (Fig 4)
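
The arithmetic behind that headroom is straightforward. Taking an assumed ~3.9 micron pixel pitch (a typical 24 MP APS-C sensor; the actual camera is not specified here) and the ~0.87 mm texel budget from the earlier calculation:

```python
# Sketch: how far can a 2x macro scan be enlarged before textures blur?
# Assumed numbers: ~3.9 micron pixel pitch (24 MP APS-C class sensor)
# and the ~0.87 mm max texel size from the 1:1-at-1-metre threshold.
pixel_pitch_um = 3.9
magnification = 2.0           # 2x macro, as used for the first insect scan
threshold_texel_mm = 0.87

# Size of one captured texel on the subject itself.
subject_texel_um = pixel_pitch_um / magnification    # ~1.95 microns

# Maximum enlargement before a texel outgrows the VR threshold.
max_scale = (threshold_texel_mm * 1000) / subject_texel_um
print(f"max enlargement: ~{max_scale:.0f}x")         # ~446x
```

On those numbers a 2x capture could in principle be enlarged well past 100x, which leaves comfortable headroom for the detail inevitably lost during reconstruction and texture reprojection.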

Fig 4 – Macroscan in Unreal Engine

The implications of this are interesting. By capturing tiny creatures and objects from the natural world we can get close to a dimension of scale that is normally unseen, really close! The micro world is currently perceived only through macro photography or microscope imagery; now we have a way of bringing this part of our world to life in an immersive experience. After trying the first VR build I was struck by the impact it had on my 9-year-old sons. They were fascinated by seeing the insects at enlarged sizes and remarked on how 'cute' or 'cool' they looked. (Fig 5 & 6)

Fig 5  Exploring a micro environment scan in VR

I noticed how they behaved towards real insects after experiencing the VR: much more interest and regard for the creature, as they might show a dog or a cat. This suggested an educational dimension to the project, one that connects with the subconscious in ways I had not predicted.

Fig 6  Getting up close to a 2-metre-long wasp in VR

The process of macroscanning small objects and environments, refining the data and deploying it in VR is a complex one. The photography and focus-stacking system alone produces thousands of shots, typically 4,000 or more for a single scan (see the sketch below). Then there is image processing, model creation, refinement, decimation and optimisation. Final steps can include rigging and animation.
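
To show where those numbers come from, here is an estimate with illustrative values (not the actual rig settings, which Tutorial 2 will cover):

```python
# Sketch with illustrative numbers: why one macroscan needs thousands
# of exposures. Every camera position requires a full focus stack.
positions_per_ring = 24    # camera stops around the subject per ring
rings = 4                  # elevation angles
slices_per_stack = 50      # focus-stack depth at each position

total_shots = positions_per_ring * rings * slices_per_stack
stacked_images = positions_per_ring * rings    # photogrammetry inputs
print(total_shots, stacked_images)             # 4800 shots -> 96 images
```

Note the 50:1 reduction: thousands of raw exposures collapse into a much smaller set of fully focused images before photogrammetry even begins.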

Fig 7  Macroscanning rig

Why photogrammetry?

Why am I using photogrammetry when there are several 3d capture technologies available? Why use a non-explicit capture method over an explicit one (algorithmically reconstructed as opposed to measured point to point)? Because we are bending the laws of physics with focus-stacking and need some flexibility in the 3d reconstruction.

Photogrammetry is also the most scalable 3d scanning technology we have. The reason comes down to the camera lens and the camera sensor: in each case we can scale the amount of detail we want to capture. For explicit capture systems such as lidar or structured light, this scalability is simply not available.
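
The scaling relationship is simple: the texel size on the subject is the sensor's pixel pitch divided by the lens magnification, so swapping either component directly scales the captured detail. The setups below are illustrative, not the project's actual configurations:

```python
# Sketch: one formula covers room-scale capture through extreme macro.
def subject_texel_um(pixel_pitch_um: float, magnification: float) -> float:
    """Size of one captured texel on the subject, in microns."""
    return pixel_pitch_um / magnification

# Illustrative setups on the same ~3.9 micron sensor.
for name, mag in [("room scan, 0.01x", 0.01),
                  ("macro, 2x", 2.0),
                  ("extreme macro, 5x", 5.0)]:
    print(f"{name}: {subject_texel_um(3.9, mag):.2f} um per texel")
```

A lidar unit, by contrast, has a fixed angular resolution; there is no equivalent of mounting a longer lens or a denser sensor to capture finer structure.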

Additionally, with the release of more refined solver algorithms in software like Reality Capture and Agisoft, we have now arrived at an inflection point where photogrammetry outputs have become consistent enough for real-world use, provided the source data is correctly prepared.

Fig 8  Inspecting a RAW macroscan of a Tortoise Beetle (Cassidinae) in Reality Capture

It is my view that photogrammetry will underpin 3d scanning technology development long into the future. The focus will be on achieving higher fidelities and moving away from the 'melted wax' look so commonly associated with the format.

Fig 9  Specimens waiting to be scanned

I will share my findings in 5 tutorial instalments over the next few weeks. With the right tools the process can become accessible to anyone open to a challenge! I see tremendous value in this technology for areas like historical reconstruction, forensics, education, digital preservation, archiving and analysis, amongst others. I am excited to see how it develops in the future, with more people jumping in.

The first of 5 tutorials will go live on 12/07/2022

TUTORIAL 1 – Building your own macroscan rig

Covers all the elements that make up the rig: camera types, stacking rails, stages, lighting and the cross-polarisation setup, plus connectivity between rail and computer workstation for an optimal working environment.

TUTORIAL 2 – Using the macroscanning rig

A close look at the process of focus-stack creation and processing, covering magnifications and lenses, filters and aperture settings, and specimen preparation.

TUTORIAL 3 – Photogrammetric reconstruction – making the model

A detailed step-by-step process from image loading to mesh creation, texturing and iterative reprojection. Software covered includes Reality Capture, 3dsmax, 3dcoat, Photoshop and Instant Meshes. Equivalents could be used (Maya, Agisoft and ZBrush, for example).

TUTORIAL 4 – Mesh and texture processing – refining the model

A close look at the process of refining and optimising capture data into production-ready assets for rigging, animation, lighting and shader creation. Software covered includes Reality Capture, 3dcoat, 3dsmax, Photoshop, Instant Meshes and Unreal Engine.

TUTORIAL 5 – Deployment to VR – experience the model 😊

Covers deployment in Unreal Engine for an optimal VR viewing experience. Attention will be paid to UDIM management, shader management, pixel sampling, and the various Unreal setup requirements for the best VR experience.