March 1, 2026
3D Scanning in Vizproto
Note: Before embarking on this next assignment, please be sure to review my PowerPoint lecture from Module 2, which includes a history and overview of 3D data capture systems such as CT scanning, MRI, airborne LIDAR, and more: https://canvas.asu.edu/courses/244732/files/123818451?module_item_id=18939113 If you would like to view an actual video recording of the lecture, go to this link: https://drive.google.com/file/d/1ZU9kKyYjwuz7HzL_bTZLDK3TbxxOXb_p/view and watch from about 00:08:00 to 01:32:00.
I personally have been tracking this technology since the early 90s, when I did a residency at the Cyberware lab (an Oscar-winning technology company) in Monterey, California, and had access to their cutting-edge (at the time!) laser-based portrait and body scanners. I have explored various flavors of this technology—some DIY and affordable, some impossibly technical and expensive—as an aid to building sculptures and creating animations. Most recently (earlier this month, in fact), I had the opportunity to use volumetric 3D scanning at the MIX Center in Mesa, courtesy of Canon, Inc. This latest system uses 20 iPhones (4 equipped with LIDAR range capture) to capture figures in motion. The processing is aided by AI “splat” technology (Gaussian splatting), which populates 3D space with millions of tiny, fuzzy, 3D "Gaussians" (ellipsoids) representing light, color, and density, allowing for real-time, photorealistic rendering.
Over the years, students in the Vizproto class have explored a myriad of 3D scanning technologies. We’ve used desktop laser scanners (Cyberware, NextEngine, Roland), full-body laser scanning (Cyberware), Kinect infrared scanning (like on an Xbox console), structured light (Artec Eva), and many, many smartphone apps, including Polycam, Reality Scan, Kiri Engine, and more. We’ve also toured various labs on campus to see the “state of the art” and its significance for various research disciplines. We have visited the Keck Imaging Lab at ASU, where we witnessed confocal microscopy and scanning probe microscopy being used to create 3D models of fish scales, plant morphology, and objects as small as chromosomes. We have seen demonstrations of high-resolution laser scanning used by anthropologists studying the teeth of canines, the bones of Australopithecus, and Mimbres pottery. And we have met with planetary scientists who shared how ASU has been at the forefront of research scanning different celestial objects—such as the surface of Mars using MOLA (the Mars Orbiter Laser Altimeter).
The technology has come a long way since I first started teaching this class over three decades ago. While it would be great if we could all have direct experience of all the different technologies found across the many labs at ASU, we are very fortunate indeed to have an incredible 3D scanning technology right in our own pockets—smartphones equipped with photogrammetry (and LIDAR) capability.
For the past couple of years, I have been recommending Polycam as the software of choice. However, unless you purchase the PRO version, the software allows only a limited number of images per capture and won’t export standard formats (there is a workaround, though: export a .glb file into Blender, then convert the 3D geometry into an .stl for 3D printing). I still think Polycam is the best option. Kiri Engine and Reality Scan are also good choices. Let us ALL know if you find a 3D scanning app (for your phone) that you really like.
For even more options, check out this fairly recent review of MANY different smartphone 3D scanning apps: https://www.youtube.com/watch?v=kKy4cV4YbH4 It looks like a kind of cheesy sales pitch at first, but it is really a legit review of many of the popular 3D scanning apps. (Its top pick is the MakerWorld AI Scanner software.)
Whatever you choose to use, know that each will pose its own unique challenges. Some allow for the use of a turntable, for example. Others require you to download a specific pattern that is printed out and placed beneath the object during the scanning process. Bottom line: follow the instructions for whatever software you select to the letter.
Here are three apps that I can currently recommend for this project:
1. Reality Scan. https://www.realityscan.com/en-US/mobile (try the AI scanning tool). I captured 100 images (out of a possible 300) when scanning my test sculpture. See my test 3D model here: https://sites.google.com/view/vizproto/directory/dan-collins/collins-3d-data-capture
2. Kiri Engine. https://www.kiriengine.app/ (very popular with former Vizproto students. There is a free version, but the capability is limited). Here’s a brief tutorial: https://www.youtube.com/watch?v=-OiT5FUa5IM
3. Polycam. https://poly.cam (I’ve successfully used this for about 3 years now, but I have the PRO version. There are lots of complaints in the current reviews. Still, I love this software). Here is a YouTube tutorial: https://www.youtube.com/watch?v=Ibcl_vGJj_I
Your challenge for this project is to use a smartphone-based 3D scanning process to produce a “buildable” 3D model in the .stl (stereolithography) format. Unlike your “3D self-portrait,” you need to generate what’s called a “closed manifold” model—that is, a completely “closed,” continuous surface defining your object (no holes, no overlapping polygons, no dangling edges). The software will do most of the work for you, but you will need to ensure that your scanning process is consistent and fits the parameters of the particular software you are using. Here are a few tips:
· Make sure you are getting good angles to ensure coverage of your entire object.
· Keep your camera steady.
· Do not use flash.
· Use relatively flat lighting (no bright highlights or deep shadows).
· Shoot a minimum number of images (at least 20, but preferably 60 or more).
· Make sure that adjacent images overlap by about 50%.
· Stay away from shiny black, transparent or fuzzy/furry objects.
With some software (like the MakerWorld software), you will be asked to upload a video (rather than individual still images).
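To make the “closed manifold” requirement concrete, here is a short Python sketch (my own illustrative helper, not part of any of the apps above) that checks the core watertightness condition: in a closed triangle mesh, every edge must be shared by exactly two triangles. Mesh tools like Blender run checks along these lines when repairing a scan.

```python
from collections import Counter

def is_closed_manifold(triangles):
    """Check that every edge is shared by exactly two triangles.

    triangles: list of (i, j, k) vertex-index tuples.
    A watertight ("closed manifold") surface has no boundary edges
    (edge count == 1) and no non-manifold edges (edge count > 2).
    """
    edges = Counter()
    for i, j, k in triangles:
        # Each triangle contributes three edges; sort the endpoints
        # so (a, b) and (b, a) count as the same edge.
        for a, b in ((i, j), (j, k), (k, i)):
            edges[tuple(sorted((a, b)))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron is the simplest closed solid: 4 triangles, every edge shared twice.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed_manifold(tetra))      # True
print(is_closed_manifold(tetra[:3]))  # False: dropping one face leaves boundary edges
```

A scan with a hole in it (say, an unscanned underside) fails exactly this test—some edges belong to only one triangle—which is why hole-filling is a standard cleanup step before 3D printing.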
Generating a legal solid .stl file will be the first step toward creating a duplicate of your object using 3D printing. My suggestion is to use one of the capture apps above to create a .glb file, import it into Blender, use the hole-filling feature, then export a closed .stl (ready for the next project). I am currently having trouble importing the .glb generated by Reality Scan, but I will keep working on it. That will be the NEXT assignment, so don’t worry about it…yet!
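If you are curious what a “legal solid” .stl actually contains, it is surprisingly simple: just a list of triangles, each with a surface normal and three vertices. Here is a minimal, hypothetical Python sketch that writes an ASCII STL by hand (your capture software and Blender do this for you, with far more triangles and in a compact binary variant):

```python
def write_ascii_stl(path, vertices, triangles, name="scan"):
    """Write a triangle mesh to a minimal ASCII STL file.

    vertices:  list of (x, y, z) coordinates
    triangles: list of (i, j, k) vertex indices, wound counter-clockwise
               when viewed from outside (the STL convention)
    Normals are computed from the cross product of two edge vectors.
    """
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for i, j, k in triangles:
            v0, v1, v2 = vertices[i], vertices[j], vertices[k]
            n = cross(sub(v1, v0), sub(v2, v0))
            length = sum(c * c for c in n) ** 0.5 or 1.0  # avoid divide-by-zero
            n = tuple(c / length for c in n)
            f.write(f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# A tetrahedron: the smallest possible closed solid (4 triangular faces).
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (2, 0, 3)]
write_ascii_stl("tetra.stl", verts, faces)
```

Note that the format itself doesn’t enforce watertightness—an .stl full of holes is still syntactically valid—which is why slicing software complains about “non-manifold” meshes even when the file opens fine.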
I am extending the due date of this project to March 13 at 11:59 pm.
As always, don’t hesitate to reach out with questions.
Have fun!
Dan
p.s. For a tutorial in Reality Scan, check this out: https://www.youtube.com/watch?v=spPIqK3NVwc (it’s two years old but gives the basics; it is now possible to shoot continuous video to generate a point cloud).
p.p.s. Here is a link to Meshroom (an open-source solution for creating 3D models from multiple 2D images): https://alicevision.org/#meshroom This software is technically advanced and requires that you process the images on your own computer: you need a laptop with an NVIDIA GPU, the ability to configure the processing software, and the willingness to transfer images to your laptop from either your DSLR or your smartphone.