The model creation stage in Agisoft Photoscan can be quite straightforward and "mechanical" but, depending on how well our object was shot, it can also become a daunting process.
Note that processing settings at every stage of the model creation do not differ too much between projects but a basic understanding of how they work is essential.
Some of the steps simply take processing time for our images. Both the CPU and GPU contribute to this, but you need to enable GPU acceleration in the preferences, as it is not turned on by default.
Overview of the process of this stage:
01. Importing - import our photos
02. Masking - create masks to isolate our object from the background
03. Aligning - align our photos and get a sparse point cloud representation of our model
04. Aligning Refinement - removing points that seem out of place, or that the program flags as erroneous
05. Dense Cloud - building a high-density point cloud of our model
06. Model Creation - building a poly mesh model based on our Dense Cloud
07. Orientation and Scale - orienting our model to real-world space and scaling to real-world scale
Take a moment of your time to watch this tutorial, which goes through almost all of the steps mentioned above. After watching, continue reading below for some important notes, tips and tricks, and additional resources.
A few notes on the aforementioned topics:
01. Importing - When importing our photos, remember to use the high-contrast images exported from Lightroom/Bridge first. This will help Agisoft Photoscan extract as much detail as possible from our object.
02. Masking - Masking can be done in a few different ways depending on the photography studio setup and the object.
- Method 1 - From Background - As illustrated (rather poorly) in the video above, masks can be generated from a background image. This method works properly if, during the shooting phase, an additional photo of the background alone is taken for every set (each circle the camera makes around the object). A problem with this method is that the object has to be removed from and returned to the studio between sets, which might cause slight inconsistencies between them. When the background is a uniform colour - most commonly black or white - and the lighting in the studio is flat enough, background masks can be created simply by making an additional plain black or white image, as shown in the video, and using it in place of the background photo. Here we run into a different problem (again evident in the video above): when masking against a black background, dark parts of our object's texture will be masked out as well and, respectively, when masking against a white background, bright parts of the texture will be masked out. Both approaches require checking all of the masks and refining them manually for best results.
- Method 2 - From File - Similar to the previous method but done in a slightly different way and requiring additional software - we can photograph our object on a green screen. We can then prepare our masks in After Effects, Fusion, or some dedicated chroma-keying software and import them into Agisoft. This method might be a bit slower but can produce better results depending on the situation.
A more visual demonstration of these two methods can be seen in this short tutorial (the masking part starts at 03:00, but check out the whole video anyway):
- Method 3 - From Model - The idea of this method is to first create a very rough model (skipping the masking stage entirely) with low settings, and then use this rough model to generate our masks. You can find a more detailed explanation of it in Bertrand Benoit's blog here (also check out his whole process!): https://bertrand-benoit.com/blog/the-poor-mans-guide-to-photogrammetry/
Important! As with the first method, this one also requires some manual fine-tuning, or at least checking all of the masks, before continuing to image alignment! So try to familiarise yourself with all of the mask creation tools within the process.
All of these methods have their pros and cons and are more or less appropriate depending on the object. In the end, you will work with what you shoot or what you get from the photographer - so it is essential to understand the whole process and communicate any specific requirements where needed.
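The black/white-background failure mode described under Method 1 is easy to see with a toy threshold mask. This is plain Python with made-up pixel values, purely for illustration - it has nothing to do with how Photoscan generates masks internally:

```python
# A toy 1-D "scanline" of grayscale values (0 = black, 255 = white).
# The object occupies indices 2-6; the black backdrop is indices 0-1 and 7-9.
scanline = [5, 8, 200, 180, 20, 190, 210, 6, 4, 7]
object_indices = range(2, 7)

# Naive mask for a black backdrop: anything darker than the threshold
# is treated as background and masked out.
threshold = 30
mask = [value > threshold for value in scanline]  # True = keep pixel

# The dark part of the object's texture (index 4, value 20) is lost:
lost = [i for i in object_indices if not mask[i]]
print(lost)  # [4] - a dark texture pixel wrongly classified as background
```

This is exactly why the masks need a manual check: any part of the object darker (or, with a white backdrop, brighter) than the background threshold silently disappears from the mask.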
03. Aligning - Some things that have not been addressed in the first video:
- Key Point and Tie Point limits - Values of 400,000/10,000 are quite okay and can be used in most scenarios. Of course, if you have a very complex model you might want to increase these values. An important thing to note is that a value of 0 lets the program decide how many points are necessary for the alignment.
- Constrain Features by Mask - Just remember to tick this whenever you've made masks for alignment, otherwise they won't be used.
- Generic Preselection - Checking this option means that the software does an initial low-accuracy scan of the photos prior to aligning and skips pairs that it thinks don't have enough features in common. While this option can greatly decrease alignment time, it can also leave out some important images that are needed. For simple objects and good photo sets it can be turned on, but for more complex ones it is better to leave it off.
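The alignment settings above can also be driven from Photoscan's built-in Python console. The following is only a sketch: it assumes the 1.2-era PhotoScan scripting API (parameter names changed in later versions - for example, `preselection` was split into `generic_preselection`/`reference_preselection` flags in 1.3), and it runs only inside Photoscan itself, so check the API reference for your version:

```python
import PhotoScan  # available only inside Photoscan's scripting console

chunk = PhotoScan.app.document.chunk

# Key/tie point limits as suggested above; 0 would let the program decide.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.NoPreselection,  # preselection off for complex objects
                  filter_mask=True,                       # "Constrain features by mask"
                  keypoint_limit=400000,
                  tiepoint_limit=10000)
chunk.alignCameras()
```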
04. Aligning Refinement - Using Gradual Selection to remove points based on the program's error estimates, as shown in the video, can have a very bad impact on the next stages of the process. When you have insufficient information in some areas of an object (often, when you only shoot the top part of an object, the lowest possible circle still leaves some very vague areas), the program decides to remove those points even though there is in fact enough information there for the object to be reconstructed correctly. You will end up with half an object and possibly a very messy texture. To avoid this problem, observe the point cloud and manually delete obviously incorrect points (you will usually notice quite a few floating in mid-air). Gradual Selection should then only be used on objects covered fully by the photo set (meaning not only the top half of the object was shot).
05. Dense Cloud - Keep in mind that the model you will create in the next step depends on the quality of the Dense Cloud. A higher-quality Dense Cloud gives the opportunity to create a more detailed, higher-poly mesh. If you have a simple object, it might not even be necessary to use the High quality setting; however, if you have a very detailed object and would like to retain its details in the mesh, you might want to choose Ultra High. The Ultra High setting is quite time-consuming, so choose carefully whether you really need it.
06. Model Creation - This stage isn't as time-consuming, but keep in mind that it might require quite a lot of RAM depending on the polycount. You have a few choices for Face Count - High, Medium and Low are predefined by the program and are basically different levels of decimation. Then there is the Custom option, where you can specify the polycount you would like. With Custom you also have the option of using the magic value of 0 - this skips the decimation process entirely and leaves you with the highest polycount possible. This can save some time, but it can also require more RAM. The general advice is to use the Custom - 0 setting so you have as much detail as possible for normal/displacement map creation, and to decimate in ZBrush before retopologising.
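Steps 05 and 06 can likewise be scripted from the built-in console. Again a hedged sketch for the 1.2-era PhotoScan API, runnable only inside Photoscan; in particular, whether `face_count` accepts a plain integer (the "Custom - 0" trick) rather than one of the predefined enums is version-dependent, so verify against the API reference for your release:

```python
import PhotoScan  # available only inside Photoscan's scripting console

chunk = PhotoScan.app.document.chunk

# Dense cloud quality drives how much detail the mesh can carry;
# UltraHighQuality exists too but is far more time-consuming.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)

# face_count=0 mirrors the "Custom - 0" setting above: skip decimation
# and keep the highest possible polycount (assumption - check your version).
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=0)
```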
07. Orientation and Scale - After aligning the cameras in Agisoft Photoscan, the model's orientation and position are arbitrary. Since we need to go back and forth between different applications, a solid orientation and scale relative to the world coordinate system is necessary. So let's center our model.
- First of all, we need to define 2 points on the model with a known distance between them. While viewing your model, right-click where you want the first reference point placed and choose Create Marker. Repeat for the second point.
- In the Workspace window, select the 2 created Markers and click the Add Scale Bar icon on the shelf. A line appears on your model. (To see the line and markers you need to enable Show Markers in the top right corner of the toolbar.)
- Go to the Reference Tab and enter the known distance in cm. Hit update.
- Now we can use the Region bounding box and a Python script to fix the rotation. It helps to switch between the Perspective/Orthographic views and toggle the cameras on/off while adjusting your region box. Reset or rotate the view to zero out the axes - the shortcut for this is "0".
- Align the model using the Rotate Object tool. Hit update.
- Now switch to the Region tool and rotate the red, green and blue bands perpendicular to the corresponding axes. This gives us the correct x, y, z orientation. The centre of the region box is where the 0,0,0 origin of our model will be.
- As soon as you are satisfied with the position and rotation hit Update. Now let’s run the “magic script” which you can download at the bottom of this page. Save the file in C:\Users\Your Username\Documents\Agisoft.
- After running the script, a Custom Menu appears, from which you should start it.
- Voila! It's that easy! :D
- Check out the video below for a more graphical demonstration, or see the tutorial all of this information was adapted from: https://2torialblog.wordpress.com/2016/04/16/align-photoscan/
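Under the hood, the scale-bar step above boils down to simple arithmetic: the known real-world distance divided by the measured distance between the two markers gives a uniform scale factor applied to every point. A minimal sketch in plain Python (the marker coordinates and the 25 cm distance are made up for illustration):

```python
import math

# Marker positions in Photoscan's arbitrary post-alignment units.
marker_a = (0.0, 0.0, 0.0)
marker_b = (0.0, 3.0, 4.0)

measured = math.dist(marker_a, marker_b)  # distance in arbitrary units
known_cm = 25.0                           # real distance entered in the Reference tab

scale = known_cm / measured               # cm per arbitrary unit
print(scale)  # 5.0

# Hitting Update rescales every point of the model uniformly:
point = (1.0, 2.0, 2.0)
scaled = tuple(scale * c for c in point)
print(scaled)  # (5.0, 10.0, 10.0)
```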
Finally, we can conclude this part of the process and export our model for further refinement in ZBrush.