I needed Copilot's help converting displacement into real geometry; here is what it suggested for baking out the model. I used the ExtractRenderMesh command and turned the result into an OBJ file.
Apply Displacement
Select your object.
Go to Properties → Displacement.
Enable Displacement and adjust:
Height (amount of displacement)
Mesh Density (higher values = more detail, but heavier file)
Texture (image or procedural map)
Bake the Displacement into Real Geometry
Once you’re happy with the preview, convert the displaced mesh:
ExtractRenderMesh — creates a mesh from the render mesh (including displacement).
Or use ApplyDisplacement (in some Rhino versions) to directly bake it.
This creates a mesh object with the displacement baked in.
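Under the hood, baking displacement just means moving each render-mesh vertex along its normal by the height sampled from the texture. A minimal NumPy sketch of that idea (my own illustration of the concept, not Rhino's actual implementation):

```python
import numpy as np

def bake_displacement(vertices, normals, heights, scale=1.0):
    """Offset each vertex along its unit normal by its sampled height.

    vertices: (N, 3) array of mesh vertex positions
    normals:  (N, 3) array of unit vertex normals
    heights:  (N,) displacement value sampled from the texture per vertex
    scale:    overall displacement height (Rhino's "Height" setting)
    """
    return vertices + normals * (heights[:, None] * scale)

# Flat 2x2 grid facing +Z, displaced by a simple height pattern
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
norms = np.tile([0.0, 0.0, 1.0], (4, 1))
h = np.array([0.0, 0.5, 0.5, 1.0])
baked = bake_displacement(verts, norms, h, scale=2.0)
print(baked[:, 2])  # z = height * scale -> [0. 1. 1. 2.]
```

The mesh-density setting matters because displacement can only move vertices that exist, so a coarse mesh cannot capture fine texture detail.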
Clean Up the Mesh
Use commands like:
ReduceMesh — lower polygon count if needed.
Weld — merge coincident vertices and unify their normals.
FillMeshHoles — close gaps.
Check — verify mesh integrity.
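To illustrate what the welding step does, here is a hedged sketch (not Rhino's code) that merges vertices coinciding within a tolerance and remaps the face indices:

```python
import numpy as np

def weld_vertices(vertices, faces, tol=1e-6):
    """Merge vertices that coincide within `tol` and remap face indices.

    A rough stand-in for what a Weld command does before normals are
    recomputed: quantize positions, find unique ones, reindex faces.
    """
    keys = np.round(vertices / tol).astype(np.int64)  # quantized positions
    _, first, inverse = np.unique(keys, axis=0,
                                  return_index=True, return_inverse=True)
    return vertices[first], inverse[faces]

# Two triangles sharing an edge, but stored with duplicated vertices
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
faces = np.array([[0, 1, 2], [3, 5, 4]])
wv, wf = weld_vertices(verts, faces)
print(len(wv))  # 4 unique vertices instead of 6
```

Welding matters for export because unmerged duplicate vertices produce shading seams and inflate file size.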
Export
Select the baked mesh.
Use File → Export Selected.
Choose a mesh format supported by your target software:
.OBJ — common for 3D apps and game engines.
.FBX — keeps scale and orientation for animation pipelines.
.STL — for 3D printing.
In the export dialog, ensure mesh settings are high enough to preserve detail.
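For a sense of what the OBJ exporter actually writes, the format is simple text; a minimal writer might look like this (illustrative only, since real exporters also emit normals, UVs, and materials):

```python
def write_obj(vertices, faces):
    """Return minimal Wavefront OBJ text. Note OBJ face indices are 1-based."""
    lines = ["v %g %g %g" % tuple(v) for v in vertices]          # vertex lines
    lines += ["f %d %d %d" % tuple(i + 1 for i in f) for f in faces]  # faces
    return "\n".join(lines) + "\n"

out = write_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(out)
# v 0 0 0
# v 1 0 0
# v 0 1 0
# f 1 2 3
```

Because every vertex becomes a text line, a densely displaced mesh can easily produce a very large OBJ file, which is worth keeping in mind before uploading anywhere.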
White Point 5, Black Point 0, Initial Quality set to Very High. Other settings left on the preset.
Had difficulty uploading the model with these specs into ShapesXR; the issues stemmed from the file size being too large.
Adjusted Project Plan: I've decided that the SEM scans of the LCE material will have to take place for the second project rather than the first, due to difficulty procuring the material within the next week. Instead, I've decided to have students analyze a synthetic hydrogel under dry and wet conditions. This will require the SEM for the dry condition and a stereoscope for the wet.
Learning Goals
1: novice
2: some experience
3: have practiced this in some real projects
4: feel very comfortable
5: could be an expert mentor
After this course students will be able to:
1: 4 : Goal 0: articulate AR/VR visualization software tool goals, requirements, and capabilities;
2: 4 : Goal 1: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research;
1: 3 : Goal 2: execute tool evaluation strategies;
1: 3 : Goal 3: build visualization software packages;
1: 4 : Goal 4: comparatively analyze software tools based on evaluation;
2: 5 : Goal 5: be familiar with a number of AR/VR software tools and hardware;
2: 5 : Goal 6: think critically about software;
4: 5 : Goal 7: communicate ideas more clearly;
1: 3 : Goal 8: learn some coding for data visualization and/or interactivity
Project 1 Proposal <ADD LINK>
Presentation for Project 1 Proposal <ADD LINK>
End Presentation for Project 1 <ADD LINK>
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
CONTRIBUTION 1 [short description] <ADD LINK>
CONTRIBUTION 2 [short description] <ADD LINK>
.....
CONTRIBUTION N [short description] <ADD LINK>
Potential Wiki Changes
10 minute change
Add about section to UW Reality lab and link
Added to Wiki- "The UW Reality Lab aims to advance the state of the art in virtual and augmented reality by developing new technologies and applications..." The lab includes a Reality Studio which " explores and promotes research and educational endeavors for effective production and clear storytelling for and in immersive environments."
Link: UW Reality Lab
Location of Changes: VR Software wiki
Add in cost of licenses in Vizard
Add more information to the Visualization Toolkit (VTK) from website
1 hour change
Investigate Eyecad VR software- Describe its capabilities and requirements
RhinoVR- Applications
Best software for designing UI for AR/VR specifically
10 hour change
Examples of interactive experiences in displaying data for younger audiences (Basically- Is this a tool that can be used to convey information across different age ranges)
Best software for designing UI for AR/VR specifically
Digital nudging- what it is and how it can be used for more intuitive interactions with data
Total: X hours
1/26/26 - 4 Hours
15 minutes setting up Journal
2 hours reading background papers, taking notes and snapshots (Empathetic Mixed Reality and Hands In Space: Gesture interaction with Augmented Reality Interfaces)
1hr looking through wiki for potential updates and project ideas
45 min looking through Kenny Gruchalla's bio and website and taking notes
1/31/26 - 45 min
30 min reading background article
15 minutes looking over some project ideas
2/1/26
10 min looking for another article to read
30 min reading background article
1 hr 30 min looking over past projects for project ideas
45 min looking through pages on VR visualization software and VR development software
40 min looking through outside articles for different software and project examples
10 min Setting up DinoVR
5 min downloading Virtual desktop to laptop and headset
2/3/26
10 min reading through DinoVR paper (partial- need to finish this afternoon)
1 hr playing around in ShapesXR (partial- need to finish this afternoon)
2/8/26
3hr research for project presentation (SEM data translation mostly)
2/9/26
1hr putting together project presentation
2/11/26
45 min updating journal
1.5 hr research for project
20 min looking over projects page
Most Interesting phrases to me from the project ideas page: Core questions: How can existing VR software be redesigned to better support the needs of scientists? & Evaluate what collaborative techniques work well in VR and propose some new techniques that could aid scientific visualization.
For my project I'm interested in translating SEM data so that scientists (and those they would like to share their research results with) can view their materials/specimens in more detail. I would like to find a way to make viewing this data more intuitive in VR, and possibly explore some sort of collaborative platform. (Perhaps some sort of haptic feedback could be helpful?)
Context: ShapesXR's UI is quite similar to Figma's.
I would like to explore prototyping through platforms like ShapesXR. I'm especially interested in digital nudging in the VR environment. My main question is how to build interactions users can work with as seamlessly as they do on other devices.
1.) What I will do during the project:
a.) Create several prototypes for ways users can interact with a scene
b.) Analyze the UI's ease of use and how complex the interactions that can be created are
c.) Compare ShapesXR to other types of software that let users build interactions without coding.
d.) Research into effective UI and gestures for VR platforms.
e.) Explore cross platform design with ShapesXR, Figma, and Unity
2.) Activity: I will design an interactive scene for students to view different models of ice sheet melt over time. The activity will focus on how effectively users (with varying levels of prior experience) can design their own interactive components for others to engage with.
3.) Deliverables: Analysis of how difficult it is to prototype using ShapesXR with some prior experience in Figma, and an evaluation of differences between prototyping software like Figma and ShapesXR for intuitive user experiences, and feedback from users trying to navigate the UI I design in ShapesXR.
Context: Termite mounds are well known for inspiring self-cooling architectural systems.
As a designer, I'm also interested in the self-regulation of termite mounds, and I think it would be interesting to have a flow diagram to see how mounds intake and expel air/moisture.
1.) What I will do during the project:
a.) Research software capable of showing flow diagrams through the air channels (SolidWorks/Fusion may be helpful since I believe that they have a function for this)
b.) Find or make a model to run the flow simulation through.
c.) Create an interactive model where students can test different wind conditions.
2.) Activity: Students get to play around with the wind intensity and view how the air flows within the system.
3.) Deliverables: tutorial of the software used, a copy of the model created for the simulation, evaluation of how difficult the software is to use.
Software: Fusion360 and Unity
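As a rough sense of what a flow solver computes for a setup like this, here is a toy 2D potential-flow sketch in Python/NumPy. This is a stand-in illustration only; the actual project would rely on Fusion 360's simulation tools or a real CFD package.

```python
import numpy as np

# Toy 2D potential-flow sketch: hold a high "pressure" on the inlet (left
# edge) and a low one on the outlet (right edge), relax Laplace's equation
# on the interior, and read the flow direction off the gradient.
n = 20
phi = np.zeros((n, n))
phi[:, 0] = 1.0  # inlet boundary (held fixed; updates touch interior only)
for _ in range(2000):  # Jacobi relaxation of the interior cells
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
gy, gx = np.gradient(-phi)  # flow follows the negative potential gradient
print(gx[n // 2, n // 2] > 0)  # True: flow points from inlet toward outlet
```

The interactive version of the activity would amount to letting students change the boundary values (wind intensity) and re-rendering the resulting flow field.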
Context: URI has been developing a liquid crystal elastomer, which is a programmable shape change material. I'm working with this lab for my thesis, in order to design a product inspired by the material's capabilities.
It might be helpful for the lab to view the material's composition before, during, and after the shape change in order to notice changes on a microscopic level.
1.) What I will do during the project:
a.) Communicate with URI and ask for data and/or images regarding the material's structure
b.) Scan the material on the SEM (scanning electron microscope) in the RISD Nature Lab.
c.) Design a way to interact with the structure through zoom and swapping between different states of the material
d.) Research the best software for translating SEM scans into 3D models (Possibly Rhino).
e.) Use software like Unity to bring the models to life for simple interactions.
2.) Activity: Inform students about the material's properties and have them either try to deduce which scan is from which stage (before, during, or after shape change) or try to notice differences between the material's structure.
3.) Deliverables: Tutorial on how to scan the different states of the material, images and models of the scans used in the model, evaluation of the workflow from 2D to 3D image.
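One common way to get rough 3D geometry from a single SEM image is to treat pixel brightness as height. That mapping is an assumption on my part (SEM intensity is not literally topography), but a small sketch of the idea looks like this:

```python
import numpy as np

def image_to_heightfield(gray, z_scale=1.0):
    """Treat a 2D grayscale array as a heightfield mesh: one vertex per
    pixel, two triangles per pixel quad. Brightness-as-height is only a
    rough proxy for real surface relief."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.column_stack([xs.ravel(), ys.ravel(), (gray * z_scale).ravel()])
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append((i, i + 1, i + w))          # upper triangle of quad
            faces.append((i + 1, i + w + 1, i + w))  # lower triangle of quad
    return verts, np.array(faces)

# Fake 3x3 "scan": brighter center reads as a raised bump
scan = np.array([[0.0, 0.2, 0.0], [0.2, 1.0, 0.2], [0.0, 0.2, 0.0]])
v, f = image_to_heightfield(scan, z_scale=5.0)
print(len(v), len(f))  # 9 vertices, 8 triangles
```

A mesh like this could then be imported into Rhino or Unity, and swapping between material states would just mean swapping heightfields built from different scans.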
Software: Unity and Rhino
Context: Mycelium have hyphae which can serve different purposes from reproduction to nutrient absorption.
There are different types of hyphae, and it would be interesting to visualize the structural components of hyphae in VR to better understand fungal biology.
From a glance at some of the past projects, it seems that software like Unreal Engine and Unity is quite popular. I've also seen a student mention ParaView, which would be interesting to try since it is beginner friendly. From some outside research, I'm interested in Flow Immersive and the Blender data visualization plugin; in addition, I'd like to do more research into platforms like ShapesXR, which has an interactive prototyping workflow very similar to Figma's.
Flow Immersive
(Top Image)
Flow — Turn Data Into an Experience Over Your Table
Many of the examples explore tabletop 3D graphical views of data, from finance to environment, energy, and sales.
& Blender Data Visualization Plugin
(Bottom Image)
Visualization of molecular data in Blender 4.2 update
-The plugin makes converting the data simpler but is sadly not interactive.
Software Interest:
Unity: I have some experience working with Unity on a game project, but I'm still relatively a novice when it comes to C# (Unity's scripting language).
Unreal: I don't have any experience with Unreal. My interest in this software stems from wanting to understand it better. On the left is an image of a digital twin built in Unreal.
ParaView: Looking at the wiki, this software seems like a beginner-friendly option to explore. The main downside is its lack of support for collaboration between more than 2 users.
ShapesXR: Since the UI is so similar to Figma's, I think it would be fun to experiment with what I already know from that application. This platform may be the easiest for at least creating interactive mockups that could be transported to a different platform.
Blender: I have some experience with the node UI for rendering objects in Blender, so the learning curve for figuring out the data visualization plugin and experimenting in there may be a bit more lenient. From the visualization of the MRI scan below, I'm interested in some sort of physiological data (in human or animal).
Source: MRI Visualizations in Blender — Curtis Holt
(need to finish notes)
The tool file for this project is available for free
User took a volumetric shader approach to the slices provided from their own MRI scan
This case study used real world point cloud data to drape digital meshes over real objects. This let users control digital cars in the real world with collision detection.
Article explores how to use natural gestures for more intuitive AR applications.
Starting from Page 54
This was a study using projections and predictive analysis of Tenebrio Molitor (yellow mealworm) production using simulation techniques.
Empathetic Computing: A research field that explores how technology can foster deeper empathy and understanding between people.
Mixed Reality (MR) can be a highly personalized platform that can capture gaze, movement, and physiological data. The user's environment can also be captured providing some context. This all provides an opportunity to identify a user's thoughts and state of mind for more thoughtful collaboration.
Sharing Where You Gaze
This was a study for sharing gaze in asymmetric collaboration. One person uses an AR system with eye tracking on a head-mounted display (HMD) and head-worn camera. The local worker and remote worker worked together to assemble physical building blocks.
Results: It was found that sharing eye gaze from the local worker and pointing cues from the remote helper helped improve co-presence and collaborative performance.
CoVAR
This is a system that combines AR and VR tech for room-scale collaboration.
This study explored the use of different awareness cues to inform users what their collaborators are doing/where their gaze lies.
This was my first time using Google Earth in VR. It was a lot of fun to visit places I've seen in real life from a different perspective and at a different time of day. It also helped me get a better feel for areas where I'm interested in living after school. I'd say the UI is pretty intuitive; it helps to have the Earth in the user's right hand for location selection.