Demo Video
Overall Implementation
The step-by-step cooking instructions are implemented in Unity with AR Foundation, which overlays a recipe guide onto the user's screen within an AR environment. GameObjects containing separate canvases hold the different UI menus, such as MainMenu and ChoiceMenu; pressing a button triggers a script that disables the current menu and enables the next one. The RecipeMenu canvas displays instructions through a TextMeshPro element, and a script manages the sequence of instructions by storing the steps in an array and updating the displayed text each time the user taps. For the 3D cooking visual aids, the app imports 3D models in formats such as OBJ or FBX through Unity's asset pipeline and renders them in the AR scene with appropriate scaling and positioning, using ARRaycastManager to anchor the objects to detected surfaces.
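As a rough sketch of the menu navigation just described (the class and field names here are our illustrative assumptions, not taken verbatim from the project):

```csharp
using UnityEngine;

// Minimal sketch of the SetActive-based menu flow described above.
// Field names (currentMenu, nextMenu) are illustrative assumptions.
public class MenuNavigator : MonoBehaviour
{
    [SerializeField] private GameObject currentMenu; // e.g. the MainMenu canvas
    [SerializeField] private GameObject nextMenu;    // e.g. the ChoiceMenu canvas

    // Wired to a UI Button's OnClick event in the Inspector.
    public void GoToNextMenu()
    {
        currentMenu.SetActive(false);
        nextMenu.SetActive(true);
    }
}
```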
User Scenario Feature Implementations
User Scenario 1: Step-by-step cooking instructions
Visual implementation: The UI menus are implemented using Unity's canvas system, with each menu represented as a separate GameObject containing UI elements like buttons and TextMeshPro components. The canvas render mode is set to "Screen Space - Overlay," ensuring that the menus appear consistently on the user's screen, independent of the AR environment.
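This render mode is normally configured once in the Inspector; for reference, a minimal script equivalent might look like:

```csharp
using UnityEngine;

// The render mode is typically set in the Inspector on the canvas
// GameObject; this is only the script equivalent for reference.
[RequireComponent(typeof(Canvas))]
public class OverlayCanvasSetup : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Canvas>().renderMode = RenderMode.ScreenSpaceOverlay;
    }
}
```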
Interaction implementation: Navigation between menus, such as from the MainMenu to the RecipeMenu, is handled by button click events linked to scripts. These scripts disable the current menu's GameObject with SetActive(false) and activate the next menu with SetActive(true). The RecipeMenu dynamically updates its instructions through a TextMeshPro text field, controlled by a script that references an array of instruction strings and advances through it as the user taps.
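A minimal sketch of such a script, assuming placeholder step text and a button-driven handler (the names and strings below are illustrative, not the project's actual recipe text):

```csharp
using TMPro;
using UnityEngine;

// Sketch of the instruction-sequencing script described above.
public class RecipeInstructions : MonoBehaviour
{
    [SerializeField] private TMP_Text instructionText; // the RecipeMenu's TextMeshPro field

    // Placeholder steps; the real app stores one string per recipe step.
    private readonly string[] steps =
    {
        "Step 1: Slice the chicken into thin strips.",
        "Step 2: Heat oil in the skillet.",
        "Step 3: Stir-fry the chicken until cooked through."
    };

    private int index;

    void Start() => instructionText.text = steps[index];

    // Called from the RecipeMenu's "next" button (or a screen tap).
    public void NextStep()
    {
        index = Mathf.Min(index + 1, steps.Length - 1);
        instructionText.text = steps[index];
    }
}
```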
User Scenario 2: 3D cooking visual aid
Visual implementation: Each step of a recipe is paired with a specific 3D model prefab that represents the current stage of the cooking process. For instance, in the chicken stir-fry recipe, the first step shows 3D models of raw chicken on a cutting board, providing a visual cue for slicing preparation. As users progress through the recipe, the models update to reflect subsequent steps, such as a skillet with cooking chicken or sautéed vegetables. This dynamic spawning is achieved with ARRaycastManager, which detects flat surfaces and anchors the models to them.
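One simple way to express this pairing of steps and prefabs is a serializable struct populated in the Inspector; the struct and field names below are assumptions for illustration:

```csharp
using UnityEngine;

// Pairs each recipe step's text with its 3D model prefab, as described
// above. Names are illustrative, not taken from the project.
[System.Serializable]
public struct RecipeStep
{
    [TextArea] public string instruction; // text shown in the RecipeMenu
    public GameObject modelPrefab;        // e.g. raw chicken on a cutting board
}

public class RecipeData : MonoBehaviour
{
    // Populated in the Inspector with one entry per cooking step.
    public RecipeStep[] steps;
}
```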
Interaction implementation: Touch interaction for spawning and updating 3D objects is built on Unity's ARRaycastManager, which detects trackable surfaces. When the user taps the screen, a script performs a raycast from the touch point to find a valid surface using TrackableType.PlaneWithinPolygon. If a surface is detected, a Pose object captures the position and orientation of the hit, and the app calls Instantiate to spawn a 3D object at that location. A button press linked to a script advances the recipe instruction by incrementing an index and swaps in the prefab associated with the new step: the script destroys the existing 3D object with Destroy and spawns the one corresponding to the updated instruction. This keeps the 3D visual aid aligned with the current cooking step, providing a seamless and interactive AR experience.
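A condensed sketch of this tap-to-place and step-update logic, assuming the legacy Input touch API and an Inspector-assigned array of step prefabs (field names are our illustrative assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch of the raycast-based placement and step-update flow described above.
public class StepModelPlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager;
    [SerializeField] private GameObject[] stepPrefabs; // one prefab per recipe step

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    private GameObject spawnedModel;
    private int stepIndex;

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast from the touch point against detected planes.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose; // position + orientation of the hit
            ReplaceModel(hitPose);
        }
    }

    // Wired to the "next step" button alongside the instruction update.
    public void NextStep()
    {
        stepIndex = Mathf.Min(stepIndex + 1, stepPrefabs.Length - 1);
        if (spawnedModel != null)
            ReplaceModel(new Pose(spawnedModel.transform.position,
                                  spawnedModel.transform.rotation));
    }

    // Destroys the current model (if any) and spawns the one for the current step.
    private void ReplaceModel(Pose pose)
    {
        if (spawnedModel != null) Destroy(spawnedModel);
        spawnedModel = Instantiate(stepPrefabs[stepIndex], pose.position, pose.rotation);
    }
}
```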
Images Throughout Our Process