2 Abstract Spaces

Transforming Immersive Spaces to Geometric Abstract Environments

 

Motivation

My goal for this assignment was to work more with physical objects and mediums and to stay away from programming-heavy projects like my first Lab assignment. However, due to tunnel vision while working on this project, I inadvertently derailed into a digital rabbit hole filled with vertices and edges. I wanted to explore immersive ideas within the openFrameworks environment using 3D elements, visual footage, and the automatic generation and manipulation of meshes. I ended up with a different end result than envisioned, but the predefined goals were accomplished: a visual representation of a former environment abstracted through geometry.

The Process

I started off by finding recognizable places in Leiden using Google Street View. After gathering a handful of these places, I extracted the panorama images (1664x832) with a tool so I could use them in my openFrameworks project. Initially, I tried to load these images into a texture variable and bind it to an 'ofSpherePrimitive' so that the user could be immersed in a digital reality. However, I found an interesting online article describing a method to convert an image to a 3D mesh; using this method, I ended up with a flat 3D mesh representing the original image. (The mesh vertices were generated from the image's pixels: wherever a pixel's colour was brightest, a vertex was placed.)
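To illustrate that brightness-to-vertex step, here is a minimal openFrameworks sketch of the idea. It assumes an `ofMesh mesh` and `ofEasyCam cam` member declared in ofApp.h; the file name "panorama.jpg", the sampling step, and the brightness threshold of 150 are illustrative placeholders rather than the project's exact values.

// ofApp.cpp -- a minimal sketch of generating a flat point mesh from an image.
#include "ofApp.h"

void ofApp::setup(){
    ofImage image;
    image.load("panorama.jpg");            // e.g. a 1664x832 Street View panorama

    mesh.setMode(OF_PRIMITIVE_POINTS);
    mesh.enableColors();

    int step = 2;                          // skip pixels to keep the vertex count manageable
    for (int y = 0; y < image.getHeight(); y += step){
        for (int x = 0; x < image.getWidth(); x += step){
            ofColor c = image.getColor(x, y);
            if (c.getLightness() > 150){   // only the brightest pixels become vertices
                float z = ofMap(c.getLightness(), 150, 255, -50, 50);
                mesh.addVertex(glm::vec3(x, y, z));
                mesh.addColor(c);          // save the pixel colour on the vertex
            }
        }
    }
}

void ofApp::draw(){
    cam.begin();
    mesh.draw();
    cam.end();
}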

However, this was not the goal I had in mind: the generated 3D mesh should be in the shape of a sphere so that the user could experience the converted environment. Through trial and error, and a few articles later, I found a solution to the problem: mapping the brightest pixels of the image into a vector and processing them through a function that converts the vertices into a sphere. After this was completed, the previously saved colour code associated with each vertex was used to colour the vertices of the newly created sphere. Another function then joined nearby vertices together into a web, resulting in a nice representation of the image. Finally, some jittery effects were added to the mesh to give the end result more 'Zing'.
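The sphere mapping and the joining of nearby vertices could look roughly like the sketch below, assuming the bright pixel positions and their colours were stored earlier in two vectors. The function name buildSphereMesh, the radius, and the connection distance are hypothetical names and values chosen for illustration, not the project's exact ones.

// A rough sketch: project equirectangular pixel positions onto a sphere,
// re-apply the saved colours, then connect close vertices with lines.
void ofApp::buildSphereMesh(const std::vector<glm::vec2>& points,
                            const std::vector<ofColor>& colours,
                            float imgWidth, float imgHeight){
    mesh.setMode(OF_PRIMITIVE_LINES);
    mesh.enableColors();

    float radius = 300;
    for (size_t i = 0; i < points.size(); i++){
        // Equirectangular pixel position -> longitude/latitude -> sphere surface.
        float lon = ofMap(points[i].x, 0, imgWidth,  -PI, PI);
        float lat = ofMap(points[i].y, 0, imgHeight, -HALF_PI, HALF_PI);
        glm::vec3 v(radius * cos(lat) * cos(lon),
                    radius * sin(lat),
                    radius * cos(lat) * sin(lon));
        mesh.addVertex(v);
        mesh.addColor(colours[i]);         // re-apply the saved pixel colour
    }

    // Join vertices that ended up close together into a web of lines.
    float connectionDistance = 20;
    for (size_t a = 0; a < mesh.getNumVertices(); a++){
        for (size_t b = a + 1; b < mesh.getNumVertices(); b++){
            if (glm::distance(mesh.getVertex(a), mesh.getVertex(b)) < connectionDistance){
                mesh.addIndex(a);
                mesh.addIndex(b);
            }
        }
    }
}

The jitter effect could then be as simple as adding a small random offset to each vertex every frame in update(), which is one common way to get that effect in openFrameworks.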

Initial Ideas

Before committing to the project described above, my mind was still captivated by the 3D-scanning application used in my first lab assignment. The idea was to 3D-scan statues throughout Leiden (using https://nl.wikipedia.org/wiki/Lijst_van_beelden_in_Leiden; this page is in Dutch, unfortunately) and to display them in their appropriate environmental panorama using Google Street View images. However, while tinkering with openFrameworks, this initial idea no longer fit the current project: the 'high-definition' 3D model contrasted too much with the geometric panorama to get the message across. In future work, this could be addressed by altering the 3D model and reducing its poly count to fit the background panorama, and perhaps by removing the faces it could fit the current context even better (only displaying the model's composition with lines and vertices).

References & Resources (+ Source Code)

https://e4youth.org/blog/2019/02/05/snapping-360-images-from-google-street-view/

https://medium.com/@nocomputer/creating-point-clouds-with-google-street-view-185faad9d4ee

https://forum.openframeworks.cc/t/high-resolution-360-photo-in-of/23676

https://openframeworks.cc/ofBook/chapters/generativemesh.html

The source code can be found here: https://drive.google.com/file/d/19m4Bpo_Nm2AwqgsOE1z7mmPPWw2eI_MT/view?usp=sharing