Update 9/22/2016


We have updated the resolution of our point cloud for the canopy. We are now using the image at full resolution instead of a 32% reduced-size image. This produces a 27.2 GB text file holding x, y, z, r, g, b, and a values, which is possible largely because the graphics cards in the Cave2 at UIC have just been upgraded. The resulting 3D model has a much fuller canopy. To keep rendering manageable, we output one dot for every 20 x, y, z, r, g, b, a points.
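As a rough illustration of how such a subsampled point cloud could be produced, the sketch below keeps one of every 20 records while converting an ASCII x, y, z, r, g, b, a file into the binary .xyzb format we use elsewhere (the file names and record layout here are assumptions, not our exact script):

import struct

inName = "canopyFull.txt"                           #hypothetical ASCII canopy file
outName = "canopyFull.xyzb"                         #binary output for the point cloud

with open(inName, "r") as fin, open(outName, "wb") as fout:
    for count, line in enumerate(fin):
        if count % 20 != 0:                         #keep only 1 of every 20 points
            continue
        x, y, z, r, g, b, a = [float(t) for t in line.split()]
        fout.write(struct.pack('ddddddd', x, y, z, r, g, b, a))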







Update 9/20/2016

We have applied a texture to our mesh. The first shot is from under the canopy, and the second shot has the point cloud turned off in our settings. The texture was applied using Blender. To apply it, we loaded an FBX file containing our untextured mesh, opened the mesh in Edit Mode, and chose Project from View (Bounds). Afterwards, in the UV Editor we opened our texture (a JPG of our island) and constrained the image to the view. We then clicked on Materials, added our texture, made sure to select the Clip option, and under UV selected Generated. Finally, we exported as an ASCII FBX, making sure the Armature and Mesh options were checked on export. We then loaded it into Omegalib using the following code:
def loadModelAsync(name, path):                     #load the FBX in the background and call
                                                    #onModelLoaded when it is ready
    model = ModelInfo()
    model.name = name
    model.path = path
    model.optimize = True
    model.usePowerOfTwoTextures = False             #our texture is not power-of-two sized
    scene.loadModelAsync(model, "onModelLoaded('" + model.name + "')")

def onModelLoaded(name):                            #callback fired once loading finishes
    model = StaticObject.create(name)               #instantiate the loaded model
    model.setEffect('textured')                     #use the texture applied in Blender

modelPath = "/home/evl/jhwang47/v1"
def loadModel(name, path):                          #synchronous variant of the loader
    model = ModelInfo()
    model.name = name
    model.path = path
    model.optimize = True
    model.usePowerOfTwoTextures = False
    scene.loadModel(model)
    model = StaticObject.create(name)
    #model.setEffect("colored -d #4d4d4d")
    model.setEffect("textured")
    #model.setEffect("20Island")

loadModelAsync("Terrain", "scale100HM.fbx")         #load the textured terrain mesh





Update 9/13/2016
Photo Credit: Lance Long
We have been selected for an honorable mention in this year's Images of Research Exhibition at UIC. For more information please visit: http://grad.uic.edu/ior-results/2016



Update 9/05/2016
A sample picture of our application illustrating our encoding of movement. The color scheme varies by hour, using bluer shades at night and yellower shades during the day.

After a large amount of work, we have found a way to show lines without slowing down the application. We ran into a problem where each line had its own transformation matrix. This meant the CPU had to perform a large number of calculations, and as a result the program became very choppy. We have devised a way to combine all of an animal's movement lines into a single piece of geometry using Omegalib's custom geometry (ModelGeometry) class.
 
def createCustomGeom(f, scene, geomName):        #Function parses file and creates lines
                                                        #that represent movement into a single
                                                        #custom shape.
    global moveLineProgram

    firstRun = True

    numVertices = 0
    prevV3 = Vector3(0,0,0)
    prevV4 = Vector3(0,0,0)
    prevV7 = Vector3(0,0,0)
    prevV8 = Vector3(0,0,0)

    prevID = ""
    prevLine = ""

    unitY = Vector3(0,1,0)
    unitZ = Vector3(0,0,1)

    thickness = 2
    geom = ModelGeometry.create(geomName)
    for line in f:
        # if numVertices == 6:
        #     break
        if line == '-999':
            break

        line2 = f.next()
        
        if line2 == '-999':
            break

        tokens2 = line2.split(" ")

        if prevID != int(tokens2[6]):
            firstRun = True

        if firstRun:
            tokens = line.split(" ")
        else:
            tokens = prevLine.split(" ")

        pos1 = Vector3(float(tokens[0]), float(tokens[1]), float(tokens[2]))
        
        pos2 = Vector3(float(tokens2[0]), float(tokens2[1]), float(tokens2[2]))

        vec = pos2 - pos1
        d = vec.normalize()
        unitZV1 = d.cross(unitZ)
        unitYV1 = d.cross(unitY)

        v1 = pos1+thickness*unitZV1+thickness*unitYV1       #list of vertices
        v2 = pos1+thickness*unitZV1-thickness*unitYV1       
        v3 = pos2+thickness*unitZV1+thickness*unitYV1
        v4 = pos2+thickness*unitZV1-thickness*unitYV1
        v5 = pos1-thickness*unitZV1+thickness*unitYV1
        v6 = pos1-thickness*unitZV1-thickness*unitYV1
        v7 = pos2-thickness*unitZV1+thickness*unitYV1
        v8 = pos2-thickness*unitZV1-thickness*unitYV1

        dayDelta = int(tokens2[3])
        hr = int(tokens2[4])
        minute = int(tokens2[5])
        individualID = int(tokens2[6])

        if firstRun:
            dayDelta = int(tokens[3])
            hr = int(tokens[4])
            minute = int(tokens[5])
            individualID = int(tokens[6])
            firstRun = False

    #####################Front Panel##################################################
        geom.addVertex( v1 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v2 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v3 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))

        geom.addVertex( v3 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v2 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v4 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
    ##################################################################################
    #####################Back Panel##################################################
        geom.addVertex( v7 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v6 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v5 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))

        geom.addVertex( v8 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v6 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v7 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
    ##################################################################################
    #####################Top Panel##################################################
        geom.addVertex( v1 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v7 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v5 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        
        geom.addVertex( v7 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v1 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v3 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
    ##################################################################################
    #####################Bottom Panel##################################################
        
        geom.addVertex( v8 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v4 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v2 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))

        geom.addVertex( v6 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        geom.addVertex( v8 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))    
        geom.addVertex( v2 )
        geom.addColor(Color(dayDelta, hr, minute, individualID))
        
    ##################################################################################
        
        numVertices = numVertices + 24

        prevV3 = v3                 #Store beginning points of next line
        prevV4 = v4
        prevV7 = v7
        prevV8 = v8

        prevVec = vec
        prevID = int(tokens2[6])
        prevLine = line2
        
    f.close()
    geom.addPrimitive(PrimitiveType.Triangles, 0, numVertices)

    scene.addModel(geom)
    obj = StaticObject.create(geomName)
    obj.setPosition(0, 0, 0)
    obj.getMaterial().setTransparent(True)
    print 'finished Parsing'

    return obj


    We essentially used a little vector arithmetic to calculate how to draw a rectangle from one GPS point to the next. By offsetting each endpoint along two directions perpendicular to the segment (the cross products of its direction with the Y and Z axes), we extruded every segment into a thin rectangular box. To draw each box we triangulated its faces and kept our primitive type as triangles. We attached a GLSL program to our custom geometry to allow rapid filtering by day, hour, minute, and individual ID. This works because each vertex normally carries x, y, z, r, g, b, and a values; we wrote over the color channels with our filtering criteria and then recolored the lines in a vertex shader.
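For reference, attaching such a filtering shader in Omegalib could look like the sketch below. This is a minimal example assuming cyclops' ProgramAsset and material-uniform API; the shader file names, uniform names, and input file are hypothetical placeholders, not our exact setup:

moveLineProgram = ProgramAsset()                    #GLSL program for the movement lines
moveLineProgram.name = "moveLines"
moveLineProgram.vertexShaderName = "shaders/moveLines.vert"
moveLineProgram.fragmentShaderName = "shaders/moveLines.frag"
scene.addProgram(moveLineProgram)

obj = createCustomGeom(open("movement.txt"), scene, "movementLines")
mat = obj.getMaterial()
mat.setProgram("moveLines")                         #shader reads the packed color values
mat.addUniform('filterDay', UniformType.Float)      #filtering criteria compared in the
mat.addUniform('filterHour', UniformType.Float)     #vertex shader against dayDelta and hr
mat.getUniform('filterDay').setFloat(0.0)
mat.getUniform('filterHour').setFloat(12.0)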
    We wish we could have created the vertices for our points from a binary file; however, due to GLSL's parallel nature this would be difficult, and we would like to pursue it as a future improvement. We also plan to clean up the lines and add spheres between segments to mark exactly where each GPS point sits in space.


Update 6/21/2016
    We have finished creating a mesh of our terrain. We received MDT files, which include a .TIF, a .TFW, a .TIF.AUX, and a .TIF.OVR. We processed these images using ArcGIS Pro: we first created a map, then used the import data option to bring in data from all four files. We then exported our map, along with a world file, as a .png and a .pngw. The world file contains the size of each pixel in meters and the UTM coordinates of the top-left corner of our image. The following image was the result.
We used Photoshop to isolate the island, Barro Colorado Island.
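As a side note, a .pngw world file is a six-line plain-text file: the pixel width, two rotation terms, the (negative) pixel height, and then the UTM X and Y of the center of the upper-left pixel. A minimal sketch of reading one, with a hypothetical file name, could look like this:

def readWorldFile(path):                            #parse a standard six-line world file
    with open(path) as f:
        vals = [float(line) for line in f]
    pixelSizeX = vals[0]                            #meters per pixel in x
    pixelSizeY = vals[3]                            #meters per pixel in y (negative)
    upperLeftX = vals[4]                            #UTM easting of the upper-left pixel
    upperLeftY = vals[5]                            #UTM northing of the upper-left pixel
    return pixelSizeX, pixelSizeY, upperLeftX, upperLeftY

#example with a hypothetical file name:
#pixelSizeX, pixelSizeY, upperLeftX, upperLeftY = readWorldFile("island.pngw")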
     We created a Python script that generates a .obj file using the height map and a scaled-down version of our original island image. We translated each pixel in the scaled-down image to a UTM coordinate, then found the z-value corresponding to that pixel in the height map using the function below, following the same method we used earlier.
def computeIMGUTMtoHMXY (imgx, imgy):               #function converts pixel in original image 
                                                    #to utm coordinate
                                                    #then converts the utm to pixel in height map
    #Images UTM Coordinates
    startIMGUTMX = 624030.0137255481
    startIMGUTMY = 1015207.0834458455
    #Height Maps UTM Coordinates
    startHMUTMX = 619764.479784366560000
    lastHMUTMX = startHMUTMX + 5000*4.335008086253366
    startHMUTMY = 1023894.520215633300000
    lastHMUTMY = startHMUTMY - 5000*4.335008086253366
    imgResRatioX = 0.18/(float(962)/32064)
    imgResRatioY = 0.18/(float(924)/30780)

    utmX = startIMGUTMX + (imgx*imgResRatioX)       #Calculates the E-W utm coordinate
    utmY = startIMGUTMY - (imgy*imgResRatioY)       #Calculates the N-S utm coordinate
    hmx = int(((utmX-startHMUTMX)/(lastHMUTMX-startHMUTMX) * 5000))     #X-coordinate of height map
    hmy = int(((utmY-startHMUTMY)/(lastHMUTMY-startHMUTMY) * 5000))     #Y-coordinate of height map
    return (hmx, hmy)                               #return as a tuple
     A .obj file lists the vertices first and then the faces. We created a script that writes the vertices to an ASCII file; however, we ran into a problem where our mesh was a mirrored image of our island, most likely because of how the PIL library loads the image. We fixed the problem by outputting the x-values for each vertex in reverse order. Since the x-values were reversed, we had to ensure that each vertex still received the correct z-value, so we stored the z-values in a Python list, then popped from this list as we appended each x, y, and z to our file string.
for x in range(i):
    for y in range(j):
        utmXY = computeIMGUTMtoHMXY(x, y)
        hmX = utmXY[0]
        hmY = utmXY[1]
        if (hmX >= 0 and hmX < 5000 and hmY >= 0 and hmY < 5000):
            if hmPixInfo[hmX, hmY][3] != 0:
                val = float(hmPixInfo[hmX, hmY][0])
            else:
                val = 0.0
        else:
            val = 0.0
        vList.append(val)

#use a list as a stack to reverse order of values
for x in range(i):
    for y in range(j):
        if vList:
            val = vList.pop()
            fileString += "v " + str(18/3*(i-x-1)) + " " + str(18/3*y) + " " + str(val) + "\n"
     Finally, we output our faces. Normally, to generate normals that face the correct direction, the vertices of each face must be listed in counter-clockwise order. Since we mirrored our image, we had to list them in clockwise order instead.
for v in range(vertex, i*j-j):
    fileString += "f " + str(v) + " " + str(v+1) + " " + str(v+j+1) + " " + str(v+j) + "\n"
We then opened our .obj file in MeshLab, exported it as a .obj, and deselected the camera option. This effectively added the face normals for us. Generally, not specifying normals is not a problem; however, Omegalib cannot auto-generate them. We then converted our .obj file to a .fbx using the Autodesk FBX Converter. The following code loads our .fbx file into the Omegalib framework.
def loadModelAsync(name, path):                     #load the FBX asynchronously
    model = ModelInfo()
    model.name = name
    model.path = path
    model.optimize = True
    model.usePowerOfTwoTextures = False
    scene.loadModelAsync(model, "onModelLoaded('" + model.name + "')")

def onModelLoaded(name):                            #callback fired when loading finishes
    model = StaticObject.create(name)
    model.setEffect('colored -d #404040')           #flat gray shading for the untextured mesh

loadModelAsync("Terrain", "m3terrainMap.fbx")       #load the terrain mesh
     The previous and following images are what resulted.
We have plans to add a texture to the mesh. This will allow us to retain a water-like border for our point cloud and fill in the gaps, producing a more visually appealing image.

From,
The SENSEI-Panama Team     


Update 5/22/2016
    We have received the locations of fruiting trees on the island from another team. Each black and red line marks a blossoming fruit tree, and the z-values vary with each tree's real-world height. We have added a button that toggles this feature on and off, which will allow biology researchers to determine which fruit trees the animals frequent most often. Our first task was to parse a text file for the relevant information about each tree. We created an ASCII text file containing an x, y, and z position for each tree. This file is parsed while our program loads, and its contents are copied into a multidimensional Python list. The lines were created using Omegalib's LineSet class. Documentation can be found at https://github.com/uic-evl/omegalib/wiki/LineSet. Our code is as follows:

toggleTrees = False                                 #Turns tree markers on and off

thickness = 13                                      #thickness of markers
treeNode = SceneNode.create('treeNode')             #create a new sceneNode to hold markers
getScene().addChild(treeNode)                       #add it as a child to parent scene

trees = open("treesFloat.txt", "r")                 #open the file
content = trees.readlines()                         #load the file
c1 = LineSet.create()                               #create a LineSet object

treeList = []                                       #multidimensional array holds x, y, z of trees
treeIndex = 0                                       #used to index trees
for line in content:                                #loop through file

    tokens = line.split(" ")                        #split the line into tokens

    treeList.append([])                             #append another list to the current list
    treeList[treeIndex].append(float(tokens[0]))    #append x, y, z info
    treeList[treeIndex].append(float(tokens[1]))
    treeList[treeIndex].append(int(tokens[2]))
    treeIndex += 1                                  #increment the treeIndex counter by 1 for 
                                                    #next line

    l = c1.addLine()                                #add a line to the marker object
    l.setStart(Vector3(float(tokens[0]), float(tokens[1]), 1))
    l.setEnd(Vector3(float(tokens[0]), float(tokens[1]), int(tokens[2])*0.18))
    l.setThickness(thickness)
    s = SphereShape.create(thickness/2, 2)           #create a cap for the marker
    c1.addChild(s)                                   #add the cap as a child to the current line
    s.setEffect('colored -e black')                  #color of the cap
    s.setPosition(Vector3(float(tokens[0]), float(tokens[1]), int(tokens[2])*0.18))
    c1.setEffect('colored -e black')                 #color of the stem
trees.close()                                        #close the file
treeNode.addChild(c1)                                #add the object as a child to the scene
treeNode.setChildrenVisible(False)                   #set it as invisible

    Wrapping the LineSet in a SceneNode lets us set the lines visible or invisible as a group. When the event handler for the "Show Fruit Trees" button is called, only a single boolean changes, which makes toggling the trees on and off extremely fast; however, parsing the text file makes loading the program a bit slower because that work is done on the CPU. We believe we can increase performance by moving the parsing to the shaders using OpenGL.
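For illustration, the button and its handler can be as simple as the sketch below. This is a minimal example assuming Omegalib's MenuManager API; the handler name is our own placeholder:

def toggleFruitTrees():                             #flip one boolean to show or hide markers
    global toggleTrees
    toggleTrees = not toggleTrees
    treeNode.setChildrenVisible(toggleTrees)

mm = MenuManager.createAndInitialize()              #attach a button to the main menu
menu = mm.getMainMenu()
menu.addButton("Show Fruit Trees", "toggleFruitTrees()")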
    We have also been working toward drawing lines instead of points for the GPS movement tracking, and toward reducing each GPS burst to a single, more accurate point. So far, we have succeeded in turning the lines into little rectangles in our shaders. We ran into a problem because Omegalib did not support the LineStrip primitive, so we are currently looking into adding it to Omegalib or finding a way to use triangle strips to achieve the same goal. Be sure to check back for more updates!

From,
The SENSEI-Panama Team


Update 5/10/2016
    We have successfully added tracking data to our 3D model of the island. The image above shows the movement of one spider monkey individual. The points fade from white to a darker shade of blue as time passes. To obtain these results we worked with the shaders in Omegalib, which are very similar to OpenGL code. To parse the data files and create a file that lists each point, we wrote a Python script that generates a .xyzb file the same way we generated the point cloud for the island itself. GPS tracking typically gives bursts of points recorded at the same time. We are currently keeping all of them, but we will need a smarter algorithm that condenses each burst into a single point. Furthermore, we plan to draw a solid line rather than single points, which will most likely also involve interpolating our data.
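As a rough sketch of the idea, a tracking record can be packed into the same .xyzb layout while encoding elapsed time in the color channels so that later points appear bluer. The input format and column order here are assumptions for illustration, not our exact parser:

import struct

#hypothetical input: one "x y z hour" record per line, already in our scaled coordinates
with open("monkeyTrack.txt") as fin, open("monkeyTrack.xyzb", "wb") as fout:
    for line in fin:
        x, y, z, hour = [float(t) for t in line.split()]
        t = hour / 24.0                             #0.0 at the start of the day, 1.0 at the end
        r = g = 1.0 - t                             #fade red and green away over time
        b = 1.0                                     #keep blue, so later points look bluer
        fout.write(struct.pack('ddddddd', x, y, z, r, g, b, 1.0))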
     Our group has also been discussing several different menu options that would be helpful for biology researchers to visualize their data. The biology researchers we are helping are interested in answering the question whether the mammals they have tracked use memory or visual cues in their search for food. This will involve us creating several different views that will allow them to look at the island in the desired orientation. We will have a vertical overview of the island, a more horizontal view that allows them to see more in depth, and we also plan to allow researchers to click on a button to snap them to the top of a tree which will allow them to envision what the mammal could see. We also will be allowing the researchers to step through the gps tracking data in several different ways. As shown above, we will allow them to move day to day, by increments of 7 days, or all days. We will also be adding functionality to show several different mammals at the same time, and for the researchers to be able to pick the color that they want each mammal's trajectory to show up as. Please stay tuned for more updates!

From,
The SENSEI-Panama Team



Update 4/17/2016
    This update consists of two parts. Our team has finished converting coordinates in our original image to UTM coordinates that correspond to points in the height map image, giving a more accurate representation of our island. We have also laid our point cloud over our plane-view image for a fuller visual effect. We will first discuss how we calculated the UTM coordinates.
    The conversion from image pixels to UTM coordinates, and then to points in the height map, was done using two functions that were later combined into a single function.
#computed using img
def computeUTM (imgx, imgy):
    #Images UTM Coordinates
    startIMGUTMX = 624030.0137255481                #Start of x-dimension UTM for image
    startIMGUTMY = 1015207.0834458455               #Start of y-dimension UTM for image

    imgResRatioX = 0.18/(float(10260)/32064)        #Calculate what each pixel is in meters
    imgResRatioY = 0.18/(float(9850)/30780)         #Calculate what each pixel is in meters

    utmX = startIMGUTMX + (imgx*imgResRatioX)       #Calculate where UTM corresponds to in
                                                    #image by x pixel
    utmY = startIMGUTMY - (imgy*imgResRatioY)       #Calculate where UTM corresponds to in
                                                    #image by y pixel
    return (utmX, utmY)                             #return tuple

def UTMtoHM (utmX, utmY):
    #Height Maps UTM Coordinates
    startHMUTMX = 624079.8465020715
    lastHMUTMX = 629752.8465020715
    startHMUTMY = 1015157.5668793379
    lastHMUTMY = 1009715.5668793379
    
    hmx = int(((utmX-startHMUTMX)/(lastHMUTMX-startHMUTMX) * 5673))
    hmy = int(((utmY-startHMUTMY)/(lastHMUTMY-startHMUTMY) * 5442))
    return (hmx, hmy)
The function computeUTM calculates the UTM coordinate of each pixel in our original image. The second function, UTMtoHM, converts a UTM coordinate into x and y pixel coordinates in the height map.
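Chaining the two gives the height-map pixel for any pixel in the original image; for example (the pixel chosen here is arbitrary):

utmX, utmY = computeUTM(5000, 4200)                 #UTM coordinate of an arbitrary image pixel
hmx, hmy = UTMtoHM(utmX, utmY)                      #corresponding pixel in the height map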
    Our next goal was to lay our new point cloud over our plane view in Omegalib. We had a few calculations to do in order to achieve this.

imgResRatioX = 0.18/(float(10260)/32064)
imgResRatioY = 0.18/(float(9850)/30780)
plane = PlaneShape.create(imgResRatioX*10260, imgResRatioY*9850)
plane.setPosition(Vector3(imgResRatioX*10260/2, imgResRatioY*9850/2, 0))

First, we needed to size the plane so that it matched the extent of our point cloud; this is done in the PlaneShape.create call. We then had to position the plane so that it would lie under the point cloud, which is done in the setPosition call.
    We will continue developing an interpolation algorithm to smooth the differences in our z-values. We will also begin to display tracking movement over the island. Stay tuned for more updates!
From,
The SENSEI-Panama Team


Update 4/06/2016
    We have successfully introduced the z-values from our height map into an image of our island. We used a point cloud to draw a point for every pixel of a .png image of the island. The z-values came from the pixel color in a height map that we received.

filename = "hmColorHigh.xyzb"                       #output filename
hm = "heightMap.png"                                #height map filename
pic = "32Island.png"                                #picture filename
hmImage = Image.open(hm)                            #open the height map
hmImage = hmImage.transpose(Image.FLIP_TOP_BOTTOM)  #correctly orients the height map
picImage = Image.open(pic)                          #open the picture
picImage = picImage.transpose(Image.FLIP_TOP_BOTTOM)#correctly orients the image
hmPixInfo = hmImage.load()                          #loads pixel data of height map
picPixInfo = picImage.load()                        #loads pixel data of picture
(hi,_) = hmImage.size                               #gives x-size
(_,hj) = hmImage.size                               #gives y-size
(i,_) = picImage.size                               #gives x of image
(_,j) = picImage.size                               #gives y of image
f = open(filename, 'wb')                            #opens the output file and gives option
                                                    #write in binary


The code above opens both files and loads each into an array indexed by a tuple. The transpose is needed because the images load upside down; flipping them corrects the orientation when viewed in our virtual reality environment. We then iterate through the large image and do some math to find the corresponding pixel in the height map. This method is an approximation that grabs a height value that is roughly correct.

for x in range(i):                                  #iterate through all pixels
    for y in range(j):
        #if the pixel is not black in the height map
        if hmPixInfo[int(float(x)/float(i)*hi), int(float(y)/float(j)*hj)] != 0:
            #z value from color in height map
            z = 0.6*float(hmPixInfo[int(float(x)/float(i)*hi), int(float(y)/float(j)*hj)])     

            r = float(picPixInfo[x, y][0])          #r value from tuple
            r = r/255.0
            g = float(picPixInfo[x, y][1])          #g value from tuple
            g = g/255.0
            b = float(picPixInfo[x, y][2])          #b value from tuple
            b = b/255.0
            #a = float(pixInfo[x, y][3])            #a value from tuple
            #a = a/255.0
            a = 1.0
            dataBytes = struct.pack('ddddddd', .18*x, .18*y, z, r, g, b, a) #pack into a struct
            f.write(dataBytes)                      #write to the output file
f.close()                                           #close output file

We have found a simple way to reduce the size of the file by excluding points where the height map is at sea level. The only points at that height correspond to the ocean, so we have no interest in them.
    We will improve our method by developing ways to convert points on the map to UTM coordinates for both images. We will then look up the r, g, b, and z values according to the UTM coordinates in each image respectively. We will also work on an interpolation algorithm to smooth out the transitions in our canopy.

From,
The SENSEI-Panama Team



Update 3/20/2016
    We have investigated using a .obj file to create a 3D model of our island. A .obj file consists of a list of vertices followed by a list of faces defined in terms of those vertices. We started from a file depicting the canopy of the island in Panama: a grayscale .png in which each pixel represents the height of the canopy. We created the .obj file in much the same way as we created the point cloud image.


picname = "hm.png"                            #image to load
image = Image.open(picname)                   #open image
pixInfo = image.load()                        #load image into an array
(i,_) = image.size                            #get x dimension
(_,j) = image.size                            #get y dimension
text_file = open("terrainMap.obj", 'w')       #open destination file

    We first used Python's PIL library to load the .png image into an array indexed by a tuple (x, y). We then opened a destination file to write in ASCII.

fileString = ""
for x in range(i):
    for y in range(j):
        val = float(0.2*pixInfo[x, y])
        fileString += "v " + str(y) + " " + str(x) + " " + str(val) + "\n"

    Afterwards we created a string to hold the lines of our file. We noticed that our image was mirrored along the y-axis, so we decided to write the y-coordinate before our x-coordinate. Each iteration of our for loop appended a new line to our string.

vertex = 1                        #.obj vertex indices start at 1
for v in range(vertex, i*j - j):  #stop one column early so v+j+1 stays in range
    fileString += "f " + str(v) + " " + str(v+1) + " " + str(v+j) + "\n"
    fileString += "f " + str(v+1) + " " + str(v+j+1) + " " + str(v+j) + "\n"
text_file.write(fileString)       #write string to file
text_file.close()                 #close file

    We then appended the faces of our 3D model to our string. We decided to use triangulation to build the 3D object: starting from the first vertex, we proceed column by column until we have covered the entire image. We had to traverse columns instead of rows because our y-coordinate and x-coordinate were flipped. We will compare and contrast this method with the point cloud approach to produce better results for our 3D object. Expect updates soon!
From,
The SENSEI-Panama Team



Update 3/6/2016
    We have been investigating several different options for visualizing our island. The first option is to use a point cloud, which makes a sphere for every single pixel in a flat .png image. We used Python's PIL library to load the .png image into an array indexed by a tuple (x, y).

filename = "island.xyzb"                            #output filename
picname = "Island.png"                              #picture filename
image = Image.open(picname)                         #open the image
pixInfo = image.load()                              #loads pixel data into pixInfo
(i,_) = image.size                                  #gives x-size
(_,j) = image.size                                  #gives y-size
f = open(filename, 'wb')                            #opens the output file and gives option
                                                    #write in binary

    We then packed each point into a struct using the format 'ddddddd' with the x, y, z, r, g, b, and alpha values. This was written to a file and uploaded to the Cave2, our group's virtual reality environment. Unfortunately, the image loaded very slowly, as depicted in the picture below.

for x in range(i):                                  #iterate through all pixels
    for y in range(j):
        if (pixInfo[x, y][0]) != 255:
            z = 0.3*float(pixInfo[x, y][0])         #approximate z from the red channel for
                                                    #now; later we will get it from the
                                                    #terrain map
            r = float(pixInfo[x, y][0])             #r value from tuple
            r = r/255.0
            g = float(pixInfo[x, y][1])             #g value from tuple
            g = g/255.0
            b = float(pixInfo[x, y][2])             #b value from tuple
            b = b/255.0
            a = 1.0                                 #alpha is always 1
            dataBytes = struct.pack('ddddddd', .18*y, .18*x, z, r, g, b, a) #pack into a struct
            f.write(dataBytes)                      #write to the output file
f.close()                                           #close output file



Using ImageMagick, we scaled our original .png to 50% of its original size, which gave us a resolution of 16,032 x 15,390. Even so, a lot of processing was needed to display the image. If we wanted to go ahead with this option, it would be necessary to divide the image into sections and load only the sections in view. This is the technique currently used by Google for Google Maps.
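A minimal sketch of that idea, splitting the image into tiles with PIL, is shown below; the tile size and file names are arbitrary placeholders, not something we have implemented yet:

from PIL import Image

tileSize = 2048                                     #arbitrary tile size in pixels
image = Image.open("Island.png")
w, h = image.size
for ty in range(0, h, tileSize):
    for tx in range(0, w, tileSize):
        box = (tx, ty, min(tx + tileSize, w), min(ty + tileSize, h))
        tile = image.crop(box)                      #cut out one section of the map
        tile.save("tile_%d_%d.png" % (tx // tileSize, ty // tileSize))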





The second option we considered was to display a flat map .png file in our 3D environment. This allowed us to use a higher-resolution photo.

We tried to load the image at its full resolution, but this resulted in a segmentation fault, so we reduced it to 60% of its original size. Here, too, we could divide the image into sections and load only the sections in view in order to maximize the resolution shown. To create a 3D model, we plan to combine this technique with the point cloud method and also create a mesh of the island to lay this flat .png on top of.



Both images were displayed in the University of Illinois at Chicago's Cave2 using the Omegalib framework.



Currently, we are investigating ways to create a 3D model of the island using MeshLab and Blender. More updates will be coming soon!
From,
The SENSEI-Panama Team



Developers: Jillian Aurisano, Ishta Bhagat, James Hwang, and Oliver San Juan

SENSEI-Panama, SENSor Environment Imaging-Panama, is an environmental visualization approach developed at the Electronic Visualization Laboratory (EVL) to investigate the movement patterns of newly isolated species in the anthropological sciences. Once researchers have left the field, they can no longer visually determine what animals see in their environment; however, their instruments produce large amounts of data through GPS collars, drone imaging, terrain maps, and accelerometers.

This data can be transformed into a map that enables researchers to easily track the movements of animals across the island.