What questions do you still have about the model and the associated data? Are there elements you would propose including in the biography?
I am surprised by how many detailed guidelines there are for building the model and dataset. The biography delivers a lot of useful information to users. It not only tells people basic information about the model but, more importantly, shows that the model a user downloads in a moment actually cost a huge amount of labor. It also hints at the potential danger of using such a model, because some of the data may come from the internet and have been used without consent. We can't take a ready-made model for granted without knowing anything about it. One question I have is: as the developers update the model, incorporating new data, will they update the biography too? The biography should also record when it was updated, by whom, and what the updated content is. Modifications to the model and dataset are very hard to track, since only the developers have access to the original dataset. Therefore, sharing updates, new data, and any modifications in the biography is a good practice.
How does understanding the provenance of the model and its data inform your creative process?
As we learned in class and saw in the slides, the majority of the work of training a model is searching for various data, categorizing it, and feeding it to the machine. How the model performs is largely determined by the source of the data and the method of collecting it. Therefore, tracking where the data comes from tells us a lot about the model. Data is the building block of the whole project: if a building block is misplaced, say the developers gathered data from an illegal or unconsented source, then the whole building falls apart.
I used PoseNet as the model for this assignment because I think recognizing faces is more fun. The template I used is the PoseNet Part Selection example: https://editor.p5js.org/ml5/sketches/PoseNet_p. I want p5 to tell when I am looking straight at the screen and when I am looking left or right, so I use the difference between the x position of the nose and the x position of the left or right eye to represent turning left or right. When I turn right, the difference between the nose's x position and the left eye's x position decreases; when I turn left, the difference between the nose's x position and the right eye's decreases. So I created three if statements: if the difference between one eye's and the nose's x position is less than 30, the machine determines that I am turning. I didn't consider depth this time. If I were to consider depth (the z axis), I couldn't just use a fixed threshold of 30; I would probably have to compare both the x and y axes to get a more accurate prediction.
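The turn-detection logic above can be sketched as a plain function, pulled out of the p5 draw loop so it is easy to test. The function name `detectTurn` and the shape of the `pose` argument (named keypoints with `x` positions, as ml5's PoseNet provides) are my assumptions; the 30-pixel threshold follows the description.

```javascript
// Sketch of the turn-detection idea described above (hypothetical helper,
// not the original sketch's code). `pose` is assumed to carry nose,
// leftEye, and rightEye keypoints, each with an x position in pixels.
function detectTurn(pose, threshold = 30) {
  const noseX = pose.nose.x;

  // Turning right shrinks the horizontal gap between nose and left eye;
  // turning left shrinks the gap between nose and right eye.
  if (Math.abs(noseX - pose.leftEye.x) < threshold) return "right";
  if (Math.abs(noseX - pose.rightEye.x) < threshold) return "left";
  return "straight";
}
```

In the actual sketch, `pose` would come from the ml5 PoseNet callback each frame, and the returned direction would drive the drawing.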
After I successfully taught p5 to recognize my turning, the next thing I did was use the map() function to create a gradient change in the background color as I slowly turn my head. I map the difference between one eye and the nose to the alpha value (opacity) of the background color, from 50 to 150.
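The mapping step amounts to the linear rescaling that p5's map() performs, reimplemented here as a minimal standalone sketch. The input range of 0 to 30 is my assumption, chosen to match the 30-pixel turn threshold above; the output range 50 to 150 comes from the description.

```javascript
// Hypothetical helper mirroring p5's map(): rescale the eye-to-nose
// gap (assumed 0-30 px) into a background alpha between 50 and 150.
function gapToAlpha(gap, inMin = 0, inMax = 30, outMin = 50, outMax = 150) {
  // Same linear interpolation that p5.js map() performs.
  return outMin + ((gap - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// In the sketch this would feed the alpha channel of background(), e.g.:
// background(0, 0, 255, gapToAlpha(abs(pose.nose.x - pose.leftEye.x)));
```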
Here is the link to my p5 sketch: https://preview.p5js.org/Alicelong/present/MUekJmFHn
One problem I found with this model is that the color sometimes flashes back and forth. I think it is because the x position numbers change very frequently, and as the numbers change, the color changes too. A possible solution is multiplying the difference by 0.1 or 0.5 to shrink the jitter.
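A closely related fix, sketched below as an assumption rather than the author's method, is to blend each new reading with the previous frame's value (exponential smoothing, which is what p5's lerp() computes). Like the multiply-by-0.1 idea above, it scales down frame-to-frame changes, but it keeps the value centered on the true reading instead of shrinking it.

```javascript
// Hypothetical smoothing helper: ease the displayed value toward the raw
// measurement instead of jumping to it, so single-frame jitter barely shows.
function smooth(previous, current, factor = 0.5) {
  // factor = 0.5 means each frame moves halfway toward the raw reading;
  // smaller factors smooth more but respond more slowly.
  return previous + (current - previous) * factor;
}
```

In the sketch, `previous` would be a variable kept outside draw(), updated each frame before being passed to gapToAlpha or map().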