Question about ImageNet: ImageNet has so many categories, and models trained on it can recognize so many different things. Based on the machine learning mechanism we learned, training on ImageNet involves processing millions of images. Where do those images come from? Do they come from the internet or from other agencies, and are they licensed or not? What is the standard for choosing the training images? Do the choices of images influence the final learning results?
Also, when we use the ImageNet classifier in p5.js with our camera on, or upload images to the browser, where do the uploaded images and the camera frames go? Could this process be a concern for our privacy? Will the images users upload become part of the training process and influence the machine?
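As far as I understand, the classifier we call from p5.js is ml5.js wrapping a MobileNet model that was trained on ImageNet: the model weights get downloaded once, and classification then runs locally in the browser, so a sketch like the one below doesn't itself send camera frames anywhere. This is only a minimal sketch, assuming the older callback-style ml5.js imageClassifier API:

```javascript
// Minimal p5.js + ml5.js sketch (assumes the ml5 v0.x callback API).
// The MobileNet model is downloaded once; classifying the camera feed
// then happens in the browser, not on a remote server.
let classifier;
let video;
let label = 'loading model...';

function preload() {
  classifier = ml5.imageClassifier('MobileNet');
}

function setup() {
  createCanvas(640, 520);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results[0] is the top guess: { label, confidence }
  label = results[0].label + ' (' + nf(results[0].confidence, 0, 2) + ')';
  classifier.classify(video, gotResult); // classify the next frame
}

function draw() {
  background(0);
  image(video, 0, 0);
  fill(255);
  textSize(18);
  text(label, 10, height - 10);
}
```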
I used the image classifier to test things on my desk: a pair of glasses (reflex camera), mouse (television), Bluetooth keyboard (keyboard, keypad), mug (coffee mug, microphone, mike), phone (cellular phone), pen (ballpoint), napkin (handkerchief), key (nail), bottle (paper towel). It recognizes about half of the items, or comes close, like napkin and handkerchief. I think what impacts the accuracy of recognition is the background, the lighting, and how close the object is to the camera. When the program couldn't recognize an item (it kept saying "mask" or other random words the whole time), I tried adjusting all of these elements. Objects on a clear background (my clean desk) are easier to recognize than objects against a complex background (my face). Also, big objects with obvious features, like the keyboard, are easier to recognize.
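To see how close those near-misses actually are, the result callback can print the top few guesses instead of only the first one. Again this assumes the ml5.js callback style from the sketch above, where results is an array of { label, confidence } sorted by confidence:

```javascript
// Log the top three ImageNet labels and their confidences for each frame,
// which makes near-misses like napkin -> handkerchief easy to spot.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  for (let i = 0; i < min(3, results.length); i++) {
    console.log((i + 1) + '. ' + results[i].label +
                ' (' + nf(results[i].confidence, 0, 3) + ')');
  }
  classifier.classify(video, gotResult);
}
```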
I tried the image classifier on my MacBook, iPad, and iPhone. Surprisingly, it runs much quicker on my iPad than on my MacBook. I think an ImageNet-trained classifier that can run on mobile and in the browser really opens the machine learning gate to the public. I can get access to machine learning in an iPad browser, open it up, and train my own model at any time and in any place.
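For the "train my own model" part, one way to do it entirely in the browser is transfer learning on top of MobileNet. The sketch below is only a rough outline using the older ml5.js featureExtractor API (an assumption on my part; newer ml5 releases replace it with ml5.neuralNetwork), where MobileNet's learned features are reused and only a small classifier is trained on your own camera images:

```javascript
// Rough sketch of in-browser transfer learning (assumes the ml5 v0.x
// featureExtractor API). MobileNet extracts features; only a small
// classifier on top is trained from the camera examples you collect.
let featureExtractor;
let classifier;
let video;
let label = 'untrained';

function setup() {
  createCanvas(640, 520);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  featureExtractor = ml5.featureExtractor('MobileNet', () => console.log('MobileNet loaded'));
  classifier = featureExtractor.classification(video);
}

function keyPressed() {
  // Press 'a' or 'b' to label the current camera frame as class "a" or "b";
  // press 't' to train on the collected examples.
  if (key === 'a' || key === 'b') {
    classifier.addImage(key);
  } else if (key === 't') {
    classifier.train((loss) => {
      if (loss === null) {
        classifier.classify(gotResult); // training done, start classifying
      } else {
        console.log('loss: ' + loss);
      }
    });
  }
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // In ml5 0.4+, results is an array of { label, confidence }.
  label = results[0].label;
  classifier.classify(gotResult);
}

function draw() {
  background(0);
  image(video, 0, 0);
  fill(255);
  textSize(18);
  text(label, 10, height - 10);
}
```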