A few updates on some ongoing "non-astro" projects :-)


Also, have a look at ikeloablog.com, a blog I recently started! 

[Video: Southern Ring Nebula]

Herakoi - the sound of the Universe


Herakoi is a motion-sensing sonification experiment, created by Luca Di Mascolo and myself.

It leverages a machine-learning model to track, in real time, the position of your hands in the scene observed by a webcam connected to your computer. The detected hand-landmark coordinates are re-projected onto the pixel coordinates of your favorite image. The visual properties of the “touched” pixels (at the moment, color and saturation) are then converted into sound properties of your favorite instrument, which you can pick on any virtual MIDI keyboard.

Just pip install herakoi and have fun!

Links to the documentation and the GitHub page:

https://herakoi.readthedocs.io/

https://github.com/herakoi 
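As a rough illustration of the color-to-sound idea (this is not herakoi's actual mapping, whose details are in the docs above), a pixel's hue could drive the MIDI note number and its saturation the velocity:

```python
import colorsys

def pixel_to_midi(r, g, b, note_lo=40, note_hi=88):
    """Hypothetical color-to-sound mapping, for illustration only:
    hue picks a MIDI note in [note_lo, note_hi], saturation sets
    the velocity (0-127)."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    note = note_lo + round(h * (note_hi - note_lo))
    velocity = round(s * 127)
    return note, velocity
```

A fully saturated red pixel would play the lowest note at full velocity, while a gray pixel (zero saturation) would be silent.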

The complexity of the Universe in a cup of coffee

... and how JPEG files can help us study it :-)

Imagine a nice, simple, low-entropy cup of coffee. 

Pour in some milk and you'll see beautiful, complex structures arise, before the cup reaches a high-entropy, homogeneous state: simple again. Interesting things happen while entropy increases!

Here is a funny experiment.

I took a video of a milk-and-coffee system from a free-video web repository and analysed what's going on in the cup, frame by frame. To estimate the "complexity" of each frame, I checked the file size of its JPEG-compressed image, following an idea I read in a wonderful book by the brilliant Sean Carroll (check this and this!). Complexity (traced, with some caveats, by the amount of information needed to describe the frames) peaks in the middle, between the low- and high-entropy states. Galaxies, stars, ourselves: like those beautiful structures of coffee and milk, we are part of the most amazing and complex phase of this Universe.
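The measurement itself is short with Pillow: compress the frame in memory and read off the byte count. A minimal sketch, assuming frames arrive as NumPy uint8 arrays (e.g. from a video reader):

```python
import io

import numpy as np
from PIL import Image

def jpeg_complexity(frame, quality=75):
    """Proxy for visual complexity: the size in bytes of the
    JPEG-compressed frame. More structure means the frame is
    less compressible, hence a larger file."""
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
    return buf.tell()

# A uniform frame compresses far better than a noisy one:
uniform = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
```

Looping `jpeg_complexity` over the video's frames and plotting the result gives the complexity curve described above.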

Read my Medium article: https://medium.com/@micheleginolfi/the-complexity-of-the-universe-in-a-cup-of-milk-and-coffee-9643f45b3f86 

Writing with eyes using computer vision

Machine learning-based facial landmark mapping with dlib + Python + OpenCV, with eye positions projected onto a virtual keyboard. The algorithm works in real time on the video stream from the webcam. To "press" a key, I use blink detection.

Code (simple): https://github.com/michemingway/eye-writing-easy. Feel free to play around with it :)
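A common way to detect blinks from dlib's six eye landmarks is the eye aspect ratio (EAR), which drops sharply when the eye closes; the repo may implement it differently, but a minimal sketch looks like this:

```python
import math

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks ordered as in dlib's
    68-point model (corner, two top points, corner, two bottom
    points). EAR falls toward zero when the eyelids close, so a
    simple threshold makes a workable blink detector."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
```

A "press" then fires when the EAR stays below a threshold for a few consecutive frames, which filters out natural blinks.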

Self-driving mini-car  

A line tracker built with a Raspberry Pi + Pi Camera + OpenCV.

(work in progress)

Simple example of color thresholding - Python + OpenCV

See how to easily separate colours in the Lab color space, which is less sensitive to illumination than RGB: lightness lives in the L channel, while chromaticity lives in a (green-red) and b (blue-yellow).
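For illustration, here is a stdlib-only sRGB-to-Lab conversion for a single pixel (the demo itself would use OpenCV's cvtColor on whole frames); thresholding on the a channel alone cleanly splits reds from greens:

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE Lab (D65 white point).
    Lightness ends up in L, chromaticity in a (green-red) and
    b (blue-yellow), so colour thresholds on a/b are less
    sensitive to illumination than thresholds on raw RGB."""
    def lin(c):  # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix), normalised by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.0
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883

    def f(t):  # CIE nonlinearity
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

For example, pure red gives a strongly positive a, pure green a strongly negative one, while white sits near a = b = 0 with L close to 100.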

Email: michele.ginolfi@unifi.it


cover image: HUDF (NASA, ESA)