I just released a new version with some bugs fixed. Nothing major, but if you use the method findCentroids() and the two methods for retrieving the centroids' coordinates, you need to upgrade. Special thanks to Mauro Herrera for reporting a nasty bug.
Today I released the new version of Blobscanner, which is available for download on this website and on GitHub, the new home for this project's source code.
The first thing you need to know, especially if you have already used a previous version, is that the import name has changed from Blobscanner to blobscanner, to be more in line with the Java naming guidelines. The second thing is about the constructor of the Detector class, which (for new users) is also the only class. Two new constructors have been added:
Detector(PApplet parent) (1)
Detector(PApplet parent, int threshold) (2)
If you use #1, you need to call the method that sets the blob threshold value at least once:
setThreshold(int threshold) (3)
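Put together, a minimal sketch using the new constructors might look like this (a sketch based on the import and constructor forms listed above; check the bundled examples for the exact API):

```java
// Minimal sketch, assuming the blobscanner import and Detector class above.
import blobscanner.*;

Detector bd;

void setup() {
  size(320, 240);
  bd = new Detector(this);   // constructor #1
  bd.setThreshold(255);      // #3: required at least once with constructor #1
  // Equivalent shortcut with constructor #2:
  // bd = new Detector(this, 255);
}
```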
By default, the new constructors automatically set the area searched for blobs to the entire image, but this behaviour can be altered by calling the following method:
setRoi(int startx, int starty, int roiwidth, int roiheight) (4)
#4 defines a Region Of Interest to be searched for blobs, minimizing the number of pixels to be scanned, with a consequent increase in execution speed. The method is paired with another one which reverts the ROI to its default size:
After calling #5, the entire image is once again scanned for blobs, as it was before calling #4. The old constructor has been left in place to provide some backward compatibility for users who decide to install the new version. It will be removed in a future release. As usual, the zip archive contains plenty of examples for the new methods, though I will expand this post into a tutorial as soon as possible.
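A hedged sketch of ROI usage follows; the findBlobs and drawContours argument lists below are my assumptions, not documented signatures, and the image name is hypothetical:

```java
import blobscanner.*;

Detector bd;
PImage img;

void setup() {
  size(320, 240);
  img = loadImage("blobs.jpg");  // hypothetical test image
  bd = new Detector(this, 255);
  // #4: scan only a 100x100 region whose top-left corner is at (50, 50)
  bd.setRoi(50, 50, 100, 100);
}

void draw() {
  image(img, 0, 0);
  bd.findBlobs(img.pixels, img.width, img.height);
  bd.drawContours(color(0, 255, 0), 1);
}
```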
If you have already updated to Processing 1.5.1, you have most probably noticed that things have changed a little. After unpacking the compressed archive file, you will find a directory structure completely different from the previous release.
How to install Blobscanner in Processing 1.5.1.
The first thing you want to do is unpack the archive to grab the Blobscanner folder, then navigate down the following Processing directory tree:
...\processing installation folder\processing-1.5.1\modes\java\libraries
Once there, drop the folder in it.
How to install Blobscanner's examples.
Here comes the tricky part. With Processing 1.5.1, people developing contributed libraries should place the "examples" folder in the same folder that contains the folder named "library" (the one holding the library's jar file), which in Blobscanner's case is the folder called "Blobscanner". This is because in the new release of Processing, access to the examples is handled by an applet which is launched every time you click on File > Examples... So, if the "examples" folder is not placed as explained above, the examples won't show up in the applet. At this point, everyone who has already downloaded Blobscanner will be wondering the same thing: why wasn't Blobscanner's examples folder placed inside the "Blobscanner" folder? The reason is very simple: when I released version 034 I was still running Processing 1.2.1, so I didn't know about the changes coming with the new release. Now, to make things right, all you need to do is rename the folder "Blobscanner_examples" to "examples" and move it into the "Blobscanner" folder. This way, the examples applet should show, under the libraries node, a folder named Blobscanner containing all the examples. Do not forget to restart Processing after applying the changes.
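The resulting layout under the libraries folder should look like this (the jar file name is shown only for illustration and may differ between versions):

```
libraries/
└── Blobscanner/
    ├── library/
    │   └── Blobscanner.jar        (the library's jar file)
    └── examples/                  (renamed from Blobscanner_examples)
        └── ...
```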
Please read the post above this one for setting up the examples in Processing 1.5.1. Also check back for info and tutorials on the API changes.
One of the hardest tasks in computer vision, and most probably the one receiving the majority of CV researchers' efforts today, is the detection of the human hand, aimed at achieving touch-less human-machine interfaces.
Recently, riding the big wave of hundreds of scientists, computer vision professionals, students and hack-mateurs of this field of computer science, a proprietary piece of hardware has been released and marketed precisely as a touch-less game control interface, costing about the same as a standard digital camcorder. But what if we can't, or don't want to, spend that money? Well, with an old camcorder or a simple webcam it is still possible to interact touch-lessly with a computer in an effective way, by building a touch-less software interface. How effective it is depends on many factors, but the most important of all is without doubt the environment where the touch-less interface we are going to build will be used. That is the major discriminant in the choices we will make when writing the algorithm. In any case, isolating the hand from the rest is the first step: obtaining an image with a single element, the hand. That may or may not be a trivial task; it all depends, again, on the environment, especially the part that constitutes the image's background. It may be trivial if we only need to remove a single colour or a homogeneous background. With a noisy background, or an inhomogeneous luminance distribution within it, isolating the hand becomes crucial and much more complicated.
Once the hand is isolated, the next step is determined by the type of interface we need to build. We must find a suitable way to translate the hand image at our disposal into data which the interface will use to generate events.
The computation of this data may be simple, as for the hand's centroid, or more complex, like fingertip positions, the curvature of the hand's silhouette, etcetera. Also, the more events our interface needs to generate, the more data we need to gather from the image.
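To make the simplest case concrete, here is a plain Java illustration (independent of any library; the class name and toy mask are my own) of how a blob centroid can be computed as the mean position of the blob's pixels:

```java
public class Centroid {
    // Returns {cx, cy}, the mean coordinates of the pixels where
    // mask[y * w + x] is true, or null if the mask is empty.
    public static double[] centroid(boolean[] mask, int w, int h) {
        long sx = 0, sy = 0, n = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y * w + x]) { sx += x; sy += y; n++; }
        return n == 0 ? null : new double[] { (double) sx / n, (double) sy / n };
    }

    public static void main(String[] args) {
        // A 4x4 mask with a 2x2 "hand" blob in the lower-right corner.
        boolean[] mask = new boolean[16];
        mask[10] = mask[11] = mask[14] = mask[15] = true;
        double[] c = centroid(mask, 4, 4);
        System.out.println(c[0] + " " + c[1]); // prints 2.5 2.5
    }
}
```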
A Processing implementation
With the next release of Blobscanner I will publish the code of a touch-less interface implemented in Processing, using the Blobscanner code as a starting point. It is only a small and rough thing, but I honestly think it will be quite unique, if only because it is nearly impossible to find a complete implementation of such complex algorithms for free. I also hope to find the time to write a tutorial explaining the whole thing step by step. For the moment, content yourself with the video of the project.
Yesterday I was googling the words blob + contour + ...I don't remember what, when I came across the onformative website. onformative is a Berlin studio for generative design which uses Processing as an artistic medium. In the latest article on their home page, Cedric Kiefer, from the onformative studio, proposed a method for creating 3D contour maps from heightmaps with Processing plus a blob detection library (v3ga). The approach is very simple but smart and effective, and I think the result speaks for itself. As soon as I saw the images from the experiment, I became curious to see what the result would be using Blobscanner as the blob detection tool, so I grabbed the code from Cedric's page, plus the heightmap image, and wrote an implementation using Blobscanner. The code uses two new methods not yet present in the latest release, so I will post it when it becomes usable. For the moment you can see some screenshots on the gallery page.
This is one of them:
Nearly a month has passed since the first release of Blobscanner, and I can certainly say that this project has generated interest among the members of the Processing community. I can tell from the feedback I received through the Processing forum and the many emails, and from the number of downloads of the first four versions, which as you may know were only tested on Windows. Considering all this, it makes every minute spent really worth it. But probably what pays back the most is knowing that some guys have already tested the library on some of their projects, showing the potential of computer vision applied to the computational arts, like Amnon Owed, who was also one of the first to step forward with some constructive feedback about the library. This is what he came up with...
Hi guys, v032 is packed and ready. As always, you can download the complete release zip or just the one with the library jar. This time I also added a zip with all the examples (two new ones). If you downloaded v031, please update at least the library jar, as a bug involving the isEdge methods has been corrected.
It was quite early for a new release, but today I found (thanks to proce55ing's question on the Processing forum) a significant bug which prevented the findBlobs and imageFindBlobs methods from detecting black blobs. On the download page it is also possible to download only the library .jar.
In the compressed zip I only made a few changes suggested by Amnon on the Processing forum: I modified an example and added another one. So, for all of you who downloaded v01, you can very easily download only the Detector.jar file and swap it with the old one.
Link to the download page:
Yesterday I uploaded two videos to YouTube which I made while running some of the code examples that come with the library .zip file. Here they are:
This is an example of how it is possible to select blobs by mass among those detected by Blobscanner. The blobs with a yellow bounding box and light blue contours contain at least 1000 pixels. This is done by calling the weightBlobs method and then the drawSelectContours and drawSelectBox methods. These latter take as an argument the minimum weight of the blobs for which contours and bounding box must be drawn, unlike their sister methods drawBox and drawContours, which perform the operations on all the blobs detected by the library.
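The workflow above could be sketched like this in Processing; the argument lists are assumptions based on the method names mentioned here, and the image name is hypothetical, so check the bundled examples for the real signatures:

```java
import blobscanner.*;

Detector bd;
PImage img;

void setup() {
  size(320, 240);
  img = loadImage("blobs.jpg");   // hypothetical test image
  bd = new Detector(this, 255);
}

void draw() {
  image(img, 0, 0);
  bd.imageFindBlobs(img);
  bd.weightBlobs(false);  // compute each blob's mass (pixel count); argument assumed
  // Yellow boxes and light blue contours only for blobs of at least 1000 pixels:
  bd.drawSelectBox(1000, color(255, 255, 0), 1);
  bd.drawSelectContours(1000, color(0, 255, 255), 1);
}
```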
In this example the method isBlob is used to isolate the blob pixels from all the others. After that, two similar gradients are applied in opposite directions, first to the non-blob pixels and then to those isolated before.
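A hedged sketch of the idea: isBlob(x, y) (signature assumed) decides which of two opposite vertical gradients each pixel receives; the image name is hypothetical:

```java
import blobscanner.*;

Detector bd;
PImage img;

void setup() {
  size(320, 240);
  img = loadImage("blobs.jpg");   // hypothetical test image
  bd = new Detector(this, 255);
}

void draw() {
  bd.imageFindBlobs(img);
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float t = map(y, 0, height, 0, 255);
      // blob pixels get a top-to-bottom gradient, the rest the opposite one
      pixels[y * width + x] = bd.isBlob(x, y) ? color(t) : color(255 - t);
    }
  }
  updatePixels();
}
```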