3D Tag Cloud for Android

In this page we focus on creating a 3D Tag Cloud for Android devices. Such a tag cloud visualizes existing tag data in 3D and allows users to interact with it.


A tag cloud is a visual representation of text data. It consists of several tags arranged next to each other, either in a plane or in space. Tags are usually single words or phrases, and in most cases the size of each tag reflects its importance relative to the others. A well-designed tag cloud lets the viewer quickly perceive the most prominent terms in a body of text. While 2D tag clouds remain the most popular, used in many applications such as summarizing the content of a web page, 3D tag clouds have become popular in recent years as well. The figure below shows a sample 3D tag cloud, the WordPress plugin WP-Cumulus, borrowed from http://wordpress.org/extend/plugins/wp-cumulus/ .

Although there are a few free tag cloud implementations for web development purposes, there is no known 3D tag cloud implementation for Android devices. As a result, in this project we focus on creating a 3D tag cloud for Android.

The developed 3D tag cloud will be part of a demo application for the Tagin! SDK. It will show users the previously recorded tags associated with their location: based on the user's current location, all previously recorded tags are retrieved and visualized as a 3D tag cloud. Moreover, the tag cloud will automatically update when the user moves from one location to another. This page discusses the details of implementing such a 3D tag cloud.



Our first effort was to implement the 3D tag cloud using OpenGL ES (OpenGL for Embedded Systems). OpenGL ES provides a 3D graphics API for graphics and animation development on Android. The main reason for selecting OpenGL ES was its native support for 3D perspective: all we had to do was create the 3D objects and place them at their correct 3D locations. Our initial development was divided into the following steps:

Step 1: Selection of the best technique for the implementation:

Studying the different visualization/animation techniques available on Android, evaluating the performance of each for our case, and reaching a conclusion about the best technique(s) for our case.

Step 2: Creation of the 3D Tag layout:

Creating the basic layout for the 3D tag cloud. This layout requires a 3D model in which an arbitrary number of tags can be inserted and distributed evenly across the surface of a sphere with a given radius. Moreover, all tags should be color-coded by importance, using a gradient between a starting and an ending color supplied by the user.
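The even distribution required here can be sketched with the golden-spiral (Fibonacci) method, one common way to spread N points almost uniformly over a sphere. The class and method names below are illustrative assumptions, not part of the Tagin! code base:

```java
// Sketch: distributing N tag positions evenly on a sphere of a given
// radius using the golden-spiral (Fibonacci) method. Illustrative only.
public class SphereLayout {

    /** Returns an N x 3 array of (x, y, z) points spread evenly on the sphere. */
    public static double[][] distribute(int n, double radius) {
        double[][] points = new double[n][3];
        double golden = Math.PI * (3.0 - Math.sqrt(5.0)); // golden angle in radians
        for (int i = 0; i < n; i++) {
            // y runs from near 1 down to near -1, so the poles are covered too
            double y = 1.0 - 2.0 * (i + 0.5) / n;
            double slice = Math.sqrt(1.0 - y * y); // radius of the circle at height y
            double theta = golden * i;             // rotate each slice by the golden angle
            points[i][0] = radius * slice * Math.cos(theta);
            points[i][1] = radius * y;
            points[i][2] = radius * slice * Math.sin(theta);
        }
        return points;
    }
}
```

Every generated point lies exactly on the sphere's surface, and consecutive points land on different longitudes, which avoids the visible clustering a naive latitude/longitude grid produces.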

Step 3: Addition of user interaction to the model. Addition of transition and animation to the model:

Addition of the required event handling to the 3D tag layout. We want the user to be able to interact with the 3D model using the trackball or by touching the screen. Since the user must be able to interact with the 3D model, we need a smooth transition between frames while the 3D tag cloud is being rotated. This will further enable us to create smooth animation by simply rotating the 3D tag cloud.

Step 4: Showing text as 3D object in the model:

Since OpenGL ES and 3D objects were selected for visualization, text has to be converted to 3D objects in order to become part of the model. At this step, we will write methods to convert a given text to a 3D mesh and texture so it can be visualized together with the rest of the 3D objects.

Step 5: Making tags clickable and adding a filtering feature for the visible tags:

Since text is visualized as 3D objects in OpenGL ES, appropriate methods have to be developed to make it clickable. More precisely, we need to implement ray picking for our 3D model.
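The ray picking mentioned here can be illustrated with a minimal ray-sphere intersection test, assuming each tag is approximated by a bounding sphere and the pick ray direction is already normalized; the class and method names are illustrative:

```java
// Sketch of ray picking: cast a ray from the touch point into the scene
// and test each tag's bounding sphere for intersection. Illustrative only.
public class RayPick {

    /** Returns true if the ray origin + t*dir (t >= 0, dir normalized)
     *  intersects a sphere with the given center and radius. */
    public static boolean hitsSphere(double[] origin, double[] dir,
                                     double[] center, double radius) {
        // vector from the ray origin to the sphere center
        double ox = center[0] - origin[0];
        double oy = center[1] - origin[1];
        double oz = center[2] - origin[2];
        // project it onto the ray direction to find the closest approach
        double t = ox * dir[0] + oy * dir[1] + oz * dir[2];
        if (t < 0) return false; // sphere is behind the ray origin
        // squared distance from the sphere center to the ray
        double dx = ox - t * dir[0];
        double dy = oy - t * dir[1];
        double dz = oz - t * dir[2];
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }
}
```

The clicked tag is then the intersected sphere with the smallest t, i.e. the one closest to the viewer.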

Step 6: Testing and Debugging:

Since Android runs on many different hardware platforms, the implementation has to be checked against different versions of Android and different types of devices. Further testing is required to make sure all the modules work as designed. Any bugs found must be debugged and fixed.

Step 7: Documenting and wrapping up the implementation

The implementation has to be documented, and the code has to be packaged so that it can later be used by other developers without much effort.

Updated Design:

After implementing the first four steps above, we noticed that OpenGL ES might not be the best solution. Although we expected performance issues from rendering text as 3D meshed objects, the main issue was actually making those meshes clickable. Converting text to 3D meshes and placing several of them did not cause significant performance problems; the GPU performance on the first generation of Android devices (the HTC G1 Dream) was acceptable. The problem arose when we tried to make the objects clickable. The standard way to make a 3D object clickable is to cast a ray from the point where the user clicks through the model and find the objects that collide with it. This led to too many problems with both the algorithm and the implementation. Thus, we tried a second approach: using an Android View and its animation capabilities.

To create a simple 2D drawing using an Android View, our class has to extend the android.view.View class and override the onDraw(Canvas c) method, which is invoked whenever the view needs to be drawn. Whenever we need to redraw the view, we simply call the invalidate() method, which invalidates the current view and causes it to be redrawn. Although View is intended for creating simple 2D graphics and modifying them when needed, we were able to implement our 3D tag cloud with it by keeping our tags in 3D coordinates and manually projecting them into 2D space. In other words, we did all the calculations in 3D and projected the final result to 2D space at the very end. Although this makes the calculations a little more complex, handling the elements on screen, and more importantly making them clickable, becomes much easier. Our initial efforts led to great results: we ended up with our 3D tag cloud implemented using simple View methods. This second implementation is stored as TagCloud_imp2 in the Tagin! repository.
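The "calculate in 3D, project to 2D at the end" idea can be sketched as a simple perspective projection, assuming a camera on the z axis at distance eye from the cloud's center; the helper below is a hypothetical illustration, not the actual TagCloud_imp2 code:

```java
// Sketch of projecting a tag's 3D position onto the flat screen at draw
// time. The camera sits on the z axis, eye units from the sphere center;
// (cx, cy) is the screen center. Names are illustrative assumptions.
public class Projector {

    /** Projects (x, y, z) to screen space. Returns {screenX, screenY, scale};
     *  scale shrinks distant tags (z << eye) and enlarges near ones. */
    public static double[] project(double x, double y, double z,
                                   double eye, double cx, double cy) {
        double scale = eye / (eye - z); // assumes z < eye, i.e. tag in front of camera
        return new double[] { cx + x * scale, cy + y * scale, scale };
    }
}
```

The returned scale is exactly the per-tag factor the text size and alpha can be multiplied by, so one projection pass per frame yields position, size, and transparency together.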

Proposed timeline and current progress:

The proposed timeline below is based on the initial implementation using OpenGL ES; the progress notes are updated as the project proceeds:

May 23 - May 29:

Step 1: Selection of the best technique for the implementation

Progress: Different possible Android animation techniques were studied. These techniques include:

        1. Use of XML files, Views, and Android draw/invalidate
        2. Frame-by-frame animation
        3. OpenGL ES

Our study showed that, except for frame-by-frame animation, both XML+View and OpenGL ES have the potential to be used for developing the 3D tag cloud. I ran several existing examples and read a number of tutorials to investigate both. The result showed that the android.view.View class provides an easier implementation, while OpenGL ES provides smoother animation. In addition, OpenGL provides easier transitions but harder selection/picking of objects. As a result of these studies, OpenGL ES was selected as the primary technique for implementing the 3D tag cloud. The Android API demos contain several examples using OpenGL ES, among which Sprite Text and Touch Rotate were used as references for developing in OpenGL ES.

I also found that, if the parameters are set correctly and objects are created using vertices, edges, meshes, and textures, interaction with the model, including rotation via trackball or touch, is much easier to implement in OpenGL than in XML+View. The OpenGL engine automatically calculates each object's position and its scale relative to the camera, while in the XML+View technique the developer has to calculate and handle the positions, the movement, and the perspective parameters manually. However, in the XML+View technique we deal with simple View objects such as TextView, which are easy to make clickable, whereas in OpenGL ES everything, including the text, is a 3D volume, and selection/picking of a text has to be implemented manually.

May 30 - June 5:

Step 2: Creation of the 3D Tag layout

Progress: After investigating the existing examples/tutorials, a simple sphere was created and the locations of an arbitrary number of tags were calculated on it. In other words, the developer can specify an arbitrary number of tags to form a 3D tag cloud, and my calculation distributes them on the surface of an imaginary sphere with a given radius. The developer can choose between an even and a random distribution, but in my experience an even distribution leads to a much nicer visualization. Since text rendering is step 4 of this phase, simple cubes are used to represent tags during this initial layout implementation; they will be replaced by tag texts in step 4. The following figures show the initial layout implementation for 15 cubic tags evenly distributed on the surface of the cloud sphere:

I was also able to remove the project title from the top of the screen so that the visualization can fill the whole screen. The current visualization is sensitive to the screen orientation/resolution and adjusts itself when the orientation changes.

In addition to OpenGL managing the location/scale of the objects, I also calculate the new locations/scales for each object myself. If needed, these calculations can be used to intensify OpenGL's results. However, this additional scaling led to an exaggerated tag cloud in most cases, so the final implementation uses only OpenGL's default scaling for handling rotation, movement, and the perspective view.

June 6  - June 12:

Step 3: Addition of user interaction to the model. Addition of transition and animation to the model. Addition of color-coding to the tags

Progress: Users should be able to interact with the visualization. This interaction includes rotation of the tag cloud around the x and y axes, using either the phone's trackball or the touch screen. Moreover, the initial size of each tag is set by its relative importance: each tag has an importance property which can be set by the user when the tag is created. When the tag cloud is created, these importance properties are used to calculate the initial relative scale/size of each tag. This initial relative scale/size is then multiplied by the scaling factor resulting from the depth of the perspective view. In other words, the importance factor yields initial scaling factors that are calculated only once and used thereafter.

As shown in the 3D tag cloud example above, color coding can enhance the visualization. Thus, the user is allowed to enter two colors as the initial and final values for the tag cloud, and the program interpolates between them to select the appropriate color for each tag. Using only two extreme colors creates a smooth transition between them and leads to a very nice final visualization. Each tag's color is selected based on its importance factor. Color coding has been implemented and added to the 3D model. The figures below show the effect of color coding on the previous 3D tag cloud for the two extreme colors dark grey and orange; these two colors are hard-coded as defaults for the case where the user has not specified extreme colors.
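The two-color interpolation described above amounts to a per-channel linear blend. A minimal sketch, assuming float channels in the 0-1 range and a normalized importance t in [0, 1]; the helper name is hypothetical:

```java
// Sketch of the color interpolation: each tag's color is a linear blend
// of the two user-supplied extreme colors, weighted by the tag's
// normalized importance. Works for RGB or RGBA arrays. Illustrative only.
public class TagColor {

    /** Blends c1 toward c2 by t: t = 0 gives c1, t = 1 gives c2. */
    public static float[] interpolate(float[] c1, float[] c2, float t) {
        float[] out = new float[c1.length];
        for (int i = 0; i < c1.length; i++) {
            out[i] = c1[i] + (c2[i] - c1[i]) * t; // per-channel linear blend
        }
        return out;
    }
}
```

Because each channel is blended independently, all intermediate colors lie on the straight line between the two extremes, which is what produces the smooth gradient across the cloud.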

June 13 - June 19:

Step 4: Showing text as 3D object in the model

Progress: In order to place 2D text in a 3D model, we have to set up a 2D projection matrix. Moreover, if OpenGL is used, text has to be built from simple volumes such as cubes. The creation of 3D text from simple objects was implemented. However, since we want text to be clickable, making it clickable became the real trouble. As an alternative, I tried the other animation technique, relying on the android.view.View class.

The android.view.View class allows us to define a view and then refresh it whenever needed. We created a Tag class in which each tag has a text, a popularity, a URL, 3D coordinates (locX, locY, locZ), projected 2D coordinates (loc2DX, loc2DY), a color, an alpha (transparency), and a depth scale. A brief explanation of the Tag fields:

    • text: the tag's text string. It can be read/written with getText()/setText().
    • popularity: the tag's popularity, used to set the base scale for the tags; a larger popularity leads to a larger base text size. It can be read/written with getPopularity()/setPopularity(); similar getters/setters exist for all other properties as well.
    • url: the link that the tag points to.
    • locX, locY, locZ: the X, Y, Z location of the tag on the imaginary sphere of the 3D tag cloud. Tags are centered on these locations.
    • loc2DX, loc2DY: the 2D projection of the tag's 3D coordinates onto the phone's screen, used to find where the tag should be placed on screen. No Z is involved since the screen is flat.
    • colorR, colorG, colorB: the red, green, and blue values. These values are of type float and range from 0 to 1.
    • alpha: the opacity value, where 1 means fully opaque and 0 means fully transparent. It is also of type float and ranges from 0 to 1.
In summary, we manually distribute the tags on an imaginary sphere and record their locX, locY, locZ coordinates. Moreover, we take the perspective view into account and adjust the alpha value and initial text size based on a scale factor. This implementation is in its early stages but has quickly reached a point that shows its potential for our case. I was able to make simple 3D tag clouds and interact with them. Although it contains several bugs and does not yet run smoothly, I think this approach is much more effective than the OpenGL version. Below you can find some screenshots of my initial implementation; I will add more details next week once I have fixed the bugs.

June 20 - June 26:

  Step 5: Adding user interaction, including trackball and touch-screen input for rotating the tag cloud.

Progress: Using the android.view.View class, I was able to create the 3D tag cloud. The TagCloud class can be used to create a 3D tag cloud with an arbitrary number of elements. Its constructor receives the following parameters:

TagCloud(List<Tag> tags, int radius, float[] tagColor1, float[] tagColor2, int textSizeMin, int textSizeMax)

where tags is a list of an arbitrary number of tags, radius is the desired radius of the tag cloud sphere, and tagColor1 and tagColor2 define the range of colors to be used: tag colors are interpolated between tagColor1 and tagColor2 based on their popularity. The constructor also receives the minimum and maximum sizes for the tag texts; again, popularity is used to calculate the base text size. It should be mentioned that each tag's default text size is later multiplied by a factor representing its location on screen. That location is the result of a perspective view of the 3D tag cloud, and the resulting factor is multiplied by the base text size to convey the 3D nature of the cloud.
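The popularity-to-text-size mapping can be sketched as a linear interpolation between textSizeMin and textSizeMax, assuming a tag's popularity is normalized against the smallest and largest popularity in the cloud; the helper below is illustrative, not the actual TagCloud code:

```java
// Sketch: map a tag's popularity to its base text size by interpolating
// between textSizeMin and textSizeMax. Illustrative helper only.
public class TagSize {

    /** Returns the base text size for a tag whose popularity lies between
     *  popMin and popMax (the extremes over the whole cloud). */
    public static int baseTextSize(int popularity, int popMin, int popMax,
                                   int textSizeMin, int textSizeMax) {
        if (popMax == popMin) {
            return (textSizeMin + textSizeMax) / 2; // all tags equally popular
        }
        // normalize popularity to [0, 1], then interpolate the size range
        double t = (popularity - popMin) / (double) (popMax - popMin);
        return (int) Math.round(textSizeMin + t * (textSizeMax - textSizeMin));
    }
}
```

This base size is computed once per tag; at draw time it is multiplied by the per-frame perspective scale factor mentioned above.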

The TagCloud class implements Iterable, which lets the user later iterate over the Tag elements with an iterator. To create a TagCloud instance, the user should first create a list of Tags. Below is an example of how a tag cloud can be created:

List<Tag> tagList = new ArrayList<Tag>();
tagList.add(new Tag("Tag 1", 7));
tagList.add(new Tag("Tag 2", 2));

int radius = 10;  //radius of the Tag Cloud sphere

float[] color1 = {0.5f, 0.5f, 0.5f, 1};  //rgba for color1
float[] color2 = {0.1f, 0.1f, 0.1f, 1};  //rgba for color2

int textSizeMin = 10;  //minimum tag text size (example value)
int textSizeMax = 30;  //maximum tag text size (example value)

TagCloud mTagCloud = new TagCloud(tagList, radius, color1, color2, textSizeMin, textSizeMax);




The create() method receives one boolean parameter, which specifies whether the tags should be distributed randomly around the sphere or placed evenly. It first calculates the 3D locations of all tags around the imaginary sphere and updates the tag locations accordingly. It then calculates the color/size of each tag by interpolating between tagColor1 and tagColor2 based on the tag popularities. Later, whenever the 3D tag cloud needs to be updated, such as when the user rotates the cloud, the update() method should be called. However, before calling update(), the user should set the rotation angles AngleX and AngleY:

float angleX = 10;  //the amount of rotation around the X axis
float angleY = 20;  //the amount of rotation around the Y axis
mTagCloud.setAngleX(angleX);
mTagCloud.setAngleY(angleY);

//now call the update() method:
mTagCloud.update();

//all Tags are now updated, so we can iterate through them and redraw them

To prevent unnecessary calculations, the update() method only performs them when the amount of rotation around the X or Y axis is greater than a threshold. The amount of rotation is calculated from the distance between the point the user touches and the center of the tag cloud sphere. To track user interaction, the onTrackballEvent() and onTouchEvent() methods have been overridden; these handlers call invalidate() whenever the tag cloud needs to be redrawn.
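One update() pass as described, rotating each tag's 3D position around the X and Y axes and skipping rotations below a threshold, could look roughly like this; the class name and threshold value are assumptions, and angles are taken in radians:

```java
// Sketch of the per-tag rotation inside update(): rotate a point around
// the X axis, then the Y axis, skipping imperceptibly small rotations.
// Illustrative only; the threshold value is an assumption.
public class Rotator {

    static final double THRESHOLD = 0.1; // radians; below this, skip the work

    /** Rotates point p = {x, y, z} in place by angleX then angleY (radians). */
    public static void rotate(double[] p, double angleX, double angleY) {
        if (Math.abs(angleX) < THRESHOLD && Math.abs(angleY) < THRESHOLD) {
            return; // rotation too small to be visible; avoid the trig calls
        }
        // rotation around the X axis (acts on the y-z plane)
        double y = p[1] * Math.cos(angleX) - p[2] * Math.sin(angleX);
        double z = p[1] * Math.sin(angleX) + p[2] * Math.cos(angleX);
        p[1] = y;
        p[2] = z;
        // rotation around the Y axis (acts on the x-z plane)
        double x = p[0] * Math.cos(angleY) + p[2] * Math.sin(angleY);
        z = -p[0] * Math.sin(angleY) + p[2] * Math.cos(angleY);
        p[0] = x;
        p[2] = z;
    }
}
```

Because the rotation preserves each tag's distance from the center, the cloud stays on its sphere; only the projected 2D positions and depth scales change between frames.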

It should be noted that the tags are kept sorted by their Z coordinates. This is because we want tags with higher Z values to cover tags with lower Z values; keeping the tag list sorted by Z lets us draw the tags in an order that respects their depth. The figures below show how depth sorting affects the overlapping of tags and creates a more realistic tag cloud:
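The depth sort described above is the painter's algorithm: sort tags by increasing Z and draw them in that order, so nearer tags are painted last and overlap the farther ones. A minimal sketch with a hypothetical Tag type:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of depth sorting for the painter's algorithm: tags with smaller
// Z (farther away) come first and are drawn first, so tags with larger Z
// are painted on top. The minimal Tag type here is illustrative only.
public class DepthSort {

    public static class Tag {
        public final String text;
        public final double z; // depth: larger z means closer to the viewer
        public Tag(String text, double z) {
            this.text = text;
            this.z = z;
        }
    }

    /** Sorts tags so the farthest (smallest z) is first in draw order. */
    public static void sortByDepth(Tag[] tags) {
        Arrays.sort(tags, Comparator.comparingDouble((Tag t) -> t.z));
    }
}
```

Re-sorting after every rotation keeps the draw order consistent with the new depths; with tag counts around 50, this sort is far too small to affect performance.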

The current implementation lets the user interact with the 3D tag cloud using either the trackball or the touch screen. It works smoothly and updates and redraws all tags in response to user input. The animation below shows several screen captures from our 3D tag cloud:

While working with the tag cloud, I noticed that setting setFakeBoldText to true makes the 3D tag cloud look much nicer. However, fake bold text requires a lot of internal calculation, which severely affects the performance of the model and slows the movement of the cloud. Thus, in the final implementation FakeBoldText is set to false. The figures below show the effect of setting it to true:

The model has been tested with different numbers of tags, up to 50, and no performance issues were detected. The tag cloud also works fine when the phone is switched from portrait to landscape. The figures below show screenshots of a dense tag cloud of around 50 tags with two different color-coding ranges (red-blue and orange-grey):

June 27 - July 3: 

Step 6: Making the Tags selectable/clickable and adding filtering feature for visible tags

July 4  - August 10:  => Previous design has been completely changed.

Step 7: Testing, debugging, documenting and wrapping up the implementation

Progress: The final implementation is based on the android.view.View class and has update(), add(), and reset() methods. update() replaces one of the existing tags with a new tag; the final implementation therefore assumes that tags are unique. add() adds a single tag to the tag cloud and places it at a random location on the tag cloud sphere. Finally, reset() resets the tag cloud view so that a newly added tag ends up in the correct position. The final implementation also supports clickable tags, which lets the user click on a tag and be directed to its associated URL. The figure below shows the final result: