IIOTLab consists of a web application that provides basic user management, including the creation of new users, user login, and password recovery. Once inside the portal, users have access to two main modules: the first allows them to remotely control and monitor the mobile robot, and the second allows them to save the recorded values for later viewing and export.
The backend and frontend are responsible for sending, receiving, and displaying data to and from the TurtleBot3, which is equipped with odometry and LiDAR sensors. The robot is connected to a PC that acts as a server and runs both components; user information and the data recorded with the teleoperation module are also stored on this PC.
2021 - SISCAM, Software Tool to Automatically Detect and Count Motorcycles using Computer Vision
SISCAM allows detecting and automatically counting motorbikes on an urban road in daylight using computer vision. The tool stores all generated information in a SQL database and provides a GUI to ease operation by users. A configuration GUI lets users perform tasks such as logging in, entering the data acquisition location, selecting the data source (IP cameras, video, or a set of consecutive images), selecting the region of interest, and defining the traffic flow direction. A second GUI shows the detection and motorbike counting process, and a final one generates reports from the data stored in the database according to a search criterion.
Motorbike detection is challenging: motorbikes have dark colors very similar to most paved roads, and they are relatively small compared with other vehicles, so they are often occluded by cars or larger vehicles. In this work, motorbike detection was performed using the YOLOv5 architecture, achieving a mAP@50 of 99%. The detected motorbikes must then be counted; for this, a novel proposal for counting vehicles was implemented. Results suggest a mean accuracy of 91% when counting motorbikes in unstructured environments in daylight.
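As an illustration only (this is not SISCAM's published counting method), one common way to turn per-frame detections into counts is to track each motorbike's centroid and count crossings of a virtual line; the function name and track representation below are assumptions:

```python
def count_crossings(track, line_y):
    """Count downward crossings of a horizontal counting line by one
    tracked centroid, given its (x, y) position per frame."""
    crossings = 0
    for (_, y0), (_, y1) in zip(track, track[1:]):
        if y0 < line_y <= y1:  # centroid moved past the line this frame
            crossings += 1
    return crossings

# A motorbike centroid moving down the frame crosses the line at y=25 once.
track = [(50, 10), (52, 20), (53, 30), (55, 40)]
print(count_crossings(track, 25))  # 1
```

Counting crossings of a line, rather than counting detections per frame, avoids counting the same vehicle more than once while it remains in view.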
2021 - DepTherm, Software Tool to Extrinsically Calibrate Thermal, RGB and Depth Cameras
DepTherm allows merging three types of images: thermal images, visible spectrum images, and depth images. The system is composed of two calibration patterns, an image acquisition system, and a software application that includes the following functionalities: camera calibration, image fusion, point cloud registration, 3D point cloud visualization with temperature information, and generation of a thermographic inspection report.
The acquisition system consists of a thermal camera and a Kinect sensor, placed on a fixed base with a small separation between them to increase the overlap of their fields of view. The calibration patterns are a chessboard with copper squares and a chessboard with rectangular holes.
Extrinsic calibration places the patterns so that they are visible in the fields of view of a pair of cameras, captures pairs of images at different positions, and creates ordered pairs of points corresponding to the calibration pattern corners in both images. These points are used to calculate the homography matrix, which allows applying a projective transformation between these cameras. Since there are two pairs of cameras, thermal–RGB and RGB–depth, there are two homography matrices, allowing an image from any camera to be projected onto the image from any other camera.
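As an illustrative sketch (not DepTherm's actual implementation), projecting a pixel from one camera's image onto another through a known 3x3 homography amounts to multiplying homogeneous coordinates and dividing by the resulting scale; the matrix values below are hypothetical:

```python
def project_point(H, pt):
    """Apply a 3x3 homography H (row-major nested lists) to a 2D point."""
    x, y = pt
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xs / w, ys / w)  # back to inhomogeneous pixel coordinates

# A homography encoding a pure translation of (10, -5) pixels.
T = [[1, 0, 10], [0, 1, -5], [0, 0, 1]]
print(project_point(T, (100.0, 200.0)))  # (110.0, 195.0)
```

In practice the homography itself is estimated from the ordered corner pairs described above (e.g. by a least-squares fit), which is why capturing images at several positions matters.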
Monitoring the facade of a building on a regular basis allows the development and implementation of preventive maintenance at points of interest, preserving the facade's condition and prolonging its useful life. Monitoring tasks are also especially useful when the facade and the structure in general have been exposed to external factors that can cause considerable damage, such as natural disasters; in these situations, monitoring allows evaluating the safety of the building and deploying timely repair and maintenance actions to preserve its integrity. In both cases, one of the greatest challenges is that monitoring building facades can be tedious due to the large size of these surfaces.
DroneFacade allows generating a flight mission from waypoints in order to build the vertical trajectory needed to acquire data from the facade. In addition, the application allows controlling the UAV during the execution of the trajectory, and includes a module for image processing and mosaic generation, as well as a module for administration and report generation using the mission information contained in the application database.
3D reconstruction technology is widely used in several fields, such as media, film, video games, industry, and teaching. Unfortunately, using 3D scanning devices requires some computing knowledge, and acquisition costs are high (~2000 USD). DoEverything3D is an affordable alternative with an intuitive interface for the automated extraction of 3D models of objects using the Microsoft Kinect sensor. The design of DoEverything3D started with a state-of-the-art review of the analysis and generation of 3D models.
For the validation of DoEverything3D, a protocol of integration and quantitative tests was developed. The integration tests were successful, evidenced by the correct implementation of each system requirement. The quantitative tests verified the good quality of the 3D models generated by the system, achieving a mean error of 0.3988 mm with a standard deviation (5σ) of 8.846 mm.
2020 - SICECA, Portable Software Tool to Identify and Classify Sugar Cane Diseases
Currently, the diagnosis of diseases in sugarcane starts with the grower, who must perform a complex diagnostic process. In response to this problem, SICECA is a system to identify and classify diseases such as Roya café, Mancha anillo, Mancha púrpura, and Muermo rojo using the Faster R-CNN architecture, which eases the diagnosis of a sick sugarcane plant through images of its leaves.
The methodological development is based on ANNs and on methods and techniques used to solve this kind of problem. Then, a private dataset of images of diseased leaves in uncontrolled environments was annotated and cleaned, so that only images with appropriate conditions were selected.
Communication is one of the most basic human needs throughout life, since it promotes integral development and facilitates interaction with the environment. However, some individuals cannot make full use of this ability because of a disease, disorder, or accident, with cerebral palsy being one of the most common causes of limited communication.
Speak2U is an application for Android mobile devices whose graphical interface is basically composed of picture frames that express ideas or desires; when selected by the user, they produce audible sentences heard by the interlocutor(s). The application also includes "action icons", for example, a play action, a delete action, and an action to go to another window, among others.
2019 - EVP, Software Tool to Estimate Drone Position in Space using a Smartphone and Computer Vision
EVP consists of two software applications: an Android app capable of acquiring images and sending them to a second app running on a computer, which estimates the phone's position from the received images. The Android app was programmed so that all its management can be done from the computer app through a wireless connection, allowing the system to be used during unmanned aerial vehicle flights for image acquisition, scene reconstruction, and vehicle position estimation.
The client–server structure lets the computer app, as a client, connect to and disconnect from the Android phone app (the server) manually at any moment; it can likewise start and stop the position estimation and the image acquisition. The computer app also lets the user save the position data collected during execution for subsequent use. The system was implemented following a software development methodology; as a result, the design, construction, and tests are fully documented. The system requirements emphasize ease of use through graphical user interfaces in both apps.
AutoNavi3AT is a software tool designed and implemented to allow a mobile robot to navigate autonomously along urban roads using omnidirectional vision. AutoNavi3AT uses the following hardware configuration: an on-board mini-computer, wireless communication, a catadioptric omnidirectional camera, a laser range finder, and a Pioneer 3AT mobile robot. AutoNavi3AT allows users to manage image processing, prediction and estimation of vanishing points, obstacle avoidance, user event capture, and automatic robot heading calculations.
The results show that omnidirectional vision has fundamental advantages over other types of computer vision: it does not require additional hardware to move the camera around, and it provides the robot with a greater amount of useful data about the environment.
2019 - GALS, Group Autonomous Location System
GALS was developed to measure the relative position among a group of UAVs. A 2D LiDAR (Light Detection And Ranging) sensor was added to the structure of each UAV. The relative position is calculated by analyzing the range images obtained with the LiDAR sensor; this analysis yields the distance and direction at which nearby UAVs are located. Data about obstacles within the operating range of the sensor are also obtained: points that do not belong to UAVs are considered possible obstacles. The data generated by GALS are reported to the UAV's flight control software.
The user can visualize the information generated by GALS through the base station's graphical interface. From the interface, the user can configure the different system parameters, such as the LiDAR sensor's configuration, the visualization limits, the calibration of the laser sensor, and starting and stopping the operation of GALS. The LiDAR sensor is small and light, important features for use on a UAV. Its maximum range is 40 meters, which allows a wide view of the environment, and the accuracy of the range data allows GALS to detect and locate UAVs at distances between 1 and 4 meters.
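As a minimal illustration (not the GALS source code), extracting the distance and direction of the closest return from a 2D LiDAR scan given as (angle, range) pairs could look like this; the function name and scan format are assumptions:

```python
import math

def nearest_return(scan):
    """scan: list of (angle_deg, range_m) LiDAR returns.
    Returns (distance, bearing_deg, x, y) of the closest return,
    with (x, y) in the sensor's Cartesian frame."""
    angle, rng = min(scan, key=lambda s: s[1])
    x = rng * math.cos(math.radians(angle))
    y = rng * math.sin(math.radians(angle))
    return rng, angle, x, y

scan = [(0, 3.0), (90, 1.5), (180, 2.0)]  # degrees, meters
d, bearing, x, y = nearest_return(scan)
print(d, bearing)  # 1.5 90
```

A real system would first cluster the returns and classify clusters as UAVs or obstacles before reporting positions to the flight controller, as described above.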
2019 - Arv2Counter, Software Tool to Detect and Count Vehicles in Urban Roads using Computer Vision
Arv2Counter was developed for detecting and counting vehicles on urban roads. It uses a set of computer vision techniques for background subtraction, enhancement, tracking, and counting of moving objects considered to be vehicles. In this way, it allows the user to collect vehicular traffic data for an urban road, from which statistical reports can be generated for periods such as time slots and days of the week.
Arv2Counter has a user interface, built with the Python programming language and the PyQt library, to configure the software tool, count the vehicles, and generate the statistical reports. Finally, field tests were carried out with an AXIS 214 PTZ IP camera to validate the performance of the tool against light reflections and areas with saturation and shadows caused by lighting changes in the scene.
To support the implementation of software for the administration and analysis of data acquired from the RUVEM exoskeleton [2], a worldwide state-of-the-art review of exoskeletons of the same type was carried out. This review identified how each project solved the software component, leading to the decision to build a custom application: every exoskeleton reviewed had software tightly coupled to its own instrumentation, and none showed the use of generic software that could easily be configured for any device.
The requirements that gave rise to the software comprise three main modules: a clinical record module, which allows recording the patient's data, including their injuries and the physical evaluations performed by the physiotherapist; a therapy module, responsible for communication with the exoskeleton and the presentation of information in graphs; and a report module, which generates the information obtained in the patient's walking sessions with the exoskeleton in standard formats.
JLOC is a Java-based software tool to obtain the extrinsic calibration between an omnidirectional camera and a 2D laser range finder. The fundamental concepts of this software tool were published in the paper titled "Embedding Range Information in Omnidirectional Images Through Laser Range Finder". Using this calibration, 2D laser points can be projected onto the omnidirectional image as follows: 2D points are transformed to the camera frame, then these points are projected onto the normalized plane, and finally the camera intrinsic and distortion parameters are applied to project the laser points onto the omnidirectional image. Thanks to this projection, it can be known which omnidirectional image points are located at a specific distance from the camera frame.
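The projection chain described above (laser frame, camera frame, normalized plane, image) can be sketched as follows; this is an illustrative pinhole model that omits the distortion step, and all names and values are assumptions rather than JLOC's actual code:

```python
def project_laser_point(p_laser, R, t, fx, fy, cx, cy):
    """Project a 3D point from the laser frame onto the image plane.
    R (3x3 nested lists) and t (length-3): extrinsic laser-to-camera
    transform; fx, fy, cx, cy: camera intrinsics. Distortion omitted."""
    # 1. Transform to the camera frame: p_cam = R * p_laser + t
    p_cam = [sum(R[i][j] * p_laser[j] for j in range(3)) + t[i]
             for i in range(3)]
    # 2. Project onto the normalized plane (divide by depth)
    xn, yn = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
    # 3. Apply the intrinsics to get pixel coordinates
    return fx * xn + cx, fy * yn + cy

# A point one meter straight ahead lands on the principal point.
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project_laser_point((0.0, 0.0, 1.0), R_identity, (0.0, 0.0, 0.0),
                           500.0, 500.0, 320.0, 240.0)
print(u, v)  # 320.0 240.0
```

For a real omnidirectional camera the step from normalized plane to pixels also involves the mirror or fisheye distortion model, which this sketch leaves out.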
NAOMOBBY was developed to be operated by therapists through a friendly GUI, a Kinect sensor, and an interactive NAO humanoid robot, in order to increase patients' willingness to undergo physical rehabilitation therapy. NAOMOBBY includes the following modules: configuration/management, movement reproduction, and results reporting using the GAS (Goal Attainment Scaling) methodology. NAOMOBBY was tested using quantitative and field tests. The quantitative tests measured the error of the Kinect sensor in capturing the NAO robot's joint motions, so as to give users suitable feedback; quantitative results were obtained using three basic functional motions.
2017 - GUI3DXBot, Software Tool for Guiding Service using Mobile Robots
GUI3DXBot is a client–server application, where the server side runs on the mobile robot's on-board computer and the client side runs on a 10-inch Android tablet. The server side is in charge of the perception, localization, mapping, and path planning tasks; these tasks use the LMS200 laser range finder as the main sensor and were implemented on the ROS framework. The client side implements the human–robot interface in order to provide a friendly, interactive user experience.
TherapyBot is a system composed of a mobile app for Android devices and Lego EV3 robots that serves as support in rehabilitation therapies for children with cerebral palsy (levels I to V on the MACS scale). It mainly seeks to motivate children through play and technology, so that they perform the therapies and find them friendlier. Four games with different levels of difficulty were implemented: CrashCar, PaintBot, Brazo móvil, and Laberinto, which aim to strengthen the concepts of causality, laterality, inhibition, and problem solving, respectively.
For this purpose, a friendly graphical user interface was implemented, and two kinds of robots were designed: a mobile arm and a differential robot. A local database was also implemented to keep a record of therapists and of patients and their progress during therapy sessions. Regarding user interaction, in addition to the touch interface of the Android device, there is the option of working with an external interface composed of special accessibility switches.
2017 - SCGI, Drone Flight Management Software Tool to Capture Geo-referenced Images
SCGI is a technological solution for aerial image capture based on human–machine interaction between a user and a drone. This interaction is carried out through a computer that works as a control station, whose purpose is to manage the drone's trajectory so that an autonomous, configurable flight mission is executed correctly. The interface lets the user establish the desired route, height, and velocity to explore an area of interest while acquiring geo-tagged images through an integrated photo sensor.
Likewise, the system provides the means to stabilize the drone's trajectory through the calibration of each sensor, as well as the appropriate commands to manage the flight trajectory in case the mission needs to be paused, resumed, or cancelled. As a result of these missions, an information package is acquired and loaded into a database that manages the information efficiently, so that missions can be evaluated and compared for future testing. This package comprises the telemetry acquired throughout the mission, the path waypoints and the configurations established for the mission, the calibration parameters of the selected photographic sensor, the drone's sensor data, and the geo-tagged images obtained in the mission.
UVBotsV2.0-APP is a responsive web application that offers three different programming environments: Basic, Intermediate, and Advanced. The basic level is intended for novice users and uses graphical programming to teach basic programming concepts; the intermediate level uses Python-based programming; and the advanced level uses the ANSI C language. In all three cases, users can program different functionalities of a set of mobile robots called UVBotsV2.0. The program is compiled online, then downloaded, and a local application is used to program the robot.
UVBotsV2.0-Firmware is a framework implemented on the FreeRTOS operating system for programming a set of mobile robots. The framework considers three levels of knowledge of mobile robotics: basic, intermediate, and advanced. The levels differ in the robot functionalities they expose and in the level of data abstraction for sensors and actuators. The framework also includes behavior-based programming, which is used to set up robot applications quickly.
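As a minimal illustration of the behavior-based programming idea (written in Python for readability rather than the framework's actual FreeRTOS/C code, and with hypothetical behavior names), a fixed-priority arbiter runs each behavior in order and executes the first command proposed:

```python
def arbitrate(behaviors, sensors):
    """Behavior-based control: each behavior inspects the sensor readings
    and returns a command tuple, or None if it is inactive. The first
    (highest-priority) active behavior wins."""
    for behavior in behaviors:
        cmd = behavior(sensors)
        if cmd is not None:
            return cmd
    return ("stop",)  # fallback when no behavior is active

# Hypothetical behaviors: avoid obstacles takes priority over cruising.
def avoid(s):  return ("turn", "left") if s["front_cm"] < 20 else None
def cruise(s): return ("forward",)

print(arbitrate([avoid, cruise], {"front_cm": 15}))  # ('turn', 'left')
print(arbitrate([avoid, cruise], {"front_cm": 50}))  # ('forward',)
```

This is the appeal for quick application setup: new robot applications are composed by reordering or adding small behaviors instead of rewriting a monolithic control loop.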
2016 - E2CAV, Software Tool to Estimate the Local Pavement Thickness using Computer Vision
E2CAV is a software tool that consolidates data from two sub-systems. The first is an image acquisition system based on a videoscope; the second is a vertical motion system that introduces the videoscope probe into a hole in the asphalt pavement. E2CAV has three software modules: configuration, image acquisition, and report generation. The configuration module calibrates the videoscope camera and the pixel/distance ratio. The image acquisition module synchronizes the vertical motion unit to capture images from the videoscope; afterwards, a mosaic is built and used to detect changes in texture using Gabor filters, and the resulting images are post-processed to find the thickness of the asphalt pavement layer. Finally, the report module generates a PDF report of the estimated local pavement thickness.
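As an illustration of the texture analysis step (not E2CAV's actual code), a real-valued Gabor kernel, a sinusoid at a given orientation and wavelength windowed by a Gaussian, can be generated as follows; the parameter names are assumptions:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Build a (size x size) real-valued Gabor kernel: a cosine wave of
    wavelength lam at orientation theta (radians), windowed by a Gaussian
    of width sigma; gamma sets the kernel's spatial aspect ratio."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            gauss = math.exp(-(xp**2 + (gamma * yp)**2) / (2 * sigma**2))
            row.append(gauss * math.cos(2 * math.pi * xp / lam))
        kernel.append(row)
    return kernel

k = gabor_kernel(7, 0.0, 4.0, 2.0)
print(k[3][3])  # 1.0 (Gaussian and cosine both peak at the center)
```

Convolving the mosaic with a bank of such kernels at several orientations and wavelengths yields responses that change where the pavement texture changes, which is what marks the layer boundaries.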
INVIFusion 1.0 is a thermography software tool developed to work with any hybrid camera system composed of a thermal camera and a visual spectrum camera. INVIFusion has three main modules: first, the calibration module, where intrinsic and extrinsic calibration of the camera setup is performed; second, the image processing module, where visual and thermal images are available and all or part of them can be projected onto each other; and third, the report generation module, which produces reports in accordance with the ASTM international standardization committee.
The main advantage of this software tool in comparison with other thermography software is its ability to perform the fusion task with any hybrid camera system.
2015 - ARMM, Multi-goal Picking / Loading Software Application
ARMM is a multi-goal software tool executed on a P3DX mobile robot platform, which computes a near-optimal solution for multi-goal path planning in the context of picking and delivering applications. ARMM is based on the Lin-Kernighan algorithm, modified to consider non-Euclidean distances and Hamiltonian paths. ARMM is a client/server application: the user interface is text based, and the server application handles client requests and map localization issues.
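As a simplified stand-in for the modified Lin-Kernighan step (ARMM's actual algorithm is more elaborate), a 2-opt improvement over an open Hamiltonian path with an arbitrary, possibly non-Euclidean, distance matrix can be sketched as follows; all names and the example matrix are hypothetical:

```python
def path_length(D, path):
    """Total length of an open path over distance matrix D."""
    return sum(D[path[i]][path[i + 1]] for i in range(len(path) - 1))

def two_opt(D, path):
    """Repeatedly reverse sub-segments of the path while that shortens it.
    D need not be Euclidean; the path is open (Hamiltonian), not a tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(path) - 1):
            for j in range(i + 1, len(path) + 1):
                candidate = path[:i] + path[i:j][::-1] + path[j:]
                if path_length(D, candidate) < path_length(D, path):
                    path, improved = candidate, True
    return path

# Four goals on a line, distance = |i - j|; start from a bad ordering.
D = [[abs(i - j) for j in range(4)] for i in range(4)]
best = two_opt(D, [0, 2, 1, 3])
print(best, path_length(D, best))  # [0, 1, 2, 3] 3
```

Lin-Kernighan generalizes this idea to variable-depth sequences of such exchanges; the key point in ARMM's setting is that the distance matrix comes from the robot's map (actual traversal costs), not from straight-line distances.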
2015 - Feed-Your-Pet, Automatic Pet Food Dispenser with Remote Access
Feed-Your-Pet (FYP) is an automatic solid pet food dispenser with remote access through an Android app. FYP includes an automated chute and the Android app. The former is composed of an embedded system based on an ATmega32 microcontroller with GPRS capability to exchange text messages with the Android app. This hardware accepts configurations such as the cell phone number for reporting events, feeding schedules, alarm setup, and date and time. The Android app can receive different events, such as the food level, feeding alarms, and feedback about the food actually eaten by the pet. In addition, the Android app can view and modify the feeding schedule.
2009 - RoboWeb-UV, Remote Interface for Mobile Robotics Experimentation
RoboWeb-UV is a software tool for experimenting with mobile robotics remotely. It was designed using Servlet technology and a Pioneer 3DX mobile robot. Using this software platform, users can control the robot remotely and monitor its activity through video streaming. The software has six different panels: the mimic panel shows the environment reconstruction using sonar sensors; the motion panel shows the robot's odometry information; the sonar panel shows the obstacles' proximity levels; the video panel shows the image feedback to users; the teleoperation panel offers different motion controllers to move the mobile robot; and the console panel shows the current state of the mobile robot.