"Without data, you're just another person with an opinion."
~ W. Edwards Deming, Statistician
Our Human Activity Recognition (HAR) dataset covers six activity classes: "Walking," "Walking Upstairs," "Walking Downstairs," "Standing," "Sitting," and "Lying." We obtained the HAR dataset from the UCI repository, where it is provided in two forms: the raw sensor data and a pre-engineered dataset whose features were designed by domain experts in signal processing. Accordingly, we take two approaches to learning from the data and predicting human activity: first, we apply classical machine learning (ML) to the pre-engineered dataset; then, we train a deep learning model directly on the raw dataset.
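To make the first route concrete, here is a minimal sketch of classical ML on the pre-engineered features. It assumes the archive has been extracted into a local "UCI HAR Dataset/" folder; the paths follow the repository's layout, and the logistic-regression baseline is just one illustrative choice among many classical models.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Each row of X_*.txt holds the expert-engineered features for one window;
# y_*.txt holds the matching activity label (1-6).
X_train = pd.read_csv("UCI HAR Dataset/train/X_train.txt", sep=r"\s+", header=None)
y_train = pd.read_csv("UCI HAR Dataset/train/y_train.txt", header=None)[0]
X_test = pd.read_csv("UCI HAR Dataset/test/X_test.txt", sep=r"\s+", header=None)
y_test = pd.read_csv("UCI HAR Dataset/test/y_test.txt", header=None)[0]

# Any classical classifier fits here; logistic regression is a common baseline.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```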
Thirty volunteers carried out daily activities while wearing a smartphone strapped to the waist. The phone's two embedded sensors, an accelerometer and a gyroscope, recorded the signals. The authors of the underlying study built the dataset by sliding a fixed-width window of 2.56 s over each time series and generating features per window. Because consecutive windows overlap by 50%, the window starts are evenly spaced 1.28 s apart. The experiment was videotaped so that the data could be labeled manually.
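As a sketch of that windowing step: at the study's 50 Hz sampling rate, a 2.56 s window is 128 samples and the 1.28 s spacing is a step of 64 samples. The function below illustrates the segmentation on a hypothetical signal; it is not part of the dataset's own tooling.

```python
import numpy as np

def sliding_windows(signal, width=128, step=64):
    """Cut a 1-D signal into fixed-width windows with 50% overlap.

    At 50 Hz, width=128 samples = 2.56 s and step=64 samples = 1.28 s.
    """
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

# Hypothetical example: 10 s of accelerometer readings at 50 Hz.
acc_x = np.random.randn(500)
print(sliding_windows(acc_x).shape)  # (6, 128): six overlapping windows
```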
Using the smartphone's two sensors, the authors recorded "3-axial linear acceleration" (tAcc-XYZ) from the accelerometer and "3-axial angular velocity" (tGyro-XYZ) from the gyroscope. In these signal names, the prefix "t" stands for time (time-domain signals), and the suffix "XYZ" denotes the three axial components in the X, Y, and Z directions.
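For the deep learning route, the raw signals ship in the archive's "Inertial Signals" folders, one text file per channel with one 128-sample window per row. Below is a sketch of stacking them into the (windows, timesteps, channels) tensor a sequence model expects; the file names follow the repository's layout, so adjust them if your copy differs.

```python
import numpy as np

SIGNALS = [
    "total_acc_x", "total_acc_y", "total_acc_z",  # tAcc-XYZ
    "body_gyro_x", "body_gyro_y", "body_gyro_z",  # tGyro-XYZ
    "body_acc_x", "body_acc_y", "body_acc_z",     # acceleration with gravity removed
]

def load_raw(split="train"):
    channels = [
        np.loadtxt(f"UCI HAR Dataset/{split}/Inertial Signals/{name}_{split}.txt")
        for name in SIGNALS
    ]
    return np.stack(channels, axis=-1)

X_raw = load_raw("train")
print(X_raw.shape)  # (windows, 128, 9)
```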
In the dataset, each activity is encoded as a numeric identifier from 1 to 6, as follows:
WALKING as 1
WALKING_UPSTAIRS as 2
WALKING_DOWNSTAIRS as 3
SITTING as 4
STANDING as 5
LAYING as 6
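In code, this identifier-to-activity mapping is handy for decoding a model's numeric predictions back into readable class names:

```python
ACTIVITY_LABELS = {
    1: "WALKING",
    2: "WALKING_UPSTAIRS",
    3: "WALKING_DOWNSTAIRS",
    4: "SITTING",
    5: "STANDING",
    6: "LAYING",
}
```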
Dataset link: https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones