**Classification of Human Posture Using Accelerometer Data**

(By Anmol Kumar, Rishabh Chauhan, and Tejas Dubhir)

Human posture recognition and analysis have become widely studied topics with the rise of wearable devices. Activity tracking is therefore an exciting use case for healthcare and fitness applications, for the elderly and for adults alike. In this study, we present an analysis of several machine learning models that detect a person's posture from data gathered by accelerometers attached to the body.

# Motivation:

The main motivation for this study was to assist people (especially those with a sedentary lifestyle) in their daily routine and physical well-being by tracking their posture, activity, and movement. It can also be useful for the injured and the elderly. The study could be further refined for workout tracking, making it easier to reach and surpass one's caloric goals.

# Methodology:

**Preprocessing:**

We plotted each feature against the activity label and observed that changes in attributes such as gender, height, weight, and BMI did not correspond to any change in the activity, so these attributes are of little use toward our goal.
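
As an illustration, a minimal sketch of this check, assuming the data lives in a pandas DataFrame loaded from a CSV (the file name and column names here are placeholders):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("posture_data.csv")  # placeholder path

# Box-plot a candidate attribute against the activity label; if its
# distribution barely shifts across classes, it carries little signal.
df.boxplot(column="height", by="class")  # repeat for weight, BMI, ...
plt.show()
```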

Here, we encode the classes as integers: 1 = Sitting, 2 = Sitting down, 3 = Standing, 4 = Standing up, and 5 = Walking.

Since the class distribution was uneven, we balanced the dataset by downsampling every class to the frequency of the least-represented class, as sketched below.
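
A minimal sketch of both preprocessing steps with pandas, reusing the `df` from the plotting sketch above; the raw label strings in `label_map` are placeholders for whatever the dataset actually uses:

```python
# Encode the activity labels as the integers used in this report.
# The string keys are placeholders -- match them to the raw labels.
label_map = {"sitting": 1, "sitting down": 2, "standing": 3,
             "standing up": 4, "walking": 5}
df["class"] = df["class"].map(label_map)

# Balance the classes: shuffle, then keep only as many rows per class
# as the rarest class has.
n_min = df["class"].value_counts().min()
df = df.sample(frac=1, random_state=0).groupby("class").head(n_min)
```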

**Methods:**

**1. Logistic Regression**

Multiclass logistic regression is used to predict which of the five possible classes an instance belongs to. The number of epochs is kept at 500 for training the model.

**Avg. Training accuracy: 0.8014**

**Avg. Testing accuracy: 0.7966**
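
A minimal sketch of this step, assuming scikit-learn as the implementation and an 80/20 train/test split (both assumptions; the report does not specify either), with `max_iter` standing in for the 500 epochs:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Features and integer labels from the balanced DataFrame above.
X = df.drop(columns=["class"]).values
y = df["class"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# Multinomial logistic regression over the five classes.
logreg = LogisticRegression(max_iter=500)
logreg.fit(X_tr, y_tr)
print("train:", logreg.score(X_tr, y_tr), "test:", logreg.score(X_te, y_te))
```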

**2. Support Vector Machine (SVM)**

A support vector machine builds the model so as to maximize the margin between the separating hyperplane and the data points. After tuning the hyperparameters and comparing kernels, we concluded that a linear kernel gives the highest score of all the kernels tried. Since training a kernel SVM takes roughly O(n^3) time, we train on only the first 1000 points; further training does not increase the accuracy but takes more time.

**Avg. Training accuracy: 0.8204**

**Avg. Testing accuracy: 0.8226**
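
A corresponding scikit-learn sketch, reusing the split from the logistic-regression example (the implementation choice remains an assumption):

```python
from sklearn.svm import SVC

# Linear kernel won the kernel comparison; kernel-SVM training scales
# roughly O(n^3), so fit on only the first 1000 training samples.
svm = SVC(kernel="linear")
svm.fit(X_tr[:1000], y_tr[:1000])
print("test:", svm.score(X_te, y_te))
```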

**3. Random Forest Classifier**

Random forest is an ensemble method built on decision tree classifiers. It fits several decision trees on the data samples and takes the mode of their predictions, which helps control over-fitting. The whole dataset is used to build each tree. The average accuracies are listed below.

**Avg. Training accuracy: 0.9956**

**Avg. Testing accuracy: 0.9908**
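
A scikit-learn sketch under the same assumptions, using the `max_depth = 16` setting reported in the Results:

```python
from sklearn.ensemble import RandomForestClassifier

# An ensemble of trees; the majority vote across trees is the prediction.
rf = RandomForestClassifier(max_depth=16, random_state=0)
rf.fit(X_tr, y_tr)
print("train:", rf.score(X_tr, y_tr), "test:", rf.score(X_te, y_te))
```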

**4. Gaussian Discriminant Analysis**

GDA is a technique that fits a class-conditional Gaussian density to each output class and classifies using Bayes' rule.

**Avg. Training accuracy: 0.9046**

**Avg. Testing accuracy: 0.9026**
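
One way to realize this is scikit-learn's `QuadraticDiscriminantAnalysis`, which fits a Gaussian (mean and covariance) per class and applies Bayes' rule; this particular implementation is an assumption on our part:

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# One Gaussian per class; class posteriors come from Bayes' rule.
gda = QuadraticDiscriminantAnalysis()
gda.fit(X_tr, y_tr)
print("train:", gda.score(X_tr, y_tr), "test:", gda.score(X_te, y_te))
```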

**5. Stochastic Gradient Descent (SGD) optimization on Logistic Regression**

Stochastic gradient descent is a variant of the gradient descent algorithm that updates the weights using a single randomly selected instance from the dataset at each iteration. We evaluated this optimization technique using logistic regression for classification. One peculiarity of this model is that it confuses similar activities, such as walking with standing and sitting down with standing up.

**Avg. Training accuracy: 0.7710**

**Avg. Testing accuracy: 0.7691**
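
A sketch with scikit-learn's `SGDClassifier` (again an assumed implementation), where `loss="log_loss"` makes it logistic regression trained by SGD; a confusion matrix makes the walking/standing and sitting-down/standing-up mix-ups visible:

```python
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix

# Logistic-regression loss, weights updated one sample at a time.
sgd = SGDClassifier(loss="log_loss", random_state=0)
sgd.fit(X_tr, y_tr)
print("test:", sgd.score(X_te, y_te))

# Off-diagonal mass between similar activities shows the confusion.
print(confusion_matrix(y_te, sgd.predict(X_te)))
```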

**6. Neural Network (Multilayer perceptron)**

A neural network is a stack of layers, each containing multiple neurons (perceptrons). We compared several activation functions in combination with multiple learning rates (α). With a single hidden layer, accuracy plateaued around **94% at 36 units**, with no significant improvement beyond that, so we added a second hidden layer with 24 units. The accuracy improved to **98%** for α = 0.001. The accuracies for the different activation functions were:

| Activation | Accuracy |
| --- | --- |
| ReLU | 0.985 |
| Linear | 0.815 |
| Sigmoid | 0.9837 |
| Tanh | 0.975 |
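
A sketch of the best configuration using scikit-learn's `MLPClassifier` (the library choice and the `max_iter` value are assumptions):

```python
from sklearn.neural_network import MLPClassifier

# Two hidden layers (36 and 24 units), ReLU activation, learning
# rate 0.001 -- the configuration that reached ~98% above.
mlp = MLPClassifier(hidden_layer_sizes=(36, 24), activation="relu",
                    learning_rate_init=0.001, max_iter=500,
                    random_state=0)
mlp.fit(X_tr, y_tr)
print("train:", mlp.score(X_tr, y_tr), "test:", mlp.score(X_te, y_te))
```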

# Results:

After conducting the above experiments, it can be concluded that:

The **random forest** technique (max_depth = 16) performed best, with **98–99% avg. accuracy**.

This was followed by a **neural network with two hidden layers of 36 and 24 units** respectively, at 98% avg. accuracy (learning rate = 0.001, ReLU activation).

The third-best model was **GDA, with roughly 90% average accuracy** in our analysis.

# Conclusions:

We applied most of the concepts taught to us in the Machine Learning course by Dr. Jainendra to solve a real-life problem, and we came to understand the practical value of visualizing the learning process.

We got the opportunity to apply the machine learning techniques we knew to a dataset that was unfamiliar to us.

We can now better distinguish how different supervised learning algorithms behave in multiclass classification problems.

We learned that for a classifier that is given many features and has to discriminate among multiple classes, random forest is the best choice, followed by neural networks and Gaussian discriminant analysis.