Most Common Data Science Interview Questions


Introduction

Success is a process, not an event.

Data Science is growing rapidly across all sectors. With so many technologies available within the Data Science domain, it can be tricky to crack a Data Science interview. In this article, we cover the most common Data Science interview questions asked by recruiters.

The most important concepts and interview questions are as follows:

1. What is Linear Regression? What are the assumptions involved in it?

Answer: This question can also be phrased as: why is linear regression not always a very effective algorithm?

Linear Regression models a mathematical relationship between an independent variable and a dependent variable. The relationship is a straight line, making it the simplest possible relationship between the variables.

Y = mX+c

Y – Dependent Variable

X – Independent Variable

m – Slope, c – Intercept (both constants)

Assumptions of Linear Regression :

  1. The relationship between Y and X must be Linear.
  2. The features must be independent of each other.
  3. Homoscedasticity – The variance of the errors must be constant across different values of the input.
  4. For any fixed value of X, Y should follow a Normal Distribution (equivalently, the residuals should be normally distributed); see the sketch below.
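To make this concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the toy data and variable names are our own) that fits Y = mX + c and inspects the residuals, which relate to assumptions 3 and 4:

```python
# Minimal sketch: fit Y = mX + c on toy data, then inspect the residuals.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))                # independent variable
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1, 100)    # Y = mX + c + noise

model = LinearRegression().fit(X, y)
print("m (slope):    ", model.coef_[0])      # should be close to 3.0
print("c (intercept):", model.intercept_)    # should be close to 2.0

# Under homoscedasticity the residual spread is roughly constant,
# and under assumption 4 the residuals look normally distributed.
residuals = y - model.predict(X)
print("residual mean:", residuals.mean(), "residual std:", residuals.std())
```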

2. What is Logistic Regression? What is the loss function in LR?

Answer: Logistic Regression is a classification algorithm, most commonly used for Binary Classification. It is a statistical model that applies the sigmoid (inverse logit) function to a linear combination of the inputs to produce a probability, which is then thresholded to give 0 or 1 as a result.

The loss function in LR is known as the Log Loss (Binary Cross-Entropy) function. The equation for it is given as:

Log Loss = -(1/N) * Σ [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ]

where N is the number of samples, y_i is the actual label (0 or 1), and p_i is the predicted probability of class 1.
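As a rough illustration (assuming NumPy and scikit-learn; the toy labels and probabilities below are made up), the formula can be computed directly and checked against scikit-learn's log_loss helper:

```python
# Minimal sketch: compute Log Loss by hand and verify with scikit-learn.
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.7, 0.6, 0.1])   # predicted P(y = 1)

# Log Loss = -(1/N) * sum( y*log(p) + (1-y)*log(1-p) )
manual = -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
print(manual)                     # ~0.26 for these toy values
print(log_loss(y_true, p_pred))  # same value
```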

3. Difference between Regression and Classification?

Answer: The major difference between Regression and Classification is that Regression predicts a continuous quantitative value, while Classification predicts discrete labels.

However, there is no single clear line that draws the difference between the two. A few properties of Regression and Classification are as follows:

Regression

  • Regression predicts the quantity.
  • We can have discrete as well as continuous values as input for regression.
  • If the input data are ordered with respect to time, it becomes time series forecasting.

Classification

  • The Classification problem for two classes is known as Binary Classification.
  • Classification can be split into Multi-Class Classification or Multi-Label Classification.
  • We focus more on accuracy in Classification, while we focus more on the error term in Regression (see the sketch below).
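A minimal sketch of that last point (assuming scikit-learn; the synthetic data is our own) contrasting the two evaluation styles:

```python
# Minimal sketch: a regressor is judged by an error term (MSE here),
# while a classifier is judged by accuracy.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import mean_squared_error, accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 1))

# Regression target: a continuous quantity.
y_reg = 2.0 * X.ravel() + rng.normal(0, 0.5, 200)
reg = LinearRegression().fit(X, y_reg)
print("MSE:     ", mean_squared_error(y_reg, reg.predict(X)))

# Classification target: a discrete label.
y_clf = (X.ravel() > 0).astype(int)
clf = LogisticRegression().fit(X, y_clf)
print("Accuracy:", accuracy_score(y_clf, clf.predict(X)))
```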

4. What is Natural Language Processing? State some real-life examples of NLP.

Answer: Natural Language Processing is a branch of Artificial Intelligence that deals with the conversion of human language into a machine-understandable form so that it can be processed by ML models.

Examples – NLP has many practical applications, including chatbots, Google Translate, and other real-time applications like Alexa.

Some of the other applications of NLP are in text completion, text suggestions, and sentence correction.
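One simple illustration of this conversion (assuming scikit-learn; the toy corpus is our own, and bag-of-words is just one basic representation among many) turns sentences into count vectors a model can consume:

```python
# Minimal sketch: convert human language into numbers an ML model can process.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "NLP converts text to numbers",
    "Chatbots and translators use NLP",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)        # sparse word-count matrix

print(vectorizer.get_feature_names_out())   # the learned vocabulary
print(X.toarray())                          # one count vector per sentence
```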

5. Why do we need Evaluation Metrics? What do you understand by a Confusion Matrix?

Answer: Evaluation Metrics are statistical measures of model performance. They are important because, without them, there is no quantitative way to judge, compare, or improve a model. A few of the evaluation metrics are Accuracy, Log Loss, and the Confusion Matrix.

A Confusion Matrix is a matrix used to evaluate the performance of a Classification model. For binary classification, it is a 2×2 matrix with predicted labels along one side and actual labels along the other.

6. How does Confusion Matrix help in evaluating model performance?

Answer: We can derive different accuracy measures from the four cells of a confusion matrix (TP, FP, FN, TN). These parameters are Accuracy = (TP + TN) / (TP + TN + FP + FN), Precision = TP / (TP + FP), Recall = TP / (TP + FN), F1 Score = 2 · (Precision · Recall) / (Precision + Recall), and Specificity = TN / (TN + FP).
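A minimal sketch (assuming scikit-learn; the toy labels are our own) deriving these measures from a 2×2 confusion matrix:

```python
# Minimal sketch: compute the metrics above from a 2x2 confusion matrix.
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_actual = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred   = [1, 0, 1, 0, 0, 1, 1, 0]

# For binary 0/1 labels, sklearn's matrix unpacks as tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y_actual, y_pred).ravel()

print("Accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("Precision:  ", tp / (tp + fp))
print("Recall:     ", tp / (tp + fn))
print("F1 Score:   ", 2 * tp / (2 * tp + fp + fn))
print("Specificity:", tn / (tn + fp))

# The same values come from scikit-learn's helpers:
print(accuracy_score(y_actual, y_pred), precision_score(y_actual, y_pred),
      recall_score(y_actual, y_pred), f1_score(y_actual, y_pred))
```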

7. What is the significance of Sampling? Name some techniques for Sampling.

Answer: For large datasets, we cannot analyze the whole volume of data at once. We need to take samples from the data that can represent the whole population. While drawing a sample from the complete data, we should take data that is a true representative of the whole dataset.

There are mainly two types of Sampling techniques based on Statistics: Probability Sampling and Non-Probability Sampling.

Probability Sampling – Simple Random Sampling, Cluster Sampling, Stratified Sampling.

Non-Probability Sampling – Convenience Sampling, Quota Sampling, Snowball Sampling.
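A minimal sketch (assuming pandas and scikit-learn; the toy population is our own) of Simple Random Sampling versus Stratified Sampling:

```python
# Minimal sketch: simple random sampling vs. stratified sampling.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "value": range(100),
    "label": [0] * 80 + [1] * 20,   # imbalanced population: 80% vs 20%
})

# Simple random sampling: every row has the same chance of selection.
simple = df.sample(n=20, random_state=0)

# Stratified sampling: the 80/20 label ratio is preserved in the sample.
strat, _ = train_test_split(df, train_size=20, stratify=df["label"],
                            random_state=0)

print(simple["label"].value_counts())   # ratio may drift from 80/20
print(strat["label"].value_counts())    # exactly 16 zeros and 4 ones
```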

8. What are Type 1 and Type 2 errors? In which scenarios do Type 1 and Type 2 errors become significant?

Answer: Rejection of a True Null Hypothesis is known as a Type 1 error. In simple terms, False Positives are known as Type 1 errors.

Failing to reject a False Null Hypothesis is known as a Type 2 error. False Negatives are known as Type 2 errors.

A Type 1 Error is significant where the cost of a false positive is high. For example – if a man who is not suffering from a particular disease is marked as positive for that infection, the medications given to him might damage his organs.

A Type 2 Error is significant where the cost of a false negative is high. For example – an alarm has to be raised in case of a burglary at a bank, but if the system identifies it as a false case, it won't raise the alarm on time, resulting in a heavy loss.
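A minimal simulation sketch (assuming NumPy and SciPy; the parameters are our own) showing that when the null hypothesis is true, a test at alpha = 0.05 commits a Type 1 error in roughly 5% of repeated experiments:

```python
# Minimal sketch: estimate the Type 1 error rate by simulation.
# Both samples come from the SAME distribution, so the null hypothesis
# is true; rejecting it at alpha = 0.05 is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
false_positives = 0
trials = 1000

for _ in range(trials):
    a = rng.normal(0, 1, 50)
    b = rng.normal(0, 1, 50)            # same mean: null hypothesis is true
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:                 # rejecting a true null = Type 1 error
        false_positives += 1

print("Type 1 error rate:", false_positives / trials)  # ~0.05
```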

9. What are the conditions for Overfitting and Underfitting?

Answer:

In Overfitting, the model performs well on the training data but fails to generalize to new, unseen data. In Underfitting, the model is too simple and fails to capture the underlying relationship in the data. The corresponding bias and variance conditions are as follows.

Overfitting – Low Bias and High Variance result in an overfitted model. A Decision Tree is more prone to Overfitting.

Underfitting – High Bias and Low Variance result in an underfitted model. Such a model doesn't perform well on the test data either. For example – Linear Regression is more prone to Underfitting.
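A minimal sketch (assuming scikit-learn; the synthetic dataset is our own) showing a Decision Tree underfitting at depth 1 and overfitting with unlimited depth:

```python
# Minimal sketch: a shallow tree underfits (poor train AND test scores),
# while a deep, unconstrained tree overfits (near-perfect train score,
# noticeably worse test score).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in [1, None]:   # depth 1 underfits; unlimited depth overfits
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```

The gap between the training and test scores for the unconstrained tree is exactly the Low Bias / High Variance condition described above.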