
Data Science Interview Questions and Answers

Question - 51 : - Write code to calculate the accuracy of a binary classification algorithm using its confusion matrix.

Answer - 51 : -

We can use the code given below to calculate the accuracy of a binary classification algorithm:

def accuracy_score(matrix):
  # Assumes the 2x2 layout [[TP, FP], [FN, TN]]
  true_positives = matrix[0][0]
  true_negatives = matrix[1][1]
  total_observations = sum(matrix[0]) + sum(matrix[1])
  return (true_positives + true_negatives) / total_observations
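For completeness, here is a hedged sketch of how such a 2x2 confusion matrix could be built from true labels and predictions, using the same [[TP, FP], [FN, TN]] layout the function above assumes (the label values are illustrative):

```python
def confusion_matrix(y_true, y_pred, positive=1):
    """Build a 2x2 matrix in the [[TP, FP], [FN, TN]] layout."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return [[tp, fp], [fn, tn]]
```

The resulting matrix can be passed directly to accuracy_score above.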

Question - 52 : - What does root cause analysis mean?

Answer - 52 : -

Root cause analysis is the process of tracing a fault or failure back to its underlying causes. A factor is considered a root cause if removing it from the sequence of events prevents the fault, error, or undesirable result from occurring. Root cause analysis was originally developed for analyzing industrial accidents, but it is now used in a wide variety of areas.

Question - 53 : - What is A/B testing?

Answer - 53 : -

A/B testing is a kind of statistical hypothesis testing for randomized experiments with two variables. These variables are represented as A and B. A/B testing is used when we wish to test a new feature in a product. In the A/B test, we give users two variants of the product, and we label these variants as A and B.

The A variant can be the product with the new feature added, and the B variant can be the product without the new feature. After users use these two products, we capture their ratings for the product.

If the rating of product variant A is statistically significantly higher, then the new feature is considered a useful improvement and is accepted. Otherwise, the new feature is removed from the product.
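As an illustration, here is a minimal sketch of the statistics behind such a test, using a two-proportion z-test on conversion counts (the sample sizes and counts below are invented):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail probability
    return z, p_value
```

If the p-value falls below the chosen significance level (commonly 0.05), variant A's improvement is treated as statistically significant.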

Question - 54 : - Out of collaborative filtering and content-based filtering, which one is considered better, and why?

Answer - 54 : -

Content-based filtering is considered to be better than collaborative filtering for generating recommendations. It does not mean that collaborative filtering generates bad recommendations.

However, as collaborative filtering is based on the likes and dislikes of other users, we cannot rely on it much. Also, users’ likes and dislikes may change in the future.

For example, there may be a movie that a user likes right now but did not like 10 years ago. Moreover, users who are similar in some features may not have the same taste in the kind of content that the platform provides.

In the case of content-based filtering, we make use of users’ own likes and dislikes that are much more reliable and yield more positive results. This is why platforms such as Netflix, Amazon Prime, Spotify, etc. make use of content-based filtering for generating recommendations for their users.
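A minimal sketch of the content-based idea, assuming each item is described by a feature vector and the user profile is the average of the vectors of items the user liked (the item names and vectors are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(item_features, liked, k=1):
    """Rank unseen items by cosine similarity to the user's profile vector."""
    dims = len(next(iter(item_features.values())))
    profile = [sum(item_features[i][d] for i in liked) / len(liked)
               for d in range(dims)]
    candidates = [i for i in item_features if i not in liked]
    return sorted(candidates,
                  key=lambda i: cosine(item_features[i], profile),
                  reverse=True)[:k]
```

Because the ranking uses only the target user's own history, it does not depend on how similar other users happen to be.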

Question - 55 : - Write a function that when called with a confusion matrix for a binary classification model returns a dictionary with its precision and recall.

Answer - 55 : -

We can use the code below for this purpose:

def calculate_precision_and_recall(matrix):
  # Assumes the 2x2 layout [[TP, FP], [FN, TN]]
  true_positive = matrix[0][0]
  false_positive = matrix[0][1]
  false_negative = matrix[1][0]
  return {
    'precision': true_positive / (true_positive + false_positive),
    'recall': true_positive / (true_positive + false_negative)
  }

Question - 56 : - What is reinforcement learning?

Answer - 56 : -

Reinforcement learning is a kind of Machine Learning concerned with building software agents that perform actions to attain the maximum cumulative reward.

A reward here is used for letting the model know (during training) if a particular action leads to the attainment of or brings it closer to the goal. For example, if we are creating an ML model that plays a video game, the reward is going to be either the points collected during the play or the level reached in it.

Reinforcement learning is used to build these kinds of agents that can make real-world decisions that should move the model toward the attainment of a clearly defined goal.
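To make the reward-driven training loop concrete, here is a hedged sketch of an epsilon-greedy agent on a multi-armed bandit, one of the simplest reinforcement learning settings (the arm means and hyperparameters are invented):

```python
import random

def run_bandit(true_means, episodes=5000, eps=0.1, seed=0):
    """Epsilon-greedy agent: explores random arms with probability eps,
    otherwise exploits the arm with the best estimated value."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    values = [0.0] * n          # running estimate of each arm's mean reward
    for _ in range(episodes):
        if rng.random() < eps:
            arm = rng.randrange(n)                          # explore
        else:
            arm = max(range(n), key=lambda a: values[a])    # exploit
        reward = rng.gauss(true_means[arm], 1.0)            # noisy reward signal
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm] # incremental mean
    return values
```

Over many episodes the value estimates converge toward the arms' true mean rewards, so the agent increasingly exploits the best arm.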

Question - 57 : - Explain TF/IDF vectorization.

Answer - 57 : -

The expression ‘TF/IDF’ stands for Term Frequency–Inverse Document Frequency. It is a numerical measure that allows us to determine how important a word is to a document in a collection of documents called a corpus. TF/IDF is often used in text mining and information retrieval.
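A minimal sketch of the computation, assuming a tokenized corpus, with tf taken as a term's relative frequency in a document and idf as log(N / df):

```python
import math

def tfidf(corpus):
    """corpus: list of token lists. Returns one {term: tf-idf} dict per document."""
    n = len(corpus)
    df = {}                                  # document frequency of each term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in corpus:
        counts = {}
        for term in doc:
            counts[term] = counts.get(term, 0) + 1
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in counts.items()})
    return scores
```

A term that appears in every document gets an idf of log(1) = 0, reflecting that it does not help distinguish one document from another. (Production implementations typically add smoothing to the idf term.)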

Question - 58 : - What are the assumptions required for linear regression?

Answer - 58 : -

There are several assumptions required for linear regression. They are as follows:

  • The data used to train the model is a sample drawn from a population and should be representative of that population.
  • The relationship between the independent variables and the mean of the dependent variable is linear.
  • The variance of the residuals is the same for any value of the independent variable (often denoted X); this is known as homoscedasticity.
  • Each observation is independent of all other observations.
  • For any fixed value of the independent variable, the dependent variable is normally distributed.
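The linearity assumption can be illustrated with the closed-form fit of simple linear regression; the following is a sketch for a single independent variable:

```python
def fit_ols(xs, ys):
    """Closed-form simple linear regression. Returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope
```

When the true relationship is linear, the fitted line recovers it exactly on noise-free data; in practice, the residuals around the fitted line are what the variance and normality assumptions above refer to.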

Question - 59 : - What happens when some of the assumptions required for linear regression are violated?

Answer - 59 : -

These assumptions may be violated lightly (a few minor violations) or strongly (the majority of the data violates them), and the two cases affect a linear regression model differently.

Strong violations of these assumptions make the results unreliable. Light violations of these assumptions introduce greater bias or variance into the estimates.

Question - 60 : - How do you avoid the overfitting of your model?

Answer - 60 : -

Overfitting refers to a model that fits its training data too closely, capturing noise and idiosyncrasies rather than the underlying pattern, and therefore generalizes poorly to new data. Three important methods to avoid overfitting are:

  • Keeping the model simple: using fewer variables and removing a major amount of the noise in the training data
  • Using cross-validation techniques, e.g., k-fold cross-validation
  • Using regularization techniques, such as LASSO, to penalize model parameters that are likely to cause overfitting
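As an illustration of the cross-validation idea, here is a minimal sketch that produces k train/test index splits (scikit-learn's KFold provides a production version of this):

```python
def k_fold_splits(n_samples, k=5):
    """Yield (train_indices, test_indices) for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples  # last fold takes the remainder
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test
```

Each sample appears in exactly one test fold, so averaging the k test scores gives an estimate of how the model generalizes beyond its training data.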

