
Keras Interview Questions and Answers


Question - 1 : - Who is the creator of Keras?

Answer - 1 : - François Chollet. He is currently working as an AI researcher at Google.

Question - 2 : - What are the types of layers in Keras?

Answer - 2 : - The layer types are as follows (see the sketch after this list):

  • Core Layers
  • Convolutional Layers
  • Pooling Layers
  • Locally-connected Layers
  • Recurrent Layers
  • Embedding Layers
  • Merge Layers
  • Advanced Activations Layers
  • Normalization Layers
  • Noise layers
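
A minimal sketch, assuming the tf.keras API, that stacks layers from several of these categories; the sequence length, vocabulary size, and layer widths are illustrative placeholders, not values from the text.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(100,)),                          # integer token sequences of length 100
    layers.Embedding(input_dim=10000, output_dim=64),   # Embedding layer
    layers.Conv1D(32, 3, activation="relu"),            # Convolutional layer
    layers.MaxPooling1D(2),                             # Pooling layer
    layers.LSTM(32),                                    # Recurrent layer
    layers.BatchNormalization(),                        # Normalization layer
    layers.Dropout(0.5),                                # Core layer used for regularization
    layers.Dense(1, activation="sigmoid"),              # Core layer
])
model.summary()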

Question - 3 : - Give some examples of data preprocessing in Keras.

Answer - 3 : - Neural networks don't process raw data such as text files, encoded JPEG image files, or CSV files; they process vectorized and standardized representations. For example:

  • Text files need to be read into string tensors, then split into words. Finally, the words need to be indexed and turned into integer tensors.
  • Images need to be read and decoded into integer tensors, then converted to floating point and normalized to small values (usually between 0 and 1).
  • CSV data needs to be parsed, with numerical features converted to floating-point tensors and categorical features indexed and converted to integer tensors. Each feature then typically needs to be normalized to zero mean and unit variance.
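
A brief sketch of these steps using Keras preprocessing layers (TextVectorization, Rescaling, Normalization); the sample strings, image shapes, and feature values below are made-up placeholders.

import numpy as np
from tensorflow.keras import layers

# Text: map raw strings to integer token indices.
text_vectorizer = layers.TextVectorization(output_mode="int")
text_vectorizer.adapt(["the cat sat on the mat", "the dog ate my homework"])
print(text_vectorizer(["the cat ate the dog"]))

# Images: scale integer pixel values into the [0, 1] range.
images = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")
scaled = layers.Rescaling(1.0 / 255)(images)

# Numerical CSV-style features: normalize to zero mean and unit variance.
normalizer = layers.Normalization()
normalizer.adapt(np.array([[180.0, 75.0], [160.0, 60.0], [170.0, 68.0]]))
print(normalizer(np.array([[175.0, 70.0]])))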

Question - 4 : - Name the types of inputs accepted by a Keras model.

Answer - 4 : - Keras models accept three types of inputs:

  • NumPy arrays, just like Scikit-Learn and many other Python-based libraries. This is a good option if your data fits in memory.
  • TensorFlow Dataset objects. This is a high-performance option that is more suitable for datasets that do not fit in memory and that are streamed from disk or from a distributed filesystem.
  • Python generators that yield batches of data (such as custom subclasses of the keras.utils.Sequence class).
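
A minimal sketch of passing each input type to fit(); the model, batch sizes, and random data are illustrative assumptions only.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([keras.Input(shape=(8,)), layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(64, 8), np.random.rand(64, 1)

# 1. NumPy arrays (data fits in memory).
model.fit(x, y, batch_size=16, epochs=1)

# 2. A TensorFlow Dataset object (streamed, high performance).
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)
model.fit(dataset, epochs=1)

# 3. A keras.utils.Sequence subclass that yields batches of data.
class MySequence(keras.utils.Sequence):
    def __len__(self):
        return 4  # number of batches per epoch
    def __getitem__(self, idx):
        return x[idx * 16:(idx + 1) * 16], y[idx * 16:(idx + 1) * 16]

model.fit(MySequence(), epochs=1)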

Question - 5 : - Explain the term regularization.

Answer - 5 : - Regularization is a method that makes slight modifications to the learning algorithm so that the model generalizes better. This in turn improves the model's performance on unseen data as well.

Question - 6 : - Name some of the regularization techniques.

Answer - 6 : -

The techniques are as follows (see the sketch after this list):

  • L2 and L1 Regularization
  • Dropout
  • Early Stopping
  • Data Augmentation
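
L1/L2 regularization, Dropout, and Early Stopping are illustrated under the questions that follow; here is a brief sketch of Data Augmentation using Keras preprocessing layers. The augmentation ranges and the model around them are illustrative assumptions.

from tensorflow import keras
from tensorflow.keras import layers

data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),   # randomly mirror images left/right
    layers.RandomRotation(0.1),        # rotate by up to ±10% of a full turn
    layers.RandomZoom(0.1),            # zoom in/out by up to 10%
])

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    data_augmentation,                             # active only during training
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])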

Question - 7 : - Explain the L2 and L1 Regularization techniques.

Answer - 7 : - L2 and L1 are the most common types of regularization. Regularization works on the premise that smaller weights lead to simpler models, which in turn helps avoid overfitting. So, to obtain a smaller weight matrix, these techniques add a 'regularization term' to the loss to form the cost function:

Cost function = Loss + Regularization term

The difference between L1 and L2 regularization lies in the nature of this regularization term. In general, adding this term causes the values of the weight matrices to decrease, leading to simpler models.
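A minimal sketch of attaching the L1/L2 penalty to a layer's weights via kernel_regularizer; the penalty factors and layer sizes are arbitrary illustrative choices.

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2: penalizes the sum of squared weights
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),  # L1: penalizes the sum of absolute weights
    layers.Dense(1),
])
# The penalties are added to the loss during training,
# i.e. cost = loss + regularization term.
model.compile(optimizer="adam", loss="mse")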

Question - 8 : - What is a Convolutional Neural Network?

Answer - 8 : - A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that can take in an input image, assign importance to various aspects or objects in the image, and differentiate one from the other. The pre-processing required for a ConvNet is much lower than for other classification algorithms. While in primitive methods the filters are hand-engineered, with enough training ConvNets are able to learn these filters/characteristics themselves.
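A minimal CNN sketch in Keras for 28x28 grayscale images with 10 classes; the architecture and shapes are illustrative assumptions rather than anything prescribed by the text.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # learned filters
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.summary()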

Question - 9 : - What do you understand about Dropout and early stopping techniques?

Answer - 9 : - Dropout means that during training, randomly selected neurons are turned off or 'dropped out'. They are temporarily prevented from influencing or activating downstream neurons in the forward pass, and no weight updates are applied to them in the backward pass. Early Stopping, on the other hand, is a kind of cross-validation strategy in which one part of the training set is used as a validation set, and the performance of the model is gauged against this set; if the performance on the validation set gets worse, training is stopped immediately. The main idea behind this technique is that while fitting a neural network to the training data, the model is evaluated on the unseen data (the validation set) after each iteration, and if the performance on this validation set decreases or stays the same for a certain number of iterations, model training is stopped.
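A brief sketch combining a Dropout layer with Keras's built-in EarlyStopping callback; the dropout rate, patience, and random data are illustrative assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                     # randomly drop 50% of units during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stopping = keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch performance on the validation set
    patience=3,                  # stop if it fails to improve for 3 epochs
    restore_best_weights=True,
)

x, y = np.random.rand(200, 20), np.random.randint(0, 2, size=(200, 1))
model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[early_stopping])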

Question - 10 : - What do you understand about callbacks?

Answer - 10 : - Callbacks are an important feature of Keras that are configured in fit(). Callbacks are objects that get called by the model at different points during training: firstly, at the beginning and end of each batch, and secondly, at the beginning and end of each epoch. Callbacks are a way to make the training loop entirely scriptable; for example, they can be used to periodically save your model.
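A short sketch of passing a built-in callback (ModelCheckpoint, for periodic saving) and a custom callback to fit(); the checkpoint path, logging behaviour, and random data are illustrative assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

class EpochLogger(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # called by the model at the end of each epoch
        print(f"epoch {epoch} finished, loss={logs['loss']:.4f}")

model = keras.Sequential([keras.Input(shape=(8,)), layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    EpochLogger(),
]

x, y = np.random.rand(64, 8), np.random.rand(64, 1)
model.fit(x, y, validation_split=0.25, epochs=2, callbacks=callbacks)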

