Deep Learning Interview Questions and Answers
Question - 51 : - Why are generative adversarial networks (GANs) so popular?
Answer - 51 : -
Generative adversarial networks are used for a variety of purposes. They are especially popular for image-related tasks, where they have gained significant traction and work efficiently.
Creation of art: GANs are used to create artistic images, sketches, and paintings.
Image enhancement: They are used to greatly enhance the resolution of the input images.
Image translation: They are also used to change certain aspects, such as day to night and summer to winter, in images easily.
Question - 52 : - Is it possible to calculate the learning rate for a model a priori?
Answer - 52 : -
For simple models, it may be possible to set the best learning rate a priori. However, for complex models, the best learning rate cannot be calculated through theoretical deductions that actually make accurate predictions. Observation and experimentation play a vital role in finding the optimal learning rate.
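The point can be made concrete with a toy problem. The sketch below (a hypothetical demo, not from the text) runs plain gradient descent on f(w) = w**2, where the effect of the learning rate is easy to see analytically; for deep networks no such closed form exists, which is why rates are found empirically.

```python
# Hypothetical demo: gradient descent on f(w) = w**2, whose gradient is 2w.
# A small rate converges; a rate that is too large overshoots and diverges.

def descend(lr, steps=50, w0=1.0):
    """Run plain gradient descent on f(w) = w**2 and return the final |w|."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w  # gradient step: d/dw (w**2) = 2w
    return abs(w)

small = descend(lr=0.1)  # shrinks toward the minimum at w = 0
large = descend(lr=1.1)  # each step multiplies |w| by 1.2: divergence
```

With lr = 0.1 each step multiplies w by 0.8, so the iterate decays geometrically; with lr = 1.1 each step multiplies it by -1.2, so it blows up.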
Question - 53 : - What are the commonly used approaches to set the learning rate?
Answer - 53 : -
- Using a fixed learning rate value for the complete learning process
- Using a learning rate schedule
- Making use of adaptive learning rates
- Adding momentum to the classical SGD equation
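Two of the approaches above can be sketched in a few lines. This is an illustrative sketch (helper names `step_decay` and `sgd_momentum` are hypothetical), again minimizing the toy objective f(w) = w**2.

```python
# Illustrative sketch of a learning rate schedule and SGD with momentum.

def step_decay(base_lr, step, drop=0.5, every=10):
    """Learning rate schedule: multiply the rate by `drop` every `every` steps."""
    return base_lr * (drop ** (step // every))

def sgd_momentum(w0=1.0, lr=0.1, beta=0.9, steps=100):
    """Classical SGD with momentum on f(w) = w**2."""
    w, v = w0, 0.0
    for _ in range(steps):
        grad = 2 * w
        v = beta * v + grad  # accumulate a velocity from past gradients
        w -= lr * v
    return w
```

The schedule lowers the rate as training progresses; momentum smooths the update direction by averaging past gradients, which often speeds convergence in ravines of the loss surface.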
Question - 54 : - Is there any difference between neural networks and deep learning?
Answer - 54 : -
Essentially, there is no fundamental difference between deep learning networks and neural networks. Deep learning networks are neural networks, but with more complex architectures than those of the 1990s. It is the availability of hardware and computational resources that has made it feasible to implement them now.
Question - 55 : - You want to train a deep learning model on a 10GB dataset but your machine has 4GB RAM. How will you go about implementing a solution to this deep learning problem?
Answer - 55 : -
One of the possible ways to answer this question would be to say that a neural network can be trained by loading the data in small batches rather than all at once. NumPy's memory-mapping facility (np.memmap) does not load the complete dataset into memory; it creates a mapping of the on-disk dataset, so only the slices that are actually accessed are read into RAM. NumPy arrays also integrate readily with NN packages like PyTorch, TensorFlow, or Keras.
Question - 56 : - What are the limitations of using a perceptron?
Answer - 56 : -
A major drawback of the perceptron is that it can only learn linearly separable functions and cannot handle inputs that are not linearly separable.
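The classic demonstration of this limitation is XOR. The hypothetical demo below trains a single perceptron with the standard perceptron learning rule: it learns the linearly separable AND function, but no choice of weights can fit XOR, which is not linearly separable.

```python
# Hypothetical demo: a single perceptron on 2-input binary functions.

def train_perceptron(samples, epochs=20, lr=1.0):
    """Classic perceptron learning rule; returns weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable
```

After training, the perceptron classifies all four AND cases correctly, while at least one XOR case is always misclassified regardless of how long it trains.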
Question - 57 : - How will you differentiate between a multi-class and multi-label classification problem?
Answer - 57 : -
In a multi-class classification problem, the classification task has more than two mutually exclusive classes, whereas in a multi-label problem each sample can receive several labels whose classification tasks are related. For example, classifying a set of images of animals that may be cats, dogs, or bears is a multi-class classification problem: it assumes each sample has exactly one label, so an image is classified as either a cat or a dog but not both at the same time. Now imagine an image that shows both a cat and a dog; it needs to be classified as both cat and dog. In a multi-label classification problem, a set of labels is assigned to each sample and the classes are not mutuallyexclusive, so a pattern can belong to one or more classes.
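The difference shows up directly in how the targets are encoded. This is a small sketch (the class names are the ones used in the example above): multi-class targets are one-hot vectors with exactly one active class, while multi-label targets are multi-hot vectors that may activate any subset of classes.

```python
# Sketch: target encodings for multi-class vs multi-label classification.

CLASSES = ["cat", "dog", "bear"]

def one_hot(label):
    """Multi-class: a single, mutually exclusive label per sample."""
    return [1 if c == label else 0 for c in CLASSES]

def multi_hot(labels):
    """Multi-label: a set of non-exclusive labels per sample."""
    return [1 if c in labels else 0 for c in CLASSES]
```

For instance, `one_hot("dog")` always sums to exactly 1, whereas `multi_hot({"cat", "dog"})` marks both animals, matching the image that contains a cat and a dog at once.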
Question - 58 : - What do you understand by transfer learning?
Answer - 58 : -
You know how to ride a bicycle, so it will be easy for you to learn to ride a motorbike. This is transfer learning. You have some skill, and you can learn a related new skill without having to learn it from scratch. Transfer learning is a process in which the learning from one model is transferred to another without making the new model learn everything from scratch. The features and weights of the pretrained model can be reused for training the new model, providing reusability. Transfer learning works well for training a model when only limited data is available.
Question - 59 : - What is fine-tuning and how is it different from transfer learning?
Answer - 59 : -
In transfer learning, the feature extraction part remains untouched and only the prediction layer is retrained, with its weights adjusted for the new application. By contrast, in fine-tuning, the prediction layer along with the feature extraction stage can be retrained, making the process more flexible.
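The distinction can be sketched with a toy two-stage model (the class and method names are hypothetical, not a real framework API): in transfer learning only the prediction head receives gradient updates, while in fine-tuning the pretrained feature extractor is updated as well.

```python
# Toy sketch: a model with a pretrained "feature" weight and a new "head"
# weight, trained with one SGD step on squared error for y = head * (feat * x).

class TinyModel:
    def __init__(self, feature_w, head_w):
        self.feature_w = feature_w  # pretrained feature-extraction weight
        self.head_w = head_w        # task-specific prediction weight

    def train_step(self, x, target, lr=0.1, freeze_features=True):
        """One SGD step; freeze_features=True mimics transfer learning."""
        h = self.feature_w * x
        y = self.head_w * h
        err = y - target
        grad_head = err * h
        grad_feat = err * self.head_w * x
        self.head_w -= lr * grad_head        # head is always retrained
        if not freeze_features:
            self.feature_w -= lr * grad_feat  # fine-tuning only
```

With `freeze_features=True` the pretrained weight stays fixed (transfer learning); with `freeze_features=False` both stages move (fine-tuning).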
Question - 60 : - Why do we use convolutions for images instead of using fully connected layers?
Answer - 60 : -
Each convolution kernel in a CNN acts like its own feature detector and has a partially in-built translation in-variance. Using convolutions lets one preserve, encode and make use of the spatial information from the image, unlike fully connected layers that do not have any relative spatial information.
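Weight sharing also makes convolutions far cheaper in parameters. The arithmetic below is an illustrative sketch (the layer sizes are assumptions): a 3x3 convolution with 64 filters on a 32x32x3 image versus a fully connected layer with 64 units on the same flattened image.

```python
# Illustrative parameter-count comparison: convolution vs fully connected.

def conv_params(k, c_in, c_out):
    """Weights for a k x k convolution, shared across all spatial positions."""
    return k * k * c_in * c_out + c_out  # kernel weights + c_out biases

def fc_params(h, w, c_in, n_out):
    """Weights for a dense layer: one per input pixel per output unit."""
    return h * w * c_in * n_out + n_out  # full weight matrix + biases

conv = conv_params(3, 3, 64)      # 3*3*3*64 + 64 = 1,792 parameters
dense = fc_params(32, 32, 3, 64)  # 32*32*3*64 + 64 = 196,672 parameters
```

The dense layer needs over a hundred times more parameters for the same input, and unlike the convolution it discards the spatial arrangement of the pixels.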