Commit c5470a0d authored by Sameer
Update src/In Processing Datasets/blood.html
Deleted task.txt
1) What are train, test, and validation datasets?
*Training dataset: the sample of data used to fit the model. The model sees and learns from this data (eg: the weights of connections between neurons in an artificial neural network).
*Validation dataset: the sample of data used to provide an unbiased evaluation of a model fit to the training dataset while tuning the model's hyperparameters (eg: the number of hidden units in a neural network).
*Test dataset: the sample of data used to provide an unbiased evaluation of the final model fit on the training dataset.
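A minimal sketch of such a three-way split (pure Python; the 70/15/15 fractions and the `split_dataset` helper are illustrative choices, not from any library):

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a dataset and split it into train/validation/test parts."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder becomes the test set
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```

The model is fit on `train`, hyperparameters are tuned against `val`, and `test` is touched only once, for the final evaluation.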
2) What is the use of a neural network with multiple layers?
*A multi-layer neural network contains more than one layer of artificial neurons or nodes.
*A fully connected multi-layer neural network is called a Multilayer Perceptron (MLP). A basic MLP has 3 layers (input, hidden, output), i.e. one hidden layer; with more than 1 hidden layer it is considered a deep network.
*An MLP is a typical example of a feedforward artificial neural network.
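The layered structure can be sketched as a forward pass through one hidden layer (a toy example with made-up weights; `mlp_forward` is an illustrative helper, not a real API):

```python
import math

def sigmoid(z):
    """Squash a pre-activation value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass through an MLP with one hidden layer."""
    # Hidden layer: weighted sum of inputs plus bias, then sigmoid.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Output layer: weighted sum of hidden activations plus bias.
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# 2 inputs -> 4 hidden units -> 1 output. All hidden weights are zero
# here, so each hidden unit outputs sigmoid(0) = 0.5.
y = mlp_forward([1.0, 2.0],
                w_hidden=[[0.0, 0.0]] * 4, b_hidden=[0.0] * 4,
                w_out=[1.0] * 4, b_out=0.0)
```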
3) How does a neural network get trained?
*Once a network has been structured for a particular application, that network is ready to be trained.
*To start this process the initial weights are chosen randomly. Then the training, or learning, begins.
*There are two approaches to training - supervised and unsupervised.
Supervised training:
Supervised training involves a mechanism of providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs together with the inputs.
Unsupervised training:
Unsupervised training is where the network has to make sense of the inputs without outside help.
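Supervised training as described above - adjusting weights from the error between the desired and predicted output - can be sketched for a single linear neuron (a toy illustration; `train_supervised` is a made-up helper, and the fixed initial weight stands in for the random initialization):

```python
def train_supervised(samples, lr=0.1, epochs=50):
    """Supervised training of one linear neuron y = w * x.

    The desired output is provided with each input; the error
    (desired - predicted) is the 'grading' signal used to adjust w.
    """
    w = 0.0                                # initial weight (fixed here for reproducibility)
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x                   # network's current answer
            w += lr * (target - pred) * x  # nudge w to reduce the error
    return w

# Learn y = 2x from labelled (input, desired output) examples.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_supervised(samples)
```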
4) What is overfitting and underfitting?
*Overfitting occurs when a statistical model or machine learning algorithm captures the noise of the data. It is often a result of an excessively complicated model, and it can be prevented by fitting multiple models and using validation or cross-validation to compare their predictive accuracies on test data.
*Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data, i.e. the model does not fit the data well enough. An underfit model typically shows low variance but high bias.
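A toy illustration of the two failure modes (the data and "models" below are invented for demonstration only):

```python
def mse(preds, targets):
    """Mean squared error between predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

# Toy data: the underlying trend is y = x; training labels carry noise.
train_x, train_y = [0.0, 1.0, 2.0, 3.0], [0.2, 0.9, 2.1, 2.8]
test_x, test_y = [0.5, 1.5, 2.5], [0.5, 1.5, 2.5]

# Overfit "model": memorizes every training point, noise included,
# with an arbitrary fallback of 0.0 for unseen inputs.
memorized = dict(zip(train_x, train_y))
overfit = lambda x: memorized.get(x, 0.0)

# Underfit model: one constant, missing the trend entirely (high bias).
underfit = lambda x: 1.5

# A model that captures the true trend.
trend = lambda x: x

train_err_overfit = mse([overfit(x) for x in train_x], train_y)  # perfect on train
test_err_overfit = mse([overfit(x) for x in test_x], test_y)     # poor on test
test_err_underfit = mse([underfit(x) for x in test_x], test_y)
test_err_trend = mse([trend(x) for x in test_x], test_y)
```

The memorizing model scores zero error on the training data yet fails on the test data, which is exactly the gap that validation or cross-validation is meant to expose.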
5) What is learning rate and gradient descent?
Learning rate:
*The learning rate controls how much the weights are updated during training.
*It is a configurable hyperparameter with a small positive value, used in the training of neural networks.
Gradient descent:
*Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.
*To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. If, instead, one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
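The negative-gradient step described above can be shown on a one-variable function (a sketch; `gradient_descent` is a made-up helper, and the learning rate 0.1 is an arbitrary choice):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step proportional to the NEGATIVE gradient
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Flipping the sign of the update (`x += lr * grad(x)`) would climb toward a maximum instead - gradient ascent.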
6) What is the use of bias?
*Bias is an additional constant parameter in a neuron, added to the weighted sum of the inputs before the activation function is applied.
*If a neuron has weights w1, w2, w3, it computes w1*x1 + w2*x2 + w3*x3 + b, where the constant b is called the bias.
*The bias allows the neuron's output to be shifted up or down, so it can fit data that the weighted inputs alone could not produce.
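A minimal sketch of a neuron with weights w1, w2, w3 and a bias b (the `neuron` helper and the numbers are illustrative):

```python
def neuron(inputs, weights, bias):
    """A neuron's pre-activation: weighted sum of inputs plus the bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# With all inputs zero the weighted sum vanishes, so the output equals
# the bias: the constant term shifts the output away from zero.
out_zero = neuron([0.0, 0.0, 0.0], [0.4, -0.2, 0.1], bias=2.5)
out = neuron([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], bias=0.5)
```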
7) What is an optimizer?
*Optimization is a process of searching for parameters that minimize or maximize our functions.
*When we train a machine learning model, the optimizer is the algorithm that updates the model's parameters (such as the weights) to reduce the loss; gradient descent and its variants are common examples.
8) What is a loss function?
A loss function or cost function is a function that maps an event or the values of one or more variables onto a real number, intuitively representing some "cost" associated with the event.
An optimization problem seeks to minimize a loss function.
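As a sketch of both halves of the definition - the mapping to a real "cost" and the minimization over it (`squared_error_loss` is an illustrative name, and the candidate list is invented):

```python
def squared_error_loss(pred, target):
    """Maps a (prediction, target) event to a non-negative real 'cost'."""
    return (pred - target) ** 2

# The optimization problem: pick the candidate that minimizes the loss
# against a target of 2.2.
candidates = [0.0, 1.0, 2.0, 3.0]
best = min(candidates, key=lambda p: squared_error_loss(p, 2.2))
```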
9) What is an activation function and what are the types of activation functions?
*An activation function helps a neural network learn the relationship between its inputs and outputs.
*Its main purpose is to convert a node's input signal into an output signal, which is then passed on to the next layer.
There are three types of activation functions:
*Binary step: outputs one of two fixed values depending on whether the input crosses a threshold.
*Linear: the output is a linear function of the input, so it can only represent linear relationships.
*Non-linear: functions such as sigmoid, tanh, or ReLU, which allow the network to approximate non-linear relationships.
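The three types above can be written out directly (a sketch; the function names follow common convention rather than any particular library):

```python
import math

def binary_step(x):
    """Binary: fires 1 above the threshold 0, otherwise 0."""
    return 1.0 if x >= 0 else 0.0

def linear(x, a=1.0):
    """Linear: output is proportional to the input."""
    return a * x

def relu(x):
    """Non-linear: rectified linear unit, max(0, x)."""
    return max(0.0, x)

def sigmoid(x):
    """Non-linear: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```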
10) What are learnable parameters in a neural network?
*The learnable parameters of a neural network are typically the weights of the connections; the learning algorithm itself tunes these parameters.
*The hyperparameters are typically the learning rate, the batch size, or the number of epochs.
*In machine learning, a hyperparameter is a parameter whose value is set before the learning process begins; by contrast, the values of other parameters are derived via training.
11) what is keras framework?
*Keras is a minimalist Python library for deep learning that can run on top of Theano or TensorFlow.
*It was developed to make implementing deep learning models as fast and easy as possible for research and development.
12) What are batch size and epochs in a model?
*The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
*The number of epochs is a hyperparameter that defines the number of times the learning algorithm will work through the entire training dataset.
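The two hyperparameters relate to the number of weight updates as follows (a sketch; dropping the final partial batch is a simplifying assumption, and `updates_per_training` is a made-up helper):

```python
def updates_per_training(n_samples, batch_size, epochs):
    """Total weight updates: one per batch, repeated for every epoch."""
    batches_per_epoch = n_samples // batch_size  # partial batch dropped for simplicity
    return batches_per_epoch * epochs

# 1000 samples, batches of 32, 10 epochs -> 31 updates per epoch.
n_updates = updates_per_training(n_samples=1000, batch_size=32, epochs=10)
```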
13) What are forward feeding, backward feeding, and backpropagation?
Forward feeding:
*The forward pass feeds the inputs through the network layer by layer, from the input layer to the output layer, to produce the prediction.
Backward feeding:
*The backward pass refers to the process of computing the changes in weights using the gradient descent algorithm.
*Computation is made from the last layer, backward to the first layer.
Backpropagation:
*The gradient can be efficiently evaluated by means of error backpropagation.
*The key idea of the backpropagation algorithm is to propagate errors from the output layer back to the input layer by the chain rule.
*It is used to repeatedly adjust the weights so as to minimize the loss (expected output - predicted output); in the simplest case there is only one layer of weights between the inputs and the outputs.
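For the simplest case - one layer of weights between input and output - both passes fit in a few lines (a sketch; `train_step` is an illustrative helper, and the squared-error loss is an assumed choice):

```python
def train_step(x, target, w, b, lr=0.1):
    """One forward and backward pass for a single linear neuron."""
    # Forward pass: compute the prediction from the current weights.
    pred = w * x + b
    # Backward pass: gradient of the squared error 0.5 * (pred - target)^2,
    # propagated back to w and b by the chain rule.
    error = pred - target
    w -= lr * error * x   # d(loss)/dw = error * x
    b -= lr * error       # d(loss)/db = error
    return w, b, pred

# Repeatedly adjust the weights to learn y = 2x + 1.
w, b = 0.0, 0.0
for _ in range(500):
    for x, y in [(1.0, 3.0), (2.0, 5.0)]:
        w, b, _ = train_step(x, y, w, b)
```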
14) What is stochastic gradient descent?
*Stochastic gradient descent (SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable).
*It is called stochastic because the method uses randomly selected samples to evaluate the gradients.
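A minimal sketch of the random-sample update (the `sgd` helper, the learning rate, and the toy dataset are all illustrative choices):

```python
import random

def sgd(data, lr=0.05, steps=100, seed=0):
    """Fit y = w * x by stochastic gradient descent: each update uses
    one randomly selected sample instead of the full dataset."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)   # random sample -> 'stochastic'
        error = w * x - y
        w -= lr * error * x       # gradient of 0.5 * error^2 w.r.t. w
    return w

# Noiseless samples of y = 3x; SGD should recover w close to 3.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]
w = sgd(data)
```

Because each step looks at only one sample, updates are cheap and noisy; over many steps they still drive `w` toward the minimizer.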