
Machine Learning Assignment Unit-4 Sem-5 BCA

Unit : 4 Supervised Learning

Subject : Machine Learning

Std : BCA Sem-5


1. Define : Supervised Learning.

Ans. Supervised learning is a type of machine learning in which machines are trained using well "labelled" training data and, on the basis of that data, predict the output. Labelled data means that the input data is already tagged with the correct output.

In supervised learning, the training data provided to the machine works as a supervisor that teaches the machine to predict the output correctly. It applies the same concept as a student learning under the supervision of a teacher.

Supervised learning is the process of providing input data as well as correct output data to the machine learning model. The aim of a supervised learning algorithm is to find a mapping function that maps the input variable (x) to the output variable (y).


How Does Supervised Learning Work?

In supervised learning, models are trained using a labelled dataset, where the model learns about each type of data. Once the training process is completed, the model is tested on test data (a held-out subset of the dataset), and then it predicts the output.

The working of supervised learning can be easily understood from the following example:

Suppose we have a dataset of different types of shapes, including squares, triangles, and hexagons. The first step is to train the model on each shape:

  • If the given shape has four sides, and all the sides are equal, then it will be labelled as a square.

  • If the given shape has three sides, then it will be labelled as a triangle.

  • If the given shape has six equal sides, then it will be labelled as a hexagon.
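A minimal code sketch of this idea, assuming we encode each shape as simple features (number of sides, whether all sides are equal) and use a scikit-learn decision tree as the learner; both choices are illustrative, not part of the original example:

```python
# Hypothetical sketch: supervised learning on labelled shape data.
from sklearn.tree import DecisionTreeClassifier

# Each row: [number_of_sides, all_sides_equal (1 = yes, 0 = no)]
X_train = [[4, 1], [3, 0], [6, 1], [4, 1], [3, 1], [6, 1]]
y_train = ["square", "triangle", "hexagon",
           "square", "triangle", "hexagon"]   # the "correct output" tags

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the labelled data acts as the supervisor

print(model.predict([[4, 1]]))     # expected output: ['square']
```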

Types of Supervised Machine Learning Algorithms:

1. Regression :

Regression algorithms are used when there is a relationship between the input variable and the output variable and the output is continuous. They are used for the prediction of continuous variables, such as weather forecasting, market trends, etc.

2. Classification :

Classification algorithms are used when the output variable is categorical, which means there are two or more classes such as Yes-No, Male-Female, True-False, etc.


Advantages of Supervised learning:

  • With the help of supervised learning, the model can predict the output on the basis of prior experiences.

  • In supervised learning, we can have an exact idea about the classes of objects.

  • Supervised learning models help us solve various real-world problems, such as fraud detection and spam filtering.

Disadvantages of supervised learning:

  • Supervised learning models are not suitable for handling very complex tasks.

  • Supervised learning cannot predict the correct output if the test data is different from the training dataset.

  • Training requires a lot of computation time.

  • In supervised learning, we need sufficient knowledge about the classes of objects.



2. Explain types of Supervised Learning Algorithm.

Ans. Supervised learning is typically divided into two main categories: regression and classification. In regression, the algorithm learns to predict a continuous output value, such as the price of a house or the temperature of a city.

Supervised learning can be further classified into two categories:

Regression :

Regression is a supervised learning technique used to predict continuous numerical values based on input features. It aims to establish a functional relationship between independent variables and a dependent variable, such as predicting house prices based on features like size, bedrooms, and location. The goal is to minimize the difference between predicted and actual values using algorithms like Linear Regression, Decision Trees, or Neural Networks, ensuring the model captures underlying patterns in the data.

Classification :

Classification is a type of supervised learning that categorizes input data into predefined labels. It involves training a model on labeled examples to learn patterns between input features and output classes. In classification, the target variable is a categorical value. For example, classifying emails as spam or not. The model’s goal is to generalize this learning to make accurate predictions on new, unseen data. Algorithms like Decision Trees, Support Vector Machines, and Neural Networks are commonly used for classification tasks.


Supervised Machine Learning Algorithms :

Supervised learning can be further divided into several different types, each with its own unique characteristics and applications. Here are some of the most common types of supervised learning algorithms:


Linear Regression: Linear regression is a type of regression algorithm that is used to predict a continuous output value. It is one of the simplest and most widely used algorithms in supervised learning. In linear regression, the algorithm tries to find a linear relationship between the input features and the output value. The output value is predicted based on the weighted sum of the input features.
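A minimal sketch of this weighted-sum idea with scikit-learn; the house-size data below is made up purely for illustration:

```python
# Hypothetical sketch: fitting y = w*x + b on made-up house data.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[50], [80], [110], [140]])   # house size (square metres)
y = np.array([150, 240, 330, 420])         # price in thousands (assumed)

reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.intercept_)   # the learned weight w and intercept b
print(reg.predict([[100]]))        # weighted sum of inputs -> about 300
```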


Logistic Regression: Logistic regression is a type of classification algorithm that is used to predict a binary output variable. It is commonly used in machine learning applications where the output variable is either true or false, such as in fraud detection or spam filtering. In logistic regression, the algorithm tries to find a linear relationship between the input features and the output variable. The output variable is then transformed using a logistic function to produce a probability value between 0 and 1.
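A short sketch of the probability output described above, again with made-up data; the single "spam score" feature is an assumption for illustration:

```python
# Hypothetical sketch: logistic regression squashes a linear score
# through the sigmoid to give a probability between 0 and 1.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1], [0.4], [0.6], [0.9], [1.2], [1.5]])  # assumed spam-score feature
y = np.array([0, 0, 0, 1, 1, 1])                          # 0 = not spam, 1 = spam

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[1.0]]))   # [P(not spam), P(spam)], each in (0, 1)
print(clf.predict([[1.0]]))         # class chosen by thresholding at 0.5
```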


Decision Trees: Decision tree is a tree-like structure that is used to model decisions and their possible consequences. Each internal node in the tree represents a decision, while each leaf node represents a possible outcome. Decision trees can be used to model complex relationships between input features and output variables. A decision tree is a type of algorithm that is used for both classification and regression tasks.

  • Decision Trees Regression: Decision Trees can be utilized for regression tasks by predicting the value linked with a leaf node.

  • Decision Trees Classification: The tree assigns an input example the class label associated with the leaf node it reaches.
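A brief sketch showing the same tree structure used for both tasks (toy data and scikit-learn estimators, assumed for illustration):

```python
# Hypothetical sketch: one tree algorithm, two kinds of leaf values.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[1], [2], [3], [4]]

clf = DecisionTreeClassifier().fit(X, ["a", "a", "b", "b"])
print(clf.predict([[1.5]]))   # class label stored at the reached leaf

reg = DecisionTreeRegressor().fit(X, [1.0, 2.0, 3.0, 4.0])
print(reg.predict([[1.5]]))   # numeric value stored at the reached leaf
```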

Random Forests: Random forests are made up of multiple decision trees that work together to make predictions. Each tree in the forest is trained on a different subset of the input features and data. The final prediction is made by aggregating the predictions of all the trees in the forest. Random forests are an ensemble learning technique that is used for both classification and regression tasks.

  • Random Forest Regression: It combines multiple decision trees to reduce overfitting and improve prediction accuracy.

  • Random Forest Classifier: Combines several decision trees to improve the accuracy of classification while minimizing overfitting.
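A minimal sketch of the ensemble idea using scikit-learn; the Iris dataset is used only as convenient sample data:

```python
# Hypothetical sketch: many trees, each seeing a random slice of the data;
# classification aggregates by vote, regression by averaging.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

X, y = load_iris(return_X_y=True)

rf_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf_clf.predict(X[:2]))   # majority vote across the 100 trees

# Treating the integer labels as a numeric target just to show the API:
rf_reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf_reg.predict(X[:2]))   # average prediction across the 100 trees
```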


Support Vector Machine (SVM): The SVM algorithm creates a hyperplane to segregate n-dimensional space into classes and identify the correct category of new data points. The extreme cases that help create the hyperplane are called support vectors, hence the name Support Vector Machine. A Support Vector Machine is a type of algorithm that is used for both classification and regression tasks.

  • Support Vector Regression: It is an extension of Support Vector Machines (SVM) used for predicting continuous values.

  • Support Vector Classifier: It aims to find the best hyperplane that maximizes the margin between data points of different classes.
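A small sketch of both variants on toy 2-D data; the linear kernel is chosen only for simplicity:

```python
# Hypothetical sketch: SVC separates classes with a max-margin hyperplane;
# SVR fits a function while tolerating errors inside a margin.
from sklearn.svm import SVC, SVR

X = [[0, 0], [1, 1], [2, 2], [3, 3]]

svc = SVC(kernel="linear").fit(X, [0, 0, 1, 1])
print(svc.support_vectors_)        # the extreme points defining the hyperplane
print(svc.predict([[1.5, 1.5]]))

svr = SVR(kernel="linear").fit(X, [0.0, 1.0, 2.0, 3.0])
print(svr.predict([[1.5, 1.5]]))   # a continuous prediction
```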


K-Nearest Neighbors (KNN): KNN works by finding the k training examples closest to a given input and then predicts the class or value based on the majority class or the average value of these neighbors. Its performance can be influenced by the choice of k and the distance metric used to measure proximity. KNN is intuitive, but it can be sensitive to noisy data and requires careful selection of k for optimal results. KNN is a type of algorithm that is used for both classification and regression tasks.

  • K-Nearest Neighbors Regression: It predicts continuous values by averaging the outputs of the k closest neighbors.

  • K-Nearest Neighbors Classification: Data points are classified based on the majority class of their k closest neighbors.
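The neighbor search itself is simple enough to sketch from scratch; Euclidean distance and k = 3 are assumed choices here:

```python
# Hypothetical sketch of the core KNN classification step.
import math
from collections import Counter

def knn_classify(X_train, y_train, query, k=3):
    # Distance from the query to every training point.
    dists = sorted((math.dist(x, query), label)
                   for x, label in zip(X_train, y_train))
    nearest = [label for _, label in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]   # majority class

X_train = [(1, 1), (1, 2), (2, 1), (6, 6), (6, 7), (7, 6)]
y_train = ["red", "red", "red", "blue", "blue", "blue"]
print(knn_classify(X_train, y_train, (2, 2)))      # -> 'red'
```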


Gradient Boosting: Gradient Boosting combines weak learners, like decision trees, to create a strong model. It iteratively builds new models that correct errors made by previous ones. Each new model is trained to minimize the residual errors, resulting in a powerful predictor capable of handling complex data relationships. Gradient Boosting is a type of algorithm that is used for both classification and regression tasks.

  • Gradient Boosting Regression: It builds an ensemble of weak learners to improve prediction accuracy through iterative training.

  • Gradient Boosting Classification: Creates a group of classifiers to continually enhance the accuracy of predictions through iterations.
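A short sketch of the iterative residual-correcting behaviour, on synthetic data chosen just for illustration:

```python
# Hypothetical sketch: each new tree is fit to the residual errors of the
# ensemble built so far, gradually improving the fit.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)   # noisy nonlinear target

gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1).fit(X, y)
print(gbr.predict([[5.0]]))   # should land near sin(5.0) ≈ -0.96
```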



3. Explain advantages and disadvantages of Supervised Learning.

Ans.

Advantages of Supervised Learning :

The power of supervised learning lies in its ability to accurately predict patterns and make data-driven decisions across a variety of applications. Here are some advantages listed below:

  1. Labeled training data benefits supervised learning by enabling models to accurately learn patterns and relationships between inputs and outputs.

  2. Supervised learning models can accurately predict and classify new data.

  3. Supervised learning has a wide range of applications, including classification, regression, and even more complex problems like image recognition and natural language processing.

  4. Well-established evaluation metrics, including accuracy, precision, recall, and F1-score, facilitate the assessment of supervised learning model performance.

Disadvantages of Supervised Learning :

Although supervised learning methods have benefits, their limitations require careful consideration during problem formulation, data collection, model selection, and evaluation. Here are some disadvantages listed below:

  1. Overfitting: Models can overfit training data, which leads to poor performance on new, unseen data due to the capture of noise.

  2. Feature Engineering: Extracting relevant features from raw data is crucial for model performance, but this process can be time-consuming and may require domain expertise.

  3. Bias in Models: Training data biases can lead to unfair predictions.

  4. Supervised learning heavily depends on labeled training data, which can be costly, time-consuming, and may require domain expertise.


4. Discuss types of dataset in detail.

Ans. The input data used to build a model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: the training, validation, and test sets.


Training data set :

A training data set is a set of examples used during the learning process to fit the parameters (e.g., the weights) of, for example, a classifier.

For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a good predictive model. The goal is to produce a trained (fitted) model that generalizes well to new, unknown data. The fitted model is evaluated using “new” examples from the held-out datasets (validation and test datasets) to estimate the model’s accuracy in classifying new data. To reduce the risk of issues such as over-fitting, the examples in the validation and test datasets should not be used to train the model.

Most approaches that search through training data for empirical relationships tend to overfit the data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general.


Validation data set :

A validation data set is a data set of examples used to tune the hyperparameters (i.e. the architecture) of a classifier. It is sometimes also called the development set or the "dev set". An example of a hyperparameter for artificial neural networks is the number of hidden units in each layer. The validation set, like the test set (discussed below), should follow the same probability distribution as the training data set.

In order to avoid overfitting, when any classification parameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test datasets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such as accuracy, sensitivity, specificity, F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing.


Test data set :

A test data set is a data set that is independent of the training data set, but that follows the same probability distribution as the training data set. If a model fit to the training data set also fits the test data set well, minimal overfitting has taken place. A better fit on the training data set than on the test data set usually points to over-fitting.

A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier. To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy.

In a scenario where both validation and test datasets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test datasets), the test data set might assess the model only once (e.g., in the holdout method). Note that some sources advise against such a method. However, when using a method such as cross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.
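A minimal sketch of carving one dataset into the three sets described above; the 60/20/20 ratios are an illustrative assumption:

```python
# Hypothetical sketch: train / validation / test split via two calls.
from sklearn.model_selection import train_test_split

X = list(range(100))
y = [i % 2 for i in X]

# Hold out 20% as the test set first...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# ...then split the remainder into training and validation (0.25 * 80% = 20%).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 60 20 20
```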



5. Explain Classification in Detail.

Ans. Classification is a supervised machine learning method where the model tries to predict the correct label of a given input data. In classification, the model is fully trained using the training data, and then it is evaluated on test data before being used to perform prediction on new unseen data.

For instance, an algorithm can learn to predict whether a given email is spam or ham (not spam).

Before diving into the classification concept, we will first understand the difference between the two types of learners in classification: lazy and eager learners. Then we will clarify the misconception between classification and regression.

Lazy Learners Vs. Eager Learners :

There are two types of learners in machine learning classification: lazy and eager learners.

Eager learners are machine learning algorithms that first build a model from the training dataset before making any predictions on future data. They spend more time during the training process because they learn weights that generalize better, but they require less time to make predictions.

Most machine learning algorithms are eager learners, and below are some examples:

  • Logistic Regression.

  • Support Vector Machine.

  • Decision Trees.

  • Artificial Neural Networks.

Lazy learners or instance-based learners, on the other hand, do not create any model immediately from the training data, and this is where the lazy aspect comes from. They just memorize the training data, and each time there is a need to make a prediction, they search for the nearest neighbor from the whole training data, which makes them very slow during prediction. Some examples of this kind are:

  • K-Nearest Neighbor.

  • Case-based reasoning.

However, data structures such as Ball Trees and KD-Trees can be used to improve prediction latency.

Machine Learning Classification Vs. Regression :

There are four main categories of Machine Learning algorithms: supervised, unsupervised, semi-supervised, and reinforcement learning.

Even though classification and regression are both from the category of supervised learning, they are not the same.

  • The prediction task is a classification when the target variable is discrete. An application is the identification of the underlying sentiment of a piece of text.

  • The prediction task is a regression when the target variable is continuous. An example can be the prediction of the salary of a person given their education degree, previous work experience, geographical location, and level of seniority.

Examples of Machine Learning Classification in Real Life :

Healthcare :

Training a machine learning model on historical patient data can help healthcare specialists make more accurate diagnoses:

  • During the COVID-19 pandemic, machine learning models were implemented to efficiently predict whether a person had COVID-19 or not.

  • Researchers can use machine learning models to predict new diseases that are more likely to emerge in the future.

Education :

Education is one of the domains dealing with the most textual, video, and audio data. This unstructured information can be analyzed with the help of Natural Language Processing technologies to perform different tasks such as:

  • The classification of documents per category.

  • Automatic identification of the underlying language of students' documents during their application.

  • Analysis of students’ feedback sentiments about a Professor.

Transportation :

Transportation is the key component of many countries' economic development. As a result, industries are using machine and deep learning models:

  • To predict which geographical location will have a rise in traffic volume.

  • Predict potential issues that may occur in specific locations due to weather conditions.

Sustainable agriculture :

Agriculture is one of the most valuable pillars of human survival. Introducing sustainability can help improve farmers' productivity at different levels without damaging the environment:

  • By using classification models to predict which type of land is suitable for a given type of seed.

  • Predict the weather to help them take proper preventive measures.

Different Types of Classification Tasks in Machine Learning :

There are four main classification tasks in Machine learning: binary, multi-class, multi-label, and imbalanced classifications.

Binary Classification :

In a binary classification task, the goal is to classify the input data into two mutually exclusive categories. The training data in such a situation is labeled in a binary format: true and false; positive and negative; 0 and 1; spam and not spam; etc., depending on the problem being tackled. For instance, we might want to detect whether a given image shows a truck or a boat.

Logistic Regression and Support Vector Machine algorithms are natively designed for binary classification. However, other algorithms such as K-Nearest Neighbors and Decision Trees can also be used for binary classification.


Multi-Class Classification :

Multi-class classification, on the other hand, has more than two mutually exclusive class labels, and the goal is to predict the class to which a given input example belongs. For example, such a model might correctly classify an image as a plane.

Most binary classification algorithms can also be used for multi-class classification. These algorithms include, but are not limited to:

Random Forest, Naive Bayes, K-Nearest Neighbors, Gradient Boosting, SVM, Logistic Regression.

One-versus-one: this strategy trains as many classifiers as there are pairs of labels. If we have a 3-class classification, we will have three pairs of labels, thus three classifiers.

One-versus-rest: in this strategy, we consider each label independently and combine the rest into a single label. With 3 classes, we will again have three classifiers.
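A quick sketch of both strategies using scikit-learn's wrappers; the base classifier (logistic regression) and the 3-class Iris data are illustrative choices:

```python
# Hypothetical sketch: one-vs-one vs. one-vs-rest on a 3-class problem.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)   # 3 classes

ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(len(ovo.estimators_))   # 3 classifiers: one per pair of labels

ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(len(ovr.estimators_))   # 3 classifiers: one per label vs. the rest
```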


Multi-Label Classification :

In multi-label classification tasks, we try to predict 0 or more classes for each input example. In this case, there is no mutual exclusion because the input example can have more than one label.

Such a scenario can be observed in different domains, such as auto-tagging in Natural Language Processing, where a given text can contain multiple topics. Similarly, in computer vision an image can contain multiple objects: for example, a model might predict that a single image contains a plane, a boat, a truck, and a dog.

It is not possible to use multi-class or binary classification models to perform multi-label classification. However, most algorithms used for those standard classification tasks have their specialized versions for multi-label classification.


Imbalanced Classification :

For imbalanced classification, the number of examples is unevenly distributed across the classes, meaning that we can have more of one class than the others in the training data. Let’s consider the following 3-class classification scenario where the training data contains 60% trucks, 25% planes, and 15% boats.

The imbalanced classification problem could occur in the following scenario:

  • Fraudulent transaction detections in financial industries

  • Rare disease diagnosis

  • Customer churn analysis
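One common mitigation is to weight the classes during training; here is a minimal sketch with synthetic data (used only to show the API, not a real fraud dataset):

```python
# Hypothetical sketch: class weighting makes minority-class mistakes
# cost more during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(1000, 2)
y = (X[:, 0] + rng.randn(1000) > 2.0).astype(int)   # rare positive class

print("class counts:", np.bincount(y))              # heavily imbalanced
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(clf.predict([[3.0, 0.0]]))                    # likely the rare class
```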



6. Explain Regression in detail.

Ans. Machine Learning Regression is a technique for investigating the relationship between independent variables or features and a dependent variable or outcome. It’s used as a method for predictive modelling in machine learning, in which an algorithm is used to predict continuous outcomes.

Solving regression problems is one of the most common applications for machine learning models, especially in supervised machine learning. Algorithms are trained to understand the relationship between independent variables and an outcome or dependent variable. The model can then be leveraged to predict the outcome of new and unseen input data, or to fill a gap in missing data.


What are the types of regression?

There is a range of different approaches used in machine learning to perform regression, and various popular algorithms are used to achieve it. The different techniques may include different numbers of independent variables or process different types of data. Distinct types of machine learning regression models may also assume a different relationship between the independent and dependent variables. For example, linear regression techniques assume that the relationship is linear, so they wouldn’t be effective with datasets that have nonlinear relationships.

Some of the most common regression techniques in machine learning can be grouped into the following types of regression analysis:

  • Simple Linear Regression

  • Multiple linear regression

  • Logistic regression

Simple Linear Regression : Simple linear regression is a linear regression technique that fits a straight line to the data points so as to minimise the error between the line and the points. It is one of the simplest and most basic types of machine learning regression. The relationship between the independent and dependent variables is assumed to be linear, and the approach is "simple" because it explores the relationship between the dependent variable and a single independent variable. Because the fit is a single straight line of best fit, simple linear regression can be sensitive to outliers.


Multiple Linear Regression : Multiple linear regression is a technique used when more than one independent variable is involved. Polynomial regression is an example of a multiple linear regression technique. It achieves a better fit than simple linear regression when multiple independent variables are involved; when plotted in two dimensions, the result is a curved line fitted to the data points.
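A short sketch of polynomial regression as "multiple linear regression on expanded features"; degree 2 and the toy quadratic data are assumptions:

```python
# Hypothetical sketch: expand x into [x, x^2], then fit a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1], [2], [3], [4], [5]])
y = np.array([1, 4, 9, 16, 25])   # quadratic target: y = x^2

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print(model.predict([[6]]))       # close to 36, from a curved fit
```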


Logistic regression : Logistic regression is used when the dependent variable can have one of two values, such as true or false, or success or failure. Logistic regression models can be used to predict the probability of a dependent variable occurring. Generally, the output values must be binary. A sigmoid curve can be used to map the relationship between the dependent variable and independent variables.



7. Explain Decision tree in brief.

Ans. A decision tree is a non-parametric supervised learning algorithm, which is utilized for both classification and regression tasks. It has a hierarchical, tree structure, which consists of a root node, branches, internal nodes and leaf nodes.

A decision tree starts with a root node, which does not have any incoming branches. The outgoing branches from the root node then feed into the internal nodes, also known as decision nodes. Based on the available features, both node types conduct evaluations to form homogeneous subsets, which are denoted by leaf nodes, or terminal nodes. The leaf nodes represent all the possible outcomes within the dataset. As an example, imagine that you were trying to assess whether or not you should go surfing; you might use a sequence of decision rules (for instance, about wave and wind conditions) to reach a yes or no choice.

This type of flowchart structure also creates an easy to digest representation of decision-making, allowing different groups across an organization to better understand why a decision was made.

Decision tree learning employs a divide-and-conquer strategy, conducting a greedy search to identify the optimal split points within a tree. This process of splitting is then repeated in a top-down, recursive manner until all, or the majority of, records have been classified under specific class labels.

Whether or not all data points are classified into homogeneous sets largely depends on the complexity of the decision tree. Smaller trees are more easily able to attain pure leaf nodes, i.e. leaves whose data points all belong to a single class. However, as a tree grows in size, it becomes increasingly difficult to maintain this purity, and it usually results in too little data falling within a given subtree. When this occurs, it is known as data fragmentation, and it can often lead to overfitting. As a result, decision tree learning prefers small trees, which is consistent with the principle of parsimony in Occam’s Razor: “entities should not be multiplied beyond necessity.” Said differently, decision trees should add complexity only if necessary, as the simplest explanation is often the best.

To reduce complexity and prevent overfitting, pruning is usually employed; this is a process that removes branches that split on features with low importance. The model’s fit can then be evaluated through cross-validation. Another way that decision trees can maintain their accuracy is by forming an ensemble via a random forest algorithm; such a classifier predicts more accurate results, particularly when the individual trees are uncorrelated with each other.



8. How does the Decision Tree algorithm Work ? Explain.

Ans. In a decision tree, to predict the class of a given dataset, the algorithm starts from the root node of the tree. It compares the value of the root attribute with the corresponding attribute of the record (from the real dataset) and, based on the comparison, follows the branch and jumps to the next node.

For the next node, the algorithm again compares the attribute value with those of the other sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the algorithm below:

  • Step-1: Begin the tree with the root node, say S, which contains the complete dataset.

  • Step-2: Find the best attribute in the dataset using Attribute Selection Measure (ASM).

  • Step-3: Divide S into subsets that contain the possible values of the best attribute.

  • Step-4: Generate the decision tree node, which contains the best attribute.

  • Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; these final nodes are called leaf nodes.

Attribute Selection Measures :

While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure (ASM). Using this measure, we can easily select the best attribute for the nodes of the tree. There are two popular ASM techniques:

  • Information Gain

  • Gini Index

1. Information Gain:

  • Information gain is the measurement of changes in entropy after the segmentation of a dataset based on an attribute.

  • It calculates how much information a feature provides us about a class.

  • According to the value of information gain, we split the node and build the decision tree.

  • A decision tree algorithm always tries to maximize the value of information gain, and a node/attribute having the highest information gain is split first. It can be calculated using the below formula:

➡️ Information Gain = Entropy(S) − [Weighted Avg × Entropy(each feature)]

2. Gini Index:

  • Gini index is a measure of impurity or purity used while creating a decision tree in the CART(Classification and Regression Tree) algorithm.

  • An attribute with a low Gini index should be preferred over one with a high Gini index.

  • It creates only binary splits, and the CART algorithm uses the Gini index to create those binary splits.

  • Gini index can be calculated using the below formula:

➡️ Gini Index = 1 − Σj (Pj)²
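A small sketch computing both measures for a toy split, following the two formulas above (the 9-yes/5-no counts are made up):

```python
# Hypothetical sketch: entropy, information gain, and Gini on toy labels.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

parent = ["yes"] * 9 + ["no"] * 5                 # node before the split
left  = ["yes"] * 6 + ["no"] * 1                  # one child after the split
right = ["yes"] * 3 + ["no"] * 4                  # the other child

weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
print("Information Gain:", entropy(parent) - weighted)
print("Gini Index (parent):", gini(parent))
```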



9. Explain advantages and disadvantages of Decision Tree.

Ans.

Advantages of the Decision Tree :

  • It is simple to understand, as it follows the same process that a human follows while making a decision in real life.

  • It can be very useful for solving decision-related problems.

  • It helps to think about all the possible outcomes for a problem.

  • It requires less data cleaning compared to other algorithms.

Disadvantages of the Decision Tree :

  • The decision tree contains lots of layers, which makes it complex.

  • It may have an overfitting issue, which can be resolved using the Random Forest algorithm.

  • For more class labels, the computational complexity of the decision tree may increase.





