---
id: unsupervised-learning-algorithms
title: Unsupervised Learning Algorithms
sidebar_label: Unsupervised Learning
description: "In this post, we’ll explore the concept of unsupervised learning, a fundamental approach in machine learning where models are trained using unlabeled data."
tags: [machine learning, algorithms, unsupervised learning]
---

### Definition:
**Unsupervised Learning** is a type of machine learning where an algorithm learns from unlabeled data to identify patterns, groupings, or structures within the data. Unlike in supervised learning, there are no predefined labels or outcomes to guide the learning process.

<AdsComponent />

### Characteristics:
- **Unlabeled Data**:
  Unsupervised learning requires a dataset that does not include output labels; the algorithm must discover the inherent structure of the data on its own.

- **Pattern Recognition**:
  The primary goal is to uncover hidden patterns or groupings in the data without prior knowledge of what those patterns may be.

- **Exploratory Analysis**:
  Often used for exploratory data analysis to understand the underlying distribution and relationships in the data.

### Types of Unsupervised Learning Algorithms:

1. **Clustering Algorithms**:
   Used to group similar data points together (a short DBSCAN sketch follows this list). Examples include:
   - **K-Means Clustering**: Partitions data into K clusters based on feature similarity.
   - **Hierarchical Clustering**: Builds a hierarchy of clusters using either agglomerative or divisive approaches.
   - **DBSCAN (Density-Based Spatial Clustering of Applications with Noise)**: Groups points that are closely packed together while marking outliers as noise.

2. **Dimensionality Reduction Algorithms**:
   Used to reduce the number of features in a dataset while preserving its essential structure (a PCA sketch follows this list). Examples include:
   - **Principal Component Analysis (PCA)**: Transforms data into a lower-dimensional space while retaining as much variance as possible.
   - **t-SNE (t-distributed Stochastic Neighbor Embedding)**: A technique for visualizing high-dimensional data in two or three dimensions.

3. **Association Rule Learning**:
   A method for discovering interesting relations between variables in large databases (an Apriori sketch follows this list). An example is:
   - **Apriori Algorithm**: Used for mining frequent itemsets and deriving association rules from them.
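
To make the clustering family concrete, here is a minimal DBSCAN sketch using scikit-learn. The toy data and the `eps`/`min_samples` values are illustrative assumptions, not recommendations:

```python
from sklearn.cluster import DBSCAN
import numpy as np

# Two dense blobs plus one far-away point (illustrative toy data)
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
              [8.0, 8.1], [8.2, 7.9], [7.9, 8.0],
              [25.0, 25.0]])

# eps is the neighborhood radius; min_samples is the number of points
# required to form a dense region. Both must be tuned per dataset.
db = DBSCAN(eps=1.0, min_samples=2)
labels = db.fit_predict(X)
print(labels)  # [0 0 0 1 1 1 -1]; label -1 marks the outlier as noise
```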
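
Similarly, a minimal PCA sketch; the synthetic dataset below is an assumption, constructed so that most of the variance lies along two directions:

```python
from sklearn.decomposition import PCA
import numpy as np

rng = np.random.default_rng(42)
base = rng.normal(size=(100, 2))                        # 2 underlying factors
X = np.hstack([base, base @ rng.normal(size=(2, 3))])   # 5 correlated features

# Project the 5-dimensional data onto its 2 main directions of variance
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print(X_2d.shape)                      # (100, 2)
print(pca.explained_variance_ratio_)   # sums to ~1.0: the data is ~2-dimensional
```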
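
And a small association-rule sketch. scikit-learn does not ship Apriori, so this assumes the third-party `mlxtend` package is installed (`pip install mlxtend`); the basket data is made up:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded transactions: each row is a shopping basket
baskets = pd.DataFrame({
    'bread':  [1, 1, 0, 1, 1],
    'butter': [1, 1, 0, 0, 1],
    'milk':   [0, 1, 1, 1, 1],
}).astype(bool)

# Frequent itemsets appearing in at least 60% of baskets
frequent = apriori(baskets, min_support=0.6, use_colnames=True)

# Rules such as "butter -> bread" with confidence >= 0.7
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[['antecedents', 'consequents', 'support', 'confidence']])
```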

<Ads />

### Steps Involved:
1. **Input the Data**:
   The algorithm receives unlabeled training data consisting of features without corresponding target labels.

2. **Preprocess the Data**:
   Data cleaning and preprocessing steps may include handling missing values, normalizing or scaling features, and encoding categorical variables (see the scaling sketch after these steps).

3. **Select an Algorithm**:
   Choose an appropriate unsupervised learning algorithm based on the problem type (clustering, dimensionality reduction, etc.).

4. **Fit the Model**:
   Apply the chosen algorithm to the dataset to identify patterns or groupings.

5. **Evaluate Results**:
   Use metrics such as the silhouette score for clustering or explained variance for dimensionality reduction to assess how well the model performs (see the silhouette sketch after these steps).

6. **Interpret Findings**:
   Analyze the results to draw insights or make decisions based on the identified patterns.
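
As a concrete example of step 2, here is a minimal scaling sketch using scikit-learn's `StandardScaler`, applied to the customer features from the example further below:

```python
from sklearn.preprocessing import StandardScaler
import numpy as np

# Age, Annual Income, Spending Score (from the dataset example below)
X = np.array([[25, 50_000, 39],
              [30, 60_000, 81],
              [22, 45_000, 6],
              [35, 70_000, 77]], dtype=float)

# Standardize each feature to zero mean and unit variance so that the
# large-scale income column does not dominate distance computations
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled.mean(axis=0).round(2))  # ~[0. 0. 0.]
print(X_scaled.std(axis=0).round(2))   # ~[1. 1. 1.]
```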
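
And for step 5, a sketch of scoring cluster quality with the silhouette score; the blob dataset is a synthetic assumption:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with three natural clusters
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Scores closer to 1 indicate compact, well-separated clusters;
# the true cluster count (k=3) should score highest here
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
```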

<AdsComponent />

### Problem Statement:
Given an unlabeled dataset with multiple features, the objective is to identify patterns, group similar observations, or reduce dimensionality to facilitate further analysis.

### Key Concepts:
- **Clustering**:
  The process of grouping a set of objects so that objects in the same group (or cluster) are more similar to each other than to those in other groups.

- **Dimensionality Reduction**:
  Techniques used to reduce the number of input variables in a dataset while preserving important information.

- **Association Rules**:
  Rules that describe how items are associated with each other in transactional datasets.

<Ads />

### Split Criteria:
Unsupervised learning algorithms typically partition data based on inherent structure or distances between points, rather than by minimizing prediction error against known labels.

### Time Complexity:
- **Training Complexity**:
  Varies by algorithm, from roughly linear per pass for simple methods (e.g., one iteration of K-Means costs O(n · k · d) for n points, k clusters, and d features) to quadratic or worse for methods such as agglomerative hierarchical clustering.

- **Prediction Complexity**:
  Also varies by algorithm; some models make assigning new points cheap after training (e.g., K-Means only needs the distance from a new point to each of the k centroids).

### Space Complexity:
Depends on how much information about the training set needs to be stored (e.g., agglomerative hierarchical clustering typically holds a pairwise distance matrix, while K-Means only keeps the k centroids and cluster assignments).

### Example:
Consider a scenario where we want to segment customers based on purchasing behavior without labeled outcomes.

**Dataset Example:**

| Customer ID | Age | Annual Income | Spending Score |
|-------------|-----|---------------|----------------|
| 1           | 25  | 50K           | 39             |
| 2           | 30  | 60K           | 81             |
| 3           | 22  | 45K           | 6              |
| 4           | 35  | 70K           | 77             |

**Step-by-Step Execution:**

1. **Input Data**:
   The model receives training data with features (age, income, spending score).

2. **Preprocess Data**:
   Handle any missing values and scale features if necessary.

3. **Select Algorithm**:
   Choose K-Means for clustering customers based on their spending behavior.

4. **Fit Model**:
   Apply K-Means to group customers into clusters based on their features.

5. **Evaluate Performance**:
   Use the silhouette score to evaluate how well-defined the clusters are.

6. **Interpret Findings**:
   Analyze customer segments to tailor marketing strategies accordingly.

<AdsComponent />

### Python Implementation:
Here’s a basic implementation of K-Means clustering using **scikit-learn**:

```python
from sklearn.cluster import KMeans
import pandas as pd
import matplotlib.pyplot as plt

# Sample dataset (the same customers as in the table above)
data = {
    'Age': [25, 30, 22, 35],
    'Annual Income': [50000, 60000, 45000, 70000],
    'Spending Score': [39, 81, 6, 77]
}

# Convert to DataFrame
df = pd.DataFrame(data)

# Create the KMeans model; n_init=10 runs the algorithm with 10 different
# centroid seeds and keeps the best result
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)

# Fit the model to the customer features
kmeans.fit(df)

# Assign each customer to its nearest centroid
labels = kmeans.predict(df)

# Visualize the clusters; the columns are Age (0), Annual Income (1), and
# Spending Score (2), so the centroid coordinates use indices 1 and 2
plt.scatter(df['Annual Income'], df['Spending Score'], c=labels)
plt.scatter(kmeans.cluster_centers_[:, 1], kmeans.cluster_centers_[:, 2],
            s=300, c='red', label='Centroids')
plt.title('Customer Segmentation using K-Means')
plt.xlabel('Annual Income')
plt.ylabel('Spending Score')
plt.legend()
plt.show()
```
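
Because K-Means is distance-based, the large raw `Annual Income` values dominate the distance computations in this toy example; in practice, the features would usually be standardized first, as in the preprocessing sketch shown earlier.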