K-means Clustering

Customer Segmentation for E-commerce

Vishal Reddy Jakkareddy & PruthviRaj Amgoth (Advisor: Dr. Cohen)

2025-04-14

Introduction

  • Customer segmentation is a key strategy in e-commerce that helps businesses classify customers based on their purchasing behavior.
  • It enables targeted marketing efforts and supports customer retention strategies.
  • Traditionally, segmentation was based on demographic attributes.
  • With the rise of machine learning, data-driven methods like K-Means clustering have become more prevalent.
  • This study uses K-Means clustering to analyze an online retail dataset and identify customer groups based on purchase behaviors.
  • Recent research emphasizes the effectiveness of clustering algorithms for customer segmentation.

Introduction

  • K-Means clustering is favored for its efficiency and ease of implementation (Paramita and Hariguna 2024).
  • The study adopts K-Means clustering using R to enable a data-driven approach for customer analysis and business decision-making.

Methods

K-Means Clustering Algorithm

The K-Means clustering algorithm is an iterative approach for partitioning a dataset into k clusters. It follows these steps to achieve convergence:

  1. Initialize Cluster Centers:
    • Select k initial cluster centroids randomly, or use K-Means++ for optimized placement (K-Means++ is an improved initialization technique that places centroids far apart to enhance convergence and accuracy).
    • K-Means++ in particular ensures the centroids are well separated initially.

Initialize Cluster Centers (Image Source: Shiyu Gong)

  2. Assign Data Points:
    • Compute the distance between each data point and all centroids.
    • Assign each data point to the nearest cluster based on a proximity measure (e.g., Euclidean distance).
    • The assignment is done by minimizing the distance:

\[ D(x_i, Z_{j^*}^{(I)}) = \min_{j = 1, 2, \ldots, k} D(x_i, Z_j^{(I)}) \]

  • \(x_i\) represents a data point.
  • \(D(x_i, Z_j^{(I)})\) represents the distance between the data point \(x_i\) and the cluster centroid \(Z_j^{(I)}\).
  • \(j^*\) is the index of the cluster to which \(x_i\) is assigned.
  • \(k\) is the number of clusters.

Assign Data Points (Image Source: Shiyu Gong)

  3. Update Cluster Centers:
    • Recalculate the centroid of each cluster by computing the mean of all assigned points:

\[ Z_j^{(I)} = \frac{1}{n_j} \sum_{i=1}^{n_j} X_i^{(j)} \]

Where:

  • \(X_i^{(j)}\) are the data points assigned to cluster \(j\), and \(n_j\) is the number of such points.
  • \(Z_j^{(I)}\) is the centroid of cluster \(j\) at iteration \(I\).

Update Cluster Centers (Image Source: Shiyu Gong)

  4. Repeat Until Convergence:
    • Steps 2 and 3 are repeated iteratively until no significant change occurs in the centroids.
    • The algorithm stops when cluster assignments remain stable or meet a convergence threshold (Tabianan, Velu, and Ravi 2022).
    • The final clustering result aims to minimize the error square sum criterion:

\[ J = \sum_{j=1}^{k} \sum_{i=1}^{n_j} \left\| X_i^{(j)} - Z_j^{(I)} \right\|^2 \]

Where \(J\) represents the total clustering variance (the error square sum) that the algorithm seeks to minimize.

Repeat Until Convergence (Image Source: Shiyu Gong)
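
To make the iteration concrete, the following base-R sketch implements steps 1-4 directly. It is an illustrative reconstruction (the function name and details are our own, and it omits refinements such as K-Means++ initialization and empty-cluster handling):

Code
# Minimal K-Means sketch following steps 1-4 (illustrative, not production code)
kmeans_sketch <- function(X, k, max_iter = 100, tol = 1e-8) {
  X <- as.matrix(X)
  centers <- X[sample(nrow(X), k), , drop = FALSE]   # Step 1: random initialization
  for (iter in seq_len(max_iter)) {
    # Step 2: assign each point to its nearest centroid (Euclidean distance)
    d <- as.matrix(dist(rbind(centers, X)))[-(1:k), 1:k]
    cluster <- max.col(-d)                           # row-wise argmin of the distances
    # Step 3: recompute each centroid as the mean of its assigned points
    new_centers <- t(sapply(seq_len(k), function(j)
      colMeans(X[cluster == j, , drop = FALSE])))
    # Step 4: stop once the centroids no longer move appreciably
    if (max(abs(new_centers - centers)) < tol) break
    centers <- new_centers
  }
  list(cluster = cluster, centers = centers)
}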

Selection of k (Number of Clusters)

The optimal number of clusters k is determined using the Elbow Method, which plots the Within-Cluster Sum of Squares (WCSS), a measure of the variance within clusters, against candidate values of k. The elbow point, where the rate of decrease in WCSS diminishes, indicates the optimal k.

\[ \text{WCSS} = \sum_{x_i \in Z_1^{(I)}} \text{distance}(x_i, z_1^{(I)})^2 + \sum_{x_i \in Z_2^{(I)}} \text{distance}(x_i, z_2^{(I)})^2 + \cdots + \sum_{x_i \in Z_k^{(I)}} \text{distance}(x_i, z_k^{(I)})^2 \]

Where:

  • \(x_i\): a data point belonging to cluster \(j\)
  • \(z_j^{(I)}\): the centroid of cluster \(j\)
  • \(Z_j^{(I)}\): the set of points in cluster \(j\)

Elbow Method (Image Source: Soumo Chatterjee)

Proximity Measures

K-Means clustering assigns points to clusters based on distance metrics. The most commonly used metrics are:

  • Euclidean Distance (Most Common in K-Means):

\[ D_{\text{euclidean}}(x_1, x_2) = \sqrt{ \sum_{i=1}^{n} (x_{1i} - x_{2i})^2 } \]

  • Manhattan Distance (Useful for Grid-Based Data):

\[ D_{\text{manhattan}}(x_1, x_2) = \sum_{i=1}^{n} |x_{1i} - x_{2i}| \]

Choosing an appropriate distance metric affects cluster shape and performance (Bradley, Bennett, and Demiriz 2000).
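
As a quick illustration with toy numbers of our own choosing, both metrics are easy to compute in R:

Code
# Toy example: distances between two 3-dimensional points
x1 <- c(2, 4, 6)
x2 <- c(1, 1, 2)
sqrt(sum((x1 - x2)^2))   # Euclidean: sqrt(1 + 9 + 16) = sqrt(26), about 5.10
sum(abs(x1 - x2))        # Manhattan: 1 + 3 + 4 = 8

# The same results via the built-in dist()
dist(rbind(x1, x2), method = "euclidean")
dist(rbind(x1, x2), method = "manhattan")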

Data Exploration and Visualization

  • The Online Retail dataset, sourced from the UCI Machine Learning Repository, contains transactional data from a UK-based e-commerce store. This store specializes in selling unique giftware, which is frequently purchased in bulk by customers. The dataset spans transactions recorded between December 1, 2010, and December 9, 2011 (Chen 2015).
  • This dataset is particularly useful for customer segmentation, sales analysis, and market trend evaluation. It includes eight attributes that provide insights into customer purchases, product details, and order quantities. The data can be leveraged for analyzing buying behaviors, identifying customer clusters, and predicting future sales trends.

The dataset contains transactional records from an online retail store. The key attributes in the dataset include:

  • InvoiceNo: Unique invoice number for transactions
  • StockCode: Product code
  • Description: Product name
  • Quantity: Quantity purchased
  • InvoiceDate: Date and time of the purchase
  • UnitPrice: Price per unit
  • CustomerID: Unique identifier for customers
  • Country: Country where the transaction occurred
First Few Records of the Dataset

| InvoiceNo | StockCode | Description | Quantity | InvoiceDate | UnitPrice | CustomerID | Country |
|---|---|---|---|---|---|---|---|
| 536365 | 85123A | WHITE HANGING HEART T-LIGHT HOLDER | 6 | 2010-12-01 08:26:00 | 2.55 | 17850 | United Kingdom |
| 536365 | 71053 | WHITE METAL LANTERN | 6 | 2010-12-01 08:26:00 | 3.39 | 17850 | United Kingdom |
| 536365 | 84406B | CREAM CUPID HEARTS COAT HANGER | 8 | 2010-12-01 08:26:00 | 2.75 | 17850 | United Kingdom |
| 536365 | 84029G | KNITTED UNION FLAG HOT WATER BOTTLE | 6 | 2010-12-01 08:26:00 | 3.39 | 17850 | United Kingdom |
| 536365 | 84029E | RED WOOLLY HOTTIE WHITE HEART. | 6 | 2010-12-01 08:26:00 | 3.39 | 17850 | United Kingdom |
| 536365 | 22752 | SET 7 BABUSHKA NESTING BOXES | 2 | 2010-12-01 08:26:00 | 7.65 | 17850 | United Kingdom |

Exploratory Data Analysis

Data summary

| | |
|---|---|
| Name | df |
| Number of rows | 541909 |
| Number of columns | 8 |
| Column type frequency: character | 4 |
| Column type frequency: numeric | 3 |
| Column type frequency: POSIXct | 1 |
| Group variables | None |

Variable type: character

| skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
|---|---|---|---|---|---|---|---|
| InvoiceNo | 0 | 1 | 6 | 7 | 0 | 25900 | 0 |
| StockCode | 0 | 1 | 1 | 12 | 0 | 4070 | 0 |
| Description | 1454 | 1 | 1 | 35 | 0 | 4211 | 0 |
| Country | 0 | 1 | 3 | 20 | 0 | 38 | 0 |

Variable type: numeric

| skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
|---|---|---|---|---|---|---|---|---|---|---|
| Quantity | 0 | 1.00 | 9.55 | 218.08 | -80995.00 | 1.00 | 3.00 | 10.00 | 80995 | ▁▁▇▁▁ |
| UnitPrice | 0 | 1.00 | 4.61 | 96.76 | -11062.06 | 1.25 | 2.08 | 4.13 | 38970 | ▁▇▁▁▁ |
| CustomerID | 135080 | 0.75 | 15287.69 | 1713.60 | 12346.00 | 13953.00 | 15152.00 | 16791.00 | 18287 | ▇▇▇▇▇ |

Variable type: POSIXct

| skim_variable | n_missing | complete_rate | min | max | median | n_unique |
|---|---|---|---|---|---|---|
| InvoiceDate | 0 | 1 | 2010-12-01 08:26:00 | 2011-12-09 12:50:00 | 2011-07-19 17:17:00 | 23260 |
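
This summary has the layout produced by the skimr package; assuming df holds the raw dataset, it can be reproduced with:

Code
# Overview of column types, missingness, and distributions
library(skimr)
skim(df)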

The dataset has 541,909 rows and 8 columns. CustomerID has 135,080 missing values (25% missing rate), and Description has 1,454 missing values. Quantity and UnitPrice show extreme values (e.g., negative quantities, very high prices), indicating the need for cleaning.

The missing-value visualization shows that CustomerID has the highest proportion of missing values (0.25), occasionally paired with missing Description values (a 0.01 proportion of combinations).

  • Quantity Histogram (A): Most transactions involve quantities between 1 and 5, with a right-skewed distribution.
  • Price Histogram (B): Most items are priced between 1 and 4 units, also right-skewed.
  • Top 5 StockCode (C): StockCode 85123A has the highest sales count (~2,000 transactions).
  • Top 5 Countries (D): The United Kingdom dominates with over 300,000 transactions, followed by EIRE, France, Germany, and Spain.

Data Cleaning and Transformation

Code
# Data Cleaning
library(dplyr)   # pipes, filter(), mutate()
library(tidyr)   # replace_na()

df <- df %>%
  filter(!is.na(CustomerID)) %>%                 # drop rows without a customer ID
  filter(!grepl("^C", InvoiceNo)) %>%            # drop cancelled invoices (prefix "C")
  filter(Quantity > 0, UnitPrice > 0) %>%        # keep valid quantities and prices
  mutate(TotalSpending = Quantity * UnitPrice) %>%
  mutate(InvoiceDate = as.Date(InvoiceDate)) %>%
  mutate(Description = replace_na(Description, "Unknown"))

This reduces the dataset to valid transactions, removing 135,080 rows with missing CustomerID, cancelled transactions (starting with “C”), and entries with non-positive Quantity or UnitPrice.

Modeling and Results

RFM Analysis and Data Preparation

RFM analysis quantifies customer behavior using three metrics:

  • Recency: Days since the last purchase (relative to December 10, 2011).
  • Frequency: Number of transactions per customer.
  • Monetary: Total spending per customer.
Code
# Outlier Removal Function (IQR rule: keep values within Q1 - 1.5*IQR and Q3 + 1.5*IQR)
remove_outliers <- function(data, column) {
  Q1 <- quantile(data[[column]], 0.25)
  Q3 <- quantile(data[[column]], 0.75)
  IQR_value <- Q3 - Q1
  lower_bound <- Q1 - 1.5 * IQR_value
  upper_bound <- Q3 + 1.5 * IQR_value
  data %>% filter(.data[[column]] >= lower_bound, .data[[column]] <= upper_bound)
}

# Removing Outliers
df <- df %>%
  remove_outliers("Quantity") %>%
  remove_outliers("UnitPrice")

# RFM Analysis Data Preparation
customer_data <- df %>%
  group_by(CustomerID) %>%
  summarise(
    Recency = as.numeric(as.Date("2011-12-10") - max(InvoiceDate)),
    Frequency = n(),
    Monetary = sum(TotalSpending)
  )

# Log Transformation
customer_data_log <- customer_data %>%
  mutate(
    Frequency = log1p(Frequency),
    Monetary = log1p(Monetary)
  )

# Standardizing Data for K-Means Clustering
customer_data_scaled <- customer_data_log %>%
  select(-CustomerID) %>%
  scale()
  • Outliers in Quantity and UnitPrice are removed with the IQR method before the RFM metrics are aggregated.
  • Log transformation (log1p) reduces skewness in the RFM metrics.
  • Data is scaled to ensure equal weighting of features during clustering.

The boxplots show that after transformation, the distributions are more symmetric, though some outliers remain (e.g., high Monetary values).

K-Means Clustering Analysis

Determining Optimal k

In this step, we use the Elbow Method to determine the optimal number of clusters (k) for K-Means clustering. The plot displays the total within-cluster sum of squares (WCSS) for different values of k, and we observe how the WCSS decreases as the number of clusters increases. In our case, the elbow clearly appears at k = 3, meaning that three clusters provide the best segmentation of the data without overfitting. Choosing more than three clusters would yield only marginal improvement while adding unnecessary complexity.
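
The curve can be produced directly from the scaled matrix customer_data_scaled built earlier; the snippet below is a plausible reconstruction of this step rather than the verbatim original:

Code
# Compute WCSS for k = 1..10 and plot the elbow curve
set.seed(123)                       # seed chosen here for reproducibility
wcss <- sapply(1:10, function(k)
  kmeans(customer_data_scaled, centers = k, nstart = 25)$tot.withinss)
plot(1:10, wcss, type = "b",
     xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares (WCSS)")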

Applying K-Means Clustering

The visualization shows the K-Means clustering results for the customer data projected onto two principal components, Dim1 and Dim2, which together capture approximately 95.6% of the total variance (Dim1: 73.3%, Dim2: 22.3%). The plot reveals three distinct customer segments, each represented by a unique color and shape: Cluster 1 (blue circles), Cluster 2 (yellow triangles), and Cluster 3 (grey squares). The separation between clusters is generally clear, indicating well-defined groupings, although a slight overlap between Clusters 2 and 3 suggests similar behaviors or characteristics among customers near the boundary. Because most of the important information in the dataset is retained in just two dimensions, the customer base can be interpreted and segmented visually, yielding insights valuable for targeted marketing, personalized services, and strategic business decisions.

Dim1 and Dim2 are principal components obtained through dimensionality reduction (PCA), used here for visualization.
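
Plots of this kind are typically generated with factoextra::fviz_cluster, which projects the clustered data onto the first two principal components. The sketch below assumes that package and carries over k = 3 from the Elbow Method:

Code
# Fit the final k = 3 model and visualize it on the PCA projection
library(factoextra)
set.seed(123)                       # assumed seed, for reproducibility
km <- kmeans(customer_data_scaled, centers = 3, nstart = 25)
fviz_cluster(km, data = customer_data_scaled,
             geom = "point", ellipse.type = "convex")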

Cluster Profiling

Cluster Profiling is the process of analyzing the characteristics of different customer segments after applying a clustering algorithm (like K-means). It helps us understand the behavior and value of each group, making it easier to tailor marketing strategies and business decisions. In this analysis, we’ve grouped customers into three clusters based on their Recency, Frequency, and Monetary values (RFM analysis). We then calculated the average values of these metrics for each cluster to interpret their purchasing behavior. By visualizing these summaries, we can clearly identify which clusters contain loyal customers, occasional buyers, or those at risk of churn—allowing businesses to take targeted actions accordingly.
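
A minimal sketch of the profiling computation, assuming the fitted model km from the previous step and the log-transformed table customer_data_log:

Code
# Average RFM metrics per cluster (Frequency and Monetary are in log units)
library(dplyr)
cluster_profile <- customer_data_log %>%
  mutate(Cluster = factor(km$cluster)) %>%
  group_by(Cluster) %>%
  summarise(across(c(Recency, Frequency, Monetary), mean),
            Customers = n())
cluster_profile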

Insights

  • Cluster 1 (Green): Moderate recency (~57 days), low frequency (~3 transactions), and moderate spending (~5.56 log units). These are occasional buyers.
  • Cluster 2 (Yellow): Low recency (~37 days), high frequency (~4.78 transactions), and high spending (~7.33 log units). These are loyal, high-value customers.
  • Cluster 3 (Blue): High recency (~256 days), low frequency (~2.60 transactions), and low spending (~5.13 log units). These are inactive customers at risk of churn.

Conclusion

K-Means clustering identified three customer segments:

  1. Occasional Buyers (Cluster 1): Customers who purchase infrequently with moderate spending.
  2. Loyal Customers (Cluster 2): Frequent buyers with high spending, ideal for retention strategies.
  3. Inactive Customers (Cluster 3): Customers who haven’t purchased recently, requiring re-engagement campaigns.

These segments enable targeted marketing: loyalty programs for Cluster 2, re-engagement offers for Cluster 3, and promotional deals for Cluster 1.

This study demonstrates the effectiveness of K-Means clustering for customer segmentation in e-commerce. By applying RFM analysis and K-Means clustering to the Online Retail dataset, we identified three distinct customer segments with actionable insights for marketing strategies. Future work could explore alternative clustering methods (e.g., DBSCAN) or incorporate additional features like product categories to enhance segmentation.

References

Bradley, Paul S, Kristin P Bennett, and Ayhan Demiriz. 2000. “Constrained k-Means Clustering.” Microsoft Research, Redmond 20 (0): 0.
Chen, Daqing. 2015. "Online Retail." UCI Machine Learning Repository.
Paramita, Adi Suryaputra, and Taqwa Hariguna. 2024. “Comparison of k-Means and DBSCAN Algorithms for Customer Segmentation in e-Commerce.” Journal of Digital Market and Digital Currency 1 (1): 43–62.
Tabianan, Kayalvily, Shubashini Velu, and Vinayakumar Ravi. 2022. “K-Means Clustering Approach for Intelligent Customer Segmentation Using Customer Purchase Behavior Data.” Sustainability 14 (12): 7243.