Critical concepts for correct customer segmentation implementation

Charlotte Pople, CAMS, and Haibo Zhang, Ph.D., CAMS
8/12/2020

Identifying suspicious activity is a complex task. Transaction monitoring models that include customer segmentation offer more holistic analytics.

Holistic, risk-based analytics is necessary for any bank or financial services company seeking to monitor suspicious activity and mitigate risk. Integrating customer segmentation into transaction monitoring models enhances an organization’s approach and overall risk posture.

As banks and financial services companies grow in asset size and customer base, suspicious activity monitoring becomes increasingly complex. While transaction monitoring models can help banks and financial services companies mitigate certain risks associated with money laundering, it is essential that these models are used properly. Customer segmentation is one way to improve the efficiency of transaction monitoring models. This article discusses two segmentation approaches, how these approaches can be integrated, the importance of orthogonality, and common misconceptions about transaction monitoring segmentation.

Why segmentation?

Customer segmentation using risk-based data insights elevates the effectiveness and efficiency of transaction monitoring models. It allows companies to cluster customers more effectively and to set more precise monitoring thresholds for groups of similarly behaving customers.

Traditionally, segmentation was qualitative. It was used to delineate the lines of business by clustering customers based on common attributes and characteristics. Adding a quantitative dimension increases the impact of segmentation by reducing noise alerts associated with transaction monitoring scenarios.

Banks and financial services companies must have processes in place to gain a reasonable understanding of their customers along with access to accurate and consistent data. More specifically, banks and financial services companies must be able to ingest and analyze historical customer behavior to establish future “expected” behavior and create the best segmentation model possible. Segmentation can be performed in several ways, and effectively combining these ways can result in a much more complete model.

Top-down segmentation

The top-down (or first-tier) segmentation approach uses customer attribute information, commonly known as customer reference data, to determine customer clusters. The goal of the top-down approach is to cluster customers based on attributes such as North American Industry Classification System (NAICS) code, geographic footprint, and line-of-business information. Top-down segmentation typically is the first layer of an effective segmentation methodology because it sets the baseline knowledge of the customer population.
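
As a rough illustration, a first-tier assignment can be expressed as simple rules over reference data. The sketch below uses pandas; the column names, NAICS codes, and segmentation rules are hypothetical placeholders, not a prescribed scheme.

```python
import pandas as pd

# Hypothetical customer reference data; the column names, NAICS codes, and
# values are illustrative placeholders only.
customers = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "naics_code": ["522110", "541110", "522110", "722511"],
    "line_of_business": ["retail", "commercial", "retail", "commercial"],
    "geography": ["domestic", "domestic", "international", "domestic"],
})

def top_down_segment(row: pd.Series) -> str:
    """Assign a first-tier segment from reference attributes (illustrative rules)."""
    industry = "depository" if row["naics_code"].startswith("5221") else "other"
    return f"{row['line_of_business']}|{row['geography']}|{industry}"

customers["td_segment"] = customers.apply(top_down_segment, axis=1)
print(customers[["customer_id", "td_segment"]])
```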

Once business knowledge has been established in the first tier, it is important to validate it against customer data. This check is referred to as the refinement process (to avoid confusing it with the bottom-up approach discussed later). Refinement compares the statistical descriptions of the populations established by the top-down tier with business knowledge so that each segment can be accepted or refined. To validate business knowledge via customer attributes, statistics such as density distributions, counts, means, maximums, and distinct values can be derived from customer data, including reference data and historical activity data when available. This analysis can, in turn, prompt further data analysis or introduce new attributes into the segmentation process.
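
The following is a minimal sketch of such a refinement check, assuming a pandas DataFrame that already carries a top-down segment label alongside aggregated activity; all column names and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical per-customer profile: a top-down segment label plus 12 months
# of aggregated activity; all names and figures are invented for illustration.
profile = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105, 106],
    "td_segment": ["retail"] * 3 + ["commercial"] * 3,
    "monthly_wire_total": [1200.0, 900.0, 150000.0, 48000.0, 52000.0, 51000.0],
    "monthly_cash_total": [300.0, 250.0, 400.0, 9000.0, 8700.0, 9500.0],
})

# Descriptive statistics per segment (count, mean, maximum, distinct values)
# to confirm, or challenge, the business-driven grouping.
summary = profile.groupby("td_segment").agg(
    n_customers=("customer_id", "nunique"),
    wire_mean=("monthly_wire_total", "mean"),
    wire_max=("monthly_wire_total", "max"),
    cash_mean=("monthly_cash_total", "mean"),
    cash_max=("monthly_cash_total", "max"),
)
print(summary)
# An outlier such as customer 103's wire volume would prompt further analysis
# or a new attribute in the top-down tier.
```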

Bottom-up segmentation

The bottom-up (or second-tier) segmentation approach builds on the segments established by the top-down approach. It uses customer activity data to further cluster customers based on similar transaction behavior, such as wire, cash, check, and automated clearing house (ACH) transactions. The bottom-up approach essentially applies unsupervised machine-learning techniques to the top-down population, and it requires a minimum of 12 months of transactional activity to perform effectively. A common bottom-up technique is k-means clustering, which partitions the data into a fixed number (k) of clusters; each data point is assigned to the cluster whose center (centroid) is nearest. The main objective of the bottom-up approach is to make better inferences about whether a given activity is anomalous relative to the customer’s cluster.
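
The sketch below shows the core mechanic using scikit-learn’s k-means on synthetic activity features for a single top-down segment; the feature set, values, and cluster count are illustrative assumptions, not recommendations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic monthly wire/cash/check/ACH totals for customers in a single
# top-down segment; two behavioral groups are simulated for illustration.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([1_000, 300, 200, 500], 100, size=(50, 4)),         # low-activity customers
    rng.normal([50_000, 800, 1_000, 9_000], 2_000, size=(50, 4)),  # high-activity customers
])

# Scale features so no single transaction type dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# k-means partitions the customers into k clusters; each customer is
# assigned to the cluster whose centroid is nearest.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

# Monitoring thresholds can then be tuned per cluster rather than for the
# segment as a whole.
print(np.bincount(kmeans.labels_))
```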

The final segmentation is a combination of the top-down and the bottom-up segments. Depending on the transaction monitoring policy, customer risk ratings (CRRs) usually are added as well to form a complete segmentation model.
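
How the pieces are composed is policy-dependent; one simple, purely hypothetical convention is to concatenate the two tier labels and the CRR into a single segment key:

```python
# Hypothetical composition of a final segment key; the actual scheme is
# governed by the institution's transaction monitoring policy.
def final_segment(td_segment: str, bu_cluster: int, crr: str) -> str:
    return f"{td_segment}|cluster{bu_cluster}|crr-{crr}"

print(final_segment("commercial|domestic|depository", 1, "high"))
# commercial|domestic|depository|cluster1|crr-high
```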

The importance of orthogonality

Due to factors such as data availability, data quality, and the intricate nature of customer behavior within sometimes complex products, it is important to combine the top-down and the bottom-up approaches to achieve the best segmentation results.

It is unlikely that a top-down approach alone can produce an efficient segmentation model unless the customer base is small and the products are very simple. When dealing with large numbers of customers and an array of complex products, the bottom-up approach also should be used. However, it is important to maintain orthogonality. Borrowed from mathematics, where it describes perpendicularity, “orthogonality” here means that the top-down and bottom-up notions are kept independent of one another; in practice, bottom-up observations must never be used to modify or overwrite the top-down segments.
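
One way to honor this constraint in code is to run clustering independently within each top-down segment and record the result in a separate field, leaving the first tier untouched. The sketch below assumes pandas and scikit-learn and a hypothetical profile frame like the one shown earlier.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def add_bottom_up_clusters(profile: pd.DataFrame, features: list, k: int) -> pd.DataFrame:
    """Cluster WITHIN each top-down segment; never relabel the first tier.

    k must not exceed the number of customers in any segment.
    """
    out = profile.copy()
    out["bu_cluster"] = -1
    for segment, idx in out.groupby("td_segment").groups.items():
        X = StandardScaler().fit_transform(out.loc[idx, features])
        out.loc[idx, "bu_cluster"] = KMeans(
            n_clusters=k, n_init=10, random_state=0
        ).fit_predict(X)
    # "td_segment" is untouched: bottom-up evidence never rewrites the first tier.
    return out
```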

In the event strong differences are observed between top-down understanding and bottom-up evidence, a deep-dive analysis should be conducted to understand why. Of course, the analytics applied to dissect such disagreements should follow the established model validation framework and governance practices. When conflicts persist, the top-down logic should be kept intact, which means bottom-up discrepancies will likely result in alerts. The goal is to make sure that the alert investigation and the established tuning feedback loop can be used to better understand the root cause of the issue.
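
As a sketch of surfacing such conflicts, one hypothetical heuristic flags customers whose scaled activity lies unusually far from their assigned centroid, feeding the deep-dive and tuning loop rather than rewriting the segments; the z-score threshold is an arbitrary placeholder.

```python
import numpy as np

def flag_discrepancies(X_scaled: np.ndarray, kmeans, z_threshold: float = 3.0) -> np.ndarray:
    """Return a boolean mask of customers whose activity sits unusually far
    from their assigned centroid: candidates for deep-dive analysis, not
    for reassignment to a different top-down segment."""
    # Distance from each customer to the centroid of its assigned cluster.
    dists = np.linalg.norm(X_scaled - kmeans.cluster_centers_[kmeans.labels_], axis=1)
    z = (dists - dists.mean()) / dists.std()
    return z > z_threshold
```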

Three common misconceptions

In discussions of segmentation with a variety of banks and financial services companies, several common misconceptions emerge. These misconceptions often lead to skewed or inaccurate segmentation within the transaction monitoring model:

  • Misconception 1: Segmentation combines groups of customers designated as higher anti-money laundering (AML) risk. Grouping customers based on AML risk generally is the task of CRR. While CRR might act as an input to the transaction monitoring segmentation model, it is not the goal of segmentation.
  • Misconception 2: Transaction monitoring segmentation should be trained on suspicious activity report (SAR) data. As previously mentioned, orthogonality is key to proper transaction monitoring segmentation. In particular, segmentation must be independent of empirical financial crime measurements, especially those that carry artificial effects from the past, such as SAR data. Segmentation should be based on actual customer behaviors.
  • Misconception 3: Segmentation is a predictive model. The objective of segmentation is not to predict the behavior of a group of customers. Rather, it is to understand the best way to describe each group of customers. Therefore, transaction monitoring segmentation can be properly designated as a descriptive model.

The future of financial crime analytics

For unsupervised machine learning to improve segmentation outputs and transaction monitoring system efficiency, the segmentation process requires meaningful data inputs. As segmentation outputs improve and entities can be moved between segments more dynamically based on new activity, the volume of noise generated by transaction monitoring systems should decrease.

Currently, segmentation is typically refreshed on the same cycle as customer information updates, often every 12 to 18 months in line with industry standards. The main case against more dynamic segmentation is that the system needs to ingest enough data to develop meaningful clusters.

Proper transaction monitoring segmentation can transform transaction monitoring analytics into holistic, risk-based analytics. Understanding the nature of the descriptive model, refraining from directly linking the model to noise reduction or transaction monitoring scenarios, and making sure orthogonality exists between the two tiers can lead to significant efficiencies in suspicious activity monitoring.

Glossary

  • Bottom-up segmentation: an approach that creates segments based on similarities derived by applying clustering techniques to transactional activities; the most commonly used technique is k-means clustering
  • Orthogonality: the perpendicularity between two notions; in the context of segmentation, to maintain orthogonality means top-down and bottom-up notions must be kept independent of one another
  • Refinement: a process of using descriptive statistics to validate and refine top-down segmentation
  • Top-down segmentation: an approach that uses business knowledge and customer reference data to group customers that are expected to have similar financial transaction behaviors
