Big Data

Data Science: Identifying Variables That Might Be Better Predictors

By Bill Schmarzo, CTO, Dell EMC Services (aka “Dean of Big Data”) | December 21, 2016

I love the simplicity of the data science concepts as taught by the book “Moneyball.” Everyone wants to jump right into the real meaty, highly technical data science books, but I recommend that my students start with “Moneyball.” The book does a great job of making the power of data science come to life (and the movie doesn’t count, as my wife saw it and “Brad Pitt is so cute!” was her only takeaway…ugh). One of my favorite lessons from the book is its definition of data science:

Data Science is about identifying those variables and metrics that might be better predictors of performance

This straightforward definition sets the stage for defining the roles and responsibilities of the business stakeholders and the data science team:

  • Business stakeholders are responsible for identifying (brainstorming) those variables and metrics that might be better predictors of performance, and
  • The Data Science team is responsible for quantifying which variables and metrics actually are better predictors of performance

This approach takes advantage of what the business stakeholders know best – which is the business. And this approach takes advantage of what the data science team knows best – which is data transformation, data enrichment, data exploration and analytic modeling. The perfect data science team!

Note: the word “might” is probably the most important word in this definition of data science. Business stakeholders must feel comfortable brainstorming different variables and metrics that might be better predictors of performance without feeling like their ideas will be judged. Some of our best ideas come from people whose voices typically don’t get heard. Our Big Data Vision Workshop process treats every idea as worthy of consideration. If you do not embrace that concept, you risk constraining the creative thinking of the business stakeholders or, worse, missing out on surfacing potentially valuable data insights.

This blog serves to expand on the approach that the data science team uses to identify (and quantify) which variables and metrics are better predictors of performance. Let me walk through an example.

We recently had an engagement with a financial services organization where we were asked to predict customer attrition; that is, to identify which customers were at risk of ending their relationship with the organization. As we typically do in a Big Data Vision Workshop, we held facilitated brainstorming sessions with the business stakeholders to identify those variables and metrics that might be better predictors of performance (see Figure 1).

Figure 1: Brainstorming the Variables and Metrics that Might Be Better Predictors

Note: I had to blur the exact metrics that we identified, for reasons of client competitive advantage. Yeah, I like that!

From this list of variables and metrics, the data science team sought to create an “Attrition Score” that could be used to identify (or score) at-risk customers. The data science team embraced an iterative, “fail fast / learn faster” process, testing different data enrichment and transformation techniques and different analytic algorithms against different combinations of the variables and metrics to see which combinations yielded the best results (see Figure 2).

Figure 2: Exploring Different Combinations of Variables and Metrics
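
To make the iteration concrete, here is a minimal sketch (Python with scikit-learn) of the kind of loop the team ran. The variable names and data file are made up, since the actual client metrics are blurred above; treat this as an illustration of the approach, not the engagement’s actual code.

    from itertools import combinations

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical customer extract with made-up candidate variables.
    df = pd.read_csv("customers.csv")
    candidate_features = ["tenure_months", "num_products",
                          "support_calls_90d", "balance_trend"]
    target = df["attrited"]  # 1 = ended the relationship, 0 = retained

    # Try different combinations of the brainstormed variables and score each
    # with cross-validated AUC -- failing fast on the weak combinations.
    results = []
    for k in range(2, len(candidate_features) + 1):
        for combo in combinations(candidate_features, k):
            auc = cross_val_score(LogisticRegression(max_iter=1000),
                                  df[list(combo)], target,
                                  cv=5, scoring="roc_auc").mean()
            results.append((combo, auc))

    # Surface the combinations that actually are better predictors.
    for combo, auc in sorted(results, key=lambda r: r[1], reverse=True)[:5]:
        print(combo, round(auc, 3))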

The challenge for the data science team is not to settle on the first model that “works.” The data science team needs to constantly push the envelope and, as a result, fail enough times in testing different combinations of variables to feel personally confident in the results of the final model.

After much testing and failing – and testing and failing – and testing and failing, the data science team came up with an “Attrition Score” model that had failed enough times for them to feel confident about its results (see Figure 3).

Figure 3: Identifying Variables and Metrics that ARE Better Predictors
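
The post does not name the algorithm behind the final model, so as a hedged illustration only, here is how a finished classifier (reusing the hypothetical df and target from the sketch above) could be turned into a 0-100 Attrition Score for ranking at-risk customers:

    from sklearn.linear_model import LogisticRegression

    # Hypothetical "winning" combination from the testing above.
    best_features = ["tenure_months", "support_calls_90d", "balance_trend"]
    model = LogisticRegression(max_iter=1000).fit(df[best_features], target)

    # Convert the predicted probability of attrition into a 0-100 score.
    df["attrition_score"] = (model.predict_proba(df[best_features])[:, 1] * 100).round(1)

    # Rank the customers most at risk (customer_id is assumed to be in the extract).
    at_risk = df.sort_values("attrition_score", ascending=False).head(100)
    print(at_risk[["customer_id", "attrition_score"]])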

We needed an approach that got the best out of everyone on the project – the business stakeholders brainstorming variables and metrics, and the data science team creatively testing different combinations. The final results in this engagement were quite impressive (see Figure 4):

Figure 4: Final Attrition Model Results

The creative data science process of combining different variables and metrics is highly dependent upon the success of the business stakeholder brainstorming exercises. If the business stakeholders are not brought into this process early and allowed to think creatively about what variables and metrics might be better predictors of performance, then the collection of variables and metrics that the data science team will seek to test will be limited. Put another way, the success of the data science process and the creation of the actionable score is highly dependent upon the creative involvement of the business stakeholders at the beginning of the process.

And that’s the power of our Big Data Vision Workshop process.

——————–

If you are interested in learning more about the Dell EMC Big Data Vision Workshop, check out the blogs below:

About Bill Schmarzo


CTO, Dell EMC Services (aka “Dean of Big Data”)

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Dell EMC’s Big Data Practice. As a CTO within Dell EMC’s 2,000+ person consulting organization, he works with organizations to identify where and how to start their big data journeys. He has written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow, where he teaches the “Big Data MBA” course. Bill also just completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data Influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.


2 thoughts on “Data Science: Identifying Variables That Might Be Better Predictors”

  1. a “model that identified ~59% of attritors” means nothing. You can have a model which identifies 100% of attritors with one line of code:
    def predict_attritors(x):
        return True

    So, the 59% or 24% metric result means nothing when you don’t control for your false positives. In binary classification there are two types of errors:
    FN: False negatives – missed attritors
    FP: False positives – false attritors
    These in turn define two rates:
    FNR: FN/P
    FPR: FP/N
    where P is the number of attritors and N is the number of non-attritors.

    Your 59% is actually 100% - FNR, also known as the True Positive Rate or “recall”. But stating the recall without stating the FPR is like not saying anything at all.

    Some also use a metric called precision, which is the number of true detections out of all detections, or TP/(TP+FP).

    The performance point in your diagram translates to a 59% TPR, but the x-axis is just the fraction of the total population. Assuming an attritor rate of 10% in the population, you get TP = 10% * 59% yet FP = 90% * ~10%, so the precision is about 1/3…
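
    For concreteness, a tiny Python sketch of those rates, with made-up labels and flags rather than your actual results:

    # Made-up labels and flags, just to show TPR (recall), FPR and precision.
    y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 1 = attritor, 0 = non-attritor
    y_pred = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]   # model's at-risk flags

    TP = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    FP = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    P = sum(y_true)              # number of attritors
    N = len(y_true) - P          # number of non-attritors

    print("TPR / recall:", TP / P)        # = 1 - FNR
    print("FPR:", FP / N)
    print("Precision:", TP / (TP + FP))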

    • Hanan, thanks for the reply. We certainly try to account for false positives and false negatives in the testing of the analytic models. You should always test the models in the real world and hold out some of your customers (your control group) against which you can test (and learn from) the effectiveness of the analytic models.

      Let’s take the attrition model example. If the analytics flagged 100 customers who were predicted to attrite, then one would want to hold out 10 to 15% of those customers (the control group) and not give them the attrition treatment. Then you could measure how many customers in the control group actually attrited versus how many attrited from the test group.

      In the same sense, one would want to track how many customers who were not flagged as attritors actually attrited.

      Test, measure, learn and test again…that’s the only way that analytic models get better.
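
      Here is a minimal sketch of that holdout comparison (hypothetical file and column names, not actual client data):

      import numpy as np
      import pandas as pd

      # The ~100 customers the model flagged as likely attritors.
      flagged = pd.read_csv("flagged_customers.csv")

      # Hold out roughly 15% as a control group that gets no retention treatment.
      rng = np.random.default_rng(42)
      flagged["control"] = rng.random(len(flagged)) < 0.15

      # ...run the retention campaign for the non-control customers, wait an
      # agreed period, then record a boolean "attrited" column for everyone...

      control_rate = flagged.loc[flagged["control"], "attrited"].mean()
      treated_rate = flagged.loc[~flagged["control"], "attrited"].mean()
      print("Control attrition:", round(control_rate, 3),
            "Treated attrition:", round(treated_rate, 3))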

      Check out my blog for more details about how we approach Type I and Type II errors.

      https://infocus.emc.com/william_schmarzo/understanding-type-i-and-type-ii-errors/

      Thanks!