Big Data

Data Lake: Save Me More Money vs. Make Me More Money

By Bill Schmarzo, CTO, Dell EMC Services (aka “Dean of Big Data”), February 4, 2016

2016 will be the year of the data lake. But I expect that much of the 2016 data lake effort will be focused on activities and projects that save the company more money. That is okay from a foundation perspective, but IT and Business will both miss the bigger opportunity to leverage the data lake (and its associated analytics) to make the company more money.

This blog examines an approach that allows organizations to quickly achieve some “save me more money” cost benefits from their data lake without losing sight of the bigger “make me more money” payoff: coupling the data lake with data science to optimize key business processes, uncover new monetization opportunities and create a more compelling, differentiated customer experience.

Let’s start by quickly reviewing the concept of a data lake.

The Data Lake

The data lake is a centralized repository for all the organization’s data of interest, whether internally or externally generated. The data lake frees the advanced analytics and data science teams from being held captive to the data volume (detailed transactional history at the individual level), variety (structured and unstructured data) and velocity (real-time/right-time) constraints of the data warehouse. The data lake provides a line of demarcation that supports the traditional business intelligence/data warehouse environment (for operational and management reporting and dashboards) while enabling the organization’s new advanced analytics and data science capabilities (see figure 1).


Figure 1: The Data Lake

The viability of a data lake was enabled by many factors including:

  • The development of Hadoop as a scale-out processing environment. Hadoop was developed and perfected by internet giants such as Google, Yahoo, eBay and Facebook to store, manage and analyze petabytes of web, search and social media data.
  • The dramatic cost savings of open source software (Hadoop, MapReduce, Pig, Python, HBase, etc.) running on commodity servers, which yields a 20x to 50x cost advantage over traditional, proprietary data warehousing technologies.
  • The ability to load data as-is, which means that a schema does NOT need to be created prior to loading the data. This supports the rapid ingestion and analysis of a wide variety of structured and unstructured data sources.
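This “load as-is” capability is often called schema-on-read: the structure of the data is discovered when it is analyzed, not declared when it is loaded. A minimal sketch in plain Python (standing in for the Hadoop tooling; all record and field names here are illustrative, not from any particular system):

```python
import json

# Schema-on-read sketch: raw records land in the "lake" untouched,
# with no table definition declared up front.
raw_events = [
    '{"user": "a17", "action": "click", "ts": 1454544000}',
    '{"user": "b42", "action": "purchase", "amount": 19.99}',  # extra field, still fine
    '{"user": "a17", "comment": "great product!"}',            # free-text field, still fine
]

def ingest(lines):
    """Store records as-is; parsing is deferred to analysis time."""
    return [json.loads(line) for line in lines]

def discover_schema(records):
    """Derive the set of fields at read time instead of load time."""
    fields = set()
    for rec in records:
        fields.update(rec.keys())
    return sorted(fields)

records = ingest(raw_events)
print(discover_schema(records))  # the schema emerges from the data itself
```

Contrast this with a data warehouse, where each of those three record shapes would require schema changes before a single row could be loaded.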

The characteristics of a data lake include:

  • Ingest. Capture data as-is from a wide range of traditional (operational, transactional) and new (structured and unstructured) sources
  • Store. Store all your data in one environment for cross-functional business analysis
  • Analyze. Support the analytics and data science to uncover new customer, product, and operational insights
  • Surface. Empower front-line employees and managers, and drive a more profitable customer engagement leveraging customer, product and operational insights
  • Act. Integrate analytic insights into operational (Finance, Manufacturing, Marketing, Sales Force, Procurement, Logistics) and management (Business Intelligence reports and dashboards) systems
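The five characteristics above amount to a pipeline. A toy sketch of the flow (all function and record names are illustrative, not tied to any specific product):

```python
# Toy end-to-end flow through the data lake stages described above.

def ingest(sources):
    """Capture records from each source as-is."""
    return [rec for source in sources for rec in source]

def store(lake, records):
    """Land everything in one shared environment for cross-functional analysis."""
    lake.extend(records)
    return lake

def analyze(lake):
    """Derive a simple cross-functional insight: total spend per customer."""
    spend = {}
    for rec in lake:
        if "amount" in rec:
            spend[rec["customer"]] = spend.get(rec["customer"], 0.0) + rec["amount"]
    return spend

def surface(insights):
    """Package insights for front-line employees and managers."""
    return {cust: f"total spend: ${amt:.2f}" for cust, amt in insights.items()}

# Two hypothetical source systems: CRM transactions and web clickstream.
crm = [{"customer": "acme", "amount": 120.0}]
web = [{"customer": "acme", "page": "/pricing"}, {"customer": "globex", "amount": 45.5}]

lake = store([], ingest([crm, web]))
print(surface(analyze(lake)))
```

The point of the sketch is that the same shared store feeds both the analysis and the surfacing steps, rather than each source system keeping its data to itself.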

Data Lake Foundation: Save Me More Money

Most companies today have some level of experience with Hadoop. And many of these companies are embracing the data lake in order to drive costs out of the organization. Some of these “save me more money” areas include:

  • Data enrichment and data transformation for activities such as converting unstructured text fields into a structured format or creating new composite metrics such as recency, frequency and sequencing of customer activities.
  • ETL (Extract, Transform, Load) offload from the data warehouse. It is estimated that ETL jobs consume 40% to 80% of all the data warehouse cycles. Organizations can realize an immediate value by moving the ETL jobs off of the expensive data warehouse to the data lake.
  • Data Archiving, which provides a lower-cost way to archive or store data for historical, compliance or regulatory purposes
  • Data discovery and data visualization that supports the ability to rapidly explore and visualize a wide variety of structured and unstructured data sources.
  • Data warehouse replacement. A growing number of organizations are leveraging open-source technologies such as Hive, HBase, HAWQ and Impala to move their business intelligence workloads off of the traditional RDBMS-based data warehouse to the Hadoop-based data lake.
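As one concrete example of the enrichment bullet above, a recency/frequency roll-up takes raw transaction history and produces composite metrics per customer. A hedged sketch in plain Python (a real job would run at scale on the lake; the data and field names here are invented for illustration):

```python
from datetime import date

# Illustrative enrichment job: roll raw transactions up into
# recency/frequency composite metrics per customer.
transactions = [
    {"customer": "a17", "date": date(2016, 1, 30)},
    {"customer": "a17", "date": date(2016, 1, 12)},
    {"customer": "b42", "date": date(2015, 11, 3)},
]

def enrich(txns, as_of):
    metrics = {}
    for t in txns:
        m = metrics.setdefault(t["customer"], {"frequency": 0, "last_seen": t["date"]})
        m["frequency"] += 1                              # frequency: count of transactions
        m["last_seen"] = max(m["last_seen"], t["date"])  # track most recent activity
    # recency: days elapsed since the most recent transaction
    return {
        c: {"frequency": m["frequency"], "recency_days": (as_of - m["last_seen"]).days}
        for c, m in metrics.items()
    }

print(enrich(transactions, as_of=date(2016, 2, 4)))
```

The enriched metrics, not the raw transactions, are what downstream models and dashboards typically consume.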

These customers are dealing with what I will call “data lake 1.0”: a technology stack of storage, compute and Hadoop. The savings from these “save me more money” activities can be nice, with a Return on Investment (ROI) typically in the 10% to 20% range. But if organizations stop there, they are leaving the 10x to 20x ROI projects on the table. Do I have your attention now?

Data Lake Game-changer: Make Me More Money

Leading organizations are transitioning their data lakes to what I call “data lake 2.0” which includes the data lake 1.0 technology foundation (storage, compute, Hadoop) plus the capabilities necessary to build business-centric, analytics-enabled applications. These additional data lake 2.0 capabilities include data science, data visualization, data governance, data engineering and application development. Data lake 2.0 supports the rapid development of analytics-enabled applications, built upon the Analytics “Hub and Spoke” data lake architecture that I introduced in my blog “Why Do I Need A Data Lake?” (see figure 2).


Figure 2: Analytics Hub and Spoke Architecture

Data lake 2.0 and the Analytics “Hub and Spoke” architecture support the development of a wide range of analytics-enabled applications, including:

  • Customer Acquisition
  • Customer Retention
  • Predictive Maintenance
  • Marketing Effectiveness
  • Customer Lifetime Value
  • Demand Forecasting
  • Network Optimization
  • Risk Reduction
  • Load Balancing
  • “Smart” Products
  • Pricing Optimization
  • Yield Optimization
  • Theft Reduction
  • Revenue Protection

Note: Some organizations (public sector, federal, military, etc.) don’t really have a “make me more money” charter; so for these organizations, the focus should be on “make me more efficient.”

Big Data Value Iceberg

The game-changing business value enabled by big data isn’t found in the technology-centric data lake 1.0, the tip of the iceberg. Like an iceberg, the bigger business opportunities are hiding just under the surface in data lake 2.0 (see figure 3).


Figure 3: Data Lake Value Iceberg

The “Save Me More Money” projects are the typical domain of IT, and that is what data lake 1.0 can deliver. However, if your organization is interested in the 10x to 20x ROI “Make Me More Money” opportunities, then your organization needs to aggressively continue down the data lake path to get to data lake 2.0.

10x to 20x ROI projects…do I have your attention now?


About Bill Schmarzo


CTO, Dell EMC Services (aka “Dean of Big Data”)

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting the strategy and defining the Big Data service offerings and capabilities for Dell EMC Services Big Data Practice. As the CTO for the Big Data Practice, he is responsible for working with organizations to help them identify where and how to start their big data journeys. He’s written several white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power the organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow where he teaches the “Big Data MBA” course. Bill was ranked as #15 Big Data Influencer by Onalytica.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored Dell EMC’s Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements, and co-authored with Ralph Kimball a series of articles on analytic applications. Bill has served on The Data Warehouse Institute’s faculty as the head of the analytic applications curriculum.

Previously, Bill was the vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a master’s degree in Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
