January 9, 2017
Forget Skilling Up – Marketing Technology Needs to Come Back Down to Earth
By Anthony Botibol, BlueVenn
Over the last few years, many marketers have noticed a gradual change in their roles. Rather than focusing on the creative aspects that have traditionally defined marketing, they are instead turning into data scientists: analyzing reports, cleansing data sets and building in-depth profiles for each of their customers.
But are these new data wrangling responsibilities what marketers signed up for when they started their careers? According to a new report from BlueVenn, not only do over 40% of marketers have to digest 21 or more sources of data, but many are now spending as much as 80% of their day working with it.
Having to adapt to this new data-led approach to the job – while maintaining the more creative components that drew them to marketing in the first place – is a challenge that many need to understand better, not least the MarTech industry.
To learn more, we surveyed over 200 senior B2C marketers to determine how they are handling the ever-increasing deluge of customer data, and then identify their biggest struggles.
Bringing multiple data sources together to form a coherent Single Customer View (SCV) has long been regarded as one of marketing’s ‘holy grails’. Yet unified customer data is still a long way off for many – as many as 82% of marketers have yet to achieve an SCV, and 62% say they have to deal with customer data from numerous disparate silos.
High volumes of data and disjointed data cause other problems, too. For example, over half (54%) of marketers claim that poor quality data has damaged their ability to provide more targeted campaigns, while 27% claim that they lack the skills to analyze it anyway.
So, given that we found 72% of marketers believe data analysis is the most important skillset for them to acquire over the next two years, what can be done to improve the situation? Do marketers, as the marketing press so frequently suggest, need to re-skill into data science? Or is this expectation misplaced?
If we’re being honest, the majority of marketers aren’t clamoring to retrain in data science in order to continue doing their day-to-day jobs. That said, they are equally reluctant to hand precious marketing data over to others. The IT department may have the analytic skills, but does it have the marketing savvy to put the data to best use? An external data agency may have both – but it also comes at a considerable cost.
If customer data needs to stay with the marketing department, then there needs to be a middle ground. Our conclusion was that, rather than turning marketers into data scientists, MarTech vendors need to step up. After all, who are these marketing tools really for?
This is where the case for a Customer Data Platform can be made. By removing the need to analyze data by hand and creating a single interface from which multiple data sources can be managed, a CDP lets marketers do much of the work of a data scientist using automated data management processes.
True, we might not yet be at a stage where marketers can press a single red button and the platform does all the data crunching for them. Nevertheless, as far as resolving many of the biggest issues they face – without extensive retraining – a CDP sounds like the compromise that our report suggests marketers are crying out for.
December 27, 2016
CDP Differentiators: Identified vs. Anonymous Data
Author: David Raab, CDP Institute
Many systems assemble customer data. This is confusing for marketers who often have a hard time understanding how the systems differ. Here’s a look at one important distinction: whether a system works with identified or anonymous individuals.
Identified individuals are known by name or an identifier that can be linked to a name, such as a phone number, email address, credit card, or bank account. Anonymous individuals can’t be linked to a name but may still have an identifier that can be tracked over time, such as a browser cookie, device ID or account log-in. This means that even anonymous individuals can have a customer profile with detailed information. But the anonymous profile is typically limited to one source: without a personal identifier, there’s no way to link it to data in different systems that’s about the same person.
In recent years, the most important anonymous identifiers have been Web browser cookies. These are deposited on a computer during a Web or email interaction and can be read during subsequent interactions. If the visitor identifies herself by filling out a form or logging into an existing account, the cookie can be linked to her identity. But most site visitors don’t identify themselves, so their cookies remain anonymous.
The primary use of anonymous cookies has been to create advertising audiences. These are built by linking each cookie to attributes derived from Web behaviors such as content consumption. Audiences are built by selecting cookies with specified combinations of attributes. Technical details differ, but you can safely visualize this data as a spreadsheet where the first column holds the cookie ID and other columns contain values for attributes. Key design challenges in building these systems include handling millions (sometimes billions) of cookies, allowing thousands of attributes, easily adding new attributes, and selecting records very quickly.
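The single-spreadsheet model is easy to see in code. Here is a minimal Python sketch using made-up cookie IDs and attributes – a real DMP would of course use a large-scale columnar store rather than in-memory dictionaries:

```python
# A minimal sketch of the "single spreadsheet" model for anonymous data:
# one row per cookie, one column per attribute. All names are illustrative.
cookie_pool = [
    {"cookie_id": "c1", "sports_fan": True,  "luxury_shopper": False, "visits_30d": 12},
    {"cookie_id": "c2", "sports_fan": False, "luxury_shopper": True,  "visits_30d": 3},
    {"cookie_id": "c3", "sports_fan": True,  "luxury_shopper": True,  "visits_30d": 8},
]

def build_audience(pool, **criteria):
    """Select cookies whose attributes pass every criterion (a callable per column)."""
    return [
        row["cookie_id"]
        for row in pool
        if all(test(row.get(attr)) for attr, test in criteria.items())
    ]

# "Sports fans who visited at least 5 times in the last 30 days"
audience = build_audience(
    cookie_pool,
    sports_fan=lambda v: v is True,
    visits_30d=lambda v: v >= 5,
)
print(audience)  # ['c1', 'c3']
```

Adding a new attribute is just adding a column; the design challenges listed above come from doing this over billions of rows at interactive speed.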
Identified data is a different story. Because the data can be linked to a specific individual, the system often includes data from different sources. Each source may have its own identifier: a cookie ID from a Web site, an email address from CRM, a device ID from a mobile app, and so on. To accommodate this, the system needs a central “spreadsheet” that lists all the identifiers associated with a particular individual. Other “spreadsheets” hold the actual data from the source systems: that is, Web page views, purchases, phone calls, emails sent, etc. Each row on the central spreadsheet represents one individual and the columns are the different identifiers. On the other spreadsheets, each row represents a particular item (page view, order, phone call, etc.) and the columns are the details about those items (date, page name, product purchased, price, etc.). The columns are different in each spreadsheet, reflecting attributes of the items they represent. But every spreadsheet needs a column with a customer identifier. This is what the system matches to the identifiers in the central spreadsheet when it needs to create a unified customer view by assembling all data associated with an individual.
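To make the multi-spreadsheet model concrete, here is a minimal Python sketch with invented identifiers and source tables; the `unified_view` function plays the role of the matching step described above:

```python
# A minimal sketch of the "multi-spreadsheet" model for identified data.
# The central table maps each person to the identifiers used by each source;
# each source table carries one of those identifiers per row. Data is illustrative.
identity_map = [
    {"person_id": 1, "cookie_id": "c1", "email": "ann@example.com", "device_id": "d9"},
    {"person_id": 2, "cookie_id": "c2", "email": "bob@example.com", "device_id": None},
]

page_views = [{"cookie_id": "c1", "page": "/rings"},
              {"cookie_id": "c2", "page": "/home"}]
purchases  = [{"email": "ann@example.com", "product": "necklace", "price": 200}]

def unified_view(person_id):
    """Assemble all data for one individual by matching source identifiers
    against the central identity table."""
    person = next(p for p in identity_map if p["person_id"] == person_id)
    return {
        "person": person,
        "page_views": [v for v in page_views if v["cookie_id"] == person["cookie_id"]],
        "purchases":  [o for o in purchases  if o["email"] == person["email"]],
    }

view = unified_view(1)
print(view["purchases"])  # the necklace order, linked via the email identifier
```

Note that each source table keeps its own columns; only the identifier column is shared with the central table.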
The actual technologies involved with anonymous and identified data involve more than simple spreadsheets. But you can still safely assume that systems for identified data are more complex than systems for anonymous data. Challenges facing systems for identified individuals are different as well. Key identified data issues include storing and accessing different data types, making it easy to add new sources, and combining similar data that comes from separate systems.
The different requirements for anonymous and identified data mean it’s hard for one system to do both well. Some vendors don’t even try. Others use a single technology they feel works well enough. But most who try to do both run what are essentially two different systems, each optimized for one application. They then call on the appropriate system as needed. These vendors differ in the underlying technologies, how much data is actually shared by the two systems, how data in one system can be accessed through the other (if at all), and how the data is presented to administrators, users, and other systems. Many vendors also improve performance using supplemental technologies such as indexes and summary tables.
This practical complexity is what muddles the distinction between Customer Data Platforms (CDPs) and Data Management Platforms (DMPs). Most DMPs were originally designed to handle anonymous cookie pools for advertising, using some variant of the “single spreadsheet” model. Most CDPs were designed to manage identified data from multiple sources, using the “multi-spreadsheet” approach. But many DMPs have been extended to handle identified data and many CDPs have added support for advertising audiences. The technical details of how they do this vary widely, but it’s those technical details that determine how well they succeed. So there’s no value to generalizing about which solution is theoretically better.
What does have value is being a smart buyer. This means you should:
December 20, 2016
Customer Data Platform Use Case: How to Turn Post-Purchase Data into Insights and Income
By Laura Hazlett, Ascent360
Data is one of the most valuable assets to a company. It provides insight into what people are buying, how much they are spending, and most importantly, who they are. But getting a clear view into this data is often a challenge. Only so much can be pulled out of each island of data you have. Your Point of Sale system will show you what is being purchased and how much you are earning while your email sign up will tell you who is interested in your company. But how do you know if those email subscribers ever turn out to be customers? A Customer Data Platform (CDP) can help you gain valuable knowledge from your data. By combining all data sources into the CDP, analytics can then be run to provide key statistics such as Lifetime Customer Value, Customer vs Prospect ratio and RFM Score. From there, segments can be built to execute more effective marketing campaigns.
Thinking through how exactly to use this data and how best to gain value from it can be difficult. The best way to look at how valuable this type of information can be, is to look more closely into a use case. Below we will walk through a customer journey to better understand the benefits of a Customer Data Platform.
CDP Use Case: Post-Purchase Customer Stream
For this use case example, you are the owner of a jewelry chain who has recently purchased software to better track and understand your customers and transactions – a Customer Data Platform. Yesterday, at one of your stores, a customer purchased a necklace. After looking through many options, he decided on a $200 necklace. At that amount, it isn’t a transaction you would typically look into further. However, because his purchase comes through your Point of Sale system and into your CDP software, you are alerted that while this purchase was only $200, his lifetime value at this store location is $12,150. He is immediately tagged as a high value customer and enters a high value customer marketing stream you have easily developed.
An automated email is sent to high value customers the following day, thanking them for their recent purchase. The email is also dynamically populated with the store manager’s signature. It is not a sales email, simply a loyalty message thanking the customer for their purchase.
Looking through the customer journey, you decide you want to see how many days it typically takes after an initial purchase before a repeat purchase is made. In the case of your jewelry chain, you see that the typical repeat purchase comes about 90 days after the initial purchase. You present this information to your marketing team, who then decide to implement a post-purchase and cross-sell marketing campaign that goes out 85 days after a purchase. The customer bought a necklace? Great, promote the matching earring set in your post-purchase email!
Since you have set up the automated emails, the system checks the data each day to see which high value customers purchased the prior day, sends out the personalized thank-you email, and later sends the 85-day post-purchase email. After implementing this campaign, you look through your CDP analytics and realize that, while the campaign is largely successful, there are still some customers who are not opening or clicking the emails you send them. You realize that some customers need a different journey outside of email. Recognizing these people are important, you decide to create a segment of people who will receive a direct mail piece if they have not opened the email within a week of it being sent. Your CDP software will then check this segment to see whether they have come back to your store after receiving the direct mail piece.
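The daily logic in this stream can be sketched roughly as follows. The lifetime-value threshold, field names and action labels are all hypothetical, purely for illustration:

```python
# A hypothetical sketch of the daily automation described above.
# Threshold, field names, and action labels are assumptions, not a real CDP API.
from datetime import date

HIGH_VALUE_THRESHOLD = 10_000  # lifetime value that flags a high value customer

def daily_run(customers, purchases, today):
    """One pass of the automated post-purchase stream: thank-you emails the day
    after a high value purchase, cross-sell emails 85 days after any purchase."""
    actions = []
    for p in purchases:
        ltv = customers[p["customer_id"]]["lifetime_value"]
        days_since = (today - p["date"]).days
        if days_since == 1 and ltv >= HIGH_VALUE_THRESHOLD:
            actions.append(("thank_you_email", p["customer_id"]))
        if days_since == 85:
            actions.append(("cross_sell_email", p["customer_id"]))
    return actions

customers = {"C42": {"lifetime_value": 12_150}}
purchases = [{"customer_id": "C42", "date": date(2016, 12, 19)}]
print(daily_run(customers, purchases, date(2016, 12, 20)))
# [('thank_you_email', 'C42')]
```

The point of the CDP is that `customers` and `purchases` already sit in one place, so a daily check like this becomes a simple query rather than a cross-system integration project.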
Importantly, your CDP software can attribute revenue to the marketing campaign from the email and direct mail campaigns because the system is pulling in all of the point of sale data. Overall, you see a high marketing ROI from this post-purchase stream as well as high customer retention.
CDP: The Benefits of Customer Data
Without CDP software, this type of analysis and execution would likely not be possible. Combining your data into a single customer database can allow you to gain valuable insight into your customer data and execute highly targeted marketing strategies to customers and prospects, leading to increased revenue. All companies have data, but it’s what you do with it that makes the difference and can set you apart from your competition. Because after all, data is just data until you do something with it.
December 15, 2016
Retailer Wildfang Used a Customer Data Platform to Boost Revenue by 73%
Author: Julia Farina, Lytics
Wildfang, a quickly growing clothing and accessories retailer, is a brand that is all about understanding its customers. They serve an extremely diverse audience, ranging from sports enthusiasts, fashionistas and career women to A-list actors and entertainers. Wildfang understands their various audiences so well that, in-store, their associates can assess which products will resonate with individual customers as soon as they walk in.
However, the challenge was how to extend that personalized experience to their e-commerce website. Wildfang decided to use a customer data platform (CDP) to help them identify their diverse audiences online, personalize their messaging to them and generate more revenue.
Wildfang chose Lytics because its customer data platform allows companies to personalize their marketing (e.g., website, online ads, email) by better managing customer data and taking action both through integrations (e.g., Facebook Ads, email) and within the product itself (the Lytics website personalization product).
How they did it:
STEP 1: Resolve identities and access a holistic view of each customer by feeding cross-channel customer data (location, email, subscription, social, offline and online purchasing, web, and mobile) into one centralized hub.
STEP 2: Take advantage of behavioral and predictive insights and create dynamic, cross-channel audience segments.
STEP 3: Create custom web campaigns that integrate with the look and feel of their website.
USE CASES AND RESULTS
Personalized messaging and recommendations
Instead of one-size-fits-all messaging, Wildfang used a customer data platform to deliver distinct language and recommendations on their website that would resonate with customers who came from select partner and affiliate websites. They saw a 10% click-through rate as a result.
Wildfang used a customer data platform to leverage in-store and website data to identify customers who qualified for lifetime free shipping. These customers were greeted with website personalization to remind them of the offer. Conversion rates have skyrocketed — up 80% year on year, with targeted messaging a key factor.
Better User Experience
While unknown visitors received a website prompt to sign up for the retailer’s newsletter, Wildfang used a CDP to suppress these prompts for known customers (who had already provided an email address). This simple fix resulted in a better user experience for known customers and reduced bounce rates.
By using a Customer Data Platform, Wildfang eliminated unnecessary advertising spend by not retargeting customers who were already engaging with them over email, and by reserving online win-back ads for individuals who unsubscribed from emails.
The overall result of using a Customer Data Platform has been a better customer experience and a considerable increase in revenue for the popular retailer. Emma McIlroy, CEO of Wildfang, explains: “Previously, we didn’t have a way to target members from our most important cohorts on our website. With Lytics, identifying and messaging this group on site is easy. This opportunity equals immediate and significant revenue for us and helped us grow our business by 73% in one year.”
December 12, 2016
Research Roundup: New Surveys Shed Light on Customer Data Platforms
Author: David Raab, CDP Institute
Several recent surveys touch on Customer Data Platform issues. Each has its own focus but all address the relationship of unified customer data to unified customer experience – a topic that isn't as simple as it might seem.
Myths of Marketing Survey from BlueVenn (download here).
BlueVenn polled 202 B2C marketers about their use of customer data. Marketers said they were in charge of data at 60% of the companies, compared with 27% who said it was managed by IT. They also reported that customer data comes from many systems (40% have more than 20 data sources) and takes a lot of time (nearly 30% spend more than half their time on analyzing data). That answer helps to explain why time was the most commonly cited obstacle to creating more targeted campaigns (41%), followed by data cost (34%), data access (28%), data skills/knowledge (27%), and other data-related issues.
BlueVenn’s major conclusion was that marketers are spending too much time on data-related work and that better systems are the cure. In their words: “We here at BlueVenn believe that the perceived skills shortage as a barrier to achieving a Single Customer View, real-time omnichannel marketing and customer journey analytics, is in fact the BIGGEST myth of marketing. It is not the fault of the marketer that they cannot achieve their strategies – the blame should in fact lie at the feet of marketing technology providers.”
Regarding unified data vs unified experience: 28% said they had connected all channels to create an omnichannel experience while just 18% reported having unified all data sources into a single customer view. This suggests they either have direct connections between systems or share identities without a complete customer profile. So, if you’re thinking the single customer view is a prerequisite to omnichannel experience, think again.
The State of MarTech and AdTech: Customer Journey Investments in 2017: The Agency Perspective from Kitewheel (download here).
Kitewheel asked 134 agency marketers about their use of marketing technology. Key findings were that agencies see great demand for customer journey projects and 75% are investing in technology to do them. These investments are despite the fact that respondents already have more tools than they use (just 6% use all their existing tools weekly, while 72% use fewer than 40% of their tools weekly). By far the largest barriers to tool use were lack of expertise and training (66% combined). The gap is preventing nearly two-thirds of agencies from delivering projects their clients want.
In Kitewheel’s words: “Agencies are cautious in their ability to build the capabilities to deliver a customer journey in 2017. 64% don’t expect to be able to deliver real journeys until 2018 or later. Primary reason for this caution is the lack of skills and tools (54%)”
On unified data vs. experience: 46% said they can currently do omnichannel personalization while just 12% said they could do adtech/martech unification. So, even more than before, we see cross-channel treatments without unified data. Respondents did rate adtech/martech unification as the top capability needed for journey programs, while omni-channel personalization ranked fifth.
The Impact of CRM on Customer Experience from Usermind (download here).
Usermind surveyed over 500 people who either had personal super-admin access to a CRM system or supervised an individual with super-admin access. Perhaps not surprisingly, this group concluded that CRM had a greater impact on customer experience than any other system, including data warehouses/other customer data platforms, marketing automation, social and Web traffic data, or third party data. Less easily explained away are the findings that companies with more satisfied customers (as reported by the respondents) were more likely to use CRM as their primary customer system of record, to have a dedicated customer experience team, and to have fully digitized business operations or a digital transformation strategy in place. This group also uses many systems: 33% reported having more than 20 systems that impact customer experience.
The survey also found that companies implementing customer journeys with workflow tools inside each application were much more likely to report very satisfied customers than companies using internally developed solutions or “integration platforms” (which Usermind did not define). On the other hand, the capabilities needed to improve customer experience covered a wide range of cross-system functions: data mapping to identify customers across all applications; adaptive system and data integrations; automated workflows; defined customer journeys that span applications and teams; and a unified view of customer data. So the respondents feel that a siloed CRM isn’t enough.
Usermind’s advice seems to be that companies should use existing tools to build better experiences instead of waiting to build a unified view. This is why resources like dedicated customer experience teams have more impact than a data warehouse or CDP. In Usermind’s words: “Traditional integration approaches create challenges for your team, and roadblocks to delighting your customers. Point-to-point integrations don’t pass valuable customer context along with your data. And whenever your source systems or schema change, your integrations will break. Data alone won’t deliver a better customer experience — your analysis needs to be translated into action. If you use a customer engagement hub or journey orchestration to deliver a one-to-one, real-time customer experience, you can avoid the pitfalls of traditional, labor-intensive approaches.”
Regarding data vs experience: 64% of respondents said that data mapping to identify customers across all applications would improve customer experience, compared with 51% who cited customer journeys that span applications and teams and 38% who cited a unified customer view. Again, respondents are saying they can deliver coordinated experiences without assembling a central database.
From Theory to Practice: A Roadmap to “Omnichannel” Activation from Winterberry Group (download here).
Winterberry spoke with more than 100 executives at advertising, marketing, media and technology companies. The topic was audience (i.e., customer and prospect) recognition in particular and omnichannel strategies in general. Key findings were that 73% saw recognition as a moderate or higher priority but only 9% were able to recognize customers across all channels. Fewer than 7% were satisfied with their ability to leverage customer data across channels. The survey distinguished cross-channel recognition from omnichannel marketing programs, but found for both that technical improvements such as integration were more important than organizational issues such as collaboration, priorities, or staff skills.
In Winterberry’s words: “What’s the next frontier of omnichannel marketing? Panelists said the next great leap forward would be driven from the inside, with the potential alignment of internal business processes and technology infrastructure likely to do more to advance their omnichannel efforts in the years ahead than any other initiative.”
On data vs experience: the survey found that 40% felt they did cross-channel orchestration extremely or fairly well while just 32% said they do audience recognition across all or most channels. Once more, we see that orchestration is apparently possible without the data sharing that recognition makes possible.
Each survey offers useful insights related to its primary topic. But, for me, the most important message is the one they all share: omnichannel programs can be delivered without building a central database. That may seem an unlikely conclusion for the Customer Data Platform Institute blog, but let's be clear. It doesn't mean that central databases are unnecessary. It only means you can do some omnichannel work without them. One intermediary step is cross-channel customer recognition, which requires building cross-channel identities (presumably in a central system) and sharing them with experience-delivery systems. The next step is to expand the central system by adding more data and sharing it. This can be an incremental process as the central system gains access to more sources and as marketers find uses for additional pieces of information. A complete customer view is still the long-term goal because it enables the richest marketing programs and deepest analysis of customer behaviors and program results.
Centralized orchestration may be another intermediate step. An orchestration engine needs a unified customer view, which it might create for itself or read from a separate customer database. Either way, the orchestration engine's role is to provide execution systems with consistent customer treatments. This replaces relying on each execution system to make its own decisions. Although orchestration is not part of the CDP definition, it's important to recognize that many CDPs do include such functions because they add value for marketers. And value to marketers, not conformance with a definition, is what really counts.
December 8, 2016
Best Practices for Customer Data Management in the Multi-Screen Era
Authors: Michael Katz and David Spitz, mParticle
When it comes to customer data, we’re undergoing a period of massive change. eMarketer reports that there are twice as many Internet-connected devices today as there are people; by 2020, that ratio will be almost 5x. Meanwhile, according to the Winterberry Group, organizations use on average more than a dozen distinct SaaS tools, with some using as many as 30. That’s a lot of data sources and outlets.
The classic 3 V’s of big data – volume, velocity, and variety – still apply today, but in different ways, shapes, and forms than in the CRM and web eras. With mobile, much more data is created passively via the device’s radio, generated and transmitted all the time. The variety is much greater than even on the web because of all the device telemetry data, the geospatial element, and the native data points. The velocity is also unlike anything we’ve ever seen, since data is being generated with every single movement and swipe.
But what makes the current moment so challenging isn’t just the data itself. The software deployment model, how data gets onto these remote devices, and, most important, the end use cases are all different, too.
The dominant interaction type on most of these new devices is the app, and apps themselves are fundamentally different from browsers. They are compiled code – shipped software that lives locally on your phone, TV, Kindle, etc. – decentralized from the browser and without its agile, continuous deployment model. On the other hand, because native apps can access and leverage device functionality, they can more readily use features like new cameras, accelerometers, barometers, and glass that supports 3D Touch. All of this hardware innovation has the potential to create much richer experiences for end users.
New data types such as push tokens and exceptions never existed in web environments yet are now paramount for success. At the same time, all this incremental data collection, when not managed properly, can add significant overhead, risk, and complexity to an app experience.
Taken together, these changes have serious implications for marketers. For example, in a world where the mobile device is now the hub for every step in a customer’s journey, marketers need converged, multi-purpose data platforms, not just ad platforms masquerading as customer data platforms, to take advantage of “through-the-line” opportunities that exist on mobile. At the same time, people are multi-tasking, engaging with customer support reps, and, yes, still buying in retail stores, and all of that needs to be taken into account, too.
Data convergence in the multi-screen era doesn’t mean necessarily that we’ll have an “all in one” monolith for ad serving, email, social, web, and so on, but it does mean that these efforts will be joined at a business and data level like never before.
In the face of these challenges, companies need to change how they handle data at every step along the way, yet they can’t just hit the pause button on business as usual and overhaul everything in order to do that.
Here are four steps companies need to take in order to not just survive, but thrive, in this new era:
Defining a strategy is typically the first step in any endeavor, and data management is no exception. To define your data strategy, you must:
● Identify your goals: Do you want to improve growth, retention, audience insight? Whatever it is, be clear.
● Map your data: Map your data to your goals by considering factors like KPIs, how you’ll think about segmentation, and what your engagement triggers will be.
● Create naming conventions: Your naming conventions should be clear and simple to understand. Far too often, organizations use inconsistent and/or difficult to understand naming conventions, and that can wreak havoc down the line.
● Build a hierarchy of user IDs: As we move away from anonymous web tracking, take advantage of the data that’s available -- including identities -- to develop an omnichannel understanding of customers.
● Outline your use cases: Determine how you will use the available data to achieve your goals by clearly outlining your use cases.
● Align use cases with technology: Ensure a clear alignment between your use cases and your technology stack and consider what your stack needs to look like in the future to help solve for your core business challenges.
● Remember privacy: Privacy is a trade-off of personalization, and you must have the right privacy controls in place to respect your customers’ requirements.
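As one illustration of the “hierarchy of user IDs” point above, a data platform might resolve each incoming event to the strongest identifier it carries. This sketch is an assumption of how such a hierarchy could work, not mParticle’s implementation:

```python
# A hypothetical "hierarchy of user IDs": when several identifiers are present
# on an event, resolve the user by the most authoritative one available.
# The hierarchy order and field names are illustrative assumptions.
ID_HIERARCHY = ["customer_id", "email", "device_id", "cookie_id"]  # strongest first

def resolve_identity(event):
    """Return the best available (id_type, value) pair for an event."""
    for id_type in ID_HIERARCHY:
        value = event.get(id_type)
        if value:
            return (id_type, value)
    return ("anonymous", None)

# An event carrying both a cookie and an email resolves to the email,
# because email sits higher in the hierarchy than cookie_id.
print(resolve_identity({"cookie_id": "c1", "email": "ann@example.com"}))
# ('email', 'ann@example.com')
```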
Your data collection process can impact both your users’ experiences and your ability to take action. With that in mind, your data collection process needs to:
● Do no harm: Collect data once and do so in the right way in order to avoid bloating your app with unnecessary code that can degrade the user experience.
● Be consistent: Accept no compromises in capturing data consistently across all screens, but remember to account for the native data types on each. Failing to account for these native data types can lead to an 80/20 scenario where you have 80% of the data but are missing the 20% that is native to each screen, and it’s that 20% that typically drives the overwhelming majority of value.
● Make data capture use case agnostic: Step back and think about the bigger picture. While execution should be highly use case specific, having a single source of truth that can support diverse use cases will stand the test of time.
The value of a data platform should go beyond the sum of its parts. The best way to inject additional value into the stack is to enable greater control of data through filters, enrichment, and segmentation. To do so:
● Be diligent about separating signal from noise: Don’t pollute downstream systems with an abundance of data. Limiting what you send makes your analysis easier and your costs lower.
● Merge identities around a single customer view: Augment direct matches with data science, but only after maximizing user identity matching.
● Enrich data to get a more complete view: Bring data feeds in from all of your SaaS tools as well as from relevant third party tools.
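The “merge identities” bullet above – exact identifier matching first, data science only afterwards – might be sketched like this. The profiles, fields, and the simple name-similarity fallback are all illustrative stand-ins:

```python
# A hypothetical sketch of merging identities around a single customer view:
# deterministic matching on exact identifiers first, with a fuzzier fallback
# (here, simple name similarity) applied only to the leftovers.
from difflib import SequenceMatcher

known = [{"id": 1, "email": "ann@example.com", "name": "Ann Smith"}]
incoming = [
    {"email": "ann@example.com", "name": "A. Smith"},   # exact email match
    {"email": "asmith@work.com", "name": "Ann Smyth"},  # no shared identifier
]

def match(record, profiles, fuzzy_threshold=0.8):
    # 1) Deterministic: an exact identifier match always wins.
    for p in profiles:
        if record["email"] == p["email"]:
            return p["id"]
    # 2) Probabilistic fallback (a stand-in for "data science"): name similarity.
    for p in profiles:
        if SequenceMatcher(None, record["name"], p["name"]).ratio() >= fuzzy_threshold:
            return p["id"]
    return None

print([match(r, known) for r in incoming])  # both resolve to profile 1
```

The ordering matters: applying fuzzy matching before exhausting exact matches risks merging profiles that hard identifiers would have kept apart.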
Finally, you need to simplify the process of operationalizing data to all the different endpoints across the entire lifecycle of your business. This simplification requires you to:
● Empower end users to move quickly
● Sync continuously to avoid wastage
● Bring data back “in” from executional tools to learn and optimize
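One way to read “sync continuously to avoid wastage” is delta syncing: push only what changed since the last sync rather than re-uploading a whole audience each cycle. The sketch below is a generic illustration under that assumption, with made-up user IDs, not any particular tool’s sync API.

```python
# Delta-based audience sync: send only the changes since the last sync
# instead of re-uploading the full audience every time.

def diff_audience(previous, current):
    """Return (additions, removals) between two audience snapshots."""
    previous, current = set(previous), set(current)
    return current - previous, previous - current

last_synced = {"u1", "u2", "u3"}   # what the execution tool already has
latest = {"u2", "u3", "u4"}        # the audience as computed right now

to_add, to_remove = diff_audience(last_synced, latest)
# Two small updates go out instead of a full re-upload of the audience.
```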
When thinking about data connections, you also need to keep in mind the fast pace of change in today’s environment. The vendors leading the pack can change at any time. One of the great benefits of having a data platform detached from execution is the ability to add execution and analytics tools as needed onto a central data hub, as well as remove tools that are no longer needed without losing historic data.
In the multi-screen era, a reactive, tool-centric approach is simply not an option. It adds significant cost, complexity, and risk to your business. Ultimately, you end up with a tangled web of client- and server-side integrations that leads to unnecessary overhead and creates user experience, privacy, and security challenges.
That’s why you need to spend time thinking about all of your data use cases across marketing, analytics, data science, attribution, CRM, help desk, you name it, and build a data strategy to support them holistically. Following the above-mentioned steps will enable a truly 360-degree view of the customer that’s not only insightful but also meaningful to the business.
Michael Katz and David Spitz are, respectively, CEO/Co-Founder and CMO of mParticle. This blog post was adapted from their October 2016 webinar of the same title.
December 6, 2016
Subaru UK Distributor IM Group Reaps the Benefits of a Customer Data Platform
Author: Anthony Botibol, BlueVenn
As the sole importer and distributor of Subaru cars in the UK, IM Group coordinates the activities of over 200 dealerships around the country.
In 2011, IM Group set out to centralize the marketing for all its dealerships, providing effective, direct messages to existing and potential customers via a unified Customer Data Platform.
Traditionally, the dealerships held the main responsibility for customer records, which created a significant problem: they often lost track of customers as cars were sold to new owners or serviced at locations other than the original dealership. Moreover, there was no framework, or will, to share data with each other or with the central franchise operation.
This made it impossible to maintain a reliable database of customer contacts. It contributed to multiple, inaccurate records for individual customers, as well as confusion over which dealer ‘owned’ the data where customers had used more than one. All of this undermined the potential for high-quality direct marketing to existing and prospective customers.
The data had to be cleaned and streamlined, as well as integrated with information from potential customers who had visited the Subaru website. The company wanted to send email newsletters with content tailored to segmented groups to prompt enquiries, encourage test drives and ultimately convert them to sales – while also monitoring the conversion rates at different levels and running analytics on dealerships’ performance.
IM Group addressed these issues by turning to the BlueVenn Customer Data Platform (CDP) for analytics and a Single Customer View.
The BlueVenn solution is a purpose-built database containing a single record for every individual the company wishes to contact, to which all other information can be linked. For example, third-party vehicle ownership data can be matched against first-party data to identify owners who have sold their cars, so that service reminders and irrelevant offers are stopped. The platform acts as the foundation for all the marketing intelligence, helping to create a customer view and supporting tasks such as segmentation, profiling and campaign performance management.
The solution has enabled IM Group to handle the online marketing for all of its dealerships, ensuring a more coherent national strategy and making it possible to monitor the performance of each franchise in the market.
“Combining data from the dealerships and website users greatly increased the number of sales opportunities we could identify. Compared with the hundreds of enquiries per month generated by our newsletter, the company can now identify thousands of website users and it is possible to see exactly which parts of the site they have visited, and what interests them most. This helps us to better target prospective customers with content tailored specifically to the interest they have shown,” says Howard Ormesher, CRM Director at IM Group.
IM Group is now able to filter content to determine which contacts should receive offers, dealership news or national newsletters, depending on contact preference. Additionally, dealers can opt to include their own offers, such as reduced pricing on servicing or discounted accessories.
Equally important is that data on customer responses is fed back to BlueVenn. This gives IM Group a much more reliable picture of the market, supported by evidence, rather than the anecdotal reports from dealerships. It also provides an early indication of whether or not a marketing campaign is working, and IM Group can measure factors such as opens, clicks, resulting web sessions, test drives and sales for every email it sends.
The overall result has been a considerable increase in conversion rates: the number of enquiries leading to test drives has risen by a factor of 3.2, and the conversion of test drives to sales by a factor of 1.6.
“We now have a CRM infrastructure that is constantly fed; every day we take new feeds of data, the system is rebuilt and campaigns are triggered automatically, which drive in new content. And the culture of the organization is changing; there’s an acceptance that this is beginning to set the agenda and help the business to improve,” said Ormesher.
December 5, 2016
Persistence of Data in Customer Data Platforms
Author: David Raab, CDP Institute
The CDP Institute’s definition of a Customer Data Platform describes it as a “persistent, unified customer database”. Most of the CDP discussion focuses on the “unified” bit, since collecting data from different sources and linking it to cross-channel identities is a huge challenge. But “persistent” is worth some thought as well.
“Persistent” is in the definition to distinguish CDPs from solutions that read data from external source systems without storing it internally. The two main classes of these are real-time interaction managers, which assemble data to guide Web and call center interactions*, and integration platforms like Jitterbit, Mulesoft, Zapier, and Boomi, which act as switchboards to shuttle data between different systems without storing the data themselves.
The value of persistence is obvious: storing data lets CDP users look back over time to find patterns, calculate trends, build aggregates, and access details that might be lost or inaccessible in source systems. Persistence is especially important for identity resolution, which needs historical data such as the same device accessing different accounts or different devices being used simultaneously. On a practical level, it’s often easier to work with data stored inside the CDP than to read that data from an external system. Indeed, the owners of external systems are often unwilling to allow external access to their data because they fear it will interfere with operational performance. And they’re often right.
But it’s not enough to say that persistence is important. Persistence also has its costs, most obviously in extracting, moving and storing the persisted data. There are also performance penalties from having more data to sort through. It’s true that storage is cheap and big data technology scales almost indefinitely. But if you copy enough data these costs still become significant. At the extremes, there are certain kinds of data it doesn’t make sense to persist (in most cases), such as minute-by-minute changes in customer location, local weather, or stock portfolio values. Most CDP applications only need to know those values while an interaction is happening, so all that’s needed is to look up the current value at the start of an interaction. Storing a continuous history would be overkill, although it often does make sense to save a snapshot of those values at the time of the transaction. As the mention of location may suggest, persistence can also raise privacy issues.
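The snapshot-at-transaction pattern described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not a real CDP API: the field names and the weather lookup are invented. Rather than persisting a continuous feed of a volatile value, the current value is looked up once at interaction time and stored with the transaction record.

```python
# Persist a snapshot of a volatile value (weather, location, prices) at
# transaction time instead of storing its continuous history.

def record_transaction(txn, lookup_weather):
    """lookup_weather is any callable returning the current value from an
    operational system; only the snapshot taken now is persisted."""
    txn = dict(txn)  # don't mutate the caller's record
    txn["weather_at_purchase"] = lookup_weather(txn["store_id"])
    return txn

stored = record_transaction(
    {"customer": "u1", "store_id": "S7", "amount": 42.0},
    lookup_weather=lambda store_id: "rain",  # stand-in for a live API call
)
# One weather value per transaction is kept, not a minute-by-minute feed.
```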
In other words, the question isn’t whether persistence is needed but which data to persist. Choices must be made.
The first step is to distinguish three categories: data which must be persisted, data which might be persisted, and data which should never be persisted. You can then start asking which data falls into each category. The real answers will depend on your situation but here are some thoughts.
The call on whether or not to persist can be close where there are truly massive amounts of detail – think Web logs – which are used only occasionally. Having them available is extremely convenient, especially when summaries are not sufficient substitutes. For example, customer segmentation projects may need the underlying details to reclassify customers using different segment definitions. Simply storing the customer’s current segment with each interaction won’t work. This type of after-the-fact reclassification is a common requirement and one of the big advantages of having the details in the CDP. But it might not be worth loading the data if you’ll do the analysis just once every three years – although having the data easily available might result in doing the analysis more frequently. Chicken, meet egg.
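The reclassification point above is concrete enough to demonstrate. In this toy sketch (the events, users, and segment rules are all invented for illustration), the same persisted event detail supports two different segment definitions; had only each customer’s current segment label been stored, the second definition could never be computed.

```python
# After-the-fact reclassification: persisted event detail lets you apply
# a brand-new segment definition to old history.

events = [
    {"user": "u1", "type": "page_view"}, {"user": "u1", "type": "purchase"},
    {"user": "u2", "type": "page_view"}, {"user": "u2", "type": "page_view"},
]

def segment(events, rule):
    """Group events by user, then apply a segment rule to each history."""
    by_user = {}
    for e in events:
        by_user.setdefault(e["user"], []).append(e)
    return {user: rule(evts) for user, evts in by_user.items()}

# Original definition: buyers vs. browsers.
v1 = segment(events,
             lambda evts: "buyer" if any(e["type"] == "purchase" for e in evts)
             else "browser")

# A year later, a new definition runs against the same stored detail.
v2 = segment(events, lambda evts: "engaged" if len(evts) >= 2 else "casual")
```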
If you’re thinking that the boundaries between these categories are pretty vague, here’s some bad news: it gets worse. You might want to store the recent portion of a data stream that’s too large to keep in its entirety, in the way that surveillance tapes are kept for a period and then erased if nothing important happened. Or, you might read time-sensitive information directly from operational systems to support real-time interactions, but then upload the same information in overnight batches for historical analysis. And let’s not even get started on the fact that your CDP itself can have different types of storage with different levels of detail and access speed. Or that answers will change over time as you find and discard uses for particular pieces of data. Or that CDP technology itself will evolve.
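The surveillance-tape idea – keep only the recent portion of a stream that’s too large to persist whole – has a direct, if simplified, analogue in a bounded buffer. A minimal sketch (the event shape and window size are arbitrary, and real systems would use time-based retention policies rather than an in-memory queue):

```python
# Rolling-window retention: keep only the most recent N events; older
# ones fall off automatically, like surveillance tape being overwritten.
from collections import deque

recent_events = deque(maxlen=1000)  # the retention window

for i in range(1500):
    recent_events.append({"seq": i})

# Only the latest 1000 events remain; the first 500 were discarded.
```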
Given these ambiguities, how should you think about persistence in planning your CDP? As ever, the foundation is specific business uses: what data do you need, in which formats and how quickly, to support your intended applications? Can you meet those needs by reading directly from the source systems or do you need to load it into the CDP? If it must be in the CDP, is the cost to load and store it acceptable? Beyond this relatively static analysis, remember that there may be future uses for the data and that you have choices in how you manage it in your CDP.
Bottom line: persistence in a CDP isn’t a simple topic. But if you remember only that you’ll likely need a combination of internal storage and external access, you’re already headed in the right direction.
* This is a large and complicated category, which could easily occupy several blog posts by itself. See this Forrester Wave for a good overview of the classic real time interaction managers, nearly all of which are now baked into enterprise marketing suites. See this Gartner report for an overview of digital personalization engines, a newer group with overlapping functions.
November 28, 2016
Survey: Structured Management is Critical to Database Success
Author: David Raab, CDP Institute
Structured technology management practices such as long-term planning and technology standards all contribute to success in building a unified customer database, according to a survey released today by the Customer Data Platform Institute.
We asked marketers and martech experts about their current and planned customer databases and several key management tactics, including long term planning, agile development, technology standards, and value measures. The results confirmed the obvious -- it pays to be organized -- but also added some nuance to understanding what's important.
Some of the key findings are summarized below. You can download the full report for free from the CDP Institute Library at www.cdpinstitute.org.
November 22, 2016
Welcome to the CDP Institute Blog
Author: David Raab, Founder, CDP Institute
This is the first official post on the Customer Data Platform Institute blog, even though we launched the Institute three weeks ago. There’s no deep reason for the delay, just that we’ve been busy getting other parts of the Institute functioning. The most prominent of these is the on-line Library of papers from the Institute and sponsoring vendors. We see the Library as the heart of the Institute because it holds such a wide range and depth of information on Customer Data Platforms and related topics. Reading the Library contents is an advanced education in all things CDP, and education is what the CDP Institute is all about.
The other activity that’s taken much energy is the daily newsletter, which you’ve been receiving if you’re an Institute member (and if you’re not, sign up here). Even though we limit the newsletter to three articles a day and those articles are just links to news items published elsewhere, it’s still a substantial effort to scan for appropriate items, do enough to understand what each item really represents, and write a several-sentence comment. I’ll admit those comments are my favorite part since I get to have a bit of fun while writing them. But the real reason we bother with the comments, when it would be so much easier just to reprint the first few lines of the original articles, is to explain why a particular item is included. This is usually because it illustrates some larger trend or point that’s worth tracking. I’ve always thought of everything I write as a tile filling in one tiny piece of a larger mosaic. Every piece enriches the picture but you can only make sense of it by pulling back and looking at the whole. The newsletter items are more pieces in the mosaic, and the comments are a way of showing where each piece fits in the grand scheme of things.
Of course, I do include an occasional item simply because it amuses me.
This blog is another part of the same project. It provides larger tiles than the newsletter but still contributes to the same big picture. I’m especially excited about the blog because we plan to have many expert voices. I'll be one of them, but the vendors sponsoring the Institute have agreed to contribute regularly. We’ll bring in outside voices as well. If things go as expected we’ll have almost one post per day, delivering a rich chorus of experts. They won’t always agree (in fact, I hope to occasionally start some productive arguments). But they should certainly cover the broad range of topics relevant to marketers at different stages in their customer data management journey. Truth be told, we are being uncharacteristically systematic (for me) in planning the mix – simply because I feel it’s so important to the members that we do it right.
Of course, even as I write this I’m scanning newsletter articles telling me that blogging is overworked, if not completely dead. For example, this one argues for more infographics while this one asks how your blog can stand out from 65 million others (it's not such a great article, actually, but a killer headline). So part of me does worry that the Institute will pump out too much content on the blog and elsewhere. But most of me – and the vendors who have been enthusiastic supporters of the Institute – thinks there’s a ton that hasn’t yet been said on the topic of Customer Data Platforms and customer data management in general, and that we’ll all benefit from having the Institute broadcast as much of that information as possible.
And, if you’re worried, we will indeed get around to infographics, videos, slide-shares, podcasts, Webinars, discussion groups, calendars, and eventually real-world meetings and conferences. But it took three weeks to just get to this blog post, so be a little patient, okay?
In the meantime, enjoy what we have and let us know if you have thoughts on what we should add or would like to contribute something of your own.