Attention to customer feedback is growing rapidly in all markets today, and surveys of the different aspects of a business are used as never before. But do the people who run them really get as much benefit as they hoped? This is one of the hot topics debated across fields, from election polls to the support of business decisions.
If you look closely at the anatomy of these discussions, three major questions surface:
- How do you cope with the growing amount of work required for preparation and analysis of surveys?
- How do you design and analyze surveys to get the results that lead to efficient business decisions?
- What organizational steps are necessary for smooth implementation of findings?
Accordingly, three lines of organizational and technological development are needed.
The first is automation of the mundane work associated with surveying. We see progress on that front in the growth of companies engaged in survey automation, such as Qualtrics and SurveyMonkey. Today, if business managers know what they want to learn, it is easy to present the questions in convenient, usable formats. It is also easy to deliver survey forms to potential or existing customers and to collect answers. The hard part is extracting useful information from those answers.
That brings us to the other two: the quality and focus of analysis. Publications offer plenty of criticism of the prevailing approach. A well-articulated example is the article by Jeremy Griffith, “Predictive analytics: better than surveys?”, recently published by Marketing Tech News (see the references below). It argues that the BI-style analysis practiced today produces low-quality results, mostly because a simple summarization of issues rests on a typically low response rate. As an alternative, the article favors building predictive models that enlighten researchers about the reasons for customer satisfaction or dissatisfaction. In some sense, that amounts to predicting the probable answers of non-responders.
We at Compellon see the root of the problem at a more fundamental level. Surveying, as the data-collection part, and analysis (summarization or prediction) are two sides of the same process. The real issues are the ability to extract much more information than popular methods do today and the ability to ask useful questions. That becomes possible only if data collection (e.g., a survey) and analysis are steps in a cyclical process. The current low response rate hurts analysis, but taken by itself it is not the key problem. The fundamental analytical issues can be expressed in two statements:
- Finding beneficial actions requires not only knowledge of flaws in services but also an understanding of their causes.
- Causal relations cannot be established by static statistical analysis, however strong it might be. We need to analyze the statistical consequences of controlled changes (focused interventions) in the system, as studies of causality show (see Judea Pearl’s Causality in the references below).
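As a toy illustration of the second statement (all numbers are made up for this sketch), consider a simulated system in which a hidden factor drives both a service attribute and customer satisfaction. A static regression overstates the attribute’s influence, while a simulated intervention recovers the true causal effect:

```python
import random

random.seed(0)

def simulate(do_x=None, n=100_000):
    """Toy system: a hidden confounder Z drives both a service
    attribute X and satisfaction Y; X itself has only a modest
    causal effect (0.2) on Y."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)                       # hidden confounder
        x = (z + random.gauss(0, 1)) if do_x is None else do_x
        y = 0.2 * x + 1.0 * z + random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    return xs, ys

# Static (observational) analysis: regression slope of Y on X.
xs, ys = simulate()
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"observed slope: {slope:.2f}")        # ~0.7, inflated by Z

# Intervention: set X by decree, i.e. do(X = x), and compare outcomes.
_, y0 = simulate(do_x=0.0)
_, y1 = simulate(do_x=1.0)
effect = sum(y1) / len(y1) - sum(y0) / len(y0)
print(f"interventional effect: {effect:.2f}")  # ~0.2, the true effect
```

The static analysis would recommend heavy investment in X; the intervention shows most of the apparent influence comes from the hidden factor.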
To be sure, predictive analysis based on the discovery of relations opens an opportunity to predict what people are silent about, provided we already have enough information about them. But we should not forget that even the strongest static (non-interventional) analysis can extract no more information than the original source contains. If a predictor based on a small sample is accurate, then the sample itself is sufficient for business decisions. If it is not, applying its erroneous conclusions to a broader population will only mislead.
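This information limit is visible even in the simplest case of estimating a single satisfaction rate. A quick sketch (the numbers are illustrative) using the standard 95% margin of error for a proportion shows how little certainty 25 responses can carry, no matter what model is layered on top of them:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a proportion estimated from n responses."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Suppose 60% of the 25 customers who responded report satisfaction.
print(round(margin_of_error(0.60, 25), 3))    # 0.192: true rate anywhere from ~41% to ~79%
print(round(margin_of_error(0.60, 2500), 3))  # 0.019: a usefully tight estimate
```

Any predictor trained on those 25 answers inherits at least this uncertainty about the broader population; no amount of modeling can manufacture the missing information.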
My point is that an appropriately staged analysis, focused on discovering the key causal relationships, can transform an ordinary cycle of trial-and-error investigation into a systematic discovery of causes, which is the basis for understanding customer behavior. This is exactly what we try to accomplish at Compellon.
As an example, our company has had a good experience collaborating with Qualtrics on the analysis of surveys, where the two kinds of required work complement each other well. At a high level, this collaboration runs as the cycle represented schematically in Fig. 1.
Initially, the survey questions broadly cover aspects of the business and, possibly, some painful issues the business already knows about. In addition, available information about customers is collected. Once the survey is automated and the responses are collected, this material goes to Compellon for analysis.
Five relevant capabilities of the Compellon analytical engine are used at this stage:
- AI-based, evidence-driven modeling without fitting any preconceived model structures. This direct approach accelerates the modeling process and avoids the implicit assumptions of standard solutions.
- Discovery of trigger events, i.e., the key information about the driving forces behind different types of customer behavior, together with discovery of the most important customer profiles to support more personalized interaction with consumers of services.
- Revelation of chains of influence, which characterize the business aspects that determine customer behavior. This helps reveal the train of thought that leads a customer to positive or negative conclusions about the service. Initially the influence is hypothetical; it is categorized as causal (or not) at later steps of the cycle.
- An Advisor feature. It suggests actions that will probably improve customer satisfaction and, therefore, the perception of services. In addition to the qualitative aspect, telling us which slice of the business needs improvement, the feature also provides quantitative information, telling the business to what extent the faulty sides of the service must be improved. This, in turn, provides information for optimal interventions that highlight cause and effect.
- Finally, automatic assessment of survey quality, which points to flaws in the questionnaire itself and leads to a more focused next round of interaction with customers.
All this is very far from the standard “toolkit” that survey analysts are required to have today. If you read a typical list of the required knowledge (e.g., the one in Stanislav Kolenikov’s “Training for the Modern Survey Statistician,” listed in the references), you will not find even elements of AI-based approaches, any support for causality studies, or methods for finding optimal interventions.
On the organizational side, each new step of the cycle is organized to benefit further improvement of services, model quality, and a more accurate description of causal relations. In our collaboration with 1-800 CONTACTS on churn analysis (see the blog post by Neil Wieloch on this site), the client did a great job of changing the responsibilities and focus of its departments in order to extract maximum value from the stronger analysis and the cyclical architecture of the process.
We see these examples of collaboration as confirmation of our general direction of development: a systematic, AI-based search for understanding. Applied to surveys, it raises business-customer relations to a new level.
- Thanks to technology, the constructive dialog between businesses and customers (through surveys and other means) will become easy, even seamless.
- AI-supported advice to business decision makers will become so concrete and inventive that business analysts will no longer be able to imagine working without it.
- And thanks to personalization, customers will stop thinking of surveys as a nuisance; instead, they will perceive the newly transformed relationship as a great help and will come to need and rely on it as they now rely on microwaves or smartphones.
- Jeremy Griffith, “Predictive Analytics: Better Than Surveys?”, Marketing Tech News, Nov. 22, 2016.
- Stanislav Kolenikov, “Training for the Modern Survey Statistician,” Survey Practice, 2015, Vol. 7, special issue, pp. 1-13.
- Judea Pearl, Causality, Cambridge University Press, second edition, 2009.
- Neil Wieloch, “1-800 CONTACTS Identifies Drivers of Churn,” a blog post on this site.