
Friday, February 15, 2013

Incorporating Preference Construction into the Choice Modeling Process

Statistical modeling often begins with the response generation process because data analysis is a combination of mathematics and substantive theory.  It is a theory of how things work that determines how we ought to collect and analyze our data.

A good example of this type of statistical modeling was the accurate predictions made by several political scientists in the 2012 presidential election.  This is how Simon Jackman, author of the pscl R package, described his work for the Huffington Post.
"The 'quant triumph' here is more about the way we've approached the problem: a blend of political insight, statistical modeling, a ton of code and a fire hose of data from the pollsters. Since at least four or five of us will claim 51/51 (subject to FL) this cycle, it's not 'a Nate thing.' It's a data-meets-good-model, scientific-method thing."
We start with the science.  If we have a theory of the data generation process, we use that knowledge to guide our data collection and statistical modeling.  I recognize that this is not the only approach to analyzing data.  When the data mechanism is unknown, we must rely on exploratory techniques and algorithmic modeling as Breiman argues in his two cultures of statistical modeling paper.  However, choice modeling is well-grounded with extensive empirical findings from behavioral economics and a theoretical foundation from psychology.  We ought to use that knowledge to avoid traps and missteps.

How Does Human Judgment and Decision Making Work?

In the decision-making literature we can identify two incommensurate worldviews of how humans form judgments and make choices.  On the one hand, we have those who seem to believe that attribute preferences are well-formed and established, waiting to be retrieved from memory.  To be clear, we are speaking about specific attributes that might be varied in a conjoint study, like the 50 attributes with 167 levels in the Courtyard by Marriott study.  We will refer to this viewpoint as "revealed preference" because it holds that detailed and extensive preferences are present somewhere in memory and are uncovered by our questioning.

On the other hand, many of us do not see preferences as well-formed or enduring.  Preferences are constructed "on the fly" or in the situation based on the conditions present at the time and past experiences in similar contexts.  That is, one does not retrieve a stored preference for each of the 167 feature levels in the above Courtyard conjoint.  Preference construction, like language production, is an adaptive act using whatever prior experience and knowledge is deemed relevant in the present context.  One does not expect stability across data collection procedures unless the same construction process is used each time, and even seemingly minor changes in the survey method can have a major impact on preference measurement.  We will refer to this viewpoint as "constructed preference" for obvious reasons.

These two worldviews lead to very different concerns about data collection and analysis.  The "revealed preference" follower is far less concerned about the reactive effects of the experimental arrangements in conjoint research.  It is not that they deny the possibility of measurement bias.  For them, however, preferences are real and show themselves regardless of whether one uses a self-reported importance rating, a purchase-intent rating, or a choice from a set of alternatives.

On the other hand, the marketing researcher in the "constructed preference" camp expends a good deal of effort trying to mimic the marketplace as closely as possible.  They worry that the conjoint study has the potential to create experimental task-specific preferences rather than measure preferences that would be constructed in the purchase context.  They know that preferences are constructed in the marketplace, and they wish to replicate those naturally occurring processes so that their findings can be generalized.  Wanting to make statements about what is likely to happen in the real world, they need to be certain that there is a sufficient match between the experimental task and the actual purchase task as experienced by customers.

What Does the Product Manager Know that the Marketing Researcher Doesn't?

The arrangement of products or services impacts purchases by potential customers.  For example, placing the store-brand aspirin at a much lower price next to the comparable national brand on the same shelf elicits higher levels of price sensitivity and lures customers into thinking that national brands are probably not of higher quality after all.  At some point the price reduction gets large enough that it becomes easier for customers to accept the popular belief that both the national brand and the store brand were manufactured at the same place with different labels placed on the two bottles.

We refer to the above effect as framing.  Although I know the sofa never sold for $1000, it is so hard to resist that 50% discount.  It just looks like a better price than $500 without the discount.  Framing is a perceptual illusion, like the moon appearing larger at the horizon than at its zenith.  Why would a retailer place at least some of its more expensive wines on the middle shelf?  Because the middle shelf is where most shoppers look first, the higher prices set the frame and make the lower-priced wines appear inexpensive by comparison.  Price sensitivity is not retrieved from memory.  It is constructed at the moment from the information on hand at the time.

Although marketing has always designed the shopping experience to increase sales, choice architecture makes this topic an area of formal study.  Beginning with the recognition that there is no neutral way to present a choice, the question shifts to how to manipulate the choice presentation in order to "nudge" people toward the behavior you desire.  I wish to avoid the political controversy surrounding the book Nudge because it is irrelevant to my point that preference is constructed and at least part of that construction process includes the way choices are presented. 

What Worldview Guides Choice Modeling in Marketing?

I can only assume that a good number of marketing researchers hold the revealed preference worldview.  What other explanation can be given for adaptive choice-based conjoint where the respondent begins the choice process with a build-your-own product exercise?  Why else would someone use a menu-based choice when the actual purchase task was selecting from a set of predetermined bundles?

Both these examples come from Sawtooth Software and both deal with what they call choice-based conjoint.  Conjoint designs assume that products and services can be represented as attribute bundles and that the preference for the bundle is a function of the preferences for the individual attribute levels.  When the dependent variable is a categorical choice, we have choice-based conjoint.  When the dependent variable is a rating, we have rating-based conjoint.  Sawtooth offers adaptive conjoint for both choices and ratings.
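
To make that attribute-bundle assumption concrete, here is a minimal R sketch of the additive part-worth model behind choice-based conjoint: each alternative's utility is the sum of the part-worths of its attribute levels, and a multinomial logit turns those utilities into choice probabilities.  The attributes, levels, and part-worth values are invented for illustration; they are not from any Sawtooth study.

```r
# Hypothetical part-worths for a three-attribute product (illustrative values only)
partworths <- list(
  brand = c(National = 0.8, Store = 0.0),
  price = c("$3.99" = 0.0, "$2.99" = 0.6),
  size  = c("50 tablets" = 0.0, "100 tablets" = 0.4)
)

# Utility of an attribute bundle = sum of the part-worths of its levels
bundle_utility <- function(bundle) {
  sum(mapply(function(att, lev) partworths[[att]][lev],
             names(bundle), unlist(bundle)))
}

# A two-alternative choice set
choice_set <- list(
  A = list(brand = "National", price = "$3.99", size = "100 tablets"),
  B = list(brand = "Store",    price = "$2.99", size = "100 tablets")
)

u <- sapply(choice_set, bundle_utility)
round(exp(u) / sum(exp(u)), 2)   # multinomial logit choice probabilities
```

The point of the sketch is only the functional form; an actual conjoint analysis estimates the part-worths from respondents' choices rather than assuming them.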

Sawtooth's recommendations are contained in their research paper "Which Conjoint Method Should I Use?"  In their summary they tell us first that our method ought to reflect the marketplace, but then they assert that the important considerations are the number of attributes, the sample size, and the available interviewing time.  Similarly, Sawtooth claims that their menu-based products can be used equally well for buying pre-designed bundles or a la carte items.  Implicit is the belief that one would get the same results whether buyers built their own product or had to pick one from a set of available feature bundles.  It is as if the process of designing your own product would have no effect on what you wanted, as if stable preferences for hundreds of feature levels were revealed regardless of the method.

It is not as if Sawtooth does not acknowledge that their measurement procedures can impact preferences.  One can find several papers from Sawtooth itself or from its annual conference that demonstrate order and context effects.  For example, when discussing how many choice sets should be shown, Rich Johnson presents compelling evidence that price becomes more important over time as respondents repeatedly make selections from more and more choice sets.  But his conclusion is not that varying price simulates a "price war" and draws attention to the pricing attribute.  Instead, he argues that over time respondents become better shoppers and attend to variables other than brand.  That is, where others would see a measurement bias, Johnson discovers an opportunity to uncover and reveal "real" preferences.
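
Johnson's finding suggests a simple diagnostic that anyone can run on their own data: let the price coefficient vary with the position of the choice task and test the interaction.  The sketch below simulates such data and fits a conditional logit with survival::clogit; the variable names, data layout, and simulation settings are my own assumptions, not Sawtooth's procedure.

```r
library(survival)  # clogit(): conditional logit for choices within sets

set.seed(123)
n_resp <- 100; n_tasks <- 10; n_alts <- 3
d <- expand.grid(alt = 1:n_alts, task = 1:n_tasks, resp = 1:n_resp)
d$price <- sample(c(2.99, 3.49, 3.99), nrow(d), replace = TRUE)
d$set   <- interaction(d$resp, d$task)          # one stratum per choice set

# Simulate choices whose price sensitivity grows with task number
u <- -(0.5 + 0.1 * d$task) * d$price + rnorm(nrow(d))
d$choice <- ave(u, d$set, FUN = function(x) as.integer(x == max(x)))

# Price x task interaction: a reliably negative term is consistent with
# respondents becoming more price sensitive as the choice sets accumulate
fit <- clogit(choice ~ price + price:task + strata(set), data = d)
summary(fit)
```

Whether one reads a negative interaction as measurement bias or as respondents learning to be better shoppers is, of course, exactly the worldview question at issue.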

We should not minimize the possible confounding effects of asking respondents to repeatedly make choices from sets of alternatives with varying features.  This is the problem with within-subject designs that Kahneman discusses in his Nobel lecture (pp. 473-474): "They are liable to induce the effect that they are intended to test."  Kahneman views preferences as constructions.

Hopefully, one last example will clarify how the two worldviews can look at the same data and see two different things.  Here is a quote from the third page of the previously mentioned paper "Which Conjoint Method Should I Use?" by Bryan Orme,

"Despite the benefits of choice data, they contain less information than ratings per unit of respondent effort. After evaluating a number of product concepts, the respondent tells us which one is preferred. We do not learn whether it was strongly or just barely preferred to the others; nor do we learn the relative preference among the rejected alternatives."
Orme sees real preferences that exist independently of the task.  Moreover, these preferences are continuous.  Choice data does not reveal all that is there because it does not reveal strength of preference. 

The constructed-preference view holds that respondents are cognitive misers who make only the distinctions they need in order to complete the task.  In a choice task, once I eliminate an alternative for any reason, even superficial features such as color or shape, I am done.  I do not need to form a relative preference for each alternative.  My goal was to simplify the choice set, and I do not spend time studying rejected alternatives and forming graded preferences among them.  Unless, of course, you ask me to rate every alternative in the choice set.  However, now the measurement task no longer mimics the purchase task and different preferences get constructed.
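
A toy contrast may help.  The first rule below is a noncompensatory screen that stops evaluating an alternative as soon as it fails on any attribute, so rejected options never receive a graded preference; the second is a compensatory rating that must score everything.  The alternatives, attributes, and cutoffs are invented for illustration.

```r
# Three hypothetical alternatives described on price and color
alts <- data.frame(name  = c("A", "B", "C"),
                   price = c(3.99, 2.99, 3.49),
                   color = c("red", "blue", "blue"),
                   stringsAsFactors = FALSE)

# Noncompensatory screen: reject anything red, then anything over $3.75,
# stopping as soon as an alternative fails one test
screen <- function(x) {
  if (x$color == "red") return(FALSE)   # rejected; no further evaluation
  if (x$price > 3.75)   return(FALSE)
  TRUE
}
keep <- sapply(seq_len(nrow(alts)), function(i) screen(alts[i, ]))
alts$name[keep]   # survivors; the rejected options were never graded

# A compensatory rater, by contrast, must score every alternative
alts$rating <- 5 - alts$price + ifelse(alts$color == "blue", 1, 0)
alts
```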

Actually, you do not need to accept the constructed-preference view to see both menu-based and adaptive conjoint as intrusive measurement techniques.  That is, one can believe in revealed preference and still hold that some measurement procedures are disruptive.  However, believing that preferences are constructed forces one to take additional steps.  We need a model of real-world purchases, a model of the measurement process, and a determination of whether the two are similar enough to justify generalization.

Let us compare the Sawtooth approach with that of John Colias at Decision Analyst.  They offer a free R package, called ChoiceModelR, which builds on the rhierMnlRwMixture function from Peter Rossi's R package bayesm.  Although they do not appear to take an explicit position on the constructed versus revealed preference debate, they do raise several cautions about the importance of recreating the real-world purchase.  They stress the need to customize each design to match the specifics of the brand offering and are concerned about the need to deal with critical idiosyncrasies that are unique to every application.  Realism is important enough that shopping visits are simulated using 3D animation.  Perhaps I should not count Decision Analyst as a "yes" in the constructed-preference column.  Nonetheless, they demonstrate that choice modeling can be conducted with some sensitivity to what happens in the marketplace.
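
For readers who want to see what the underlying estimation looks like, here is a minimal hierarchical Bayes sketch that calls bayesm's rhierMnlRwMixture directly, the function that ChoiceModelR builds on.  The simulated design, the numbers of respondents and tasks, and the MCMC settings are illustrative assumptions only, not Decision Analyst's setup.

```r
library(bayesm)  # rhierMnlRwMixture(): hierarchical multinomial logit

set.seed(1)
n_resp <- 50; n_tasks <- 8; p <- 3; k <- 4   # respondents, tasks, alternatives, attributes

# One list element per respondent: X stacks p rows (alternatives) per choice task
lgtdata <- lapply(seq_len(n_resp), function(i) {
  beta_i <- rnorm(k)                                  # individual part-worths
  X <- matrix(rnorm(n_tasks * p * k), ncol = k)       # random attribute design
  u <- matrix(X %*% beta_i, ncol = p, byrow = TRUE)   # utilities, task by alternative
  y <- apply(u + matrix(-log(-log(runif(n_tasks * p))), ncol = p), 1, which.max)
  list(y = y, X = X)
})

out <- rhierMnlRwMixture(Data  = list(p = p, lgtdata = lgtdata),
                         Prior = list(ncomp = 1),
                         Mcmc  = list(R = 2000, keep = 5))   # short run for illustration

# Posterior draws of the individual-level part-worths
dim(out$betadraw)   # respondents x coefficients x saved draws
```

ChoiceModelR packages the same estimation behind a more convenient interface; see its documentation for the expected input layout.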

The Quantitative Triumph

Simon Jackman got it right, both the 2012 election prediction and how to model real-world phenomena.  "It's a data-meets-good-model, scientific-method thing."  Although this post has focused on the role of preference construction in choice modeling, the same processes are at work whenever any respondent is asked any question.  Election polling is subject to similar measurement effects that must be addressed in the statistical model.  Fortunately, we know a good deal about how respondents interpret questions and how they form a response from research under the heading of cognitive aspects of survey methodology.

Obviously, when I look to election prediction for guidance, I am speaking of the modeling process and not the statistical models actually used.  Election prediction serves as a standard because of its willingness to admit the limitations of its data and its ability to compensate with theoretical knowledge and advanced statistical modeling.  Making the political pundits look stupid was simply a little extra treat.

1 comment:

  1. Enjoyed this post. As a long-time practitioner, I like to design choice tasks so that they reflect the real-world environment (both in terms of the competitive environment as well as the client's product line) as closely as possible. The more successful we are in doing that, the less critical is the philosophical distinction between revealed and constructed preference (since we are trying to replicate the conditions under which the decision maker makes her choice).
    I enjoyed your reference to John Colias (who taught me choice modeling in a real-world setting 22 years ago as a fresh-out-of-school analyst at MARC Research). I find his ChoiceModelR package to yield excellent results for many of my projects.
    I am glad I stumbled upon your blog! I look forward to reading more of your stuff!
    Shiv
