Major update on mobile market research

For a few years there have been relatively few new findings about mobile market research. We have seen the share of online surveys completed via mobile increasing, and we have seen the number of mobile-only studies (studies that require a smartphone, for example location-based, in-the-moment and smartphone ethnography) increasing. But the overall picture has remained fairly constant in terms of advice and practice. However, the picture has now changed.

Last week saw five days of short courses and presentations in Lisbon, Portugal at the ESRA Conference (European Survey Research Association). There were over 700 presentations and most of the leading names in survey, web, and mobile research were present (including: Don Dillman, Mick Couper, Google’s Mario Callegaro, SurveyMonkey’s Sarah Cho, Edith de Leeuw, Roger Tourangeau, GfK’s Randall Thomas & Frances Barlas, and my colleague Sue York).

There were more than 20 presentations particularly relevant to mobile market research – making it one of the largest collections of experimental findings on the topic reported anywhere.

In this post I set out my key takeaways from the ESRA Conference in terms of mobile market research. I may update this post when I get access to all of the presentations and papers from the conference.

Grids on smartphones do not have to be worse than grids on PCs

For many years, the prevailing wisdom has been that grids on smartphones are a much bigger problem than they are on PCs. However, several studies presented at this conference, especially the study presented by Mick Couper, showed that this does not have to be the case.

A mobile optimised grid performs in a very similar way to a grid on a PC: it gives similar results, in most cases takes a similar amount of time, and attracts a similar level of dissatisfaction from research participants.

What is a mobile optimised grid?

In many ways a mobile optimised grid is the result of adopting a mobile first approach (something that Sue York spoke about at the conference). A mobile optimised grid can look like a traditional grid if the labels are short, the number of scale points is small, and the software is responsive (meaning the grid fits nicely on the screen).

Another way of dealing with grids is to present the rows one at a time (and there are now a wide variety of ways of doing this). This one-row-at-a-time approach is usually called item-by-item. Most of the studies presented in Lisbon preferred a scrolling version of item-by-item: after answering one row, the participant scrolls down to the next row (or is auto-advanced to it). In commercial projects the item-by-item approach is more often achieved by showing each item on a new page.
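To make the item-by-item idea concrete, here is a minimal sketch of how a grid question might be broken into a sequence of single-item questions that a survey engine could then scroll or auto-advance through. The data structures and function names are hypothetical illustrations, not taken from any of the conference papers or from a specific survey platform.

```typescript
// Hypothetical types for illustration only.
interface GridQuestion {
  prompt: string;   // e.g. "How important is each of the following...?"
  items: string[];  // the grid rows
  scale: string[];  // the answer options (the grid columns)
}

interface SingleItemQuestion {
  prompt: string;
  item: string;
  scale: string[];
  autoAdvance: boolean; // scroll on to the next item once this one is answered
}

// Convert a grid into an item-by-item sequence for small screens.
function toItemByItem(grid: GridQuestion): SingleItemQuestion[] {
  return grid.items.map((item) => ({
    prompt: grid.prompt,
    item,
    scale: grid.scale,
    autoAdvance: true,
  }));
}

// Example: a three-row grid becomes three single-item questions.
const holidayGrid: GridQuestion = {
  prompt: "How important is each of the following when choosing a holiday destination?",
  items: ["Cultural events and museums", "Convenient connections", "Water sports"],
  scale: ["Not important", "Neutral", "Important"],
};
console.log(toItemByItem(holidayGrid));
```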

The papers showed that survey results were very similar when using mobile optimised grids, across the following comparisons:

  1. Both the PC and the smartphone samples saw grids that fitted nicely on the smartphone screen.
  2. The PC participants saw grids and the smartphone participants saw alternatives (such as the item-by-item scrolling approach).
  3. The PC participants saw item-by-item versions and the smartphone participants saw well-fitting grids.
  4. Both PC and smartphone participants saw item-by-item versions.

Note that when item-by-item approaches were used with a new page per question (as often happens in commercial studies), the data were similar, but the surveys took longer to complete.

Long labels are a problem 1

A point highlighted by Don Dillman was that if long labels are used for the rows, it is hard to produce a grid that is mobile optimised. For example, consider a question asking how important the following are in selecting a holiday destination:

  • “Has a wide range of cultural events and museums.”
  • “The connections from my country to the destination are convenient.”
  • “Has a wide range of water sports, such as sailing, fishing, surfing, water-skiing, and snorkelling.”

On a smartphone these long labels mean that there is not enough space to make several rows visible AND make the scale points visible AND ensure that the buttons or sliders are easy to use. If the labels are long, the mobile version needs to be achieved using an item-by-item approach.
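As a rough sketch of how a survey tool might decide when to abandon the grid layout, consider a simple rule of thumb like the one below. The 40-character threshold and the 480-pixel breakpoint are illustrative assumptions of mine, not figures from the conference papers.

```typescript
// Illustrative heuristic: fall back to item-by-item presentation when long row
// labels would not fit next to the scale points on a narrow screen.
const MAX_LABEL_CHARS_FOR_GRID = 40; // assumed threshold
const NARROW_SCREEN_PX = 480;        // assumed smartphone portrait breakpoint

function choosePresentation(rowLabels: string[], screenWidthPx: number): "grid" | "item-by-item" {
  const hasLongLabel = rowLabels.some((label) => label.length > MAX_LABEL_CHARS_FOR_GRID);
  return hasLongLabel && screenWidthPx < NARROW_SCREEN_PX ? "item-by-item" : "grid";
}

// The long water-sports label on a 390px-wide phone forces item-by-item.
console.log(choosePresentation(
  ["Has a wide range of water sports, such as sailing, fishing, surfing, water-skiing, and snorkelling."],
  390,
)); // -> "item-by-item"
```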

Long labels are a problem 2

Several studies showed that long labels (and long instructions and long questions) tend to be poorly understood by many research participants. This was true of all self-completion modes: web, mobile and paper. Several speakers stressed the need for cognitive interviews when designing new questions and questionnaires – to assess what participants think they are being asked and how they set about answering.

Turning smartphones horizontal is not a great option

In the past, many researchers have felt that the best option is to ask research participants to use their phone in landscape mode, especially for things like scales and grids. However, several studies showed that (even when asked) only a few people do this. The people who did hold their phones in landscape mode tended to be younger and were perhaps familiar with using their phones to play games.

Making studies Mobile First can change the results

Supporting mobile devices does change the results, because it increases the range of people taking the survey. This has been known for many years and nothing has happened to change this picture.

There are many groups of people who are less likely to complete a survey on a PC, and if participants can choose between PC and mobile, the coverage of the study improves. When coverage improves, the answers can change, because some people are no longer being missed.

In her keynote speech, Edith de Leeuw made the point that mode effects comprise two elements. The first is changes caused by the change in hardware; these are undesirable mode effects. The second is changes caused by improving the coverage of the study; these are desirable effects.

Are there mode effects when using a mix of Mobile and PC?

If your survey is badly designed for mobile, there will be mode effects. There is mixed evidence about open-ended responses on mobiles, with many people reporting that the open-ends are more limited from mobiles. There still seems to be agreement that multi-select grids asked item-by-item on a mobile produce more answers than the same questions asked on a PC as a conventional multi-select grid.

So, does that mean grids are ok? If well designed?

Not really. We can make grids on smartphones as good as grids on PCs, but on both PCs and smartphones grids remain one of the question types most disliked by participants. They are associated with more break-offs, and, in interviews with research participants, they are regularly cited as a reason for not taking part in studies. Researchers still need to minimise the use of grids, to make grids smaller, and to make them easier.

Reduce scale points

Several people, for example Sue York, talked about the need to be more mobile first and to move away from long scales to simpler options, such as selecting rather than rating. Great evidence for this point of view was provided by GfK’s Randall Thomas & Frances Barlas. They showed that with fewer scale points the scales were easier to read on a smartphone, the information captured was very similar, and the differentiation (e.g. the standard deviation) was greater.

Thomas and Barlas seemed to be recommending 3-point scales (e.g. Not Like, Neutral, and Like) – but they also offered support for 2-point scales.

Fully anchored scales are more consistent

Thomas and Barlas showed that, in their studies, fully anchored scales (where every point has a verbal label) produced results that were more consistent between PC and mobile than scales that were anchored only at the end points.

Perhaps standardise on unipolar and anchored scales

In most cases Thomas and Barlas found that, within the USA, bipolar scales (for example Dislike, Neutral, and Like) tended to perform better than unipolar ones (such as Not like, Neutral, and Like). But they also noted that bipolar scales are hard to translate into many languages, and these translations created unwanted differences in the data. Hence the advice to standardise on unipolar scales.
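To keep the terminology straight, here is a small illustrative snippet contrasting the scale types discussed above; the labels echo the examples in this post rather than the exact wording used by Thomas and Barlas.

```typescript
// Fully anchored: every scale point carries a verbal label.
const fullyAnchored = ["Not like", "Neutral", "Like"];

// Anchored only at the end points: intermediate points are blank or numeric.
const endPointAnchored = ["Not like", "", "", "", "Like"];

// Unipolar vs bipolar versions of a 3-point scale.
const unipolar = ["Not like", "Neutral", "Like"]; // absence to presence of one quality
const bipolar = ["Dislike", "Neutral", "Like"];   // runs between two opposite qualities

console.log({ fullyAnchored, endPointAnchored, unipolar, bipolar });
```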

Are researchers training people to use PCs?

Several studies, for example data from SurveyMonkey, showed that when contacting people who were not part of research panels, almost 50% tend to use a mobile device. However, in studies with panels (commercial research panels and the probability panels favoured by social researchers), the proportion using mobiles is closer to 20% to 30%. Perhaps the poor performance of mobile surveys in the past has discouraged people who prefer mobile from joining these panels? If so, this is another reason we need to adopt a more Mobile First approach.

Commercial researchers are doing a bad job at being Mobile First

Sue York presented data supplied by Research Now that showed that the proportion of Mobile Optimised surveys has not really improved over the last 3 years. The table from Sue York’s presentation is shown below.

[Table: mobile suitability of surveys submitted to Research Now]

Despite the best efforts of panel companies, nearly one-third of the surveys being submitted to Research Now are judged to be ‘Mobile Impossible’, nearly a quarter ‘Mobile Possible’ – with fewer than half being mobile ‘Friendly’ or ‘Optimized’.

If it doesn’t work on mobile, don’t do it

Some people have argued that specific types of questions only work on PC (for example large grids, or some types of interactive questions). However, in most cases, excluding people who will only take part via mobile is going to compromise your research – and this effect is likely to increase. If you have something that is PC only, try to re-design it (or re-envision it) so that it does work on a smartphone, or reconcile yourself to using an increasingly skewed sample.

Sensors still not ready for the major league

There were some interesting papers on the use of sensors, for example using apps to collect media usage, audio capture to record broadcasts heard, and GPS to aid travel diaries. But none of them was without challenges. The media capture approaches require apps to be downloaded and significant ‘per participant’ incentives to take part.

The GPS tracking for travel diaries was perhaps the most illustrative of the benefits and challenges. A pilot study presented at the conference showed that the data collected could be quite useful, much richer than the paper diaries and more accurate in terms of things like distance travelled. However, the app under-recorded the number of journeys. One of the reasons for under-recording was that the app turned itself off when the battery indicator reached 20% remaining – which happened often enough to change the data.
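The pilot’s app was not described in code, but the under-recording mechanism is easy to picture with a sketch like the one below: once a battery cut-off is hit, tracking stops and any later journeys are simply never captured. The data structure and threshold handling here are my own illustrative assumptions.

```typescript
// Hypothetical sketch of a battery cut-off rule like the one reported in the pilot.
interface TrackerState {
  tracking: boolean;
  recordedFixes: number;
}

const BATTERY_CUTOFF = 0.20; // the pilot app reportedly stopped at 20% remaining

function onLocationFix(state: TrackerState, batteryLevel: number): TrackerState {
  if (batteryLevel <= BATTERY_CUTOFF) {
    // From this point on, journeys go unrecorded, which under-counts travel.
    return { ...state, tracking: false };
  }
  return { tracking: true, recordedFixes: state.recordedFixes + 1 };
}

// Example: the battery drains over a day of location fixes.
let state: TrackerState = { tracking: true, recordedFixes: 0 };
for (const level of [0.9, 0.6, 0.35, 0.19, 0.15]) {
  if (!state.tracking) break;
  state = onLocationFix(state, level);
}
console.log(state); // -> { tracking: false, recordedFixes: 3 }
```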

The key lessons from the various uses of sensors are:

  • Not all smartphone users will take part, so sampling comparability issues can arise. We could end up finding out a great deal about a small and not necessarily representative group.
  • When the systems work they provide different data than older systems (such as paper diaries), which means many organisations will want to delay the change until they are sure they can achieve all the benefits in one move, rather than having several disruptions to their historical data.
  • We need to keep trialling and piloting new systems; they are getting better and can deliver better information (although better does mean different in many cases).
  • Issues like informed consent become even more challenging with passive data collection – see the next heading.

Data Privacy and Informed Consent – problems ahead

Market research is based on informed consent. A paper by Barbara Felderer and Annelies Blom highlighted some of the challenges with privacy and consent. In a study in Germany they asked people to type in their current location (with options such as address, post code, etc.). Well over 90% of participants did this. However, the survey also asked for permission to collect the location of the phone automatically using GPS. About one-third of the people who typed in their address said no to their GPS location being collected. This suggests we should not simply be collecting GPS data without consent, and that consent will not necessarily be given when we ask for it.
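One practical implication is to treat GPS collection as a separate, explicit opt-in rather than something bundled into the survey. The sketch below is a hypothetical consent-first flow; the function names and fallback behaviour are assumptions of mine, not from the Felderer and Blom paper.

```typescript
// Hypothetical consent-first flow: GPS is requested only after an explicit yes,
// and a typed-in location is offered as the fallback.
type LocationAnswer =
  | { kind: "gps"; lat: number; lon: number }
  | { kind: "typed"; text: string }
  | { kind: "declined" };

async function collectLocation(
  askGpsConsent: () => Promise<boolean>,                // e.g. a yes/no survey question
  readGps: () => Promise<{ lat: number; lon: number }>, // device location, only if consented
  askTypedLocation: () => Promise<string | null>,       // address / post code question
): Promise<LocationAnswer> {
  if (await askGpsConsent()) {
    const { lat, lon } = await readGps();
    return { kind: "gps", lat, lon };
  }
  // Roughly a third of participants in the study said no to GPS even though
  // they had typed in their address, so the fallback matters.
  const typed = await askTypedLocation();
  return typed ? { kind: "typed", text: typed } : { kind: "declined" };
}

// Example usage with stubbed prompts (a real survey would wire these to UI questions).
collectLocation(
  async () => false,                     // participant declines GPS
  async () => ({ lat: 0, lon: 0 }),      // unused in this example
  async () => "Example Street 1, 12345", // typed-in fallback
).then((answer) => console.log(answer)); // -> { kind: "typed", text: "Example Street 1, 12345" }
```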

No new findings about non-smartphone phones

There are over 7 billion mobile phones in use around the world. Fewer than 3 billion of these phones are smartphones, so by focusing on smartphones we are excluding the majority of the world’s mobile phone users. However, the rate of smartphone adoption means that this will soon be less of a problem, and it is already only a marginal problem in many countries.

In Summary

The top takeaways are:

  1. Surveys need to be good on mobiles if research participants are not to be alienated and if we want data that is comparable between PC and mobile.
  2. Many of the survey platforms are capable of producing mobile optimised surveys, but many researchers do not appear to be making the effort.
  3. None of the survey platforms have a magic button to create mobile first surveys; the researcher also needs to make design changes, for example shorter text, shorter scales, and shorter answer lists.
  4. A well-designed study will produce similar results on both mobile and PC – but a badly designed study is more likely to produce different results.
  5. Grids are not necessarily a smartphone problem anymore; they are a general research problem.
  6. Consider using 3-point, unipolar scales, fully anchored – especially in grids (if you are still using grids).
  7. If you are designing new questions, test them using cognitive interviews to find out what participants think they mean and how they are approaching them.

2 thoughts on “Major update on mobile market research”

  1. Great sum-up – thanks so much for sharing these learnings from the ESRA Conference! I’ve been running cognitive interviews for years and they are essential for large-scale quant for sure, but really for any quant that you want to know you can trust!
    As for grids – do you like doing them? Particularly if they ask you to explain your choice at every line on the grid – I don’t. Why should regular participants?

  2. I am not a fan of grids. If they are short, simple, and needed, then they can be acceptable. For example: here are five statements; do I Agree, Disagree, or feel Neutral about each of the five?
