MRS Technology and Data Summit 2016
As a data technology company in the market research space, we were naturally keen to attend the Technology and Data Summit hosted by the Market Research Society.
The chosen tagline for the event was ‘a day of strategy, understanding and disruption’. We’re not sure how much disruption was involved, but it was certainly an interesting look at emerging and futuristic technologies and the direction that market research might take in the coming years.
Whilst some of the presentations covered technology that seems to have been ‘coming in the next few years’ for what feels like forever, a few covered technology that is already in use, albeit in its infancy. All the presenters agreed that the size and diversity of data will only increase, and that we need the capacity and tools, as well as the ethical framework, to cope with it. It seems inevitable that computers will take on an increasing role in our personal and professional lives, and we need to be prepared to interact with them and adapt our own roles accordingly.
The key takeaway for us is that we need to start thinking now about how we can exploit these emerging technologies and methods so that we can be one of the pioneers when the Market Research industry is finally ready to embrace them. Although these changes will not happen overnight, it was good to have a glimpse into the likely direction of the industry in order for us to try to keep ahead of the curve.
Overall, a thought-provoking and enjoyable day out of the office!
A short summary of each presentation from the day is below:
Tapping into the lizard brain with VR. – Matt Rattcliffe – Masters of Pie
Masters of Pie are the winners of the ‘Big Data VR Challenge’. Matt explained how they won the challenge, and described the VR data-visualisation applications they are working on at present.
By using VR, a researcher can dive into the data and really experience it. Using colour and three dimensions, as well as sound, enables the human lizard brain to pick out patterns. It is easier to spot patterns when the data is all around you, and directional sound can highlight areas of interest.
Matt’s vision is a symbiotic relationship between human and computer, whereby the computer points out areas of interest or responds to human requests. A virtual ‘data buddy’ could accompany an explorer into the VR space to guide them through the data.
VR technology also lets users interact with the data in a more natural way through gesture-based control, removing the layer of abstraction that mouse and keyboard impose.
Collaboration via telepresence in the VR space between colleagues in different locations is more natural than the alternatives, and the ability to ‘touch’ and ‘pass’ the data between people is a real benefit.
Afterwards there was some discussion as to how much benefit spatial representation of the data will actually add, and whether this is a practical solution for ad-hoc studies. Without actually experiencing the VR simulations, it is difficult to form an informed opinion.
Weaving the human body into the digital domain – Ghislaine Boddington – Body>Space>Data
Coming from a dance background and now working in the arts, Ghislaine discussed how technology and human actions combine to create art. Using sensors either in a room or on the participants, the space can react to movement, creating collaborative art.
An important point that she made is that people have multiple identities in the digital age. Our various online personas, be they on social media or in other virtual worlds, are different from our real-world identity, as well as from each other.
She envisages a world in which our bodies generate a massive amount of data, and told us that ‘implant parties’ are a phenomenon growing in popularity. In fact, she believes that within ten years her son will bring home a girlfriend who is semi-cyborg.
We need to be prepared to deal with the large amount of data generated, as well as be clear on the ethics of how we use it.
Keynote interview: Edwina Dunn – Starcount
Probably most famous for being behind Tesco’s Clubcard, Edwina gave her thoughts about the likely direction that the market research industry will take. Interviewed by Tim Phillips, she discussed the role that social media is playing in the industry.
Looking through social media, it is possible to identify a few key influencers in each sector, each with many followers. By bridging this with other collected data, new insights can be gained. For example, sales of Triumph motorbikes in São Paulo shot up once David Beckham was pictured riding one. Knowing which influencers to target can be of great benefit to companies.
In today’s world, where there is a wealth of information online, there is no such thing as truly anonymised data. It is easy to join the dots and identify the person in question. Unlike the NSA, however, market research is not interested in individuals; it looks for large groups and segments of the population.
She mentioned that banks need to understand their customers and offer a personalised service. They showed little interest in data collection in the past, and have lost out as a result.
However, there is no point in collecting data for its own sake. It is a waste of money to collect data without a clear idea of what you are going to do with it.
Using wearables, machine vision and the crowd to mine actual behaviour for insightful moments – Duncan Roberts – weseethrough
Duncan gave a fascinating insight into how his company collects data about what people actually do, rather than what they claim to do. Taking responses with a healthy dose of scepticism, his company provides empirical results. This presentation was a personal favourite.
Giving respondents wearable cameras, be it Google Glass or a GoPro, lets researchers see exactly how much ‘a little bit of bleach’ or ‘frying for 5 minutes’ actually is. The reality is often way off: people describe things differently, and have faulty memories and subjective perceptions of time.
Watching a video of someone using a product can also provide context that a questionnaire misses. For example, videoing people cooking in Nigeria showed that they experience constant blackouts and are often holding a torch in one hand. This means that ingredient packaging needs to be openable with one hand and teeth.
This type of data gathering has been used successfully all over the world, crossing cultural and language barriers. The integration of technology and human power is crucial to its success. This begins with using proximity sensors and barcodes to remind respondents to start recording, and finishes with a mixture of human and computer input to parse the recorded video and pick out the relevant 0.1% needed for the insight. Video-parsing software is also used to trawl through YouTube and pick out videos in which people give their opinions about a particular product.
To dispel the worry of observation bias, Duncan showed us a video of a woman in Brazil who was filming herself cleaning her bathroom. When cleaning around the shower, she took a toothbrush from a cup by the sink to clean around the edges and put it back afterwards! People forget that they are being filmed and so there is no bias, but it would be interesting to know whose toothbrush it was, and what they had done to deserve such treatment!
Biometrics: Unlocking the unconscious, true and emotional consumer – Kerry Rheinstein – Future Foundation, Gawain Morrison – Sensum
This was really two presentations. First Kerry explained that we now have sensors that are able to pick up on our emotions. From sensing sweat levels and heartbeat to facial recognition algorithms, we now have devices that can make decisions for us, or provide us with options based on our subconscious emotions rather than our rational choice. Examples include a wrist band that swipes Tinder based on our pulse rate or a self-checkout that offers a discount on relaxing body oil if we look stressed.
Gawain demonstrated how our emotional and rational responses may differ radically, and showed us a video of how his company used sensors to monitor participants in a run that culminated in a music festival. This was an ad campaign for a deodorant: the more the participants moved, the more money was donated to charity. The campaign fitted the three criteria that he applies to all his projects – it must be beneficial to science, art and society.
Both were advocates of using biometric systems to increase the volume and variety of data collected, but stressed that there must be a level of trust between the data generator – the person wearing the sensor – and the data consumers. Gawain’s view was that a proper explanation of how the data is used, along with some quid-pro-quo arrangement so that the data generator also gains, would satisfy the ethics of this type of data collection.
Lifting the lid on the future – fusing public and private data at scale to unlock powerful insight and drive change – Miranda Roe – White Swan
Miranda started by explaining how White Swan was started. The owner of the market research company Black Swan had a sister who was ill. She kept on deteriorating and the doctors couldn’t diagnose the problem. Using the data unification and analysis technology developed at Black Swan, he searched through the internet to find what other people with the same symptoms were saying. Eventually he came up with the rare disease that she was suffering from, and now she leads a normal life. This propelled him to set up a not-for-profit organisation to utilise the power of data gathering, unification and analysis to help others who may be in similar situations. Thus White Swan was born.
There is a lot of data out there: medical data, university research data, social media postings and so on. By identifying which data is relevant to a particular case, and by working with medical professionals to guide the questions being asked, accurate diagnoses can be made in weeks rather than months or years. For example, Ankylosing Spondylitis (AS) is a common but rarely diagnosed back problem. To identify the issues that surround it, it is necessary to focus on people who have this particular condition whilst ignoring other back pain sufferers. A consultant suggested that searching the data for those with pain when lying down would differentiate between those suffering from AS and those suffering from other syndromes. This sort of collaboration between disciplines is leading to accurate and quick results.
Data analysis can also identify those factors which aggravate or mitigate symptoms of illnesses. By comparing incidences of flare ups against daily activities, diet or even the weather, sufferers can learn what to avoid or even just be pre-warned that an attack may be imminent.
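The comparison described here – flare-up incidence against a daily factor such as the weather – can be sketched in a few lines of Python. The diary entries, factor names and resulting rates below are invented purely for illustration, not drawn from White Swan’s data.

```python
from collections import defaultdict

# Hypothetical symptom diary: (weather that day, did a flare-up occur?).
# All values are invented for illustration.
diary = [
    ("humid", True), ("humid", True), ("dry", False), ("humid", False),
    ("dry", False), ("humid", True), ("dry", True), ("dry", False),
]

totals = defaultdict(int)   # days observed per weather type
flares = defaultdict(int)   # flare-up days per weather type
for weather, flare_up in diary:
    totals[weather] += 1
    if flare_up:
        flares[weather] += 1

for weather in sorted(totals):
    rate = flares[weather] / totals[weather]
    print(f"{weather}: flare-up on {rate:.0%} of days")
```

A real analysis would of course control for confounders and sample size, but even a crude tally like this hints at which conditions a sufferer might watch out for.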
There is still inertia within the medical community about taking advice from outside sources, but the idea of using data to aid quick diagnosis and to mitigate conditions is gathering momentum. There is much interest from some quarters, and it is hoped that these methods will become increasingly helpful in the coming years.
“Algorithms won’t make us live forever, but the potential to improve our health and happiness is enormous.”
Keynote presentation: The curious case of the data scientist – David Selby – IBM
David started off with some spurious correlations. US per capita cheese consumption correlates with the number of people who died by becoming tangled in their bedsheets. The amount of sour cream purchased correlates with the number of motorcycle accidents. The amount of milk produced in the US correlates with the divorce rate in Washington state. If you are interested in many more of these, you can go to http://tylervigen.com/.
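Spurious correlations of this kind are easy to reproduce: any two series that merely trend together over time will score a high Pearson coefficient. The figures below are invented to mimic that effect – they are not the real cheese or bedsheet statistics.

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented yearly figures that simply drift upwards together.
cheese = [13.1, 13.3, 13.6, 13.9, 14.2, 14.5]   # hypothetical kg per capita
deaths = [327, 337, 351, 364, 379, 392]         # hypothetical bedsheet deaths

print(f"Pearson r = {pearson_r(cheese, deaths):.3f}")  # very close to 1
```

The near-perfect coefficient says nothing about causation – which is exactly the point David was making.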
His main point – besides demonstrating a great enthusiasm for his job – was that data processing is a ‘sausage machine’: you put stuff in and you get stuff out. It is imperative that the human running it applies common sense to what data goes in and what results come out.
“Collecting data is easy; collecting good data requires much more effort,” he said, illustrating this with some sensor data used to monitor the environment in his workplace. It seems that the sensors were placed near the closest electrical socket rather than in the best position to capture the necessary readings. A cursory glance at the data showed an empty room registering as much louder than an occupied one, which is obviously wrong, as is a room whose temperature line chart looks more like a heart-monitor reading.
Common sense also needs to be applied to the results. It doesn’t seem likely that flu medicine purchases increase with temperature rises. Rechecking the data revealed that the opposite is true.
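The kind of common-sense check described above can be sketched as a simple rule over the readings. The room names, threshold and values below are invented for illustration and are not from David’s talk.

```python
# Invented sensor readings: room name, occupancy flag, noise level in dB.
readings = [
    {"room": "meeting_1", "occupied": False, "noise_db": 68},
    {"room": "meeting_2", "occupied": True,  "noise_db": 55},
    {"room": "office_3",  "occupied": False, "noise_db": 32},
]

def implausible(r):
    # An empty room registering conversation-level noise points to a badly
    # placed or faulty sensor rather than a real phenomenon.
    return (not r["occupied"]) and r["noise_db"] > 50

flagged = [r["room"] for r in readings if implausible(r)]
print(flagged)  # → ['meeting_1']
```

Rules like this won’t catch every bad sensor, but they surface the readings that deserve a human’s common-sense second look.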
Big data is good data (for the future of our cities) – Jen Hawes-Hewitt – Accenture
Jen travels the globe advising city administrations on how to run their cities efficiently based on data. Cities are where most of humanity lives, and the amount of data generated by city dwellers as they go about their daily lives – work, travel, leisure or anything else – is enormous.
Data can come from many different sources – hobbyist, crowdsourced, public, academic and corporate. The consumers of the data are as varied as its generators, and the ability to make sense of it for everyone’s benefit, along with the monetisation opportunities, is what will make digital cities a success.
Some data is privately owned and some is already open. Very often someone has the data you need; you just have to know who, and come to a mutually beneficial arrangement to share it. The idea of information marketplaces, where data holders come together to make such agreements and facilitate the flow of data to consumers, is slowly gaining ground.
Debate: tech and data disruption or silicon hype – Claire Emes (Ipsos), Harry Armstrong (NESTA), Ian Gibbs (Data Stories)
A major discussion point was the need to define an ethical framework for the collection and use of data. Is it enough to ask people to tick a box providing consent? It has been shown that people don’t really read the conditions that they are agreeing to, and in fact, may need a two-day workshop to fully understand the implications. Perhaps a quid-pro-quo agreement is not enough as there is still no fully informed consent. Also, do separate agreements to allow the use of different pieces of data imply agreement to combine them for additional insights?
The market research industry’s direction was discussed with Claire warning against ‘shiny new toy syndrome’. Just because something is new and exciting doesn’t mean that it needs to be adopted. On the other hand, the industry does need to keep up with the current technological developments. Some merging of the old and the new needs to be achieved.
As to whether computers will put us all out of a job, the panel agreed that computers can gather and calculate data, but not explain it. Researchers need to market themselves as interpreters of data and providers of insight.