On this special Halloween episode, Claire Murphy-Morgan tells us about the links between art and psychology, treating eating disorders remotely, and the online communities that are discussing parapsychology. We talk all things parapsychology: ghosts, near-death experiences, and spooky Wikipedia editors.
If you’d like to keep up with the Remote Healthcare for Eating Disorders throughout COVID-19 project you can check out their website or follow the project on Twitter @RHEDC_Project.
You can also follow Claire’s work on parapsychology on Twitter @ClaireMorganM.
To stay updated on episodes you can follow me on Twitter @BrownGenavee.
Search for “climate change” on YouTube and before long you’ll likely find a video that denies it exists. In fact, when it comes to shaping the online conversation around climate change, a new study suggests that deniers and conspiracy theorists might hold an edge over those who accept the science. Researchers found evidence that most YouTube videos relating to climate change oppose the scientific consensus that it’s primarily caused by human activities.
The study highlights the key role of social media use in the spread of scientific misinformation. And it suggests scientists and those who support them need to be more active in developing creative and compelling ways to communicate their findings. But more importantly, we need to be worried about the effects that maliciously manipulated scientific information can have on our behaviour, individually and as a society.
The recent study by Joachim Allgaier of RWTH Aachen University in Germany analysed the content of a randomised sample of 200 YouTube videos related to climate change. He found that a majority (107) of the videos either denied that climate change was caused by humans or claimed that climate change was a conspiracy.
The videos peddling the conspiracy theories received the highest number of views. And those spreading these conspiracy theories used terms like “geoengineering” to make it seem like their claims had a scientific basis when, in fact, they did not.
Climate change is far from the only area where we see a trend for online misinformation about science triumphing over scientifically valid facts. Take infectious diseases, and perhaps the best-known example: the measles-mumps-rubella (MMR) vaccine. Despite large amounts of online information about the vaccine’s safety, false claims that it has harmful effects have spread widely and resulted in plummeting vaccination levels in many countries around the world.
But it’s not just well-known conspiracy theories that are causing a problem. In May 2018, one troublemaker came into his own at the height of the Nipah virus outbreak that eventually claimed 17 lives in the southern Indian state of Kerala. He duplicated the letterhead of the District Medical Officer and spread a message claiming that Nipah was spreading through chicken meat.
In reality, the scientifically established view is that the fruit bat is the host for the virus. As the unfounded rumour went viral on WhatsApp in Kerala and neighbouring states like Tamil Nadu, consumers became wary of consuming chicken, which sent the incomes of local chicken traders into a tailspin.
The effects of misinformation surrounding the MMR vaccine and Nipah virus on human behaviour should not be surprising given we know that our memory is malleable. Our recollection of original facts can be replaced with new, false ones. We also know conspiracy theories have a powerful appeal as they can help people make sense of events or issues they feel they have no control over.
This problem is complicated further by the personalisation algorithms underlying social media. These tend to feed us content consistent with our beliefs and clicking patterns, helping to strengthen the acceptance of misinformation. Someone who is sceptical about climate change might be given an increasing stream of content denying it is caused by humans, making them less likely to take personal action or vote to tackle the issue.
Further rapid advances in digital technologies will also ensure that misinformation arrives in unexpected formats and with varying levels of sophistication. Duplicating an official’s letterhead or strategically using key words to manipulate online search engines is the tip of the iceberg. The emergence of artificial intelligence-related developments such as DeepFakes – highly realistic doctored videos – is likely to make it a lot harder to spot misinformation.
So how do we tackle this problem? The challenge is made greater by the fact that simply providing corrective scientific information can reinforce people’s awareness of the falsehoods. We also have to overcome resistance from people’s ideological beliefs and biases.
Social media companies are trying to develop institutional mechanisms to contain the spread of misinformation. Responding to the new research, a YouTube spokesperson said: “Since this study was conducted in 2018, we’ve made hundreds of changes to our platform and the results of this study do not accurately reflect the way that YouTube works today … These changes have already reduced views from recommendations of this type of content by 50% in the US.”
Other companies have recruited fact checkers in large numbers, awarded research grants to academics (including myself) to study misinformation, and blocked search terms for topics where misinformation could have harmful health effects.
But the continuing prominence of scientific misinformation on social media suggests these measures are not enough. As a result, governments around the world are taking action, ranging from passing legislation to internet shutdowns, much to the ire of freedom-of-speech activists.
Scientists need to get involved
Another possible solution may be to hone people’s ability to think critically so they can tell the difference between actual scientific information and conspiracy theories. For example, a district in Kerala has launched a data literacy initiative across nearly 150 public schools trying to empower children with the skills to differentiate between authentic and fake information. It’s early days but there is already anecdotal evidence that this can make a difference.
Scientists also need to get more involved in the fight to make sure their work isn’t dismissed or misused, as in the case of terms like “geoengineering” being hijacked by YouTube climate deniers. Conspiracy theories ride on the appeal of certainties – however fake – whereas uncertainty is inherent to the scientific process. But in the case of the scientific consensus on climate change, which sees up to 99% of climate scientists agreeing that humans are responsible, we have something as close to certainty as science comes.
Scientists need to leverage this agreement to its maximum and communicate to the public using innovative and persuasive strategies. This includes creating social media content of their own to not only shift beliefs but also influence behaviours. Otherwise, their voices, however highly trusted, will continue to be drowned out by the frequency and ferocity of content produced by those with no concrete evidence.
The online retail giant Amazon has moved from our screens to our streets, with the introduction of Amazon grocery and book stores. With this expansion came the introduction of Amazon One – a service that lets customers use their handprint to pay, rather than tapping or swiping a card. According to recent reports, Amazon is now offering promotional credit to users who enroll.
In the UK we’re quickly becoming used to biometric-based identification. Many of us use a thumbprint or facial recognition to access our smartphones, authorise payments or cross international borders.
Using a biometric (part of your body) rather than a credit card (something you own) to make a purchase might offer a lot more convenience for what feels like very little cost. But there are several complex issues involved in giving up your biometric data to another party, which is why we should be wary of companies such as Amazon incentivising us to use biometrics for everyday transactions.
Amazon’s handprint incentive adds to an ongoing academic and policy debate about when and where to use biometrics to “authenticate” yourself to a system (to prove that you are who you say you are).
On the benefits side, you’re never without your biometric identifier – your face, hand or finger travel with you. Biometrics are pretty hard to steal (modern fingerprint systems typically include a “liveness” test, so no attacker would be tempted to chop a finger off or make latex copies). They’re also easy to use – gone are the problems of remembering multiple passwords to access different systems and services.
What about the costs? You don’t have many hands – and you can’t get a new one – so one biometric will have to serve as an entry point to multiple systems. That becomes a real problem if a biometric is hacked.
Biometrics can also be discriminatory. Many facial recognition systems fail ethnic minorities (because the systems have been trained with predominantly white faces). Fingerprint systems may fail older adults, who have thinner skin and less marked whorls, and all systems may fail those with certain disabilities – arthritis, for example, could make it difficult to yield a palm print.
Who should we trust?
A key issue for biometric “identity providers” is whether they can be trusted. This means that they will keep the data secure and will be “proportional” in their use of biometrics as a means of identification. In other words, they will use biometrics when it is necessary – say, for security purposes – but not simply because it seems convenient.
The UK government is currently consulting on a new digital identity and attributes trust framework where firms can be certified to offer biometric and other forms of identity management services.
As the number of daily digital transactions we make grows, so does the need for simple, seamless authentication, so it is not surprising that Amazon might want to become a major player in this space. Offering to pay for you to use a biometric sign-in is a quick means of getting you to choose Amazon as your trusted identity provider … but are you sure you want to do that?
Unfortunately, we’re victims of our own psychology in this process. We will often say we value our privacy and want to protect our data, but then, with the promise of a quick reward, we will simply click on that link, accept those cookies, log in via Facebook, offer up that fingerprint and buy into that shiny new thing.
Researchers have a name for this: the privacy paradox. In survey after survey, people will argue that they care deeply about privacy, data protection and digital security, but these attitudes are not supported in their behaviour. Several explanations exist for this, with some researchers arguing that people employ a privacy calculus to assess the costs and benefits of disclosing particular information.
The problem, as always, is that certain types of cognitive or social bias begin to creep into this calculus. We know, for example, that people will underestimate the risks associated with things they like and overestimate the risks associated with things they dislike (something known as the “affect heuristic”).
As a consequence, people tend to share more personal data than they should, and the amount of such data in circulation grows exponentially. The same is true for biometrics. People will say that only trusted organisations should hold biometric data, but then go on to give their biometrics up with a small incentive. In my own research, I’ve linked this behavioural paradox to the fact that security and privacy are things we need to do, but they don’t give us any joy, so our motivation to act is low.
Any warnings about the longer-term risks of taking the Amazon shilling might be futile, but I leave you with this: your biometrics don’t just confirm your identity, they are more revealing than that. They say something very clearly about ethnicity and age, but may also reveal, without your knowledge, information about disability or even mood (in the example of, say, a voice biometric).
Biometric analysis can be done without permission (state regulations permitting) and, in some cases, at scale. China leads the way in the use of face recognition to identify individuals in a crowd, even when wearing masks. Exchanging a palm print for the equivalent of a free book may seem a world away from such surveillance, but it is the thin end of the biometric wedge.
More and more people are going online to search for information about their health. Though it can be a minefield, where unverified sources abound, searching the internet can help people to understand different health problems, and give them access to emotional and social support.
For many in the UK, getting to actually see a GP remains difficult, and constraints around appointment times mean that some discussions are often cut short. But by using the internet, patients can prepare for appointments, or follow up on issues that were raised in the consulting room but left them with unanswered questions.
But not everyone is so keen on patients using the internet in this way. Some GPs and other health professionals have doubts about the quality and usefulness of the information available. There are also suggestions that “cyberchondria” may be fuelling a surge in unnecessary tests and appointments.
Similarly, though many people use online resources to fill gaps in their knowledge, or to help them ask the right questions, they may not feel comfortable bringing up what they have found in the consulting room.
For our latest research project, we wanted to find out just why it can be so difficult to discuss online information with doctors. We found that in addition to people being embarrassed in case they have misunderstood the information, or can’t remember it accurately, they also fear a negative reaction from the GP who may think they are difficult or challenging.
How to make it work
So how can you as a patient bring up online information with your doctor? First, it sounds obvious but you need a good, open relationship with your GP. Tell them you have been looking online, but ask for their feedback on the information, and for any useful sites they know of. We found that patients with a good doctor relationship felt able to discuss information and ideas from websites and online forums in a considered and critical manner.
Importantly, it is not about the patient trying to be the doctor. Ideally, patients should bring along their information, use it to help explain their key concerns, or detail the options they’ve explored, but also make clear that they still want and value their GP’s input on their findings.
Some of the patients we spoke to told us that they are acutely aware of their doctor’s negative feelings towards the internet. In these situations, people are sometimes tempted to disguise the source of their information. Rather than openly discussing their findings from the internet, they may pretend they got the information elsewhere when mentioning it to their doctor or be very careful not to reveal its origin at all.
For some people we spoke to, the process of trying to integrate the results of their web searches into their communications with the GP was frustrating to say the least. They felt uncomfortable, embarrassed, and sometimes held back key information. This made for unproductive meetings which were felt to be a waste of time.
There needs to be a new and more productive way to integrate online information into doctor-patient discussions. First of all, there should be better ways for patients to collect accurate information online so that they can organise their thoughts and prepare for a visit.
In the consulting room itself, GPs should use the research as an opportunity to have more productive discussions, and use it as a way to teach patients more about their own health issues. They need to question the information source, message and credibility, but GPs could also use it as an opportunity to nudge patients to think about their health options and consider what’s important to them.
A GP is not solely responsible for a patient’s health, and nor is the patient acting alone: the responsibility is shared. Internet research can no longer be dismissed. Even if inaccurate, it can help build a better relationship between patient and doctor, and give both a better understanding of managing health in the modern world.
Smartphones have changed the world. A quick glance around any street or communal space shows how dominant our favourite digital devices have become.
We are familiar with the sight of groups of teenagers not talking, but eagerly composing messages and posts on their screens. Or seeing couples dining silently in restaurants, ignoring the romantic flickering candle in favour of the comforting blue light of their phones.
Attempts have been made to come up with rules of phone etiquette during face-to-face interactions. But why do these devices that are meant to connect us when we’re far apart seem to cause so much division when we’re close together?
Some research has begun to examine this question. In one 2016 study conducted in US coffee shops, researchers found that using a mobile device while spending time with someone reduced the ability of one conversation partner to properly listen and engage with the other. This effect was particularly strong when the people interacting didn’t know each other well.
In another more recent study, researchers told restaurant goers to either leave their phones out on the table or to put them in a box, out of reach and sight. At the end of the meal, participants were asked how enjoyable the meal was and how distracted they had felt.
People who had their phones on the table felt more distracted, which in turn led to lower enjoyment of their time spent eating with friends or family.
My own research has also delved into the topic of phones distracting from high quality face-to-face interactions. In my study, I invited pairs of friends to come to the lab to take part in an experiment and then asked them to wait for five minutes sitting side by side in a waiting area while I printed out questionnaires.
This was actually a deception. I was only really interested in what they would do during the five minutes of “waiting time”, so I secretly filmed them to see what they did. I then asked them to complete a questionnaire on how well they thought that period of interaction had gone.
Finally, I disclosed to the participants that they had been recorded and asked for permission to keep the tapes to analyse in our study. Everyone allowed us to keep their videos (even the pair who had criticised my outfit when I left them alone). Then with the help of my research assistants, we watched all the videos to see how much each pair of friends had used their phones.
We found that 48 out of the 63 friendship pairs used their mobile phones, and on average they used their phones for one minute and 15 seconds out of the five-minute period. We calculated these averages based on both friends’ behaviours because interactions are dependent on both people who are present. So even if only one person used their phone, we would still expect their phone use to influence the quality of the interaction.
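The dyad-level averaging described above can be sketched in a few lines. Note that all the numbers below are hypothetical, chosen only to illustrate the calculation; they are not the study’s data.

```python
# Illustrative sketch of dyad-level averaging: each pair's phone use
# is the mean of both friends' individual use, since the quality of
# an interaction depends on both people present. All numbers are
# made up for illustration.

def dyad_phone_use(friend_a_seconds: float, friend_b_seconds: float) -> float:
    """Average one pair's phone use across both friends, in seconds."""
    return (friend_a_seconds + friend_b_seconds) / 2

# Hypothetical pairs: (friend A's use, friend B's use), in seconds,
# out of a 300-second (five-minute) waiting period.
pairs = [(120, 30), (0, 0), (200, 100)]

per_pair = [dyad_phone_use(a, b) for a, b in pairs]
overall_mean = sum(per_pair) / len(per_pair)

print(per_pair)      # [75.0, 0.0, 150.0]
print(overall_mean)  # 75.0
```

Averaging at the level of the pair, rather than the individual, reflects the point made above: even if only one friend uses their phone, that use shapes the interaction for both.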
The longer they spent using their phones, the lower the quality of their interaction. We also found that regardless of how close the friends were, they all had worse interactions when they used their phones.
Watching the videos of friends using their phones taught me a lot about why they can be such a problem in face-to-face interactions. On occasion, the phones were used to share information, like showing a picture or email that they wanted to discuss. These types of usage didn’t seem to hurt their interactions, but they also didn’t happen very often.
Only 21% of people used their phones in this way and on average the sharing only lasted five seconds. What happened more often was what I refer to as “distraction multitasking”, when friends were listening with one ear but still looking at and thinking about what was on their phones.
This type of use made up the majority of what we observed on the tapes. One particularly sad clip I will always remember was between two female friends. Both friends were getting along well after I left them alone, and then one of them got out her phone.
In the meantime, her friend had thought of something she would like to say and looked up eagerly about to share perhaps some gossip or good news. But as soon as she saw that her friend was completely absorbed in her phone, she looked away, disappointed and hurt. They didn’t speak again during the waiting period.
This seems to me to be the biggest problem that phones create in face-to-face interactions. They make us less available to others by distracting us from important social cues, like that light in a friend’s eyes when she has something important to tell us.
While technologically mediated conversations can be useful to maintain our relationships, most of us still prefer face-to-face interactions to bond with our friends. Face-to-face conversations can feel safer for sharing intimate information – like things we’re worried about or proud of – because they can’t be saved and shared with others.
Being physically present also allows for physical contact, like holding someone’s hand when they’re scared or giving them a hug when they’re sad. When someone is focused on their phone, they may miss out on opportunities to give this kind of support.
The best phone etiquette to remember is that phones are meant to help us connect with our friends and family when they’re far away. When they’re right in front of us, we should take full advantage of the opportunity to connect in real life – and leave our phones alone.