A few interesting posts and articles have emerged in the last few months about the ‘smartphone culture’ – the presence of mobile phones that do a vast number of things, including video, music, internet, email, photos, games and running location-based software. Although you can use one just as a mobile phone, for calls and text messages, almost everybody opts for a full wireless web connection enabling a myriad of features and an ever-increasing number of ‘apps’ – for example for GPS navigation, choosing a restaurant (as you are walking down a street), translating a piece of text written in a foreign language using the camera, etc. The iPhone vies with Android and BlackBerry phones, and the whole debate about what to buy is a bit tiresome. One article in 2011 (Amanda Bown, June 2011, Women’s Fitness magazine) cited a YouGov survey – 33% of the British population had a smartphone in early 2011, and 37% of Australians. In mid-2013, another article showed the figure was 64.7% in Australia, and a bit less in the UK. By 2015, 66% of UK adults had a smartphone, rising to 90% of those aged 16-24. 49% of people aged 18-24 check their phones within five minutes of waking up.
True ‘smart’ phones have been around for several years, but were previously regarded in western countries as a bit special and expensive. They were invented in the early 2000s (by 2007 and 2008, the touchscreen iPhone and Android phones were available). By 2009/10 they just seemed to be everywhere, despite the high network charges to run them in Australia (lower in the UK and USA).
A historical analogy: we have dealt with sudden gadget arrival before. When mobile phones arrived in earnest in the 1990s, there were a number of research projects about the social changes they were bringing. As David Harvey said about modern life in general, mobiles ‘compressed’ time and space, making communication and everyday life a little bit easier (and making the over-exploitation of one’s labour easier too – calls from the boss at midnight, etc.). At the same time, they have been a boon for keeping in touch and for poor households in Africa and Asia, providing knowledge of markets, road conditions, emergencies etc. In London, where I was living at the time, they quickly became an annoyance. Commuting to the city in the 80s and 90s, I winced every time a loud phone conversation took place on a crowded train, or when people took a call during a face-to-face conversation. This sort of behaviour is more accepted today. Generally, hostility to mobiles has mellowed with time, and a ‘technography’ (to use anthropologist Paul Richards’ phrase – it means research “on complex interactions between social groups, collective representations, innovation processes, technical artefacts, and nature“) would show an increase in social acceptance of mobiles over 20 years. In 2011, teaching a class of 120 university students, we did a quick poll – everybody had a mobile phone! So, while I had reservations, mobiles did not have the power to destroy face-to-face communication, which was what the class was doing. The implications of smartphones are somewhat different, I think. They are much more powerful. There is a technological leap, such that many everyday electronic activities – including many of the functions of a laptop computer – can be done with a gadget that fits in your pocket and is available at any moment of the day and night. Location-based software is changing things rapidly. This article from 2010 suggests ownership of a smartphone is inevitable.
I disagree. Not all of the implications are good. The positives, like finding out where you are when lost, are all pretty obvious. Let me focus on the negatives.
- They are addictive, in the sense that many find it hard to put them down or leave them alone for long periods – far more so than standard mobiles. I recently watched three students sit down for coffee. Having ordered, they did not speak for over 10 minutes – all of them were scrolling and typing on their iPhones. Why meet in the first place? (That was 2011 – by 2013, for friends, even couples, this was increasingly acceptable.)
- Their ease of use can reinforce a belief that online communication is as valuable, and as worthwhile investing time in, as the alternative – actually talking face to face or on the phone (it isn’t, in my view). This trend started with computers, of course. I just heard about a guy who messaged his girlfriend in Sydney to break up with her after a long relationship. Impossible and unthinkable 20 years ago. (The parents got together afterwards to restore proper communication.)
- Too much reliance on technology – people often say their ‘whole life’ is on a smartphone, and how they ‘love’ them. My friends have them and have expressed this sentiment. They check them constantly and don’t put them down. Many people cannot leave the things alone for five minutes. My son can’t. Perhaps the worst feeling is giving a lecture or a talk, and looking up to see an audience of heads bowed and fingers scrolling on phones. It was bad enough with laptops, and I was guilty of that, but this is worse. This article provides worrying stats on this trend in classrooms in 2013.
- Rapid technological advance is an issue for theorists of capitalism. We have become beholden to Apple and their ilk for new tech, and these gadgets become objects of desire for professionals and academics – on a MUCH shorter cycle than 5 years ago, when you might just replace your cell phone and laptop every few years. I am unconvinced by activist friends who spend thousands on resource-intensive iPhones, tablets and gadgets – there is some contradiction there, surely. This was noticed in 2013 when the Fairphone was marketed as a response to Apple and the rest – they at least check the conditions of production of their components, even if the phone still uses energy to produce and run.
- Screen time. I would like to be looking at a screen, as opposed to conducting myself in real life away from one, about three hours a day, max. This also goes for parenting – minimising kids’ screen time and encouraging other forms of learning and outdoor activities (we are failing…). These gadgets, along with changing workplace practices, are helping to make this almost impossible, for teenagers in particular.
- Spatial awareness and navigation – people are losing this basic skill, particularly in the US where smartphones are very present (and navigation skills already leave something to be desired anyway), because they think they always have mobile Google Maps or a GPS app to help them out. Learn to read a map first. Is that too much to ask? As a geographer, I say let the technology aid the brain, without actually replacing its functions. Those “spatial memory” functions, it appears, atrophy. Evidence for this came in 2015 here.
- Music, podcasts, video. Why do we need these available 24 hrs a day, and in our ears when we are travelling? Why not take the headset out and listen to the world instead?
Recent postings about this issue that I have made on a couple of academic listservs brought no responses; I think this was a guilty silence from my smartphone-owning colleagues. It is remarkable how little critical literature exists on the topic, though some great books were published in 2015. The ‘I hate smartphones’ group on Facebook disappeared in 2016. I suspect many academics love these things – they can continue working all the time, even if they are just looking up data or sending an email, and they maintain connectivity even when it is manifestly unnecessary to have it. Those spare 10 minutes sitting on a bus or having a cup of tea can now be filled with scrolling.
Baroness Susan Greenfield, the renowned neuroscientist from Oxford, has been one of the first to break ranks. She has been arguing since 2009 that the ensemble of instant communication and social media enabled by new communications technologies (including smartphones) is changing neural pathways, accustoming the smartphone generation to short, choppy communication – originally based on texts and Facebook entries but now including Twitter and much more – and reducing the development of extended arguments and reasoning. The latter is particularly compromised by video games, she says (2013 radio debate). Studies at Notre Dame are underway. Ian Price argues in his book The Activity Illusion that constant messaging “overstimulates our brain’s dopamine system and neurologists are beginning to recognise this impairs our cognitive ability, reduces our ability to concentrate and often makes us tired and frazzled“. Damian Thompson, author of The Fix (2012), shares these worries, citing a 2010 study of Stanford students. If they are right, and refereed papers are scarce (although the New York Times has summaries from 2010), then we are all in real trouble – our kids are growing up with less capacity to concentrate for long periods. It is not just smartphones that do this, of course; they are just the medium for the new internet-hungry modes of knowledge acquisition. But they do enable new styles of social interaction and learning. Some 2012 research is reported here, and a scale-of-addiction study here. Again, there is no particular reason that everybody who has a smartphone falls prey to its seduction, i.e. accessing it constantly. My dad, who likes gadgets, had a huge first-generation mobile the size of a brick. Now in his 80s, he has a Samsung smartphone. He does answer emails on it. He sends us one or two photos. But the point is that it hardly rules his life. This is the safer way to use technology – sparingly.
At a recent event in Melbourne where artists met climate scientists, I raised the phone communication issue. One person said that organising a major art festival with multiple venues and events would not have been possible without smartphones – needed for hooking people up, emailing images, organising venues and so on. Fair enough. There is a major issue here for scholars. For those of us who continue to teach face to face in actual classrooms (occasionally aided by some multimedia and online resources) and who set standard student assessments – essays, exams, book reviews – we could find poorer student results occurring over time as the new learning styles set in. Already, cribbing other people’s text off the web infests many assignments. I don’t agree with Cathy Davidson from Duke, author of Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking, 2011), who argues for acceptance of the internet age by lecturers, which means modifying academic assessment and learning styles to accommodate it. Some specific technologies can augment learning, for sure – Dr Mike Smith, a geographer at Kingston University in London and an expert on spatial data, advocates the use of numeracy apps in schools and is adept at using IT to demonstrate how the planet works. There are also the protest-support programs (countering police harassment of demonstrators with real-time info) and evidence that phones were useful during the Arab Spring demonstrations. But I think our problem generally is now too many knowledge sources for students. We still need to teach them how to sift through it all and make judgments, to argue a case and to form opinions (these claims were made when the internet began, too). Knowledge acquisition, Greenfield says, is altering as mobile internet use and video games make massive inroads into everyday life. But if you are a scientist, or a writer, actual work is always required – emailing and phoning does not cut it.
A technography of smartphones is well overdue.
Personally speaking… For me, phones are for talking on, and computers have replaced typewriters (but not pen and paper) as things to write on. This classifies me as a “Better-Never” in Adam Gopnik’s excellent article in the New Yorker (2011): “The Better-Nevers think that we would have been better off if the whole thing had never happened, that the world that is coming to an end is superior to the one that is taking its place, and that, at a minimum, books and magazines create private space for minds in ways that twenty-second bursts of information don’t.” But with a laptop and a lot of online teaching materials, and running a web journal, I am not sure I fit that profile completely. I generally re-use second-hand technology, to help cut down the waste stream. I have gone as far as a mobile phone in my life, having first got a second-hand one in 2001. That is only 15 years ago. I still use the same SIM card when in the UK [2015 – I updated to a $20 second-hand Nokia with a full keypad but no data]. Mine rings perhaps once a day, if I am lucky. I may get a couple of text messages. And yet I have a very busy job, just like many smartphone people. I can handle communication about meetings, as well as family issues and emergencies, easily. The last thing in the world I want is any more communication with the office, or any more access to emails in particular. A laptop is quite enough, accessed occasionally throughout the day. On it, you can write on a keyboard that fits your hands, and look at a screen 25cm+ across, not 4cm. With 100+ emails a day, why would I want them disrupting me on a smartphone? Nothing is worth the personal cost of that compression of time and space into constant scrolling and expectations of instantaneous response.
My logic has not really convinced anybody I know (except perhaps after the London riots of 2011). The general feeling is that if there is decent technology about, it must be better, more efficient, and worth buying. I am not at all sure. Admittedly, smartphones have passed through the early product-cycle phase where consumer testing can obliterate bad ideas, and they are unlikely to be consigned to a technological dead end, like videodiscs and Palm Pilots. Their numbers will grow. But if Greenfield is right, the implications for our lives are major, not all of them positive, and they are rolling in on the next consignment of phones from China and South Korea. As Adam Gopnik says, “Our contraptions may shape our consciousness, but it is our consciousness that makes our credos, and we mostly live by those.” …… Tristan Harris is the new guru of spartan phone use (2016) – but he uses an iPhone, pared down to avoid distracting and addictive applications. I hope Gopnik and Harris are right – in other words, smart people need to manage smartphones carefully and not give in to the seduction of their time-wasting, addictive character. Will Davidson and Greenfield’s views be properly debated one day?
—– PS I have not even touched on the issue of ‘conflict minerals‘ in the phone manufacturing process. Mind you, Australian mining would love slave-mined tantalum from the Congo to be banned, since they hold reserves themselves. The use of BlackBerry messaging systems by rioters in London in August 2011 is now well documented and may lead to phone networks being shut off, or to changes in the software to make it less anonymous. And then there are the cartoons and videos! A silly one here:
- [http://www.youtube.com/watch?v=xR1ckgXN8G0&index=4&list=PLykzf464sU99MyxWCT_6ez4iRJaHZ2mWv] (deleted unfortunately – it was great though)
- Interesting article in The Guardian, April 2015: http://www.theguardian.com/sustainable-business/2015/mar/23/were-are-all-losers-to-gadget-industry-built-on-planned-obsolescence
- Night-time smartphone use disrupts sleep and performance – Klodiana Lanaj, Russell E. Johnson and Christopher M. Barnes (2014) ‘Beginning the workday yet already depleted? Consequences of late-night smartphone use and sleep’, Organizational Behavior and Human Decision Processes, Volume 124, Issue 1, pages 11-23. mp3 summary
- Dave O’Neill, a Melbourne comedian who happens to be a parent at our former school, on “tablets” for kids (April 2014) – we think alike: http://www.theage.com.au/entertainment/kids-should-learn-how-to-deal-with-boredom-20140327-35l5y.html (my son got one in 2015 – disaster)
- (2015) ‘Good habits gone bad: explaining negative consequences associated with the use of mobile phones from a dual-systems perspective’, Information Systems Journal, doi: 10.1111/isj.12065.
- New book (2016) by Michael Harris: http://www.endofabsence.com/the-end-of-absence/