Professor Helen Kennedy MAE explores what a fair and inclusive digital society could look like and the challenges of digital inequality, big data, and technology ethics.
#BuildingBridges2024SpotlightSeries
About Helen Kennedy
Professor Helen Kennedy MAE FBA FAcSS is a leading authority in the field of digital society, currently serving as Professor of Digital Society in the Department of Sociological Studies at the University of Sheffield. With over two decades of experience, her research focuses on the everyday experiences of digital technology, exploring themes such as digital inequality, data ethics, and the socio-political implications of big data and datafication. As the Director of the €5 million ESRC Digital Good Network, she collaborates with practitioners, cultural organisations, and policy stakeholders to envision and achieve a good digital society.
At AE’s Annual Conference, Building Bridges, Professor Kennedy will present her talk, What does a good digital society look like?
Read the interview
Based on your research, what constitutes a “good digital society,” and what are the primary barriers to achieving it?
“We are addressing this question over five years, supported by a €5 million investment from the UK’s social science funder. It’s not an easy question to answer. Ideas about what constitutes a ‘good digital society’ differ across individuals and communities, depending on their politics and social inequalities. Many ‘tech for good’ initiatives lack clear definitions of what ‘good’ means, and some groups benefit from technology deployments that harm others, so there’s a real danger of ‘goodness-washing’ of still harmful technologies. Some commentators think that we shouldn’t be asking whether digital technologies are good and fair, but whether they are just, promote equity, and shift power.
At the Digital Good Network, we believe that it’s not possible to arrive at a good digital society without addressing major social challenges, and because of this, we focus our work on equity, sustainability, and collective resilience.”
What are the most significant changes you’ve observed in digital society over the past decade?
“There’s always something new provoking wild optimism, or fear and alarm. In my time, it’s been the web itself, social media, big data and datafication, and now it’s AI in all its forms. In reality, new digital technologies often bring with them quite similar possibilities and problems, relating to who benefits and who has power over them, and who does not.”
Your research covers various aspects of digital inequality. Can you discuss some key findings and their implications for society?
“In Living With Data, we found that belonging to a disadvantaged or minority group informed people’s perceptions of data uses: these groups often expressed different views on data use than their advantaged or majority counterparts. Factors such as education, economic status, age, dis/ability, gender identity and sexuality, English as an additional language, and race and ethnicity shaped participants’ perceptions of data use at times. I’m not suggesting a direct correlation between belonging to a demographic group and perceptions of data uses – rather, demographic characteristics shape life experiences, which in turn shape perceptions.
For example:
- Disabled people were more positive about health data re-use than people who did not have a disability;
- White people trusted the police’s data uses more than Black, Asian and other racially minoritised people;
- Older people trusted their GP more than the youngest 18-24 age group;
- LGBTQ+ people trusted health organisations less than heterosexual cisgendered respondents.
These differences show that there is no singular ‘public’; viewing the public as one entity obscures the diversity and inequalities that characterise various groups’ experiences and perceptions of digital technologies. It’s important to be specific and precise when discussing publics in relation to digital technologies. Digital society policymakers and practitioners should look beyond the headline findings of individual studies, turning instead to evidence reviews that synthesise findings from multiple studies. Stakeholders should regularly consult diverse publics, because structural inequalities influence how different people perceive various digital technologies.”
Big data and datafication are central themes in your work. How do you balance the potential benefits of big data with the need to protect individual privacy and prevent misuse?
“I don’t think we should be focusing on individuals and on privacy – rather, I think we should be asking questions about the ways that datafication impacts communities and societies. There are lots of ways in which datafication perpetuates inequities, creates new ones, and impacts already marginalised groups. Our focus should be on balancing the benefits of big data with ensuring it serves all groups, not just a privileged few, and pausing deployments when they fail to do so.”
Given the rapid pace of technological change, what are some emerging trends or challenges you believe should be prioritised in future research in this field?
“It’s important to understand how specific digital technologies are used, deployed, negotiated or resisted by specific groups in particular contexts. Yet, there’s also a larger, macro-level, normative question we need to ask: What kind of society do we want, and what role should digital technology play in it? US sociologist and critical race scholar Ruha Benjamin says, “We should remember to imagine and craft the worlds we cannot live without, just as we dismantle the ones we cannot live within”. These words should inspire our digital society research, and prompt us to ask ourselves how we want things to be, and what we need to do to get there.
Along that journey, we should be prepared to pause or abolish tech deployments that don’t align with our values and principles, ensuring that the ‘we’ or ‘our’ in that phrase is inclusive and does not perpetuate existing inequities.”