Sir Shankar Balasubramanian MAE and Professor Helga Nowotny MAE discuss innovation, interdisciplinary research, public and policy engagement, and the impacts of technology.

About Shankar Balasubramanian
Sir Shankar Balasubramanian is the Herchel Smith Professor of Medicinal Chemistry at the University of Cambridge and senior group leader at the Cambridge Institute. He works on the chemistry, structure and function of nucleic acids, spanning fundamental chemistry and its application to the biological and medical sciences. Sir Shankar was knighted in the Queen’s New Year’s Honours in 2017 for his services to science and medicine, and awarded the Royal Society’s Royal Medal in 2018. In 2021, he was awarded the 2020 Millennium Technology Prize (jointly with David Klenerman), and the 2022 Breakthrough Prize for Life Sciences (jointly with David Klenerman and Pascal Mayer) for their work on sequencing technologies.
About Helga Nowotny MAE
Helga Nowotny MAE is Professor emerita of Science and Technology Studies, ETH Zurich, and founding member and former President of the European Research Council. She has held teaching and research positions at universities and research institutions in several European countries, and continues to be actively engaged in research and innovation policy at a European and international level. She is currently a member of the Board of Trustees of the Falling Walls Foundation, Berlin; Vice-President of the Lindau Nobel Laureate Meetings; a member of the Austrian Council for Research and Technology Development; and Visiting Professor at Nanyang Technological University, Singapore. Her latest publication, In AI We Trust: Power, Illusion and Control of Predictive Algorithms, was published by Polity Press in 2021.
The interview
Your session at the AE Conference Building Bridges 2022 is called ‘Asking more from research: Scientific and scholarly research results are a public good to be benefitted by society’. Could each of you give a brief explanation of your research/scholarly work/interests, with this theme in mind?
HN: “This raises the question, ‘What kind of research?’ and my answer is twofold. As a researcher in Science and Technology Studies (STS), I’m interested in the kind of societal conditions needed for science to thrive, and in the contributions that the sciences make to society. This was once called the social contract with science, and is one way of asking the question of what kind of research we want. We want research excellence, but society also expects some kind of benefit from it. Turning to my experience as former President of the European Research Council (ERC), asking more from research focuses very much on research where no immediate benefit is in sight. This is basic research or, to use my preferred term, the ‘usefulness of useless knowledge’, as Abraham Flexner called it. What seems useless in the moment turns out to be very useful when taken up, sometimes much later. We have a very good example from the recent pandemic, when the ERC showed that it had already funded up to 600 projects that were of direct or indirect relevance to producing mRNA vaccines and making progress on therapies. Thus, basic research does not deliver ‘more from research’ in the sense of immediate benefits, but historical experience teaches us that something useful will come out of what science produces.”
SB: “I’ve spent the last 30 years or so working on understanding the different aspects of how DNA works, by looking at it on a molecular level. DNA is composed of letters that convey information, and in humans one copy of the genome, as it’s called, comprises just over 3 billion of these letters arranged in a particular order. Some 25 years ago, my lab was engaged in what might be called ‘not very useful’ research, doing very basic studies to understand the mechanism that biology uses to copy DNA when cells replicate. I kept the research proposal for the grant that funded the work, and every now and then I re-read it and reassure myself that there was nothing ‘useful’ implied in anything that we were setting out to do. Now, what came out of that – and this is not atypical in science – was a new idea for a different way of decoding these letters, which we now call sequencing. The word sequencing does not appear in the proposal – and it was not what we were trying to do; it just came out as an idea. We saw the potential for this idea to be developed in a way that could really transform the speed and cost of sequencing DNA. At that time, there was a big international cooperative project called the Human Genome Project, and we looked at what it was achieving, seeing the cost and the scale needed to produce one reference genome equivalent to one human genome. Yet the big goal was to really understand what it is that makes us all different in terms of our DNA sequence. And therein would lie information that may help us to develop new and improved ways of predicting and treating disease, and ultimately improving healthcare – and that’s just focusing on humans.
There are of course many other organisms that have DNA. So, we went from early innovations (this was a collaborative project that involved a few people), to proof-of-concept, and then trying to figure out how to democratise this idea by making it into a system that could be deployed in people’s labs. At that time, there weren’t many ways of gathering resources for this, so we ended up starting a company. We would not have thought of ourselves as entrepreneurs, but there was no other way we could see to build an interdisciplinary team to do this quickly. So, we raised money, started a company, and the company developed the technology, turning it into a commercial system, and it evolved from there. Fast forward 25 years, which I think is actually a relatively short period of time in science, and we have gone from serendipitous discovery and ideas to something that’s actually useful. And now it’s deployed quite widely around the world for basic research, and has made some important contributions, particularly in the areas of cancer diagnosis and detection, diagnosing rare genetic disorders in young children, and infectious diseases. In fact, it was used to sequence the coronavirus that led to the pandemic, and has been used for tracking the emergence of variants around the world, by sequencing the viral genome in infected patients.
Whenever people ask me, ‘What do we need to do to support and foster innovation?’, I respond that we absolutely must maintain a strong base in basic research, with the necessary funding and infrastructure for it. Without that, at some point in the future, there will be no innovation, there will be no new knowledge and no new discoveries. The other message I give is that it took us 5 years for ideas and proof-of-concept, and another 10 years to show that it all works and can be put into practice. And then arguably a further 10 years to demonstrate that it can actually do something useful when put into action in society. So, the pathway from an idea or basic research, to showing usefulness in society, is not something that happens in a few years. And so, when thinking about creating change and policy-related change, the timescales we need to consider are at least multiple decades, not just a few years. We need to be long-term.”
HN: “Let me also add something to the importance of time, because politicians are very impatient – they want immediate solutions to problems. However, we must be adamant in saying that this is not the way research progresses. Sometimes, sudden and unexpected discoveries occur, but then other bottlenecks may be encountered. Take an example from physics, where the discovery of high-temperature superconductivity was a major scientific breakthrough, recognised by a Nobel prize. Everyone wanted to jump on it and set up a business, but then numerous problems appeared that we are still grappling with today. There has been progress, but the evidence is overwhelming that we are still far away from the dream, the vision, that superconductivity can be made to work at room temperature. Progress in science is not linear and cannot be predicted.
In this context, I want to highlight the principle of serendipity, which is so crucial to the way science proceeds. Serendipity means that you find something you had not been looking for, but – and this is an important but – you realise its significance. Serendipity is a very potent ally of every researcher, especially in basic science where one never knows what one will find. You must have an open mind and let serendipity enter your lab space, into your way of thinking and then go on from there. Serendipity may strike in a moment, but the process of doing research takes time and follows its own temporal dynamics.”
What does the term ‘innovation’ mean to you and is it viewed differently by different stakeholder groups, like policymakers and industry leaders? If so, what are the consequences of this?
HN: “I want to start with Schumpeter’s idea of what innovation is about. Schumpeter was an Austrian economist and one of the first to consider innovation worthy of economic thought. He defined innovation as the commercial or industrial application of something new, such as a new product or process, type of organisation, source of supply or market for a product. It was usually driven by entrepreneurs who were ‘new’ men, operating in new firms and often finding new sources of financing. They not only wanted to bring new products onto the market, but were driven by something that went beyond profit. I want to highlight his view of innovation as waves of creative destruction. Creativity exists when something new comes into the world, but it will also destroy the old, existing structures. For example, the coming of the railway system and electricity resulted in the large-scale destruction of existing infrastructures and jobs. This remains fundamental to innovation. However, if we turn towards the way politicians today throw their weight behind the word ‘innovation’, very often it’s a simplistic way of calling for economic growth, and of looking at science and technology as the engines of that growth. This is a very instrumental and utilitarian way of thinking about science and technology. Politicians delight in speaking about ‘disruptive innovation’, but nobody quite knows what kind of disruption they want, and why. The term itself originated in an empirical analysis of the way big, established firms resist innovation, because it disrupts their monopoly and therefore they guard against it. Innovation meets massive vested interests, and that is why big firms often buy start-ups that have some new and smart ideas. It is also why a huge number of patents are never used, as they are bought up by firms so that no one else can use them. A lot needs to be clarified about the term ‘disruptive innovation.’ It’s definitely not the solution to all our problems, yet this is very often the way it is presented. Innovation is usually a combination of what exists already, and is incremental. Firms improve a product or process, create a new market and may find new ways of financing. There’s a long way from having a bright idea to actually making it work successfully in the market. So, a lot of hype exists around innovation.
But we are now also faced with the negative consequences of innovation. One cannot foresee the harmful consequences that may eventually come from it. An example would be plastic, which was a fantastic innovation if you think of women in Africa who had to carry water on their heads over great distances. There are many ways plastic has changed our life for the better. However, right now, the sheer amount of plastic and microplastic that turns up as waste has become a huge problem, and we have no idea about its health effects. We see it’s not good for the environment, because microplastic is dispersing into soil and water; it’s in the air and everywhere around us. It is a strong reminder to start thinking about the consequences of innovation, as far as we can foresee them.”
SB: “I’ll say what innovation means to me, from a personal perspective. Firstly, I would regard innovation as something that brings about substantial change. It can be in the way we think, or the way we go about doing things, or the way we live. For me personally, an innovation has to do these things for the better. In terms of how others think about innovation, I think the most common way of calibrating innovation or quantifying its impact is purely down to economics. How much money does it generate for a business, for the GDP of a country, for individuals? However, many innovations that have transformed the way we live cannot easily be calibrated by these measures. One simple example would be an innovation that changes behaviours, or has long-term consequences for improving the environment. In these cases, the benefits are very difficult to quantify economically, although there are some economists who are now addressing this. As Helga said, innovation is often thought of in terms of short-term outcomes and impact, and to many in policymaking or the commercial sector, impact has to be short-term, otherwise the innovation won’t survive. And this is a problem, as I illustrated earlier. The one major example I was involved with really took 20-plus years before it started to deliver benefits to society and, as a consequence of that, economic benefits as well. I think in terms of the timescale for innovations, there is a perception – which is incorrect – that basic research takes time but once you call it an innovation, it’s somehow driven at high speed and out pops a product, or a device, or something that immediately changes society. This is generally not true – there may be some exceptions, but innovation really does take time. I would also say a lot of innovations are driven by commercial and venture capital investment. This is actually what drives the start-up and growth industry, and right now all of these industries are suffering because innovations that are not yet having an economic impact are not seen as worthwhile investments, in a world where interest rates are higher. So, things like this will have a ripple effect on the development of innovations, because we do live in a world where multiple stakeholders take a relatively short-term view, and they need to measure impact by economic benefit on that limited timescale. This is the reality of how it works at the moment. I think any mechanisms to smooth this out and help sustain a much longer-term view on supporting innovations would be helpful.”
HN: “I would like to add to what Shankar said. I think we are fixated on technological innovations, partly because economists can get a grip on them. They can quantify and measure the GDP of a country, with innovation scores at the centre. Technological innovations often come as a device – they are tangible and you can see the difference they make. Using them promises to save time, effort and money. But we tend to forget that every technological innovation has to be embedded within a social structure, as it changes the way in which people relate to each other. So, every technological innovation also needs some kind of social innovation to function. Let’s take a hospital where a new kind of wonderful diagnostic instrument is introduced: a technological innovation that requires a reorganisation in order to fit into the system of patient care. People need to be trained to understand its use, which may change the social hierarchy within an organisation or the way a team is composed. Social innovation has an impact on our mindset and on how we might do things in a different way. As Shankar mentioned, we want change for the better in how we live. Sometimes, social innovation precedes technological innovation when the social environment – the social group or team – is ready for it, and it then leads to the kind of betterment we all want.”
Both of you work across boundaries, for example, with other academic disciplines and/or other stakeholders like policymakers and industry. Are there important principles to keep in mind when engaging with those outside your discipline or sector? Is there a need to rethink/redefine boundaries between stakeholders (e.g., academics/ industry/ policymakers/ citizens)?
HN: “This is about principles of engagement. For an academic, ‘outside’ means firstly outside of one’s discipline. This leads to the oft-talked-about principle of needing more interdisciplinary work – or as I like to say, ‘the world has problems, and the university has departments.’ So far, we have not been able to work together across disciplines in a way that meets the challenges we face. There are many reasons why universities stick to departments and academic disciplines, although the boundaries are changing. I would defend universities keeping disciplines, with the argument that a discipline teaches one what the relevant research questions are. Having said that, universities have an obligation not just to teach students the language of one discipline, but to enable them to understand the language of others. Interdisciplinary work meets a lot of obstacles in practice. If you search the literature for the term ‘interdisciplinary’, you will discover that the majority of articles are about why interdisciplinarity does not work. Interdisciplinarity is particularly hard for young people. They often want to reach out across the disciplines, as they’re attracted to solutions and eager to include different perspectives on a problem. These young people need to be protected against the academic system punishing them for not following mainstream disciplinarity. On the other hand, young people need to be encouraged to become what I call ‘competent rebels’: having competence, including interdisciplinary competence, but at the same time being ready to rebel against their teachers or elders.
In regard to policymakers, the kind of engagement I’m most familiar with is scientific advice. For instance, I’ve been the chair of the research advisory board advising the European Commission. What I discovered in this and other circumstances is what I call ‘the advisors’ dilemma.’ If one is too close to the person or the policymakers who are receiving the advice, one will fail because they know the answers already. So, you cannot make a real contribution to the options or decisions they are considering. And if one is too far away one will also fail, because the solutions and advice proposed will not be taken up, as they seem unrealistic and unfeasible. Therefore, one has to tread a fine line between too far away and too close. I’ve also discovered that what policymakers are looking for is to get advice from people whom they trust and who have no personal or institutional interest in advising them. And this is very rare in an individual or groups, because every policymaker is being lobbied from many different sides, and groups are pushing their advice and the interests they represent. Therefore, it’s worthwhile to see if we can create the kind of setting where trust is established between those who offer scientific advice, and those who receive it, while keeping vested interests and one’s own interests as far removed as possible.
I have far less experience of engaging with industry; however, what people in industry look for is also people they can trust and whom they think are talented, so there’s an open attitude of trying to uncover the potential that people can bring to something.”
SB: “Based on my experience in academia, interactions with people from other disciplines can be a tremendous recipe for creativity. The potential to see things that you might not see with people from your own subject area is very high. It’s a very good way of cross-checking ideas, fallacies or misunderstandings when you’re working at the edge of disciplines. Why doesn’t it happen more often, and why is it often not successful? Language barriers exist, so finding common ground and a common language is a challenge. I work in the sciences, and there’s a lot of very specialist scientific jargon which, even as a scientist, leaves you struggling to follow the dialogue. I think as individuals we need to articulate what’s important about concepts using language that is free of jargon, and this is a skill that very few people possess and do well, in my experience. One has to leave one’s ego aside in such a situation, because you need to build mutual understanding, to respect people from other disciplines and also be respected by them. It doesn’t always come naturally, because in academia people feel – sometimes rightfully, sometimes less so – that they are leaders and pioneers in their own niche area. Yet they really need to recognise the merits in others, and in my experience of interdisciplinary collaborations, the true test is whether the collaboration is sustained. The best interactions I’ve had with people across disciplines have lasted 10 or 20-plus years. For me personally, it’s been a learning experience, where I can gain from the other person’s understanding of the world. But it does take time. The university structure which Helga alluded to does put us into silos. We tend to teach in silos, and we tend to do research in departments. There are, around the world, some pioneering examples where in one building you have people from quite a broad range of disciplines, working together on the same corridor, having coffee and tea with each other daily, helping each other. Over time, creative dialogue, new opportunities and exciting projects inevitably emerge from these situations. Every young scientist should experience aspects of the history and philosophy of science, which sit closer to the humanities and sociology, and I think this is often missing from science education. As you move into research these things become even more important. In universities, there is room for more creativity in how we organise the way we teach and do research – and, indeed, creativity in the research environments that we build, which could break down these barriers and give rise to new ideas and innovations in teaching and research.
I’ve had relatively limited experience of interacting with policymakers. In the experiences that I’ve had, I’ve often struggled to come to terms with their political agendas, and they don’t always want to engage with advice in a way that’s completely objective. That’s difficult if you’re a researcher – science is about data, measurement and objectivity. I think one of the challenges here – and we saw some of this during the pandemic – is when scientists and clinicians have differing interpretations or ideas, as we often do. This is what happens when there’s uncertainty: there’s room for a spectrum of views and opinions from experts. In science, some of us are comfortable with uncertainty. It’s inherent in the nature of science; things are rarely black-and-white. But in policy you need to make a decision. And sometimes you have to present the decision as being very clearly substantiated by evidence. I think this is one area where there can be difficulties in blending scientific advice with policymaking. Sometimes, if you’re not careful, it can portray science in a very poor light, whereas those of us in the world of science accept uncertainty in some situations.
I’ve had more experience in industry. I think in industry, if you are to provide scientific advice, your role is to try and be the voice of objectivity. To be prepared to call something out that may be different from the internal view and to challenge those views in a scientific and evidence-based fashion. I think that can be a very useful role to play in industry.”
HN: “I fully support what Shankar is saying. I wrote a book about uncertainty with the title The Cunning of Uncertainty, and one of its underlying themes is precisely that science thrives on uncertainty, because scientists want to move into the territory of what is not yet known. By contrast, in politics, but also in large parts of society, there is a craving for certainty. Politicians expect a very clear ‘yes’ or ‘no’ answer, where there is none. If a scientist is asked whether a particular substance is carcinogenic or not, the only honest answer is, ‘yes, under certain conditions,’ or, ‘no, under certain conditions.’ But the politician does not want to hear about these conditions, and this makes it very difficult to convey a bandwidth of uncertainty. As scientists, we have to explain better that uncertainty is part of how the world functions – the only certainty we have is that we are going to die – but also that fear is the worst reaction to uncertainty. Uncertainty can open up new opportunities and pathways to be explored. This is what I call the cunning of uncertainty, which we should embrace.”
What has been the impact of technology on your field, and what further changes do you foresee over the next 10 years?
HN: “I’ll start with referencing another book of mine, which was released last year – In AI We Trust: Power, Illusion and Control of Predictive Algorithms. The largest impact of technology we have seen recently in scientific fields is the impact of big data, unprecedented computational power and sophisticated algorithms that also change the way science works. This takes a different form in physics, life sciences, social sciences and in the humanities. But the impact on the organisation of science, its methods and on concepts like reproducibility, the validation of results and what counts as evidence is undoubtedly huge.
One of the big feats in life sciences recently was AI coming up with a way of accurately predicting the structure of folded proteins [DeepMind]. But the influence extends far beyond such stunning achievements. Look, for instance, at the way big data influences the health system and clinical practice. We all know that any kind of big dataset contains a number of different biases, and if these biases discriminate against certain groups of people or exclude them, this will be transferred into the predictive algorithms that determine the decisions to be taken. This may lead not only to the exclusion of some population groups, but will also distort what is known about the medical condition and the efficacy of therapies. There are a number of pitfalls, as well as great opportunities that are opened up by the impact of digital technologies. We have to be very careful about the possible drawbacks, like biases or inconclusive evidence that enter into how we interpret and implement algorithm-based decisions. In the end, there must be room for human judgement and the possibility to appeal. We can already see a reconfiguration of expertise taking place; the experts of yesterday are no longer the experts of tomorrow, and this again will upset a number of ways in which we are working.”
SB: “I’ll answer by referring to my research area of DNA. If one considers DNA as an information molecule, the order of letters is genetic information. There are also other types of information that can be read from DNA. One form is much more dynamic information, which is sometimes called epigenetics. All these types of information ultimately report on the foundation of living systems and can also report on dynamic changes that are going on in the system, sometimes in response to the environment or disease. There are a number of areas where things are moving very rapidly in terms of technological advancement. I think over the years there have been big strides in how laboratories can make DNA to order. You can make literally millions of different DNA molecules by miniaturisation and scaling – this is writing. Then the area I’ve been more directly involved in is reading, which is sequencing. You can sequence DNA about 10 million times faster than you could 20 years ago. That’s 7 orders of magnitude – it’s a big change in the economics and speed of sequencing. The third area is rewriting or editing DNA, and there have been technological advances in our ability to make very precise changes to the code in a cell, or ultimately an organism. So, these three dimensions are each continuing to develop and improve.

So, what can we do with this? I’ll refer to the area I know most about, sequencing. Today, in a lot of cancer clinics, certainly in the western world, you will sequence the tumour of a patient first, to find all the genetic changes that may be contributing to the characteristics of the tumour. Some physicians are already able to harvest some of this information to decide how they’re going to treat the patient. In fact, there was a case, just a couple of days back, on BBC News of a newborn baby that had a tumour, and they were going to treat the baby with chemotherapy. However, they did a whole genome sequence of the biopsy and, based on that information, decided that the child’s tumour was not problematic, but rather benign. It was not treated with chemotherapy, which was a great relief to the parents. This is an example of an actual decision on how to treat a patient. Also, rare genetic diseases are being diagnosed for the first time; many of these were previously undiagnosable. Now, one can sequence the genomes of Mum, Dad and child and rather quickly find what is unique to the child, and today in some countries – including the UK – young patients with rare genetic disease can be sequenced in this way to seek a diagnosis. The third area is infectious diseases. The Covid pandemic has seen the sequencing of the virus, tracking emerging new strains and sequencing patients to try and understand the genetic basis for why some people suffer severe illness and other people have hardly any symptoms. This links to the examples of AI that Helga mentioned. AI is relevant to all this because of the volume of data being generated through these types of projects, where a significant portion of the population is having their genome sequenced. These datasets – along with healthcare information and treatment – are in the process of being assembled and mined, often using machine learning and AI algorithms. Data generation requires data handling and data mining, and the AlphaFold protein structure prediction that Helga mentioned is an example of a problem that scientists have been trying to crack for decades.
And suddenly, the seemingly impossible has been made possible. I do think it is going to cause great change. As scientists, we like to understand how things work, and for me, one of the potential issues of AI is ultimately that it’s a black box. It gives answers and makes predictions that can be astonishingly accurate, but it doesn’t convey how it got there in a way that the human mind can understand. One of the concerns is that the more we commit things to AI and machine learning to find solutions, the greater the risk that we will sacrifice building an understanding of how things work.”
HN: “The black box that AI represents is also important when thinking about open science. We know that it’s the large corporations that own and process much of the data, even those assembled in the cloud where we think they are secure. And many algorithms are not even known to the public, as they are private knowledge. So, it’s not just that we don’t understand what actually happens inside the black box, but we don’t have any means of opening the box and looking inside.”
Has the movement towards openness (open science/ research/ innovation) had a transformative effect, or do we need to go further?
HN: “I think we’ve seen a lot of traction generated by the call for open science, open access, open innovation and open collaboration, and that has been a very good move forward. On the other hand, we have also seen complexities that people had not thought about before. The call for open science and open access came about because researchers felt they were too constrained by the publishers and their paywalls. I think we have made progress, but it remains a very complex arrangement, with a range of different models (green, gold etc.) in place that still disadvantage science in many parts of the world. Another problem is the sharing of data. Everybody agrees in principle, and we saw it happening during the pandemic, when people were really sharing. Shankar already spoke about the rapidity with which a consortium for DNA sequencing was formed, and people were collaborating around the world and sharing their knowledge, samples and results. We knew we had to be very fast in identifying the different viruses and the mutations that were to come. However, it depends on the particular context in which data is shared, as well as on actual practice. We also encounter difficulties with the peer review system, which has become dysfunctional while publishers continue to establish ever more new journals in response to growing demand, driven by impact factors and evaluation metrics. One problem I see is that we tend to overlook what is in the public and private domains. I’ve nothing against the private domain, but openness should not be reserved only for the public domain – we also need more openness in the private domain. And to come back to the importance of AI, we know from the way the Internet, Twitter etc. function – the machine will recommend to you what the machine wants you to see. If it is only owned privately, it distorts the way we get information about anything. During the pandemic, this was exacerbated – the flood of fake news, much of it driven by hidden recommendation systems that target particular groups (and, of course, by the people behind the engines). It reminds me of the predecessor of modern science, namely alchemy. Alchemy was secret knowledge, closely guarded by a few. The progress of modern science partly occurred because it thrived on knowledge being shared and open. It enabled the scientific community to organise itself and demonstrate publicly what scientists do. We need science to remain public. It is a public good and we need to protect it against a corporate ‘alchemy’ coming back in the guise of large corporations that keep scientific knowledge as private property that no one else has access to.”
SB: “I think there have been some improvements that are real and tangible. Open access publication has been a success; more of us are now required to publish everything in open access formats, so you don’t need a subscription to read the work. Anyone can access it from anywhere. I think that’s helped democratise published work. Another trend is preprints – work that has not yet been peer-reviewed being deposited in publicly accessible servers. This again is a way of disseminating results and ideas, although I think researchers are perhaps motivated to do it for reasons of competitiveness rather than reasons of openness. I think data access policy has also been continuously improving for most published research, particularly research that involves large datasets. So there have been some tangible improvements here. So why are we not completely open? I think there’s a tension between openness and competition, and in academia this relates to the reward system. Researchers are rewarded based on their own individual success, not the success of the science. This implies an ownership of science, or of scientific areas. Multiple times, my students have heard me say, ‘Nobody owns science.’ It is something that has its own space and identity, and yet the way the system works in some areas of science, there can be a perception of ownership by the people who claim it first. I think there’s room for a review of what constitutes success for researchers, and part of this is the notion of getting there first – whatever that means. This is how young researchers progress in their careers, it’s how they get promoted and recognised. I think therein lie some of the reasons why we aren’t completely open. I think there’s room for innovation in relation to how researchers disseminate their findings, in a way that allows them to get on with their careers but be more open.
We’ve just had a retreat in my lab up in the Pennine Hills in England. A common topic that comes up in discussion is how useful it would be to see things that have not worked, or how not to do things. I’m a great believer in “failure” because there’s no such thing as failure in research – unless you drop your flask on the floor and spill everything so you can’t make your measurements! Often, failure is the label given to getting the “wrong” result or not getting the anticipated result. Let’s rewind to what Helga said about serendipity earlier. This is where the opportunities arise: something happens that we weren’t expecting. What tends to happen with the reward system is that people publish things that are relatively straightforward, predictable and happened as per design, and if it doesn’t go that way it doesn’t get disseminated. Knowing ‘how not to do it,’ or a Journal of Unexpected Outcomes – it wasn’t what we were expecting, but it’s real and reproducible – could be a benefit.
In the commercial sector, I think the same is also true. If, for example, all the big pharmaceutical companies disseminated failures in clinical trials of therapeutics, with details that others could learn from, someone somewhere may see something in them that’s very helpful toward the next therapy. Both in academia and industry, disseminating things that don’t necessarily look beautiful or predictable could be useful. It’s still real data.”
Does society ‘get’ science? Do you think the relationship between society and science/scientists is changing and, if so, how? Is it changing for the better, or are there risks?
HN: “The pandemic has provided us with a cultural experience of how the relationship between science and society is changing, and will change in the future. Initially, science was at the centre of public attention, as everything about the new virus was unknown. However, this changed when political decisions about policy interventions were to be made. Science came under attack, and the misunderstandings of the way science actually works multiplied. The public is unaware that science is organised scepticism. Doubting the results of colleagues and one’s own results is an imperative in doing science, as are the processes for checking, replicating and validating what is accepted as scientific knowledge at a particular time. This is something we have failed to convey sufficiently to the public. We have been so focused on getting out the message – “Look, we can produce wonderful results, we have new medicines, new therapies, beautiful products” – that we have failed to speak about the processes of getting there, the failures and how much failures can tell us.
François Jacob, the French Nobel laureate, spoke about day science and night science. Day science is where everything works well and wonderful results can be presented as benefits to society. But there is also the night side, which is usually not spoken about. Here, scientists encounter obstacles; experiments and data have to be thrown into the wastepaper basket and one has to start afresh. Yet, there is passion and persistence, and there will be daylight again.
This is a message that the media must also take on board, leading them to abolish the practice of ‘false neutrality’. This happens when the media presents as a scientific controversy the contradictory views of two scientists, where one represents the vast consensus of science while the other stands for a tiny minority that dissents. So, we have to communicate better to make the public understand that science is organised scepticism, and how science actually works.”
SB: “I think there have been some improvements. There is an increasing number of books about science that can be read by members of the public, and Helga is one of the authors who has been prolific in this regard. I agree that the pandemic, the climate change crisis and the environment have touched everyone on the planet, and there’s a lot of important science associated with each of them. These have been very effective for promoting science to the public. But the lifetime of these issues is sometimes relatively short, and it seems that the general public has already moved on from the science behind the vaccine and so forth. The challenge for science and scientists is to sustain public visibility and thereby sustain relevance to society, the public and policymakers. For me, the global media is becoming increasingly headline-based and short-term in terms of what’s published. I wholeheartedly agree with Helga’s view on the process of how science happens, and the fact that scientists are normal human beings just like anyone else, who struggle with the normal day-to-day things we all do. There are some TV and radio programmes that convey this very well, and in public interviews scientists do talk about the struggles or the failures along the way. I think as part of this, policymakers need to understand how long-term we must think to address the solutions to big problems. Right now, we are facing economic stress in most countries in the world, which immediately leads to short-term thinking in policymaking and budget setting. Things like the progressive shift towards changing our energy usage to more environment-friendly options are somewhat moved off the agenda because we have short-term crises to deal with. And I think better interaction with science and scientists would perhaps help the general public – and consequently policymakers – see the importance of taking a long-term view on such critical topics.”
If you could bring about one fundamental change to enhance the way that science impacts on/ engages with society, what would it be?
HN: “In one sentence – to have the courage of one’s convictions, and to say – whenever it is true and needed – “I don’t know.””
SB: “What grounds us every day in the lab and keeps us humble is the principle of scientific method. You’re one experiment away from being told (by the data) that you’re wrong and need to re-think your ideas. I would welcome more of this culture in mainstream society and in politics.”