Image credit: Medical News Today
If you happen to be an avid follower of US political magazines and the like, you may have seen a report recently in The Hill that calls antimicrobial resistance a "calamitous health crisis," a "plague on the U.S. health care system" that may leave us "defenseless" very soon. Similarly, if you happen to be an avid follower of UK policy, you might have seen this post on the GOV.UK website that reports the Health Secretary's claims that antimicrobial resistance is "the greatest threat to global health, tragically killing millions of people every year."
And yet, if you're not an avid follower, you might not actually have heard much at all about antimicrobial resistance (AMR). Why not, and why is that a problem? Let's explore that.
AMR is where microorganisms like bacteria and parasites adapt to become resistant to the drugs we use to try to kill them. When you or I are infected with any of a range of common diseases—anything from tuberculosis to gonorrhoea to malaria to staph—we risk getting a strain that doctors can't treat using their usual go-tos of antibiotics, antifungals, and so on. In severe cases, the disease may not be treatable, and the patient may die.
AMR is sometimes also called, in articles like those linked above, "the silent killer." It's a phenomenon we don't often talk about, perhaps for a couple of reasons. First, it isn't a single pathogen or pandemic that we can put a name to and worry about. In one part of the world you might be exposed to drug-resistant malaria; in another part of the world that's highly unlikely, and you're more likely to catch drug-resistant gonorrhoea. A second reason we don't talk about it much might be that we think it's solvable, or not a big issue. Scientists have the tools to develop new antibiotics, right? And it's not killing as many people as better-known health issues like COVID-19, right?
Wrong, and wrong. Firstly, AMR was associated with 4.95 million deaths in 2019 alone. That number is projected to increase to 10 million deaths per year by 2050. For comparison, COVID-19 has killed 6.32 million people in total since the pandemic began, so AMR's annual toll already rivals it, and the projected toll far exceeds it. This is a large-scale health threat, and we should be acting in proportion to the significant suffering that AMR is already causing and will cause in the future. Secondly, this problem isn't only large-scale, it's what we might call 'intractable', too. We don't have easy solutions at hand. Scientists come up with new drugs to treat drug-resistant infections, but this is an arms race. It's the scientists against the superbugs, and it seems like the superbugs are winning: drug development is plateauing as big pharma realise that the drugs they spend years developing will soon be obsolete as the bugs overcome them. There is less investment, there is less development, and there are fewer drugs around to treat infections.
There is one more factor that makes AMR a terrifying problem that should be more publicised: we are making it so much worse, often without even realising. In this article, I discuss the case in depth, but here's the short version. During COVID-19, there were two measures (and probably a bunch more) that we took that helped us reduce suffering and death during the pandemic, but that may have exacerbated AMR and had an overall net-negative effect in the long run.
The first was increased antibiotic prescription by doctors. During the pandemic, doctors were prescribing more antibiotics than expected, possibly because they were (1) holding more telehealth appointments and (2) trying to prevent people from getting dangerous infections like bacterial pneumonia on top of COVID. Both of these trends increase antibiotic prescription, but neither necessarily prevents much suffering or provides much benefit. For one, in telehealth appointments it's hard to tell whether the patient really needs the antibiotic. For two, when using antibiotics as preventive measures, you might be giving a bacteria-killing agent to someone who has, or is more likely to get, a viral infection. In both cases, by giving the patient the antibiotic you increase the risk of other microbes in their system developing resistance.
The second measure we took was increased use of biocidal agents in households. Hand sanitizers were sold out at supermarkets, as were a lot of other antibacterial cleaning products. This increase in use of biocides is important in settings with medically vulnerable people, to prevent the spread of COVID and other diseases, but it's not so useful in your average household, where people are more likely to suffer only from mild infections, which will offer them a chance to develop their immunity anyway.
AMR is a serious problem. We are running out of solutions, and we are inadvertently making it all worse, particularly through some of the measures we took against COVID. But so few people know about it. Let's help spread the word, before it's too late.
Image credit: Science Business
When you think of biological or chemical weapons, you may think of the mustard gas used in WWI, causing blindness, respiratory diseases, and death. Or you may think of the British TV series Black Mirror's bee-like autonomous drone insects or 'roach'-humanoid fighters. Maybe, though, the first thing to come to mind is COVID-19, and the theory that the virus was leaked from a laboratory in China, namely the Wuhan Institute of Virology.
The true origin of this pandemic remains a mystery, but the WHO's new Scientific Advisory Group for the Origins of Novel Pathogens may help identify the sources of future pandemics. What teams like this uncover might become increasingly important in the future, as the risk of pandemics engineered by humans becomes greater. The risk of pandemics as future bioweapons is one that ought to be taken seriously, and I'll make the case for that in this post.
Highly deadly pathogens are less likely to arise and spread naturally, as a high death rate usually reduces opportunities for transmission and spread, the main evolutionary goal of any pathogen. But with some tweaking of viral and bacterial DNA, new strains can be engineered that combine lethality with powerful mechanisms to increase transmission, and perhaps even target particular populations. This relates closely to a concern raised in a recent Nature article that followed the publication of new computational machine learning models that could be used to discover new toxic molecules.
The team behind the models had used them to develop molecules to help treat and prevent diseases. But when they simply inverted the commands on their model and exposed it to data on current drugs and toxic molecules, the model identified over 40,000 new deadly molecules in under an hour, and even re-designed some existing chemical warfare agents. Similar work can be done to identify changes to the structures and genetic codes of viruses and bacteria that might make them better able to enter human cells, enable faster replication, or make them resistant to the common drugs that might be used to treat them. Research that looks at these changes is called 'gain of function' research, and its goal is to identify such changes and prepare in advance in case they develop naturally.

But the more of this research that is performed and published, the greater the risk that a malevolent actor could use the research or its outcomes to harm people. Recognising this risk, the US banned gain of function research up until 2017, when the ban was replaced with a more permissive review policy under a research oversight board. Since then, gain of function research has continued on a small scale around the globe, often performed under inadequate conditions of biological safety. What's more, with more open-access and open-source research being performed, there is an increased risk that the methods used for these studies might be published and accessible more widely than we might think wise.
Toby Ord and other bioethicists have considered the ethical implications of gain of function research. In The Precipice, Ord suggests that the availability of gain of function research poses an 'information hazard': a risk that dangerous data or ideas are leaked from labs and used to hurt a particular group, or all of humanity. Effective Altruists like Ord think that this poses a major threat to the future existence of humankind. We must act to ensure gain of function research is well-regulated and performed in biologically safe, secure environments, and that only relevant, necessary data is made openly available, and only when it cannot significantly contribute to information that might allow malevolent actors to design new infectious diseases. To determine whether such research should be performed to begin with, we might use ethical frameworks like the one suggested here.
Although estimations are difficult, a 2008 survey of global catastrophic risks claimed a 10% chance of an engineered pandemic killing over 1 billion people in the next 100 years [pdf]. We can reduce this risk, and still secure some of the benefits of valuable gain of function research, but only if we avoid the kinds of developments identified in the toxic molecule machine learning case, and, possibly (if improbably), the COVID-19 case.
PS: I'm actually working on this topic in a couple of months, during my Global Priorities/Forethought Fellowship residency at the Global Priorities Institute, University of Oxford. If you have any ideas, would like to know more, or would be interested in collaborating, please do get in touch!
Image credit: Martin Rowson, via New Humanist
Recently, advances in genome sequencing have produced a vast and growing amount of human genetic data. This data is collected, stored, and shared via biobanks. With biobank access for particular studies, researchers can do important work to discover how our genes shape our health, our history, and our likelihood of certain future genetically-influenced outcomes.
One aspect of this is that we can now identify differences between people's genomes, and associate them with the person's ancestors and where they lived. On a population scale, this type of ancestry research can tell us how we're all related and how we came to be distributed around the world.
But this is big data. Working with big data and drawing conclusions from it is an ethical minefield, not only when it comes to the results we get and conveying them to individuals, but also concerning fair recruitment, privacy, and what the societal effects of greater understanding of our ancestry could be.
To start, the process of conducting ancestry research relies on the collected data being representative not only of Western European populations (whose data currently makes up most biobanks) but also of groups of different genetic ancestry. Otherwise, this research will produce accurate tracing through history for the family trees of some, but will be systematically inaccurate for others, because they are not properly represented in the biobank data. Even worse, when it comes to using genome sequencing to learn about rare disease-causing mutations, it means we cannot help people from non-Western populations as much through personalised medicine.
To try to solve this problem, proper representation itself relies on data collection efforts across diverse populations. This requires building trust in communities that have been structurally marginalised in the past, some of whom may have been mistreated in medical research, and for whom established medical practice may have failed to fulfil their needs.
Any data diversification project, for genetic ancestry or genomic medicine, must address this history, communicate well, build trust in communities, and ensure the benefits of data sharing actually accrue to the participants, in order to be successful. For example, recruiting more UK residents with African genomic heritage might require discussion of how personalised medicine would help members of medically underserved black communities in London. For some populations, recruitment may need to be sensitive to linguistic or cultural barriers, and ascertain whether these can be overcome for prospective participants.
By gathering more data from these groups, the accuracy of ancestry testing and genomic medicine will improve, but we need to know that recruitment is happening in the right way, and that the people who give their data for this research are also going to benefit. It remains to be seen whether personalised medicine, in particular, can deliver on such a goal, or whether it will draw focus away from the structural determinants of ill health that often have greater impact on the well-being of non-Western European populations.
Image credit: Murugan et al./Tufts University/Science Advances via The Guardian
Last week, another new medical milestone was passed. Although they seem to come around every few weeks now, it's still worth keeping an eye out, and considering how all these small steps forward might affect our health and wellbeing in the future. How might your life change, for instance, if after a severe injury resulting in amputation, you could have your leg regenerated?
It took an African clawed frog 18 months to regrow its leg after being treated with a cocktail of drugs under investigation for potential future use in human patients. Whilst the leg is somewhat incomplete, it is functional, with the bones, muscles, nerves and tendons working as they should. This frog, and others on which the same experiment was performed, can swim using their new legs.
The study is a landmark, and might make us pause to think about the broader consequences if regenerative medicine had the same success in humans (although this might be a ways away yet, as we have much less regenerative capacity than frogs for the tested drugs to trigger). If we could have our limbs regenerated, our organs regrown, what would this mean for how we view health and disease, how we care for our bodies?
Imagine you were having a night out on the town. You'd had a few drinks, and a friend told you to get a taxi home rather than driving. You (hazily) considered your options, your chances of having a crash and harming yourself or others. But you felt relatively confident that you were sober enough to drive, and besides, if you hit anyone you could go to the hospital and be as good as new very soon.
That last sentence seems to make all the difference. Our moral deliberations often depend on the anticipated costs of our actions, for ourselves and others. Whether drunk or not, we might be more tempted to engage in risky actions when we anticipate that any harms we might cause are only temporary, anyway. The same thinking might apply to behaviours such as smoking, which can cause significant organ damage. The costs of these behaviours seem smaller if regenerative medicine can grow us a new organ in a flash.
That's all very well if the costs really are that much smaller. We might be able to engage in more risky but fun behaviour, perhaps, and if this comes with fewer bad consequences, all the better. But we might be concerned about negative externalities, too: the costs of our behaviour that fall outside our usual calculations, often on other people. In this case, there might be costs other than damage to our bodies that result from risky behaviours, costs that regenerative medicine can't cure. Often, accidents and injuries cause trauma more broadly. In fact, around 9% of people develop PTSD after a car accident. That's a harm that can't be removed by regenerating a severed limb.
Regenerative medicine is massively promising, and we shouldn't underestimate the effects it could have on our physical health. But whilst it removes some of the costs of injury, it cannot address them all, and we need to ensure we don't neglect these negative externalities in our decision-making, particularly about engaging in risky behaviours.
Image credit: GETTY IMAGES via Wired
The NHS is planning on introducing a new programme under Genomics England. The Newborn Genomes Programme aims to sequence 200,000 babies' genomes in a pilot study alone, with the longer-term goal to offer whole genome sequencing as the standard screening for diseases for newborns in the UK.
The test can screen for up to 200 diseases, as opposed to the nine diseases that more conventional tests can uncover, so what's not to love?
There are two main things: firstly, that parents may be choosing this option for their newborns without being adequately informed; secondly, that it might not be in newborns' best interests to be given this genomic testing as opposed to more conventional tests.
Newborns lack capacity; that is, they do not have the required mental processing, communication, and other abilities to make decisions by themselves. We take this for granted for all young children and infants, and routinely leave it to parents or guardians to make decisions in the child's best interests. Whole genome sequencing might seem like a clear-cut case for many parents and guardians. The child can be tested for life-limiting, severe diseases that would affect their quality of life or reduce their lifespan significantly. In some cases, preventative measures or early treatment might then be possible.

But parents should be informed of more than this one option. The comparison must be offered and the full motivation for offering genomic testing disclosed in order to satisfy an ethical requirement of transparency, says Oxford geneticist and bioethicist Anneke Lucassen. Transparency is important both for maintaining informed consent for parents making this decision, and for ensuring continued trust in the public health system.

The alternative to genomic sequencing differs in two key ways. First, the conventional tests are more targeted to specific, high-risk diseases (with lower rates of false positives), and second, they produce less genomic data that would be of use to the NHS. Disclosing this information may affect parents' decision-making. The increased risk of a false positive leading to unnecessarily treating a healthy baby may put some parents off. On the other hand, more altruistic parents may wish to donate their baby's genomic data to the NHS for research purposes, to improve medicine in the future. Without this information being disclosed, parents cannot make a fully informed decision.
The second issue concerns the child's best interests. The sixfold increase in false positives for metabolic disorders that whole genome sequencing shows may lead to treating and medicalising a healthy baby, significantly affecting the child's life. It may, then, not be in the child's best interests to have whole genome sequencing over alternative, more targeted testing. However, insofar as that child or people they care about in the future may be affected by diseases later in life, their contribution of their genomic data to the NHS' growing database could benefit them (and others) in the future, and therefore render it in their best interests to undergo this testing.
Thus far, the NHS seems to have assumed that parents won't go for whole genome sequencing if they are told there are more specific, targeted tests that work as well and don't involve giving the baby's genomic data to the NHS.
The NHS certainly shouldn't sacrifice informed consent and the child's best interests on this basis. Besides, it may be an overhasty assumption in any case. Is it really ineffective to appeal to parents' sense of solidarity, and to the future interests of their child and those they may care for, as a motivation to undertake whole genome sequencing? In some ways, the UK may be an individualist country, prioritising the needs of the individual over the collective in some cases, but its public health system shows its other side, too. The NHS relies on trust, solidarity, and community contributions, and appealing, transparently, to these values may not be so ineffective, after all. It's a chance that, I believe, the NHS must take.
Image credit: VERONIQUE JUVIN, SCIARTWORK, via TheScientist
I'm a little behind reporting on this one, but it came up in discussion with a friend recently that, back in May this year, the field of clinical optogenetics was apparently born. Optogenetics involves using tools like gene editing and the introduction of genes for light-sensitive proteins, in this case to improve or restore vision. The clinical case I was alerted to was published in Nature Medicine, and it went like this:
A man who had been blind for 40 years as a result of a degenerative disease volunteered to participate in a new gene therapy to attempt to restore his vision. Scientists have found that a gene in algae, 'chrimson', helps the algae perceive sunlight, and thus grow toward it. This gene was injected into the blind man's retina. Although the effects aren't strong enough without further aid, when the man wears goggles that increase light-dark contrast in the yellow-orange light spectrum, he can recognise, count and describe objects on a desk in front of him.
The restoration of vision via gene therapy is a big step forward. The team behind the case have received accolades from colleagues in the field, who claim that whilst there is a long way to go from a single-patient phase I/IIa trial, the results are promising. It may take a while before we are able to make the blind see again well enough to recognise faces and other visual details, but the possibility of this in the future raises important ethical questions.
The most obvious is one that comes up in my thesis work, as well: Are we 'playing God' by intervening in our own biology like this, and if so, is that wrong?
The 'playing God' objection comes up a lot in relation to human enhancement, whether via pharmaceuticals, physical interventions, or gene editing. It also comes up in other areas that seem to intervene in decisions usually left up to 'God' such as life and death. The idea has religious roots, clearly, but it has a secular translation, as well, replacing the idea of 'God' with that of 'nature' or 'evolution'. Perhaps its most hard-hitting and recognisable popular formulation is Frankenstein, wherein Mary Shelley highlights how scientist Frankenstein goes beyond acts of human creation in the making of his monster, which, although making him feel he "could not rank myself with the herd of common projectors", resulted in cosmic punishment for his hubris: "like the Archangel who aspired to omnipotence, I am chained in an eternal hell."
Are the scientists behind the new optogenetic therapies modern Frankensteins, risking cosmic revenge for their taking power over the forces of God, nature, evolution or luck? Ought they be condemned for their efforts?
I certainly hope not. If so, we're all doomed. To respond along a common line to the objection: we have been intervening in nature since time immemorial. Modern medicine, education, everything from ancient fertility therapies and executions up to today constitutes an intervention in realms previously beyond our control. Where would we be if we hadn't expanded our circle of control? Education has enabled progress, has increased our understanding of the world and our abilities to flourish in it. Modern medicine has saved us from unimaginable suffering, when you add it up across the centuries. The playing God objection holds sway only in cases where we fear progress, when we fear where our advances will lead us. Insofar as this is true, there may be good arguments hiding behind the objection. Maybe, in fact, what we fear is the devaluing of human life through the creation of people, possibly post-humans, willy-nilly. Perhaps we think that genetic selection will lead back to the eugenic practices of the 20th century. It might be the case that optogenetics is a gateway to uses of gene therapies that are unsafe, ineffective, or available only to the rich.
Certainly, there are many objections to explore when it comes to uses of gene therapies, an area about which we still know little, and as such, should be wary. But the playing God objection is worse than useless if it obscures these with a label that accurately represents none of them. In and of itself, the fear of progress is a poor argument. Insofar as it acts as a catch-all for other fears, it minimises them. Instead, they should each be evaluated on their own merits.
Should we reject optogenetics as playing God? Certainly not.
Image credit: Joe Giddens/PA at The Sunday Times
In the UK news this week, it was reported that baby Marley is in recovery, after receiving Zolgensma, the world's most expensive drug (per dose) to treat his spinal muscular atrophy. His parents are "overwhelmed" by the UK NHS' provision of the "miracle drug" and its successful use, but the case might be overwhelming in more ways than that.
At £1.79 million per dose as its list price, the world's most expensive drug might seem like a bargain for its effects: curing a serious medical condition that is usually fatal within the first months or years of life. Baby Marley's case follows hot on the heels of that of a previous baby, Edward, who was the first to receive the drug on the UK NHS in mid-August 2021. Whilst the NHS negotiated down from the list price, the actual amount paid remains undisclosed.
Concerning funding the drug, the NHS England Chief Executive said "Spinal Muscular Atrophy is the leading genetic cause of death among babies and young children, which is why NHS England has moved mountains to make this treatment available, while successfully negotiating hard behind the scenes to ensure a price that is fair to taxpayers." Let's set aside that this fair price wasn't disclosed. There is a prior ethical question: what is fair to taxpayers, when it comes to funding drugs in a national healthcare system? Assuming we have limited resources, how should healthcare be rationed among treatments, among diseases, among patients?
This is one of the biggest and most persistent questions in public health ethics. The distribution of resources concerns us all, especially in the case of collective, public goods like the NHS. Funding for the NHS comes from taxpayer contributions, which makes it a collective good. The system itself is also designed to be non-excludable and non-competitive. That is, healthcare in the UK should be accessible to everyone, no matter their background, finances, etc., and there should be enough beds and treatments to go round. But like most public goods, this one isn't entirely 'pure' [pdf]: it isn't entirely non-excludable and non-competitive. Those services requiring additional contributions may exclude those unable to pay, and many services offered in the NHS are variously supported depending on postcode. For example, the number of IVF cycles prospective parents in the UK can access depends on where they live, and even on whether they smoke or not. What renders public goods less pure? Increased use of resources without top-up funding. That means that point-of-contact contributions increase, and that fewer beds may be available.
During the COVID-19 pandemic, the UK NHS has been rushing to provide more beds, more vaccination clinics, to ensure that the population is protected, and that healthcare remains accessible and non-competitive. But large single-treatment instances of resource use may make some wary. After all, if that £1.79 million hadn't been spent on saving one baby's life, how many other lives might it have saved that required less expensive treatment?
Questions of healthcare resource rationing aren't that simple, however. We cannot simply say that we should never fund treatments over £x, because the right to healthcare may ask more of healthcare systems than that. Whilst it is rare to see a right to health as such defended in the bioethics literature, the right to healthcare is more regularly defended, and it remains contentious what, exactly, it might entail. Are all people entitled to a certain amount spent on them in their lifetime? To be fair, should this not depend on need? Perhaps on accountability for life choices that contribute to poor health? Perhaps on the amount of time a person has left to live, and what their quality of life will be during that period?
One popular view is that rationing should be performed (in part) according to QALYs: quality-adjusted life years that a given treatment, bed, ventilator, or amount of healthcare spending would save. This is not a matter simply of counting up years, but of evaluating the quality of those years. Inevitably, this relies on normative assumptions. How much is a life worth if someone has asthma and suffers every spring with hay fever? What if they are in a chronic, abusive relationship? What if they have a certain disability or impairment? The calculations might seem impossible. But maybe we don't need exact calculations.
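To make the arithmetic behind QALY-based rationing concrete, here is a minimal, illustrative sketch. All of the numbers are hypothetical (they are not the real Zolgensma figures or real quality-of-life weights), and real health technology assessments add discounting, uncertainty analysis, and much richer models of quality of life.

```python
# Illustrative cost-effectiveness arithmetic with made-up numbers.
# QALYs gained = quality-adjusted years with treatment minus quality-adjusted
# years without it.

def qalys(years, quality_weight):
    """QALYs accrued over a period at a constant quality-of-life weight (0-1)."""
    return years * quality_weight

# Hypothetical one-off gene therapy: 60 extra years at 0.9 quality,
# versus 2 years at 0.4 quality without treatment.
qalys_gained_therapy = qalys(60, 0.9) - qalys(2, 0.4)   # 53.2 QALYs
cost_therapy = 1_790_000                                 # list price, GBP

# Hypothetical cheaper intervention: 5 fully healthy extra years per patient.
qalys_gained_cheap = qalys(5, 1.0)
cost_cheap = 10_000

print(f"Gene therapy: £{cost_therapy / qalys_gained_therapy:,.0f} per QALY")
print(f"Cheaper intervention: £{cost_cheap / qalys_gained_cheap:,.0f} per QALY")
# For context, NICE's commonly cited willingness-to-pay threshold sits around
# £20,000-£30,000 per QALY, with higher thresholds for very rare, severe conditions.
```

Even on this toy comparison, the point stands: whether a £1.79 million therapy looks 'worth it' depends entirely on how many quality-adjusted years it buys, and on where the system sets its willingness-to-pay threshold.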
In the case of babies Edward and Marley, it seems that the NHS has decided that, whatever the price they settled was, the full lives of two otherwise-healthy babies were worth enough to justify spending taxpayer money on Zolgensma, and thereby, on the saving of (so far) two lives. These two identifiable patients have benefited from that. But it is for the NHS to balance this against the lives that could be saved with the same amount for the UK population suffering with, say, bacterial infections that require basic antibiotic treatments, or life-saving operations.
I'm not in an informed enough position to pass judgement on the NHS' decision, but we should hope that the decision has been considered long and hard. When it comes to healthcare rationing decisions, and what it means to continue to support the purest form of public good that the NHS can manage, an active process for analysing cost-effectiveness may be essential.
This week, I wrote a short article for ABC Religion and Politics.
In the piece, I discuss the sketches done by one of my favourite Australian comedians, Sammy J. Sammy J's sketches appear just before the 7pm ABC News around Australia a couple of times a week. Whilst we benefit from the laughter the comedy brings, in this piece I highlight a concern with comedy before the news: it could affect how much attention we then dedicate to good news stories compared to bad news. These kinds of "affective primers" are everywhere, but in this case, could they lead to political complacency?
For more, see my article, here.
Image credit: https://blogs.technet.microsoft.com/
A few days ago, Reuters published a news article entitled, "UK should be concerned at Chinese gene data harvesting, lawmaker says". This provocative headline refers to data that is sent to China as part of using a new Chinese reproductive technology now available in the UK: non-invasive prenatal testing (NIPT), which screens for possible health conditions a future child might have by analysing fragments of the foetus' DNA circulating in the mother's bloodstream. This is preferable in safety terms to previous methods like amniocentesis. However, it means gathering and analysing genomic data, raising legal and ethical issues around data sharing. The NIFTY test is designed by the Chinese BGI group, and raises particular issues, according to the chairman of the British parliament's Foreign Affairs Select Committee, because of the group's links to the Chinese defense force.
In an interview, the chairman said that data sent overseas should be "treated with the respect and privacy that we would expect here at home". But he was concerned that the privacy terms and conditions for NIFTY allow the sharing of genomic data gathered via NIFTY with the defense force.
What is that respect and privacy? The General Data Protection Regulation (GDPR) is originally an EU regulation which, post-Brexit, is transitioning to UK GDPR, implemented in the UK Data Protection Act 2018. In either form, GDPR sets out certain key requirements for acceptable gathering, storage, processing and sharing of data—including genetic data. Data must be gathered using valid consent processes that ensure individuals give their voluntary, informed and capacitous consent to share their data. The data should only be used for the purposes laid out in transparent explanations to those consenting individuals, and it should be anonymised for storage. The data should not be shared with third parties without patients' consent. The legal requirements for data sharing are based on important ethical principles and reasoning.
Firstly, GDPR upholds the importance of transparency for valid consent. Sharing one's genomic data can have serious consequences if it is misused or mishandled. For example, if the data made its way into the hands of insurers, then those who have genetic predispositions toward certain diseases may find it more difficult or expensive to get health insurance or additional services. The outcomes of data sharing on the collective level can also be serious. Large amounts of genomic data can tell a lot about a population's commonly shared genetic vulnerabilities or predispositions. Whilst there is no reason to assume that this information would be misused, we might question when, in this case, the BGI group might deem someone's genetic data to be "directly relevant to national security or national defense security". What's more, as the GDPR does not apply in China, data storage and sharing is governed by different laws, which may not have as stringent requirements for anonymising and minimising data for storage. This both directly undermines UK citizens' privacy, and may undermine their interests, if the data were used in certain ways, such as in the health insurance case, or in ways more related to China's national defense.
So, should the UK be concerned? The headline may sound inflammatory, but with data privacy an increasingly pressing concern, and ties with China weakening, it might, in fact, be drawing our attention to an important issue.
Image credit: PA media, via BBC
Yesterday, it was announced that US Senate Majority Leader Charles Schumer is intervening in a legal case heard in the UK High Court earlier this month. What kind of a local case could attract the attention of Americans to the UK? One with significant ethical repercussions. Toddler Alta Fixler was born with severe, untreatable brain damage, and has been on life support ever since. Manchester NHS sought a High Court ruling as to whether it was in Alta's best interests to withdraw her life-sustaining treatment. The court has now ruled that, indeed, it is in Alta's best interests to withdraw treatment, as she has no capacity for experiencing pleasure, and has no prospect of medical benefit from continued treatment.
However, the toddler's parents are Jewish, and have said that their faith prevents them from agreeing to measures that would lead to her death. Instead, they wish to take her to Israel or the US, to continue life-sustaining treatment. It is here that Schumer's involvement begins. Whilst the UK High Court ruled against Alta's parents' petition to transfer her to a hospital in Jerusalem, Schumer is in the process of obtaining US citizenship papers for Alta, allowing her to be moved to the US. Once in the US, Alta would fall within not only a different legal jurisdiction, but, in some ways, a different (medical) ethical jurisdiction.
Yesterday, I attended the UK postgraduate bioethics conference, and heard a talk by Professor Richard Huxtable, on just this geographical difference in bioethical approaches. Whilst we continue to search for a unified ethical approach, American bioethics remains different in some ways from European, Asian, African and other bioethical counterparts, which all also differ from each other. In my own opinion, it seems that American bioethics is more principlist, more deontological, religious, and individualist than some other approaches. It emphasises the importance of values (some with religious origins) like the sanctity of life, human dignity, and individual freedoms. This approach is no less valid than any other, but it raises an essential question, particularly in relation to Alta's case:
Under what circumstances (if any) is it permissible to seek an "ethical second opinion" by moving to another ethical jurisdiction?
In the UK, court decisions surrounding withdrawal of medical treatment rely on ideas of the patient's best interests. Best-interests standards like this are, in some ways, inherently utilitarian: assessing best interests involves considering the things that are good for and to someone (e.g. their health, a wide range of valued choices, social and familial connection, the avoidance of suffering), and determining which of a range of possible actions would best promote those interests. In cases like Alta's, it's relatively straightforward. Alta will be unable to experience many of the listed goods. What is left is the avoidance of her continued suffering. Decisions to withdraw treatment can align with British (and possibly 'European') bioethics, but they may be more controversial in jurisdictions where life is considered valuable in itself, and parents' wishes are considered an expression of their individual freedom that may be wrongly neglected by doctors' best-interests assessments.
The situation is further complicated, because Alta's parents are not merely considering moving themselves to another jurisdiction that better aligns with their values; they are moving their child there, and it is their child's life that will also be affected by the change in ethical norms and legal decisions.
I have no answers to the question of whether Alta's parents are right to seek a second ethical opinion abroad. But so long as ethical and legal jurisdictions remain separate and somewhat different in their values and decisions, there's an important ethical question that remains unanswered.
This week, I wrote a blog post for the Oxford Uehiro Centre's blog, Practical Ethics in the News.
In the blog post, I discuss a recent guest lecture given by Professor Maureen Kelley at the Special St Cross seminar series, focusing on moral distress experienced by health researchers. This moral distress may be a reaction to our current neglect of extensive ethical responsibilities toward research participants in lower-middle-income research settings.
For more on the topic, see my blog post, here.
Image credit: Netdoctor
You may have heard in the news recently that our world population is in jeopardy. As soon as 2045, reports have it, western males will be totally infertile. Should we be worried? Are we morally obligated to at least attempt to procreate in order to prevent our species from going extinct? I hope to pull these questions apart in this post.
First, some background. The recent claims follow the publication of a new book that details fertility around the world. The book is written by Shanna Swan, an epidemiologist who was co-author on a research paper from 2017 that showed a significant decline in male fertility. The study caused a splash at the time, and the new book is where the 2045 figure comes from. The fears for our extinction, however, may be misplaced. The 2017 study that showed significant decline, and from which the book projects, divided the male population into “Western” and “Other” males. Whilst the decline is clear for one of those sub-groups, it’s not for the other. As this critical Slate article and this follow-up study point out, the paper seems to have a focus that correlates with white male fertility. Rather, when considered globally, the follow-up study argues, “the interpretation that population sperm counts vary within a wide optimum with little consequence for fertility is at least as plausible as the interpretation that steady decline occurs.” This is clear using existing fertility map tools that show birth rates of over four children per woman in many other areas of the world.
This interpretation raises new moral questions. With intense media focus on sperm count declines, and with the co-opting of the message by white supremacist groups, the questions shift from saving our species to saving certain sections of it.
Are we morally obligated to reproduce in proportion to maintain declining white or “Western” populations?
There is an assumption underlying current claims that we should worry about Western population declines. The assumption reflects what is often called a status quo bias: that the way things are now is morally preferable. In this context, the assumption is that it's better to have larger "Western" populations (the status quo that the study worked from) compared to a potential future with that population reduced in number. Is that a defensible assumption? We need to question what it is about this racialised category that makes it morally preferable. Sure, living conditions, health, and other goods we value may be better in Western countries, but judging a country's resources to be better than those of other countries is very different from making claims about that country's population—especially when it comes to genetic traits associated with race, which have nothing to do with individuals' or countries' resource levels, productivity, etc.
It may be that there are benefits from having a certain group represented in a society. Maybe the existence of that population adds valuable things like genetic diversity, that can have implications for, say, resilience against disease. Arguing for the continuation of a population, in that case, may be justified. But any arguments for a moral obligation to maintain the Western population at current levels seems, rather, to advert to our first, status quo-biased assumption. Unless there are reasons for specifically maintaining the current population ratios between “Western” and “Other” groups, that’s just racist.
Is there ever a moral obligation to reproduce to maintain a population?
It may, nevertheless, be claimed that under some circumstances—if we were, say, to find a global population decline that threatened the survival of our species—individuals would be morally obligated to reproduce. This is actually an existing criticism of certain theories that have pro-natalist implications, such as some forms of utilitarianism. If, this utilitarian may argue, we aim to produce the greatest amount of net happiness, then we should produce more children, so long as they can be expected to have lives that will contain more happiness than sadness overall. This reasoning seems flawed to those who think, rather, that it is not simply the production of happiness that is valuable, but the promotion of greater net happiness over individuals’ lifetimes. Some go so far as to say that if we should prevent suffering as well as promote happiness, we should be anti-natalists, as life inevitably involves suffering, and the only way to avoid it is by not being born at all.
We need not go this far, but the anti-natalist perspective does give us reason to question whether there are reasons against procreation that would need to be overcome in order for there to be a moral obligation for an individual to reproduce to maintain a population. Based on the first moral question I raised, we would both need to find reasons for a population to exist, and find a lack of outweighing reasons for it not to exist, in order to say that an individual should reproduce to maintain the population.
Maybe, given the difficulty of finding that data, and the lack of concern with fertility at the global level, we needn’t be too concerned at the idea of declining Western white male fertility.
Image credit: CNN.
Consent is a concept we talk about a lot in medical ethics, research ethics, and other areas. The idea is, basically, that rational, autonomous persons (in the moral sense) should be asked for their consent before they are treated, experimented upon, or affected in certain other ways. Residents of Florida Keys (yes, it's always Florida) have been protesting recently. They are expressing their dissent to the release of genetically modified mosquitoes in their area. The question is, Is their consent morally required?
Let me give you a little context. The mosquitoes in question were gene-edited by an Oxfordshire-based lab. They belong to an invasive species that carries diseases like yellow fever, Dengue fever and Zika virus. Whilst mosquitoes can be important for many ecosystems, this is not the only species in town, so there's no ecological dependence on its continuity. What's more, the males of the species, which are pollinators and don't bite humans, are unaffected: the genetic edit is lethal only to female offspring. Once edited mosquitoes are released into the population, their offspring will die if they are female, reducing the population over generations, as sketched below.
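To see how a female-lethal edit can drive numbers down over generations, here is a deliberately simplified, back-of-the-envelope simulation. It is not the company's or the regulators' actual model: the parameters are made up, mating is assumed to be random, and real dynamics involve fitness costs, density dependence, migration, and scheduled repeat releases.

```python
# Toy model of a self-limiting, female-lethal mosquito release.
# All parameters are hypothetical; this only illustrates the generational
# logic described above, not the real trial design.

def next_generation(wild_females, wild_males, carrier_males, released_males, brood=2.0):
    """Advance one generation, assuming random mating and a 50:50 sex ratio.

    brood=2.0 means each female contributes two surviving adults in the absence
    of the construct, so an unmodified population would stay roughly stable.
    """
    total_males = wild_males + carrier_males + released_males
    if total_males == 0 or wild_females == 0:
        return 0.0, 0.0, 0.0
    # Share of matings in which the father carries the female-lethal construct.
    p_carrier_father = (carrier_males + released_males) / total_males
    offspring = wild_females * brood
    # Fathers are heterozygous, so half of their offspring inherit the construct.
    carrier_offspring = offspring * p_carrier_father * 0.5
    normal_offspring = offspring - carrier_offspring
    # Female carriers die before adulthood; male carriers survive and can pass
    # the construct on for another generation or two.
    return normal_offspring * 0.5, normal_offspring * 0.5, carrier_offspring * 0.5

females, males, carriers = 1000.0, 1000.0, 0.0
for gen in range(8):
    print(f"generation {gen}: biting (female) mosquitoes ~ {females:.0f}")
    # Hypothetical release of 2000 modified males every generation.
    females, males, carriers = next_generation(females, males, carriers, released_males=2000.0)
```

On these made-up numbers the count of biting females falls by well over half within a few generations, which is the kind of effect the field trial is designed to measure in practice.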
Is this a good thing for the residents of Florida Keys, or are they justified in dissenting? After all, gene editing is relatively new, and we don't know all the possible outcomes of releasing it into the wild, as such. There may be harms to dissent to. But there are foreseeable, likely benefits, too. First, let's set aside the fact that these exact same mosquitoes have been released in Brazil, Panama and Malaysia, in pilot experiments. Let's also set aside the fact that Zika virus spread in Florida Keys in the 2016 outbreak, and Dengue fever is also common there. What I'm interested in primarily is whether Florida Keys residents are justified in their dissent not on the basis of an accurate cost-benefit analysis, but on the proposition that they themselves are unwilling participants in a research trial, meaning that any risks (or benefits) are imposed on them against their autonomous choice not to participate. If this is the case, it may be that trials should not include dissenting residents, or should not be allowed if including dissenters is unavoidable.
We don't ask for consent whenever people are affected by a change in their environment. I never consented to the removal of the salad option at my college lunch canteen, and I'm sure you never consented to the release of so many pollutants into the air over Hong Kong to cloud out the sun with smog semi-regularly. Yet these changes in our environments have measurable effects on us—individually, in my case, and collectively, if you're the residents of Hong Kong whose population health is threatened.
We don't always require consent to an intervention when its effects on people individually are insignificant, when consent is assumed or given implicitly, or when there is an overriding public interest in going ahead with the intervention. Sometimes, the lack of consent or harmful outcomes are made up for via compensation, as has been suggested for certain coercive public health measures. In some ways, it seems like there may be an overriding public health interest in protecting the Florida Keys residents from the diseases carried by mosquitoes. That would justify a public health intervention. But this situation is more complicated: the release of gene-edited mosquitoes is not talked about (even by the FDA who approved release before putting the EPA in charge) as a public health measure. It's talked about as a "field trial". Data is still being collected to determine whether these mosquitoes should be more widely released. And participation in research is an area where participant consent is a widely agreed requirement for going ahead. True, the Florida Keys residents aren't themselves being researched, but the effect that these mosquitoes have on their environment is. Does that environment include residents?
I'm not saying that the field trial should not go ahead. It's contentious whether the residents of Florida Keys are sufficiently affected by this work to count as research participants. But certainly, if we're uncertain about whether residents' consent is morally relevant in going forward with this research, then more needs to be done (and a lot already has been done) to ensure that the residents are informed and reassured that they aren't going to suffer a bite from a "mutant mosquito".
Image credit: Weizhi Ji, Kunming University of Science and Technology, via TheScientist
A week ago, a paper was published in Cell, detailing the successful generation of human-monkey chimeric embryos. It's raised something of a media storm, and a lot of people are asking why we want to create chimeras, and whether it's ethical to create and use them. Chimeras are created by combining biological material from different species, so that the resulting organism contains cells, and thus genes, from both species. In this case, human stem cells were grown in macaque embryos. There's variation between countries regarding regulations on whether scientists can create (part-human) chimeras, what they can be used for, and how long they can be kept before destruction. However, the main point that many regulations are getting at is that it is ethically wrong to create chimeras without good scientific reason, and that they should not be kept up to a point where they may start to develop some level of consciousness.
It's interesting to note that these regulations are generally more stringent for the creation of part-human chimeras than other types. That is, they imply that it is more ethically precarious to create, use and keep chimeras that are in some way human, compared to other mixtures. As practical ethicist Julian Koplin has pointed out, whether humanness is relevant depends on whether you think there is something special about humans in comparison to other species. If there is, then maybe the regulations have a good basis. Maybe, too, it would be more wrong to go ahead and bring to term part-human chimera embryos than it would be to do the same with non-human chimeras.
But the view that humans are morally special is not a position we can take for granted anymore, at least when it comes to considering potential harms such as physical (and possibly, to some extent, emotional) pain and suffering. At least that's what more and more practical ethicists are saying in work building on Peter Singer's Animal Liberation. This seminal work maintains that we are "speciesist" if we do not give animals' interests in freedom from suffering, interests that they share with humans, (equal) weight. That is, if an animal has the same interest in not suffering as a human does, then this needs to be accounted for as a morally relevant consideration in human activities that cause animal suffering.
One of the reasons why regulations prohibit keeping part-human chimera embryos beyond a certain stage of development may be the increasing risk of suffering as development continues. In many jurisdictions, women are not able to access abortions after the foetus reaches 24 weeks. This corresponds with the development of the cortex, which has been linked to the ability to feel pain (although, more recently, research has questioned whether human foetal pain may be felt earlier). For macaques, cortical development begins around 11 weeks. If current limits to keeping part-human chimeras rely at least in part on the avoidance of human suffering upon destruction of the embryo, a "species egalitarian" may ask, why are regulations more stringent when it comes to part-human chimeras?
Although the current media storm has asked whether it's acceptable to keep these human-macaque chimeras for 19 days, surely species-egalitarians out there are wondering about how these regulations consider limits on the suffering of other animal chimeras.
Image credit: Drone Base/Reuters via The Guardian UK
In Florida, on Saturday April 3rd, a state of emergency was declared, due to a suspected leak of toxic waste water from a reservoir previously connected to a phosphate processing plant into the surrounding Tampa Bay ecosystem. It is feared that if the contaminated water, which contains high levels of phosphorus and nitrogen, is not contained, it may lead to flooding, posing a serious risk to fish and plant life in the area. Worse, if serious flooding occurs and people come into contact with the water, it may pose a serious risk to health.
There are a number of questions being asked about this health and environmental threat in the media, notably concerning why the plant's crumbling infrastructure wasn't reinforced sooner by state authorities, and when federal officials will respond adequately to the environmental threats posed by the fertiliser industry. But these questions already rely on different, important assumptions about responsibility: the first assumes that it is up to Florida state to ensure plant infrastructure is maintained, and the second assumes that it is up to the federal government to prevent the potential disaster resulting from the leak. One question we might ask from an ethical perspective is, Who really has moral responsibility for plant maintenance and disaster response, when it comes to local environmental and health threats?
There are two types of responsibility often discussed in the relevant literature. The first is backward-looking: blame responsibility implies the blameworthiness and moral accountability of an individual or group for a certain wrong. The second is forward-looking: remedial responsibility refers to the obligations that an individual or group has to bring about a certain state of affairs. Usually, we would only ascribe this latter type of responsibility to the kinds of groups that have the power to change the state of affairs. Hence, perhaps, the emphasis in the Florida toxic waste case on state and federal authorities.
This attribution of different kinds of responsibility, though, highlights a disconnect between where we assign blame for a problem, and who we consider responsible for fixing the problem. Some philosophers don't consider this to be so much of a problem. After all, ought implies can—so how can we saddle someone with a moral obligation to fix a problem if they are incapable of doing so? And there are certainly other considerations involved other than backward-looking responsibility to think about when we are assigning remedial responsibility: Is the responsibility fairly distributed? Is the problem actually an important one to fix, compared to other problems for which that person or group may already have remedial responsibility?
Nevertheless, surely we want to maintain some link between blame responsibility and remedial responsibility. I propose that this may come from considering violations of another kind of forward-looking or task responsibility that a relevant group may have had. When they are given permission to develop and operate chemical processing plants, corporations (in this case, in the fertiliser industry) are (morally, at least) tasked with ensuring they do not harm the surrounding environment or people near their plant. Yet, when the plant goes into disuse and its infrastructure crumbles, it seems to be only state and federal authorities who are being held responsible for fixing the problem.
If corporations have a forward-looking task responsibility to ensure safety of their plants at the time that their permits are granted, then violations of this responsibility—regardless of whether the plant is still operational—surely have implications for both their blame responsibility when something goes wrong, and their remedial responsibility to fix the wrong. This, at least, seems to be where intuition leads. That may be because of the root cause of both the blame and the need for remedy: the violation of a prior task responsibility. It seems unfair, and therefore, in this case, wrong, that those who caused a problem through responsibility violation are not required to fix it (or at least to contribute a fair amount toward doing so). Certainly, the intuition seems to apply on the individual level, as well. We would usually hold that, although health status is partly determined by structural conditions (say, obesity being caused by lack of affordable nutritious foods and access to spaces for exercise), individuals are at least partly responsible for attempting to remedy their own health problems. At the least, this holds where they have a reasonable chance at doing so (say, because more spaces have been opened up for their exercise use, healthy foods have been subsidised, and better education on diet and health is available).
How could we better address these responsibility attribution concerns (if you accept my intuitive argument above)? The goal will be to maintain the link between prior task responsibility and fairness in the attribution of remedial responsibility. In the current case, maybe we could achieve this through alterations to permit processes. My knowledge of existing regulations is limited, but the current processes seem too weak if, as appears to be the case, they do not hold industry accountable for environmental threats like this leak. Corporations need to receive clearance to operate chemical processing plants that could pose safety concerns to the surrounding environs. Perhaps existing permit and reporting processes, currently conducted via multiple US agencies including the Environmental Protection Agency, should be more stringent regarding how corporations will be expected to make amends (perhaps via a financial contribution) for any future problems arising from their violating the safety-related task responsibilities that come with the permit.
Changing regulations to better hold corporations accountable for their failures of responsibility is notoriously difficult, and is bound to take time. I hope, though, that cases like this show the underlying moral argument for such change. They certainly highlight its urgency, as states are more drained by efforts to fix infrastructural problems and face other threats that are not of their own making.
There is so much "ethics we do" in our everyday lives, yet it can be hard to find analyses out there. What are the moral problems we face, and are there any existing thought-tools we can use to think about them better?
Each month or so, I will choose one topic that's receiving current media coverage, and discuss the underlying ethical issues it raises, and concepts from practical ethics that can help us think through what's at stake.
I look forward to getting started, and having you join me on the journey!
Best,
Tess
Disclaimer: The views expressed in this blog are my own, and do not represent the views of any other people or institutions