Along with their potential to greatly benefit health, biotechnological advances surrounding medical devices may exacerbate risks, and pose new kinds of risk. Our primary mechanism for managing these risks is therapeutic goods regulation. As these technologies advance, it is apt to question the basis of the current regulatory approach.
Ethics and the purpose of therapeutic goods regulation
Regulation of therapeutic goods has two purposes that are sometimes at odds with each other. Regulation seeks to safeguard the public’s health and safety, while allowing or even incentivising beneficial innovations to reach the market as quickly as reasonably possible. In current systems, a major part of how regulation achieves both aims is the requirement that manufacturers present evidence of a product’s safety and effectiveness. On the one hand, this protects consumers from using products that are unsafe or will not be beneficial. On the other, it means that commercial success must be based on sound research, incentivising quality innovation.
Any such approach to regulation must deal with difficult questions about the standard of evidence it will require. Answering these questions requires not only scientific input, but ethical decisions, since it will involve judgements about what levels of risk are acceptable, and which of the two aims should outweigh the other. If evidentiary standards are too low, regulators’ safeguarding role might be compromised; too high and they may unnecessarily prevent patients from benefiting from new advances.
Problems of evidence about devices
With regard to devices, a lower standard of evidence is often accepted than for medicines. There appears to be no principled reason for this; rather it has resulted from historical accident combined with some difficulties in obtaining evidence about devices. For example, controlled studies of devices can be difficult, since outcomes can depend on how they are used – for instance in surgical procedures, which may vary. Device risks may also be long-term, but many studies do not report on long-term outcomes – and doing so would require long delays for approval.
Further, lower-risk devices are not always required to undergo pre-market approval, instead being approved on the basis of their similarity to previous devices, with lower evidential requirements. There are reasons for this system – the sheer number of minor alterations made to devices would make subjecting each new iteration to full scrutiny unfeasible, and many alterations are unlikely to affect clinical outcomes.1 But it was also involved in the approval of two types of device later found to be harmful: metal-on-metal hips and vaginal mesh.2 Particularly where there is a series of small alterations, it can be difficult to judge when outcomes will be affected.
The major ethical issue in device regulation, then, is that we currently accept a high level of risk at the market approval stage – but this is because of problems of evidence collection and the practical needs of a regulatory system as it applies to devices, not because the risks have actually been assessed as acceptable.
Partly to compensate for the difficulties of pre-market evidence collection, most jurisdictions expect manufacturers to undertake post-market studies and other ‘vigilance’ activities such as adverse event reporting. While this might be a good solution in theory, sometimes requirements for post-market studies have not been enforced, and there is often under- or inconsistent reporting of adverse events. It also raises the ethical worry that, to the extent that the evidence for safety and effectiveness is collected post-market, the first patients to use a device are de facto research subjects. Yet they are not protected, as subjects in pre-market research studies are, by ethical oversight and informed consent procedures. On the contrary, many consumers assume that any product on the market has been thoroughly assessed for safety and effectiveness.
In the context of these existing challenges, emerging technologies pose further difficulties. I will discuss just two.
First, devices are increasingly computerised and many, such as pacemakers and insulin pumps, incorporate software into their functioning. This can have great benefits: automation of functions for easier management; better calibration of devices to patients’ needs; collection of physiological data of clinical value; and remote, thus more efficient, adjustment of device functioning.
Software in or as a medical device exacerbates old challenges, and introduces new ones. It means even more frequent updating of devices – and these updates may affect the functioning of devices that are already being used by, even implanted in the bodies of, patients. Ensuring that devices remain safe and effective through each change will become even more challenging. Manufacturers will need to take more responsibility for ongoing device functionality.3 Software also involves new kinds of risks, for instance in attempting to predict how functionality could be affected when a device is used in conjunction with a range of different technological systems, and when integrated into different clinical situations.4 Another important, and somewhat new, risk relates to cybersecurity: the possibility of devices being hacked and used to harm their users. Notably, Dick Cheney had his implantable cardiac defibrillator’s wireless connectivity disabled for the term of his office as US Vice President for this reason.5 Similarly, the collection of physiological information could constitute a risk for patients if it is misused. There are also further ethical questions to consider with regard to the research use of this data, to which patients may not have consented.
A second emerging possibility is for increased customisation of devices, particularly through 3D printing and computer-aided design. Commonly used implants such as artificial hips can now be far more easily manufactured with dimensions matching specific patients, and bespoke devices can even be modelled directly on patient physiology. Intuitively, this could benefit patients; but obtaining rigorous evidence of safety and effectiveness for custom devices is even more difficult than it is for standardised devices. The best evidence for regulatory purposes is generated from populations of research subjects who receive a standardised intervention, and this is fundamentally at odds with customisation. Customisation thus exacerbates existing difficulties with obtaining good evidence about devices. To date, custom devices have usually been used under research regulations or regulatory exemptions.6 The more customisation is used, the less appropriate this will be.
Again, customisation also poses new kinds of challenge. Given the unprecedented accessibility of this method of manufacture, it may simply be difficult for regulators to capture all uses. Clinicians and basic science researchers, among others, may engage in creating bespoke devices without being aware of regulatory controls on manufacturers, and without experience in quality assurance practices, putting patients at risk.
Questioning current approaches
As advances further challenge the current system, it is worth questioning whether there could be alternative approaches to device regulation. Most radically perhaps, we could question the way current regulatory approaches incentivise research by linking it to commercial success. This link itself leads to ethical issues, such as the tendency for research to be directed primarily towards the health problems of the most well-off. Healthcare inequities are likely to increase with increasing technological sophistication, since this comes with increased cost. Some technologies also have the potential to reduce inequities, such as using 3D printing to provide lower-tech devices in low-income countries – but while research is incentivised as it currently is, this potential may not be fulfilled. That healthcare innovation primarily focuses on marketable products can also lead to neglect of improvements that could be made through social or institutional change.
Admittedly, making research necessary for commercial purposes, while in these respects not an ideal feature of a regulatory approach, might be the best possible one overall (and certainly be extremely difficult to change). Less radically then, we might question the way the current system is arranged around the pre-/post-market distinction, and the corresponding research subject/patient distinction. Other options, like creating a third category between research and practice, or developing new methods for post-market investigation (including ethical oversight where appropriate) or compliance, are worth considering.7
Whatever the result of these considerations, my point is that there is value in questioning all features of the system and the assumptions built into them, even radically, if we are to arrive at an approach based on reasoned assessment, and defensible ethical decisions.
1 Gibbs JN, et al. 2014. 510(k) statistical patterns. Medical Device and Diagnostic Industry, 2 Dec, https://www.mddionline.com/510k-statistical-patterns.
2 Rogers WA, Hutchison K. 2017. Hips, knees, and hernia mesh: When does gender matter in surgery? International Journal of Feminist Approaches to Bioethics 10(1):148-174.
3 Hutchison K, Sparrow R. 2017. Ethics and the cardiac pacemaker: More than just end-of-life issues. Europace, online first doi:10.1093/europace/eux019.
4 IMDRF Software as a Medical Device (SaMD) Working Group. 2014. “Software as a Medical Device”: Possible Framework for Risk Categorization and Corresponding Considerations. International Medical Device Regulators Forum, http://www.imdrf.org/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf.
5 American College of Cardiology. 2013. From IEDs to ICDs? Credible threat led to disabling Cheney’s ICD in 2007. http://www.acc.org/latest-in-cardiology/articles/2013/10/20/21/04/from-ieds-to-icds.
6 E.g., Therapeutic Goods Administration. No date. Custom-made medical devices (fact sheet), https://www.tga.gov.au/custom-made-medical-devices.
7 Olsen L, Aisner D, McGinnis JM (Institute of Medicine). 2007. The learning healthcare system: Workshop summary. National Academies Press, Washington DC, https://www.nap.edu/search/?term=learning+healthcare.