We all understand the role of clinical trials in testing new medicines, but many people would be surprised to learn that different regulatory processes apply to the introduction of new medical devices and of surgical techniques and procedures.

This can mean that the evidence base for these components of health care is weaker than that for medicines, and therefore there is potential for them to cause unnecessary harm.

Below Professor Ian Harris, Professor of Orthopaedic Surgery at UNSW, discusses the problems with the current regulatory environment and suggests changes to improve the safety and quality use of devices and surgery.

This article is published as part of the TOO MUCH of a Good Thing series, which is investigating how to reduce overdiagnosis and overtreatment in Australia and globally, and is published as a collaboration between Wiser Healthcare and Croakey.

To follow the series, bookmark this link, and follow #WiserHealthcare on Twitter.


Ian Harris writes:

Most people have some idea of the rigorous testing that is required for any drug to be accepted for clinical use and government reimbursement: the drug must be compared with other drugs or a placebo in a rigorous scientific trial – usually a randomised trial, often with patients ‘blinded’ to the treatment they are receiving – to assess its potential benefits and harms.

And only when these trials have been reviewed in detail by expert panels will a drug be approved for any kind of government reimbursement. It’s a high bar, and it needs to be: otherwise public money may be wasted and/or people may be unnecessarily harmed.

Regulation of medical devices

How high is the ‘bar’ for surgery to clear before acceptance? Well, it depends. For a start, there are two different pathways: one for devices (things that are implanted in the body, like joint replacements, stents and heart valves) and one for procedures. Implantable devices are not required to have high quality (randomised trial) evidence comparing them to previously approved devices or placebo. In fact, they just need to show that they are “similar” (structurally) to existing (already approved) devices and that they are safe.

This ‘low bar’ for devices has led to problems like the worldwide recall of some hip replacement devices after being used in thousands of people. The mechanical properties and structural design of these hip replacement devices were very similar to already-approved devices and they passed the approval process based on that similarity. Unfortunately, however, the subtle differences in design in these new hip replacement devices were enough to cause catastrophic failure in a large number of cases – something that would have been easily detected if clinical evidence of actual effectiveness in real people had been required, like it is for drugs.

Some years ago, partly in response to cases such as those described above, the government made “clinical evidence” a requirement for new devices to be accepted for clinical use, yet the type or standard of clinical evidence required has never been made clear. Simply showing that the device has been implanted in some people without terrible early failures seems to be enough – there is no requirement for comparative studies in which the device is tested against an already successful one.

Regulation of techniques and procedures

Separate to devices is the ‘bar’ that techniques and procedures are required to clear. Here, there is almost no bar if the procedure falls within an existing description. As an example, orthopaedic surgeons recently started doing hip replacements using a new technique: the anterior approach, which involves literally coming at (“approaching”) the hip from the front rather than the back or the side.

There was a learning curve involved, with more complications associated with the new technique (for example, the thigh bone would break more often when surgeons tried to insert the implant). Eventually, things worked out, and this new technique is now commonly used and probably no worse than other techniques. For a while, however, it looked like a potential disaster – with no regulatory or ethical oversight involved, just individual surgeons trying a new technique.

In fact, the regulatory environment that does exist around new techniques is not only inadequate, it is actually counter-productive. For example, for a surgeon to invent and perform a new technique (perhaps yet another way to perform hip replacements), there is no regulatory oversight, either from the government or from institutional ethics committees: it is not necessary to get ethics committee approval to trial new procedures or techniques.

However, if the surgeon wanted to gather information on the effectiveness of a new technique by performing a comparative study against the old technique or contacting patients to accurately measure the results of surgery, that surgeon would not be permitted to do so without approval from an ethics committee. So it is OK to try new things without measuring the outcome or studying it scientifically, but if you want to find out if it works or is not harmful, you are faced with an onerous process of getting ethics approval to contact the patients and publish the findings. Many would argue the exact opposite, that ethical approval should be required if a surgeon did not measure and report the results of a new technique.

Changing the paradigm

This problem reflects our underlying tendency to assume that something is effective when it has not been tested: we end up with wasteful, ineffective and often harmful procedures and devices being approved and used. At best, we end up with procedures with unknown effectiveness.

There is no good reason to have such a low bar for the introduction of new devices and techniques. The reason we have such a low bar is tradition, which is almost never a good reason.

That tradition shows in the fact that many (probably most) of the surgical procedures currently listed in the Medicare Benefits Schedule (MBS) were “grandfathered” in when the scheme began in the 1980s. Many have never been subjected to rigorous testing against non-surgical alternatives or against no treatment at all. The only time rigorous evidence (an appropriate bar) is required for a new surgical procedure or device is if it does not lie within descriptions currently listed in the MBS. In these cases, applications for listing new surgeries are frequently rejected for failing to have sufficient high-quality evidence.

The system needs overhauling, but the inertia inherent in the current systems is such that this may never occur. It may be a massive task to test (already approved) surgical procedures that lack good evidence for effectiveness, but that doesn’t mean we should never start. It wouldn’t be too hard to begin with a few procedures and build up the evidence over time.

For example, instead of funding spine fusions for back pain indefinitely, why not put the money into trials of effectiveness? This could potentially save billions of dollars and prevent patient harm. Any resulting savings could be put into testing the next procedure on the list. This could be done with one simple decision from the government: to make the bar for procedures currently on the MBS (those grandfathered in during the 1980s) the same as the bar for new procedures to be listed on the MBS. Otherwise, the government must justify maintaining such a paradoxical system of approval.

Ian Harris is Professor of Orthopaedic Surgery at UNSW and the author of Surgery, The Ultimate Placebo.


The articles in this series are also available for republication by public interest organisations, upon request.