Magnetic resonance imaging (MRI) entered clinical use around 1980, and the unit at Royal Brompton opened in 1984 – it was one of the first, if not the very first, dedicated cardiac MRI units in the world.
The history of cardiac imaging
The history of cardiac imaging really starts with the plain chest x-ray, which has been around for over 100 years – a plain chest x-ray will show you the size and shape of the heart, but is otherwise very limited.
The next advance, in 1903, was the electrocardiogram (ECG) – of course, you don’t really form a true image of the heart with an ECG – it’s squiggles on a piece of paper or a screen – but it tells us something about what’s going on.
Then it became possible to carry out ECGs during exercise, so we could see whether the squiggles got worse. That gave us some idea of blood flow through the coronary arteries: if the squiggles get worse, the usual assumption is that the coronary arteries have a problem and are narrowed.
Ultrasound and nuclear imaging came along in clinical medicine after the Second World War. Ultrasound uses a probe attached to the scanner: you put some gel on the end to transmit the sound, place the probe on the chest between the ribs to send sound waves into the heart, and the reflections can draw a picture for you. Many people will be familiar with its use during pregnancy.
Nuclear imaging involves injecting patients with substances called tracers – radioactive elements, often attached to another molecule – which are very useful for revealing properties of the organ concerned. The first agent used for cardiac imaging was called thallium, which shows how blood flows through the heart. There are now newer tracers linked to technetium, which give a lower radiation dose and very good pictures.
CT (computed tomography) scans, which use x-rays and a computer to create detailed images of the inside of the body, came along in the 1970s and use similar mathematical techniques to nuclear medicine; the PET-CT scanner came along much later, around the turn of the 21st century.
Magnetic resonance imaging (MRI) was a hugely significant development in understanding disease, and arrived for clinical use in the early 1980s.
Cardiac MR
Unlike CT scans, MRI scans don’t use X-rays. Instead, they use powerful magnetic fields and pulses of radio frequency to produce detailed pictures of organs, soft tissues, bone and other internal body structures.
MRI is special because it doesn’t use radioactive substances and it doesn’t involve x-rays, so there’s no radiation burden for the patient. X-ray is very good at looking at hard tissues such as bone, whereas MR is better at looking at soft tissues like blood vessels and muscles.
It’s also very versatile in the heart – you can measure a lot of different things with it. You might just look at the shape and size of the heart, but you can also work out how well it’s beating, how much blood is coming out with every heartbeat and the blood flow in the main arteries. You can also look for damage to the heart, which usually manifests as fibrosis (scar tissue), and that’s proved to be extremely important.
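As a rough illustration of the ‘how much blood with every heartbeat’ measurement: stroke volume and ejection fraction are simple arithmetic on two ventricular volumes traced from the images. A minimal sketch in Python – the volumes below are typical adult values, not real patient data:

```python
def stroke_volume_ml(edv_ml: float, esv_ml: float) -> float:
    """Blood ejected per beat: end-diastolic minus end-systolic volume."""
    return edv_ml - esv_ml

def ejection_fraction_pct(edv_ml: float, esv_ml: float) -> float:
    """Percentage of the filled ventricle ejected with each beat."""
    return 100.0 * stroke_volume_ml(edv_ml, esv_ml) / edv_ml

# Illustrative (not patient) left-ventricular volumes, in millilitres:
edv, esv = 140.0, 60.0
print(f"Stroke volume: {stroke_volume_ml(edv, esv):.0f} ml")         # 80 ml
print(f"Ejection fraction: {ejection_fraction_pct(edv, esv):.0f}%")  # 57%
```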
You can also then measure characteristics of the heart which are completely unique: the ‘magnetic relaxation’ of the heart gives us important information about the tissue that’s there.
Truly translational - when imaging can save lives
The best example of the life-saving properties of imaging is something we developed here at Royal Brompton in the late 1990s called T2* imaging. It’s all about seeing how quickly magnetism disappears from tissues once you’ve magnetised them. T2* measures iron in patients’ hearts: when patients have repeated blood transfusions for anaemia, iron can accumulate in the heart, and when it does, it destroys that magnetism rather quickly. So we found a very simple test: if magnetism disappeared quickly from patients’ hearts, you could tell that there was iron present there.
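For the technically minded, the principle can be sketched in a few lines of code: the magnetised signal falls off exponentially with echo time, and T2* is the time constant of that decay. This is a minimal illustration with synthetic data – the function names and numbers are mine, not the unit’s actual software:

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential decay: signal S falls off with echo time TE as
# S(TE) = S0 * exp(-TE / T2*). Iron in the tissue makes T2* shorter.
def decay(te_ms, s0, t2star_ms):
    return s0 * np.exp(-te_ms / t2star_ms)

def estimate_t2star(te_ms, signal):
    """Fit the decay curve and return the T2* estimate in milliseconds."""
    (s0, t2star), _ = curve_fit(decay, te_ms, signal, p0=(signal[0], 20.0))
    return t2star

# Synthetic signal sampled at increasing echo times (milliseconds).
te = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 15.0, 18.0])
sig = decay(te, 1000.0, 12.0) + np.random.default_rng(0).normal(0, 5, te.size)

print(f"Estimated T2*: {estimate_t2star(te, sig):.1f} ms")
# A myocardial T2* below roughly 20 ms is the widely used marker of
# cardiac iron loading: the faster the magnetism disappears, the more iron.
```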
In the 1980s, like today, patients with thalassemia major – a condition where the body lacks the ability to produce adult haemoglobin, causing chronic fatigue – would have been receiving regular blood transfusions. These patients are often diagnosed at the age of two with profound anaemia. They’re given transfusions and they do fine to start with.
However, each pint of blood contains about a quarter of a gram of iron, and an adult patient needs about two pints of blood a month – so typically they’re getting half a gram of iron a month, or six grams a year. And we can’t excrete that, which is a huge problem, because it sticks around, it gets into tissues and it’s very toxic.
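As a back-of-the-envelope check on those numbers (using the approximate figures above, not patient-specific data):

```python
IRON_PER_PINT_G = 0.25   # roughly a quarter of a gram of iron per pint
PINTS_PER_MONTH = 2      # typical adult transfusion requirement

monthly_g = IRON_PER_PINT_G * PINTS_PER_MONTH   # 0.5 g per month
yearly_g = monthly_g * 12                       # 6.0 g per year

# The body has no route to excrete this iron, so the load accumulates:
for years in (1, 5, 10):
    print(f"After {years:2d} years: ~{yearly_g * years:.0f} g of iron retained")
```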
So the typical time course of a patient with thalassemia major in the 1980s would have been diagnosis at two years old, followed by treatment with an iron ‘chelator’, which binds the iron so that it can be passed out through urine or faeces.
If they didn’t receive an iron chelator, typically children would be transfused for about eight years and die at the age of ten. If they did receive an iron chelator, the typical life expectancy was 35. The patients, on the whole, still died of heart disease – the heart is very sensitive to iron. About 70 per cent of all deaths from thalassemia major were due to iron build up in the heart.
The haematologists looking after the patients would typically monitor the amount of iron in the liver by doing biopsies, or the amount of iron in the blood by taking blood tests. But the patients were still dying. So we knew, by the late 1990s, that measuring iron in the liver or blood wasn’t sufficient.
So our idea in 1998 was that we would find a way to measure the amount of iron in the heart directly, because that was the organ that was causing problems.
In doing so, we developed the T2* technique with our physicists here – a scientific development with the physicists embedded in the hospital, a brilliant example of translational science.
One of the first things we found was some patients had iron in the heart but not in the liver, and some had iron in the liver but not in the heart. So we discovered that the two organs load iron differently, and that was very, very surprising.
Patients who have rather little iron in the liver will have a near-normal liver biopsy and may not have very much iron in the blood, but they may still die of heart failure if the heart is loaded. And the reverse is true in the other group: they have lots of iron in the liver, so they have high iron in the blood, but if the heart is not loaded they won’t die of heart disease.
This was a huge step forward in understanding.
We also found that the average iron concentration in the hearts of patients who had died was six milligrams per gram of tissue. Now, the liver can easily hold six milligrams per gram without blinking – to damage the liver you might need 40, 50 or 60 milligrams per gram. So clearly iron is much more toxic to the heart than to the liver, and that’s why patients die of heart disease.
We took 650 patients in the UK, scanned them all and followed them up for a year. We found that the patients with the highest iron were the ones developing heart failure and dying.
This was very important – we had suspected this was happening, but we proved it.
And finally, we carried out drug trials. We published papers showing that an oral drug – deferiprone – would suck iron out of the heart better than conventional therapy, and it has now become the standard of care for patients with iron in the heart to be given deferiprone.
Reducing mortality
When we looked at the mortality of thalassemia patients in the UK after we’d introduced T2*, the mortality dropped by 70 per cent.
The T2* scan now operates in over 100 hospitals throughout the world. In principle – and this is very rough – if every thalassemia major patient in the world could get access to a T2* scan, which is a big ask, we think it would be possible to save roughly 30,000 lives a year.
So this is a very substantial advance with impressive outcomes. It’s a translational science story – a physics team embedded in a hospital at Royal Brompton – and it’s changed the mortality of a deadly condition for young people.
It’s remarkable to think that as recently as the 1990s life expectancy was 35, and many patients are now taking their pension after a lifetime of drug treatment and transfusions.
It’s very rare to find an imaging test that can be directly linked to reduced mortality. And it happened here.
The future
The obvious thing on the horizon is the rise of artificial intelligence (AI). All humans make mistakes: all radiologists make mistakes, all imaging consultants make mistakes. It’s likely that we will develop AI to be more accurate at diagnosis than humans and, given time, I think that will pervade the whole diagnostic and imaging space.
If you take imaging in the broadest sense, to mean diagnostics, I think it will enter an extraordinary phase of being real-time and non-invasive, and we’ll be able to measure things and make decisions much, much faster than we can now.
Just look at genetics, which, to me, is a type of imaging: we’re imaging the genome. In 70 years’ time, I would imagine we’ll be able to map genomes in real time, or in five minutes, or in some timeframe that today would seem completely ridiculous.
Lastly, I think we can say for definite that in 70 years’ time, scanners will be much smaller, much more capable, much more versatile, measuring new and different things that we can’t measure today, and potentially all handheld.
When we look at Star Trek, with Bones McCoy and Captain Kirk, they always had this thing called a tricorder – they’d waft it over someone and say, “Oh dear, you’ve got a blood leak” or something. I think that is probably going to come true.
Professor Dudley Pennell is a professor of cardiology and director of the cardiovascular magnetic resonance unit at Royal Brompton Hospital.