Healthcare and the Artificial Intelligence Revolution
The healthcare sector has always embraced technology. Since the advent of the computer, technologists and healthcare professionals have worked together to exploit technological breakthroughs, improving patient outcomes while minimising costs and delivering high standards of care to a greater number of patients. When a technology becomes reliable, cost-effective and scalable, it is embraced and generally thrives. We saw this in the '70s with the adoption of mainframe computers, in the '80s with the widespread adoption of personal computers and local networking, in the '90s with internet-based systems and more recently with the adoption of mobile technologies. It now appears we are on the cusp of the next technological revolution within healthcare: combining the vast amounts of data available, cloud computing services and machine learning techniques to create artificial intelligence (AI)-based solutions that can provide expert insight and analysis on a mass scale, at relatively low cost.
What is AI?
AI is concerned with replicating mechanisms of human intelligence using computers and software. One popular technique models the brain's network of neurons (a modelling approach known as 'artificial neural networks') to analyse information, extract successive layers of detail from within it and ultimately attempt to interpret the results. This makes the technology well suited to tasks such as analysing language and identifying objects within images. The basic principles have been around since the '60s and were refined in the '90s to allow systems to 'learn' from previous results. In medicine, such methods were used to perform tasks such as analysing pap smears.
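To make the idea of 'learning' from previous results concrete, here is a deliberately minimal sketch of a single artificial neuron, the basic building block of a neural network, trained on a toy logic task. Everything in it (the data, the learning rate, the number of training passes) is invented purely for illustration and bears no relation to any medical system:

```python
import math
import random

def sigmoid(x):
    # Squashes any number into the range 0..1, the neuron's 'activation'.
    return 1.0 / (1.0 + math.exp(-x))

# Toy 'previous results': pairs of input features and the correct answer.
# The neuron must learn the OR relationship from these examples alone.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(42)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
rate = 0.5  # learning rate: how strongly each error adjusts the weights

# Repeatedly present the samples and nudge the weights to reduce the error.
# This feedback loop is the essence of a neural network 'learning'.
for _ in range(5000):
    for inputs, target in samples:
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - output
        grad = error * output * (1 - output)  # derivative of the sigmoid
        weights = [w + rate * grad * x for w, x in zip(weights, inputs)]
        bias += rate * grad

def predict(inputs):
    # Round the activation to a hard 0-or-1 decision.
    return round(sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias))
```

A 'deep learning' network simply stacks many layers of such neurons, which is what allows each successive layer to extract progressively more abstract detail from the input.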
Fast forward 15-20 years and we have reached the magical point in time where the key ingredients required for the technology to become truly transformative have converged, giving us what many refer to as 'deep learning' neural networks. The falling cost and increasing speed of modern-day systems have led to widespread commercial use of the technology, with deep learning neural networks forming the backbone of many apps and services that we all take for granted on a daily basis (eg Google Image Search and Siri).
In recent years companies such as Google and IBM have been investing billions of dollars in research, development and acquisitions specifically for the healthcare applications of their products, and we are beginning to see the results of this investment in the real world of healthcare.
It's already begun
Recently it was widely publicised that IBM's flagship AI, Watson, had helped doctors at the University of Tokyo to identify a rare form of leukaemia in a 60-year-old woman who had been incorrectly diagnosed just months earlier. How was this achieved? Watson was initially 'trained' on a data set of around 20 million oncology studies. Once given the patient's genetic information, Watson was able to find patterns in the data set consistent with the patient's and ultimately make the correct diagnosis. Imagine trying to achieve this feat using traditional means: it would have taken thousands of hours from highly skilled scientific experts to trawl through the data looking for similarities. The sheer cost and time involved would make such a task prohibitive, not to mention the impact of taking those skilled experts away from other valuable duties, such as patient consultations. If the tech companies behind these advances can deliver consistent results at an affordable price, it is only a matter of time before the use of AI in such scenarios becomes widespread.
Collaboration is key
However, the technology companies alone cannot make AI proliferate into mainstream medical use. The medical communities and legislators around the world must play their part too. AI platforms are only as good as the data they are trained on, so in order to achieve correct and consistent results, access to large, relevant data sets must be available. There is always a degree of concern when people hear about medical records being stored electronically, particularly when handed over to third parties for what could be seen as commercial benefit. A prime example arose recently when the NHS partnered with Google-owned DeepMind on a number of initiatives to use machine learning techniques to improve patient care and further medical research. The first of these initiatives involved the use of 1.6 million patient records from the Royal Free NHS Trust, and the media at the time were quick to print headlines such as 'Google given access to London patient records for research'. In isolation such headlines may be concerning, and at the time there were legitimate discussions around the use of this data and the NHS's standard third-party data access policies. It certainly highlighted a level of scepticism in society surrounding the use of medical records by large corporations. More recently, and less controversially, the NHS and DeepMind have partnered to use machine learning on fully anonymised records to potentially recognise sight-threatening conditions from digital scans of patients' eyes. This initiative came about when a consultant ophthalmologist, Pearse Keane, contacted DeepMind directly after reading about its machine learning-based image recognition capabilities. He immediately made the connection between the capabilities of the technology and the benefits it could yield for his profession and, ultimately, patients.
A colleague of Pearse Keane, Professor Peng Tee Khaw, the head of Moorfields' ophthalmology research centre, eloquently stated the desired benefits of this approach when interviewed recently by The Guardian: “It takes me my whole life experience to follow one patient's history. And yet patients rely on my experience to predict their future. If we could use machine-assisted deep learning, we could be so much better at doing this, because then I could have the experience of 10,000 lifetimes.”
When you hear of this technology being discussed in these terms, it just makes sense. The results of this medical trial will be eagerly awaited.
As healthcare professionals learn more about AI-based initiatives, they will be more inclined to explore these solutions themselves, and this could prove the main driver of the technology's adoption.
Healthcare professionals are now not only influencing tech companies, they are starting their own. We need look no further than biotech start-up Berg to see how tech-savvy healthcare professionals, such as its president and co-founder Dr Niven Narain, are altering the landscape of the industry by using AI. Berg recently published data on a drug it had developed using AI techniques. The drug (BPM31510) originated when the company fed large amounts of cell information into its AI system, allowing it to learn common characteristics of, and differences between, healthy and cancerous cells. With this learned knowledge, the system could highlight possible methods by which a cancerous cell might be restored to a healthy state. Early trial data is encouraging, with indications that the drug is effective in treating certain tumours. The exciting aspect of this method is that the technology should only improve over time: as patient data from trials is fed back into the system, it will refine its knowledge of the disease and be able to make more insightful suggestions.
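Berg's actual models are proprietary and vastly more sophisticated, but the underlying idea of learning what separates healthy from cancerous cells, then flagging the biggest differences as candidate intervention points, can be sketched in a few lines. The feature names and numbers below are entirely made up for illustration:

```python
# Hypothetical measurements for a handful of cells. Each row is one cell;
# each column is one invented cellular feature. None of this is real data.
healthy = [[1.0, 0.20, 3.1], [1.1, 0.25, 3.0], [0.9, 0.18, 3.2]]
cancer = [[1.0, 0.90, 1.4], [1.05, 0.85, 1.5], [0.95, 0.95, 1.3]]
features = ["membrane_integrity", "growth_signal", "energy_metabolism"]

def centroid(rows):
    # Average each feature across the group of cells.
    return [sum(col) / len(rows) for col in zip(*rows)]

healthy_avg = centroid(healthy)
cancer_avg = centroid(cancer)

# Rank features by how far the cancerous average sits from the healthy one.
# The largest gaps suggest which cellular processes a treatment might try
# to restore towards their healthy values.
gaps = sorted(
    zip(features, (abs(h - c) for h, c in zip(healthy_avg, cancer_avg))),
    key=lambda pair: pair[1],
    reverse=True,
)
```

In a real system the features would number in the thousands and the model would be far richer than a comparison of averages, but the principle is the same: the more trial data fed back in, the sharper the learned distinction between healthy and diseased cells becomes.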
Dr Narain and his Berg colleagues are fundamentally challenging the traditional scientific method used for drug discovery. This was highlighted in a recent interview with Wired when he said: “You need to use AI to find how normal cellular processes break down, how that leads to disease and what the potential treatments are. Most people say: 'This is not how drugs are developed.' My answer to that is: 'Exactly, but this is the way drugs should be developed.'”
This quote reminds me of Henry Ford's famous line: “If I had asked people what they wanted, they would have said faster horses”, and shows that progress takes pioneering individuals willing to embrace technology and challenge the status quo.