In the future, artificial intelligence will play an important role in medicine. In diagnostics, successful tests have already been carried out: for example, a computer can learn to classify images with great accuracy according to whether or not they show pathological changes. It is much more difficult, however, to train an AI to examine a patient's condition as it changes over time and to calculate treatment suggestions, and this is exactly what has now been achieved at TU Wien in collaboration with the Medical University of Vienna.
With the help of extensive data from the intensive care units of various hospitals, an artificial intelligence was developed that makes suggestions for treating people who need intensive care due to sepsis. Analyses show that the AI already outperforms human decisions. It is now important, however, to also discuss the legal aspects of such methods.
Optimization of existing data
The project was headed by Professor Clemens Hitzinger of the Institute of Analysis and Scientific Computing at TU Wien, who is also Co-Director of the Center for Artificial Intelligence and Machine Learning (CAIML) at TU Wien.
Medical staff make their decisions on the basis of well-established rules. Most of the time, they know very well which criteria they must take into account in order to provide the best care. However, a computer can easily take many more parameters into account than a human, and in some cases this can lead to even better decisions.
The computer as a planning agent
“In our project, we used a form of machine learning called reinforcement learning,” says Clemens Hitzinger. “This is not just about simple categorization — for example, separating a large number of images into those that show a tumor and those that don’t — but about a temporally evolving process, about the progression a given patient is likely to go through. Mathematically, that is something quite different. There has been very little research in this direction in the medical field.”
The computer becomes an agent that makes its own decisions: if the patient’s condition improves, the computer is “rewarded”; if the condition deteriorates or death occurs, the computer is “punished”. The computer program has the task of maximizing this hypothetical “reward” by taking actions. In this way, extensive medical data can be used to automatically derive a strategy that achieves a high probability of success.
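The reward-and-punishment scheme described above can be sketched with tabular Q-learning on an invented toy model. Everything below — the discrete severity states, the two actions, and their transition probabilities — is a purely illustrative assumption for the sketch, not the actual clinical model or data used at TU Wien.

```python
import random

# Toy model (invented for illustration): patient severity is a discrete
# level 0..4, where 0 means recovered and 4 means death. Two hypothetical
# treatment actions; action 1 is given better improvement odds by construction.
ACTIONS = [0, 1]
N_STATES = 5

def step(state, action, rng):
    """Invented toy dynamics: return (next_state, reward, done)."""
    p_improve = 0.6 if action == 1 else 0.4
    if rng.random() < p_improve:
        state = max(0, state - 1)          # condition improves
    else:
        state = min(N_STATES - 1, state + 1)  # condition deteriorates
    if state == 0:
        return state, 1.0, True            # recovery: agent is "rewarded"
    if state == N_STATES - 1:
        return state, -1.0, True           # death: agent is "punished"
    return state, 0.0, False

def train(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 2  # each episode starts at moderate severity
        done = False
        while not done:
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r, done = step(s, a, rng)
            # Standard Q-learning update toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy for the non-terminal states 1..3; it should prefer the
# action with the higher improvement probability (action 1 in this toy model).
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(1, N_STATES - 1)]
print(policy)
```

The real system learns from recorded intensive-care data rather than a simulator, but the core loop — act, observe the change in the patient’s condition, and update a value estimate from the resulting reward — is the same idea.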
Really better than a human
“Sepsis is one of the most common causes of death in intensive care medicine and poses an enormous challenge for clinicians and hospitals, as early detection and treatment are essential for patient survival,” says Prof. Oliver Kimberger of the Medical University of Vienna. “So far, there have been few medical breakthroughs in this field, which makes the search for new treatments and approaches even more urgent. For this reason, it is particularly interesting to investigate how much AI can improve medical care here. The use of machine learning models and other artificial intelligence technologies provides an opportunity to improve the diagnosis and treatment of sepsis, ultimately increasing the patient’s chances of survival.”
The analysis shows that the AI’s capabilities are already outperforming humans: “Cure rates are now higher with the AI strategy than with purely human decisions. In one of our studies, the cure rate in terms of 90-day mortality was increased by about 3% to about 88%,” says Clemens Hitzinger.
Of course, this does not mean that medical decisions in the ICU should be left to the computer alone. But the AI may act as an additional device at the bedside — medical staff can consult it and compare their own assessment with the AI’s suggestions. Such artificial intelligences can also be very useful in training.
Discussion about legal issues is essential
“However, this raises important questions, especially legal ones,” says Clemens Hitzinger. “One probably first thinks of the question of who will be held liable for any mistakes made by the AI. But there is also the inverse problem: what if the AI makes the right decision, but the human chooses a different treatment option and the patient is harmed as a result? Could the doctor then be accused of a mistake because it would have been better to trust the AI, which comes with a vast wealth of experience? Or should the human have the right to ignore the computer’s advice at any time?”
“The research project shows: AI can indeed be used successfully in clinical practice with today’s technology – but there is still an urgent need to discuss the social framework and clear legal rules,” says Clemens Hitzinger.