Collaborative intelligent decision systems for safe and reliable AI-assisted medical image diagnostics

Serge Dolgikh

Article ID: 5700
Vol 7, Issue 1, 2024


Abstract


The cost of diagnostic errors in developed-world economies is high, according to a number of recent studies, and continues to rise. To date, the common process for performing image diagnostics across a growing number of conditions has been examination by a single human specialist, i.e., a single-channel recognition and classification decision system. Such a system has natural limitations: unmitigated errors that may be detected only much later in the treatment cycle, high resource intensity, and poor ability to scale to rising demand. At the same time, machine intelligence (ML, AI) systems, particularly those based on deep neural networks and large visual-domain models, have made significant progress in general image recognition, in many instances matching an average human and, in a growing number of cases, a human specialist in the effectiveness of image recognition tasks. The objectives of the AI in Medicine (AIM) program are to leverage the opportunities and advantages of rapidly evolving artificial intelligence technology to achieve real and measurable gains in public healthcare: in quality, access, public confidence, and cost efficiency. The proposal for a collaborative AI-human image diagnostics system falls directly within the scope of this program.
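The collaborative two-channel idea described above can be sketched as a simple routing rule: an AI channel handles cases it classifies with high confidence, and uncertain cases are escalated to the human specialist channel. This is a minimal illustrative sketch only, not the system proposed in the article; the `ai_predict` stub, the threshold value, and all function names are assumptions for demonstration.

```python
def ai_predict(image):
    """Hypothetical AI channel: returns (label, confidence).

    Stand-in for a deep-network image classifier; here it emits a
    fixed example output so the routing logic can be demonstrated.
    """
    return "benign", 0.97


def collaborative_decision(image, human_review, confidence_threshold=0.9):
    """Two-channel decision rule: accept the AI label when its
    confidence clears the threshold, otherwise defer the case to
    the human specialist channel."""
    label, confidence = ai_predict(image)
    if confidence >= confidence_threshold:
        return label, "ai"
    return human_review(image), "human"


# Example: with a 0.9 threshold the AI channel decides; raising the
# threshold to 0.99 escalates the same case to the human reviewer.
label, channel = collaborative_decision(None, human_review=lambda img: "benign")
```

The single tunable threshold is the simplest possible arbitration policy; a deployed system would also need to handle disagreement auditing and feedback from later stages of the treatment cycle, as the abstract suggests.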


Keywords


image diagnostics; machine learning; transfer learning; collaborative Human-AI systems; intelligent decision systems; AIM





DOI: https://doi.org/10.24294/irr5700



License URL: https://creativecommons.org/licenses/by/4.0/

This site is licensed under a Creative Commons Attribution 4.0 International License.