THE EVIDENCE E-ISSN:3048-7870
Vol. 2 No. 4 (2024)
Review Article Digital health
www.evidencejournals.com
Cite this Article
Hussain DZ, Zaheer A, Akhtar A. Advancing cancer care with artificial intelligence: early detection initiatives. THE EVIDENCE. 2024:2(4):1-11. DOI:10.61505/evidence.2024.2.4.91
Available From
https://the.evidencejournals.com/index.php/j/article/view/91

Received: 2024-07-16
Revised: 2024-08-02
Accepted: 2024-08-08
Published: 2024-12-07

Evidence in Context

● AI enhances cancer care, especially in early detection using DL and ML.
● DL models like Sybil accurately predict lung cancer risk from single CT scans.
● CNNs diagnose and grade cancers using diverse imaging and histopathology data.

● CAD colonoscopy, Paige Prostate Alpha, and CuP-AI-Dx highlight AI's versatility in detecting cancers.
● Multimodal DL models integrating diverse data sources boost clinical applicability.


Advancing cancer care with artificial intelligence: early detection initiatives

Daniyah Zehra Hussain1, Amna Zaheer1*, Ahmad Akhtar1,2

1 Darul Qalb, Knoxville, Tennessee, USA.

2Family and Community Medicine Residency at Mercy Health — Anderson Hospital, Cincinnati, Ohio, USA

*Correspondence:

Abstract

Background: Artificial intelligence (AI) is revolutionizing early cancer detection, with machine learning (ML) and deep learning (DL) at the forefront. While ML requires human guidance, DL autonomously learns from data using neural networks, excelling in image and speech recognition, pattern detection, and natural language processing, making it highly valuable in oncology. The aim of this research article is to review the application of Artificial Intelligence, particularly DL, in the early detection of cancer, evaluating various models and their effectiveness in improving diagnostic accuracy and prediction.

Methods: A structured literature search was carried out using PubMed, Google Scholar, and ScienceDirect. Keywords included "Artificial Intelligence," "Machine Learning," "Deep Learning," and "Cancer Detection." Inclusion criteria encompassed original research articles in English published in the last twenty years. Non-research articles and those with inaccessible full texts were excluded.

Results: The AI models reviewed include: Sybil, which accurately predicts lung cancer risk from a single low-dose CT scan; CNN-based models, effective in classifying cancers from endoscopy, radiology, and histopathology images; CADe colonoscopy, which improves adenoma detection rates in real time; Paige Prostate Alpha, which detects prostate cancer in whole slide images; CUP-AI-Dx, which recognizes the primary tissue of origin in cancers of unknown primary using RNA-based data; M3Net, which integrates CT images and biomarker data for cancer risk prediction; and GANs, which enhance image-based diagnostics through denoising and completion.

Conclusion: DL algorithms demonstrate high accuracy in cancer detection, often matching or surpassing expert evaluations. Despite challenges in model robustness, multimodal data integration, and ethical considerations, AI holds significant promise in oncology. Future research should focus on enhancing model interpretability and clinical integration.

Keywords: Classification; standards; cancer; radiotherapy; ultrastructure; histology

Introduction

Artificial intelligence (AI) can be described as a system programmed to learn and detect correlations and patterns between inputs and outputs, and to apply this information to entirely new input data. Machine learning (ML) and deep learning (DL) are the two main approaches to implementing AI. ML is a subset of AI that relies on human intervention to learn, while DL is a particular subfield of ML that learns

© 2024 The author(s) and Published by the Evidence Journals. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.


on its own by using artificial neural networks that mimic the structure and function of the human brain. DL uses deep neural networks (DNNs), inspired by the neural architecture of the brain, to build intricate models with several hidden layers that evaluate diverse varieties of input and predict outcomes. DL algorithms feed the machine raw data so that it can train itself to automatically identify the most useful deep features for the task. This capability most likely explains why DL algorithms have continuously improved at many popular AI tasks, including speech recognition, image recognition, pattern recognition, and natural language processing. As a result, DL is used extensively in AI research in the realm of cancer. Oncologists have a keen interest in applying AI to diagnose malignancies accurately from radiological images and pathological slides, forecast patient outcomes, and make the best possible treatment decisions [1] (Figure 1).


Figure 1: Artificial Intelligence Algorithms
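To make the hidden-layer structure described above concrete, the following is a minimal NumPy sketch of a feed-forward network: random, untrained weights and made-up dimensions, purely for illustration of how raw input passes through successive layers to a probability-like output.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer: an affine transform of its input."""
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A toy "deep" network: 16 raw input features -> two hidden layers -> score.
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
w3, b3 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h1 = relu(dense(x, w1, b1))        # first hidden layer: low-level features
    h2 = relu(dense(h1, w2, b2))       # second hidden layer: combinations
    return sigmoid(dense(h2, w3, b3))  # probability-like output in [0, 1]

scores = forward(rng.normal(size=(5, 16)))
print(scores.shape)  # (5, 1)
```

In a real DL model the weights would be learned from data rather than drawn at random, and the layers would number in the tens or hundreds.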

Cancer ranks among the leading causes of death in both developed and developing countries, and the death toll from oncological diseases is expected to rise to 13.1 million by 2030. Delay in detection often leads to poor prognosis and lowers the chances of survival; early detection of cancers, however, can diminish mortality rates [2]. Diagnosis of cancer can be divided into three categories: predicting the risk of occurrence of the disease, predicting its recurrence, and predicting the chances of survival of cancer patients. Detection of cancers is most commonly done via medical imaging and histological methods. Different scanning techniques can be used to screen for oncological diseases; for example, a computed tomography (CT) scan is used to detect tumor shape and size. Similarly, nuclear scans such as gallium scans, positron emission tomography (PET) scans, bone scans, multigated acquisition (MUGA) scans, and thyroid scans help determine cancer metastasis. MRI is a commonly used tool to detect the occurrence and spread of tumors. Mammograms and ultrasounds are widely used by specialists for screening breast cancers [3]. Interventional procedures including high-definition white light endoscopy, traditional white light endoscopy, colonoscopy, endocytoscopy, and confocal endomicroscopy are used to detect polyps and lesions and to perform biopsies for confirmation of the type of cancer [4].

In some metastatic cancers, a primary site of origin cannot be identified even after a routine diagnostic workup; such cases, known as cancer of unknown primary (CUP), account for around 3-5% of all malignant illnesses. CUP patients face substantial disadvantages and most have poor survival, since knowing a patient's original malignancy remains essential to therapy. It is therefore crucial to create reliable and accessible diagnostic techniques for determining the cancer's primary tissue of origin [5].

Methods

The following methodology (Figure 2) was adopted for the review.


Figure 2: Flow diagram

Search Strategy

Our review, titled "Advancing Cancer Care with AI: Early Detection Initiatives," focused on two main concepts: "Artificial Intelligence" and "Cancer Early Detection." To find relevant literature, we carefully chose specific keywords for each concept across various databases like PubMed, Google Scholar, and ScienceDirect.

For the "Artificial Intelligence" aspect, we used terms such as "Artificial Intelligence," "AI," "Machine Learning," and "Deep Learning." These keywords cover different aspects of AI relevant to cancer early detection. For the "Cancer Early Detection" concept, we employed keywords like "Cancer Detection," "Early Cancer Diagnosis," "Cancer Screening," "Tumor Detection," and "Early Cancer Detection Methods." These keywords were chosen to encompass a wide range of approaches related to early detection in cancer care.

We presented this search strategy in a table (Table 1), ensuring transparency and clarity in our approach.

Criteria for Inclusion

We selected articles on the basis of the standards mentioned below:

  1. Articles were restricted to those written in English to maintain linguistic consistency and ensure thorough understanding.
  2. Articles published within the past twenty years were included to focus on recent advancements.

Criteria for Exclusion

We excluded articles based on the following criteria:

  1. Correspondence, perspectives, conference abstracts, editorials, and news items were excluded to prioritize original research.
  2. Inaccessible full texts were excluded.
  3. Articles not directly related to cancer early detection or lacking integration of artificial intelligence were excluded to maintain relevance.

By following these criteria, we aimed to thoroughly explore how artificial intelligence contributes to advancing cancer care through early detection initiatives.

Table 1: Search Strategy

Concept                  Databases                               Keywords
Cancer                   PubMed, Google Scholar, ScienceDirect   Cancer Detection, Early Cancer Diagnosis, Cancer Screening, Tumor Detection, Early Cancer Detection Methods
Artificial Intelligence  PubMed, Google Scholar, ScienceDirect   AI, artificial intelligence, machine learning, deep learning, neural networks, natural language processing, support vector machine, AI in healthcare, cognitive computing

Results

Several AI models are at different stages of testing for diagnostic accuracy across different cancers; a few currently being assessed for the detection of some of the most common cancer types worldwide are described below.

Sybil

Sybil is a recently developed deep learning model, trained on the National Lung Screening Trial (NLST), that can accurately predict a patient's chances of developing lung cancer over the next six years from a single low-dose computed tomography (LDCT) scan, enabling more individualized screening [6].

Sybil can run in the background on a radiological reading station in real time; it needs only one LDCT and requires no clinical data or radiologist annotations. In a recent study, three separate data sets were used to validate Sybil: a held-out set of six thousand LDCTs from NLST participants, eight thousand LDCTs from Massachusetts General Hospital (MGH), and twelve thousand LDCTs from Chang Gung Memorial Hospital (CGMH), comprising individuals with a variety of smoking histories, along with never-smokers. The outcomes confirmed Sybil's predictive accuracy: it obtained an area under the receiver-operating-characteristic curve for predicting lung cancer at 1 year of 0.92 (95% CI, 0.88 to 0.95) on NLST, and the concordance index over 6 years was 0.75 (95% CI, 0.72 to 0.78) for NLST [6].
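The area-under-the-curve figure reported above has a simple probabilistic reading: the chance that a randomly chosen cancer case is scored higher than a randomly chosen control. A small self-contained sketch (the labels and risk scores below are invented for the example, not study data):

```python
def auc(labels, scores):
    """Area under the ROC curve: probability that a randomly chosen
    positive is scored above a randomly chosen negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 1-year risk scores for six screened patients
# (1 = developed cancer, 0 = did not).
labels = [0, 0, 1, 0, 1, 1]
scores = [0.10, 0.70, 0.80, 0.20, 0.65, 0.90]
print(auc(labels, scores))  # 8 of 9 positive/negative pairs ordered correctly
```

A perfect ranker scores 1.0; a coin flip scores 0.5, which is why Sybil's 0.92 at one year is notable.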

A pre-trained 3D ResNet-18 encoder was used to extract features from the input LDCT volume. Two pooling layers, one for max pooling and one for attention-guided pooling, turn these features into a global feature vector for the volume. The resulting vectors are combined and passed through a hazard layer to produce a cumulative likelihood of developing lung cancer over a span of 6 years. Sybil is an ensemble of five such models: the same architecture is trained five times and the risk predictions are aggregated. Bounding boxes around visible cancer nodules were annotated to direct the model's focus during training, but not during testing [6].
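A simplified NumPy sketch of the two pooling steps and the hazard layer described above; the feature shapes, random toy features, and the hazard squashing are illustrative assumptions, not Sybil's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_pool(features, attn_w):
    """Attention-guided pooling: a learned score per spatial position,
    softmax-normalized, then a weighted sum of the feature vectors."""
    scores = features @ attn_w               # one score per position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ features                # (channels,)

def hazard_to_cumulative(hazards):
    """Per-year hazards h_t -> cumulative risk of cancer by year t:
    P(cancer by t) = 1 - prod_{k<=t} (1 - h_k), which never decreases."""
    return 1.0 - np.cumprod(1.0 - hazards)

# Toy encoder output: 10 spatial positions x 32 channels from one volume.
feats = rng.normal(size=(10, 32))
global_vec = np.concatenate([feats.max(axis=0),               # max pooling
                             attention_pool(feats, rng.normal(size=32))])

# Toy hazard head: squash 6 values to (0, 1) per-year hazards.
hazards = 1.0 / (1.0 + np.exp(-(global_vec[:6] * 0.1)))
risk = hazard_to_cumulative(hazards)
assert np.all(np.diff(risk) >= 0)  # cumulative risk is non-decreasing
```

The hazard formulation is what lets one forward pass yield a whole six-year risk curve rather than a single score.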

Although a number of models have been created to enhance lung cancer screening (LCS) and detection, none of them can be directly compared to Sybil, since they differ from it in objective, scope, data input, and code accessibility. Unlike Sybil, several models require clinical information, hand-drawn nodule identification and characterization, several LDCTs, or radiologist-performed Lung-RADS assessments [6].

The study has several limitations. It is retrospective, lacks a true comparator model, and has inadequate population diversity. The authors could not evaluate Sybil's ability to detect malignancies outside of a screening program, as the cohorts studied were part of LCS. Extensive smoking data from CGMH individuals were also lacking, making any assessment of Sybil's lung cancer detection in non-smokers speculative. Sybil's development must be cautious and transparent, with a critical evaluation of its flaws, as is essential for any AI system in healthcare [6].

Convolutional neural network based DL models

Convolutional neural networks (CNNs) are the most frequently used DL architectures for DNN models. They have been applied to the categorization, segmentation, and detection of cancer lesions in medical images. A standard CNN is built from convolutional, pooling, and fully connected layers. CNNs transform the original images layer by layer, turning pixel values into final prediction scores. Convolutional layers combine the input data with convolutional kernels (filters) to create a transformed feature map. The filters in the convolutional layers are adjusted automatically during learning to extract the features most helpful for a particular task. However, there is a downside, often referred to as the "black box" problem: it is challenging to determine which properties the CNNs have learnt [1] (Figure 3).


Figure 3: Working model of Convoluted Neural Network (CNN)
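The filtering a convolutional layer performs can be sketched directly. The toy image and hand-picked edge kernel below stand in for filters a CNN would learn on its own; as in most DL frameworks, "convolution" here is implemented as cross-correlation.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image and
    take the dot product at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel picks out intensity boundaries, one of the
# low-level features early convolutional layers tend to learn.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
print(conv2d(image, edge_kernel))  # responds (value 2) only at the edge
```

Stacking many such filtered maps, interleaved with pooling, is what lets a CNN build up from edges to lesion-level patterns.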

It has been widely reported that CNN-based DL models can predict, classify, and grade cancers with precision and accuracy using images from endoscopy (such as esophagogastroduodenoscopy and colonoscopy), radiology (such as CT and MRI), and histopathology (such as whole-slide imaging [WSI]). The majority of these models exhibit accuracy at least comparable to that of experienced clinicians. In particular, malignant tumors on histopathology slides have been identified with excellent accuracy using CNN-based DL models. In a global competition (CAMELYON16) for diagnosing metastatic breast cancer in lymph nodes from hematoxylin-eosin (HE)-stained WSI, the best CNN algorithm (a GoogLeNet-based model) produced an AUC of 0.994, outperforming the most skilled pathologist (AUC 0.884) in a more efficient and quicker way. A prospective study assessed a CNN-based model across six hospitals with varying levels of care: diagnostic accuracies ranged from 0.915 to 0.977, higher than those of non-experts and comparable to those of expert endoscopists, demonstrating the potential for enhancing the diagnostic capability of community-based hospitals [7].

CAD colonoscopy

To increase the likelihood of finding adenomas during colonoscopy, AI, specifically computer-aided detection (CADe) software, is undergoing research. In 2003, the first study using CADe to identify colorectal polyps was released. Kulkarni et al. employed wavelet transformation technology to find polyps with a sensitivity of 93.6% and a specificity of 99.3%. DL networks were then adopted in CADe, opening the door for instantaneous, real-time analytical research [8].

More recently, real-time randomized controlled trials (RCTs) utilizing colorectal polyp detection technologies have been carried out. In a non-blinded study by Wang et al., 1058 individuals were included; 536 were randomly assigned to colonoscopy and 522 to colonoscopy with CADe. In comparison to the control group, the CADe identified more adenomas smaller than 10 mm. The authors also found no discernible difference between the two groups in endoscope withdrawal time (excluding time spent performing biopsies). Liu et al. randomized 1026 patients to either the CADe or the control group. The ADR was substantially greater in the CADe group (39%) than in the control group (23%). There were just 36 false-positive alarms, and the CADe did not overlook any polyps. Withdrawal times were also comparable between groups (CADe 6.16 minutes vs. 6.11 minutes for control). According to recent meta-analyses, CADe was effective at spotting adenomas. The possibility of high false-positive alert rates with CADe technology is a serious drawback: Wang et al. and Liu et al. found modest rates, but Hassan et al. recorded 1092 false-positive alarms in total, an average of 27.3 per colonoscopy. Additionally, the majority of the studies that evaluated CADe used control groups with ADRs of 8% to 23%, below the advised goal ADR of 25% [9, 10, 11] (Figure 4).


Figure 4: Machine Learning Algorithm
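The sensitivity and specificity figures quoted for CADe systems follow directly from confusion-matrix counts. The counts below are hypothetical, chosen only to reproduce an operating point like the 93.6%/99.3% reported for Kulkarni et al.; they are not counts from any study above.

```python
def sensitivity(tp, fn):
    """Fraction of true polyps the detector actually flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of polyp-free frames the detector correctly passes."""
    return tn / (tn + fp)

# Hypothetical confusion counts for one CADe evaluation run.
tp, fn = 88, 6     # polyps flagged vs. missed
tn, fp = 950, 7    # clean frames passed vs. false alarms
print(round(sensitivity(tp, fn), 3))  # 0.936
print(round(specificity(tn, fp), 3))  # 0.993
```

Note that a high specificity per frame can still yield many false alarms per procedure, since a colonoscopy produces thousands of frames, which is the drawback Hassan et al. observed.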

Paige Prostate Alpha

Prostate needle core biopsy is regarded as the gold standard for detecting prostate cancer (PrCa). Small, well-differentiated tumors can be hard to diagnose. Lately, machine learning approaches with excellent test accuracy have been developed to identify PrCa in whole-slide images. One such model is Paige Prostate Alpha, a PrCa detection system for WSIs of H&E-stained prostate core needle biopsies, based on ML and DL algorithms. However, how these artificial intelligence systems will affect pathologic diagnosis is still unclear [12].

Paige Prostate Alpha was created based on Campanella et al.'s weakly supervised deep learning algorithm. Each WSI is first divided into a set of 224 × 224 px tiles; tiles flagged as background are excluded from examination [13]. During prediction, a ResNet-34 convolutional neural network outputs the likelihood of malignancy for each non-background tile. The top tiles with the highest probability are then converted into 512-dimensional feature vectors (embeddings) and fed into a recurrent neural network that combines information from several tiles to produce the final prediction. Paige Prostate Alpha was trained on 36,644 WSIs, 7,514 of which contained malignant foci.
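A minimal sketch of the tiling and background-filtering step described above. The toy slide array, the brightness-based background test, and its threshold are assumptions for illustration; production pipelines use more careful tissue detection.

```python
import numpy as np

TILE = 224  # tile side in pixels, as in the pipeline described above

def tile_slide(slide, tile=TILE):
    """Cut a slide array into non-overlapping tile x tile patches,
    dropping any partial border tiles."""
    h, w = slide.shape[:2]
    return [slide[i:i + tile, j:j + tile]
            for i in range(0, h - tile + 1, tile)
            for j in range(0, w - tile + 1, tile)]

def is_background(patch, thresh=0.9):
    """Flag mostly-white tiles (glass, no tissue) so they are skipped."""
    return patch.mean() > thresh

# Toy slide: mostly bright background, one darker tissue region.
slide = np.ones((896, 896))
slide[224:448, 224:448] = 0.3

tiles = tile_slide(slide)
tissue = [t for t in tiles if not is_background(t)]
print(len(tiles), len(tissue))  # 16 1
```

Only the surviving tissue tiles would be scored by the CNN, which is what makes weak supervision at the slide level tractable.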

The study's drawbacks include its small sample size and its restriction to pathologists with limited genitourinary subspecialty training, which may have lowered the study's sensitivity in the arm without Paige Prostate Alpha. The dataset was also small and could have been enlarged to include more benign cancer mimickers, a wider range of Gleason grades, and more uncommon prostatic adenocarcinoma variants [14].

CUP-AI-Dx

As noted above, a primary site of origin cannot be identified in some metastatic cancers even after a routine diagnostic workup; such cases, known as cancer of unknown primary (CUP), account for around 3-5% of all malignant tumors. CUP-AI-Dx is an RNA-based classifier that determines the primary tissue of origin of a tumor using a CNN model with a 1D Inception architecture. With the help of The Cancer Genome Atlas (TCGA) project and the International Cancer Genome Consortium (ICGC), transcriptional profiles from 18,217 primary tumors spanning 32 cancer types were used to train CUP-AI-Dx. The gene expression data were sorted by gene chromosomal coordinates before being fed into the 1D-CNN model, which concurrently employs several convolutional kernels in various configurations to increase generality. Numerous hyperparameters, such as max-pooling layers and dropout configurations, were tuned extensively to improve the model. A random forest model was also created for 11 tumor types that can categorize a tumor's molecular subtype in accordance with earlier TCGA investigations. The improved CUP-AI-Dx tissue-of-origin classifier was assessed on 394 metastatic samples from 11 tumor types provided by TCGA and 92 formalin-fixed paraffin-embedded (FFPE) samples covering about 18 cancer types provided by two clinical laboratories. The CUP-AI-Dx molecular-subtype classifier was also evaluated separately on independent breast and ovarian cancer microarray datasets. Since CUP-AI-Dx accurately predicts a tumor's primary site and molecular subtype, it may help diagnose malignancies of uncertain or unknown primary origin using a commonly available genomic platform [15].
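The coordinate-sorting step that precedes the 1D-CNN can be sketched in a few lines. The gene coordinates and expression values below are illustrative stand-ins, not values from the CUP-AI-Dx training data; a real pipeline would pull coordinates from a genome annotation database.

```python
# Hypothetical (chromosome, start position) coordinates for four genes.
genes = {
    "TP53":  (17, 7668402),
    "BRCA1": (17, 43044295),
    "EGFR":  (7, 55019017),
    "MYC":   (8, 127735434),
}
expression = {"TP53": 2.1, "BRCA1": 0.7, "EGFR": 5.3, "MYC": 3.8}

# Order genes by (chromosome, position) so neighbouring inputs to the
# 1D-CNN are genomic neighbours, giving the convolutions local structure.
ordered = sorted(genes, key=lambda g: genes[g])
vector = [expression[g] for g in ordered]
print(ordered)  # ['EGFR', 'MYC', 'TP53', 'BRCA1']
print(vector)   # [5.3, 3.8, 2.1, 0.7]
```

Without such an ordering, a 1D convolution over an arbitrary gene list would mix unrelated features; sorting gives the kernel a meaningful notion of adjacency.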

Multi-path model

The multi-path multi-modal missing network (M3Net) contains several paths and is designed to handle missing biomarker and image modalities. The image path, which learns from CT images, has two basic components: the nodule identification and feature extraction networks of Liao et al., built on a convolutional neural network (CNN), and a sub-net fed by the five nodule features, designated the Attention-based Multi-Instance Learning (AMIL) module. The biomarker path takes as input a 10-dimensional vector comprising the nine currently available biomarkers plus the higher-level Mayo risk factor; two dense layers extract features from the biomarkers. The outputs of the image path and biomarker path are combined with two additional variables, blood and Mayo risk, which have been shown to be useful in estimating the chance of developing cancer. The combined path, consisting of two dense layers, produces the final cancer risk prediction, which serves as the basis for the reported performance. Cross-entropy loss (CEL) is deployed at three places in the pipeline: the image path, the biomarker path, and the combined path. The motivation is to take advantage of participants with missing modalities, so the image path and biomarker path can be trained and learned independently [16].
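A toy NumPy sketch of the multi-path fusion idea, with a zero vector standing in for a missing modality so layer shapes stay fixed. The dimensions and the single dense-layer stand-ins are assumptions for illustration, not M3Net's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def path_features(x, w):
    """A stand-in for one network path (image or biomarker): dense + ReLU."""
    return np.maximum(x @ w, 0.0)

def fuse(image_feat, biomarker_feat, w_out):
    """Combined path: concatenate whatever modalities are present,
    substituting a zero vector for a missing one."""
    img = image_feat if image_feat is not None else np.zeros(4)
    bio = biomarker_feat if biomarker_feat is not None else np.zeros(4)
    z = np.concatenate([img, bio]) @ w_out
    return 1.0 / (1.0 + np.exp(-z))  # cancer-risk probability

w_img, w_bio = rng.normal(size=(5, 4)), rng.normal(size=(10, 4))
w_out = rng.normal(size=8)

# One participant with both modalities, one with only the image path.
full = fuse(path_features(rng.normal(size=5), w_img),
            path_features(rng.normal(size=10), w_bio), w_out)
image_only = fuse(path_features(rng.normal(size=5), w_img), None, w_out)
assert 0.0 < full < 1.0 and 0.0 < image_only < 1.0
```

Because each path also carries its own loss, participants with a missing modality still contribute gradient signal to the path they do have, which is the point of the three-loss design.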

Generative Adversarial Network (GAN)

Generative adversarial networks (GANs) are premised on two networks competing with one another. One network learns to approximate the distribution that produced the genuine training data, so it can generate "fake" samples that mimic actual training data. This network, designated G(z), is known as the generator because it maps a vector of random noise (z) to a generated data point (in this case, an image). The second network, known as the discriminator or critic, learns to tell generated samples apart from real ones: it takes a sample x and outputs the likelihood that x belongs to the actual dataset, hence the term D(x).
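The competition between G and D can be written down as the standard GAN losses. The discriminator outputs below are made-up values for illustration; in training these would come from D evaluated on real and generated batches.

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss: reward D(x) near 1 on real samples and
    D(G(z)) near 0 on generated ones."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Generator (non-saturating) loss: reward fooling the critic,
    i.e. pushing D(G(z)) toward 1."""
    return -math.log(d_fake)

# Early in training the critic wins easily ...
print(round(d_loss(0.9, 0.1), 3), round(g_loss(0.1), 3))  # 0.211 2.303
# ... at the theoretical equilibrium D outputs 0.5 everywhere.
print(round(d_loss(0.5, 0.5), 3), round(g_loss(0.5), 3))  # 1.386 0.693
```

Alternating gradient steps on these two objectives is what drives the generator's samples toward the real data distribution.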

Deep convolutional GANs (DCGANs) are GANs with many convolutional layers; they are among the techniques used for image denoising, image-to-image translation, realistic image synthesis, and image completion. When performing image completion, a latent representation of the data is learned using generators with architectures comparable to autoencoders [17]. Latent representation is accomplished by compressing and decompressing the incoming data while minimizing information loss.

In the first stage of the generator, a coarse network made of dilated convolutional blocks produces a rough, blurry prediction for the missing area of the image. The refinement network, which makes up the second component of the generator, combines a contextual attention branch with dilated convolutional branches to enhance the completed region with finer features.

By comparing the inner product (cosine similarity) of features within the generated missing area against features found in the surrounding field of view, the contextual attention branch optimizes consistency between the inferred missing area and the rest of the image. Local and global discriminators are responsible for achieving uniformity between the completed masked region and the full image.
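The patch-matching step above reduces to cosine similarity between feature vectors. The 4-dimensional vectors below are hypothetical stand-ins for real patch features, kept tiny to make the comparison readable.

```python
import numpy as np

def cosine_similarity(a, b):
    """Inner product of L2-normalized vectors, as used to match a
    generated patch against candidate background patches."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One inpainted patch and two surrounding context patches; the
# attention branch favours the closer texture match.
generated = np.array([1.0, 0.5, 0.0, 0.2])
context_a = np.array([0.9, 0.6, 0.1, 0.1])   # similar texture
context_b = np.array([-0.5, 0.1, 1.0, 0.0])  # dissimilar texture
sim_a = cosine_similarity(generated, context_a)
sim_b = cosine_similarity(generated, context_b)
assert sim_a > sim_b
```

Normalizing before the inner product makes the match depend on texture direction rather than feature magnitude, so bright and dark patches of the same pattern still match.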

The benefit of this model is that it does not need to rely on scans of malignant tissue, which are scarce; instead, it is trained on a large number of normal scans, adopting the same methodology as several other studies of a similar nature. However, there are limitations. First, the test set included a limited number of positive examples. Second, not all types of abnormal objects were annotated in the evaluation dataset. Additionally, the tested sizes of the field-of-view parameter were constrained to those deemed appropriate and computationally practicable, and future research may consider a wider range of values. The algorithm also had the drawback of flagging unusual but non-cancerous areas in the images [18, 19].

Discussion

Among the algorithms used in these models, the accuracy of deep learning in predicting and detecting disease is notable. DL algorithms have also been used to predict the tissue of origin of primary malignancies with unclear origin, a persistent challenge in cancer diagnosis. Besides endoscopy, MRI, PET-CT, and other imaging techniques have all been used to diagnose malignant illnesses with consistent success using DL [1, 20]. Additionally, difficult cancer classification and grading tasks are carried out with DL models. Coudray et al. [22] categorized WSI of lung tissue into three groups (lung squamous cell carcinoma, lung adenocarcinoma, and normal) by developing DeepPATH, an Inception-v3-based model, and reported an AUC of 0.97. Using MRI scans and a DL technique (based on DenseNet and SENet), Zhao et al. [15] obtained an AUC of 0.83 for predicting liver tumor grade (low vs. high). These studies collectively demonstrate the potential of AI for classifying and grading cancers, with results on par with those of qualified professionals [1, 15, 21, 22]. Preclinical evaluations of artificial intelligence techniques have enabled clinical studies to improve the efficacy and precision of cancer detection, but DL models still need to be made more robust and generalizable [17]. DL algorithms have also been applied to histopathology to characterize key epigenetic and genetic variation [11]. Additionally, DL tools based on WSI have been created for predicting pan-cancer gene alterations, focal amplifications and deletions, chromosomal arm gains and losses, and whole-genome duplications [23]. DL models have also been used to detect important indicators, such as mutational footprints and tumor mutational burden (TMB) status, to inform checkpoint immunotherapy [24].
With the application of DL algorithms, data can be automatically extracted from medical records in order to create models capable of effectively predicting the chances of recurrence of tumor along with the reactions of patients to therapies. These findings allow physicians to deliver more accurate and appropriate therapies [1].

Although the introduction of AI into oncology research is quite active, further investigation of DL models specifically is needed for broader implementation. The generalizability of its applications, the interpretability of algorithms, data availability, and medical ethics are hindrances to physicians' willingness to accept clinically deployed DL models. The performance of DL models tends to deteriorate when they are used at different hospitals, owing to significant heterogeneity in the medical data held by different institutions; consequently, external validation is required to confirm that prediction models perform satisfactorily. Additionally, DL's disproportionately large number of parameters raises the possibility of over-fitting and limits its capacity to generalize across populations. To make precise judgments in a clinical environment, oncologists must consider a range of information, including clinical symptoms, laboratory results, imaging data, and epidemiological histories. Yet most current research uses only a single data type (for example, imaging) as model input. Future investigations will need to build multimodal DL models that integrate these data alongside imaging in order to replicate real clinical scenarios. DL is a recent arrival in oncology and is advancing quickly; DL approaches have a great deal of opportunity to improve the efficacy and accuracy of cancer detection and therapy as high-quality medical data and algorithms are developed.

The FDA's favorable stance toward AI medical devices further increases the likelihood of DL's practical application in cancer care [1, 2]. Future studies should concentrate on reproducibility and interpretability to increase the applicability of DL approaches and achieve clinical translation.

Conclusion

Artificial intelligence (AI) plays a crucial role in advancing cancer care, particularly in early detection. Machine learning (ML) and Deep learning (DL) are fundamental modes within AI, with DL using neural networks to analyze diverse input data. In the realm of cancer research, DL's application, like Sybil, a lung cancer prediction model, showcases impressive accuracy in risk assessment. Various models, such as Convolutional Neural Networks (CNNs), CAD colonoscopy, Paige Prostate Alpha for prostate cancer, and CuP-AI-Dx for cancers of unknown primary (CUP), demonstrate the versatility of AI in different diagnostic scenarios. These models leverage data from diverse sources like medical imaging, histopathology, and genetic profiles, offering improved accuracy and efficiency.

Abbreviations

AI: Artificial intelligence

CNNs: Convolutional neural networks

CADe: Computer-aided detection

CUP: Cancer of unknown primary

CT: Computed tomography

DNNs: Deep neural networks

ML: Machine learning

Supporting information: None

Ethical Considerations: Not applicable

Acknowledgments: None

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author contribution statement: All authors (DZH, AZ, AA) contributed equally and attest they meet the ICMJE criteria for authorship and gave final approval for submission.

Data availability statement: Data included in article/supp. material/referenced in article.

Additional information: No additional information is available for this paper.

Declaration of competing interest: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Clinical Trial: Not applicable

Consent for publication: Not applicable

References

[1] Chen Z-H, Lin L, Wu C-F, Li C-F, Xu R-H, Sun Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun. 2021;41:1100–1115. [Crossref][PubMed][Google Scholar]

[2] Al-Azri MH. Delay in Cancer Diagnosis: Causes and Possible Solutions. Oman Med J. 2016;31(5):325–326. [Crossref][PubMed][Google Scholar]

[3] Kumar Y, Gupta S, Singla R, Hu YC. A Systematic Review of Artificial Intelligence Techniques in Cancer Prediction and Diagnosis. Arch Computat Methods Eng. 2022;29(4):2043–2070. [Crossref][PubMed][Google Scholar]

[4] Joseph J, LePage EM, Cheney CP, Pawa R. Artificial intelligence in colonoscopy. World J Gastroenterol. 2021;27(29):4802–4817. [Crossref][PubMed][Google Scholar]

[5] Zhao Y, Pan Z, Namburi S, Pattison A, Posner A, Balachander S et al. CUP-AI-Dx: A tool for inferring cancer tissue of origin and molecular subtype using RNA gene-expression data and artificial intelligence. EBioMedicine. 2020;61:103030. [Crossref][PubMed][Google Scholar]

[6] Mikhael PG, Wohlwend J, Yala A, Karstens L, Xiang J et al. Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography. J Clin Oncol. 2023;41(12):2191–2200. [Crossref][PubMed][Google Scholar]

[7] Ehteshami Bejnordi B, Veta M, Johannes van Diest P, van Ginneken B, Karssemeijer N, Litjens G et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA. 2017;318(22):2199–2210. [Crossref][PubMed][Google Scholar]

[8] Kulkarni PM, Robinson EJ, Pradhan JS, Gartrell-Corrado RD, Rohr BR, Trager MH et al. Deep Learning Based on Standard H&E Images of Primary Melanoma Tumors Identifies Patients at Risk for Visceral Recurrence and Death. Clin Cancer Res. 2020;26:1126–1134. [Crossref][PubMed][Google Scholar]

[9] Liu WN, Zhang YY, Bian XQ, Wang LJ, Yang Q, Zhang XD et al. Study on detection rate of polyps and adenomas in artificial-intelligence-aided colonoscopy. Saudi J Gastroenterol. 2020;26(1):13–19. [Crossref][PubMed][Google Scholar]

[10] Milluzzo SM, Cesaro P, Grazioli LM, Olivari N, Spada C. Artificial Intelligence in Lower Gastrointestinal Endoscopy: The Current Status and Future Perspective. Clin Endosc. 2021;54(3):329–339. [Crossref][PubMed][Google Scholar]

[11] Hassan C, Afshinnekoo E, Li S, Wu S, Mason CE. Genetic and epigenetic heterogeneity and the impact on cancer relapse. Exp Hematol. 2017;54:26–30. [Crossref][PubMed][Google Scholar]

[12] Raciti P, Sue J, Ceballos R, Godrich R, Kunz JD, Kapur S et al. Novel artificial intelligence system increases the detection of prostate cancer in whole slide images of core needle biopsies. Mod Pathol. 2020;33(10):2058–2066. [Crossref][PubMed][Google Scholar]

[13] Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med. 2019;25:1301–1309. [Crossref][PubMed][Google Scholar]

[14] Parwani AV, Patel A, Zhou M, Cheville JC, Tizhoosh H, Humphrey P et al. An update on computational pathology tools for genitourinary pathology practice: A review paper from the Genitourinary Pathology Society (GUPS). J Pathol Inform. 2022;14:100177. [Crossref][PubMed][Google Scholar]

[15] Zhao Y, Pan Z, Namburi S, Pattison A, Posner A, Balachander S et al. CUP-AI-Dx: A tool for inferring cancer tissue of origin and molecular subtype using RNA gene-expression data and artificial intelligence. EBioMedicine. 2020;61:103030. [Crossref][PubMed][Google Scholar]

[16] Gao R, Tang Y, Xu K, Kammer MN, Antic SL, Deppen S et al. Deep Multi-path Network Integrating Incomplete Biomarker and Chest CT Data for Evaluating Lung Cancer Risk. Proceedings of SPIE-the International Society for Optical Engineering. 2021;11596:115961E. [Crossref][PubMed][Google Scholar]

[17] Sebastian AM, Peter D. Artificial Intelligence in Cancer Research: Trends, Challenges and Future Directions. Life (Basel). 2022;12(12):1991. [Crossref][PubMed][Google Scholar]

[18] Swiecicki A, Konz N, Buda M, Mazurowski MA. A generative adversarial network-based abnormality detection using only normal images for model training with application to digital breast tomosynthesis. Sci Rep. 2021;11(1):10276. [Crossref][PubMed][Google Scholar]

[19] Koshino K, Werner RA, Pomper MG, Bundschuh RA, Toriumi F, Higuchi T, Rowe SP. Narrative review of generative adversarial networks in medical and molecular imaging. Ann Transl Med. 2021;9(9):821. [Crossref][PubMed][Google Scholar]

[20] Hussain S, Mubeen I, Ullah N, Shah SSUD, Khan BA, Zahoor M, Ullah R, Khan FA, Sultan MA. Modern Diagnostic Imaging Technique Applications and Risk Factors in the Medical Field: A Review. Biomed Res Int. 2022;2022:5164970. [Crossref][PubMed][Google Scholar]

[21] Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging. 2020;6(11):121. [Crossref][PubMed][Google Scholar]

[22] Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24:1559–1567. [Crossref][PubMed][Google Scholar]

[23] Zack TI, Schumacher SE, Carter SL, Cherniack AD, Saksena G, Tabak B et al. Pan-cancer patterns of somatic copy number alteration. Nat Genet. 2013;45(10):1134–1140. [Crossref][PubMed][Google Scholar]

[24] Klempner SJ, Fabrizio D, Bane S, Reinhart M, Peoples T, Ali SM et al. Tumor Mutational Burden as a Predictive Biomarker for Response to Immune Checkpoint Inhibitors: A Review of Current Evidence. Oncologist. 2020;25(1):e147–e159. [Crossref][PubMed][Google Scholar]

Disclaimer / Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of Journals and/or the editor(s). Journals and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.