
Studying The Big Bang With Artificial Intelligence – Eurasia Review

It could hardly be more complicated: tiny particles whir around wildly with extremely high energy, countless interactions occur in the tangled mess of quantum particles, and this results in a state of matter known as quark-gluon plasma. Immediately after the Big Bang, the entire universe was in this state; today it is produced by high-energy atomic nucleus collisions, for example at CERN.

Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Therefore, using artificial intelligence or machine learning for this purpose seems like an obvious idea. Ordinary machine-learning algorithms, however, are not suitable for this task. The mathematical properties of particle physics require a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be successfully used for these challenging tasks in particle physics.

"Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time," says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. "Even the largest supercomputers in the world are overwhelmed by this. It would therefore be desirable not to calculate every detail precisely, but to recognise and predict certain properties of the plasma with the help of artificial intelligence."

Therefore, neural networks are used, similar to those used for image recognition: artificial neurons are linked together on the computer in a similar way to neurons in the brain, and this creates a network that can recognise, for example, whether or not a cat is visible in a certain picture.

When applying this technique to the quark-gluon plasma, however, there is a serious problem: the quantum fields used to mathematically describe the particles and the forces between them can be represented in various different ways. "This is referred to as gauge symmetries," says Ipp. "The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example if I use the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It's similar with quantum theories, except that there the permitted changes are mathematically much more complicated." Mathematical objects that look completely different at first glance may in fact describe the same physical state.

"If you don't take these gauge symmetries into account, you can't meaningfully interpret the results of the computer simulations," says Dr. David I. Müller. Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. "It is much better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account, so that different representations of the same physical state also produce the same signals in the neural network," says Müller. "That is exactly what we have now succeeded in doing: we have developed completely new network layers that automatically take gauge invariance into account." In some test applications, it was shown that these networks can actually learn much better how to deal with the simulation data of the quark-gluon plasma.
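The published approach builds specialised lattice-gauge-equivariant network layers; the details go beyond a news article, but the underlying idea can be illustrated with a toy example. The sketch below is an illustration only, not the TU Wien code: on a small two-dimensional U(1) lattice gauge field, the raw link variables change completely under a random gauge transformation, while the elementary Wilson loops (plaquettes) from which gauge-respecting networks can build their features do not.

```python
# Toy illustration of a gauge-invariant input feature on a 2D U(1) lattice.
# Didactic sketch only; not the lattice-gauge-equivariant network from the TU Wien work.
import numpy as np

rng = np.random.default_rng(0)
L = 8  # lattice size

# Link variables U_mu(n) = exp(i * theta), one per site and direction (mu = 0: x, mu = 1: y).
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))
U = np.exp(1j * theta)

def plaquette(U):
    """Elementary Wilson loop U_x(n) U_y(n+x) U_x(n+y)* U_y(n)*, gauge invariant for U(1)."""
    Ux, Uy = U[0], U[1]
    return (Ux
            * np.roll(Uy, shift=-1, axis=0)            # U_y one step along x
            * np.conj(np.roll(Ux, shift=-1, axis=1))   # U_x one step along y, reversed
            * np.conj(Uy))                             # U_y at the site, reversed

def gauge_transform(U, alpha):
    """U_mu(n) -> g(n) U_mu(n) g(n+mu)*, with g(n) = exp(i alpha(n))."""
    g = np.exp(1j * alpha)
    Ux = g * U[0] * np.conj(np.roll(g, shift=-1, axis=0))
    Uy = g * U[1] * np.conj(np.roll(g, shift=-1, axis=1))
    return np.stack([Ux, Uy])

alpha = rng.uniform(0, 2 * np.pi, size=(L, L))
U_transformed = gauge_transform(U, alpha)

# The raw links change completely, but the plaquettes (possible network inputs) do not.
print(np.allclose(U, U_transformed))                        # False
print(np.allclose(plaquette(U), plaquette(U_transformed)))  # True
```

Feeding a network only quantities that are unchanged (or that transform in a controlled way) under gauge transformations guarantees by construction that different representations of the same physical state produce the same output.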

"With such neural networks, it becomes possible to make predictions about the system, for example, to estimate what the quark-gluon plasma will look like at a later point in time, without really having to calculate every single intermediate time step in detail," says Andreas Ipp. "And at the same time, it is ensured that the system only produces results that do not contradict gauge symmetry, in other words, results which make sense at least in principle."

It will be some time before it is possible to fully simulate collisions of atomic nuclei at CERN with such methods, but the new type of neural network provides a completely new and promising tool for describing physical phenomena for which all other computational methods may never be powerful enough.


Artificial intelligence in the management of NPC | CMAR – Dove Medical Press

Introduction

According to the International Agency for Research on Cancer, nasopharyngeal carcinoma (NPC) is the twenty-third most common cancer worldwide, with 133,354 new cases and 80,008 deaths globally in 2020.1,2 Although it is not uncommon, it has a distinct geographical distribution and is most prevalent in Eastern and South-Eastern Asia, which accounts for 76.9% of global cases; almost half of new cases occur in China.2 Because of its late symptoms and anatomical location, NPC is difficult to detect in the early stages. Radiotherapy is the primary treatment modality, and concomitant/adjunctive chemotherapy is often needed for advanced locoregional disease.3 Furthermore, many nearby organs-at-risk (OARs) are sensitive to radiation, including the salivary glands, brainstem, optic nerves, temporal lobes and cochlea.4 Hence, it is of interest whether artificial intelligence (AI) can help improve the diagnosis, treatment process and prediction of outcomes for NPC.

With the advances of AI over the past decade, it has become pervasive in many industries, playing both major and minor roles. This includes cancer treatment, where medical professionals are searching for ways to use it to improve treatment quality. AI refers to any method that allows algorithms to mimic intelligent behavior. It has two subsets: machine learning (ML) and deep learning (DL). ML uses statistical methods to allow an algorithm to learn and improve its performance; examples include the random forest and the support vector machine. The artificial neural network (ANN) is an example of ML and is also a core component of DL.5 DL can be defined as a learning algorithm that automatically updates its parameters through multiple layers of an ANN. Deep neural networks such as the convolutional neural network (CNN) and the recurrent neural network are all DL architectures.
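As a purely illustrative aside, not drawn from any of the reviewed studies, the distinction can be made concrete by training a classical ML model and a small multi-layer neural network on the same synthetic tabular data; the dataset and settings below are placeholders.

```python
# Hypothetical contrast between a classical ML model (random forest) and a small ANN
# with two hidden layers (the kind of architecture DL stacks more deeply), on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
ann = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X_train, y_train)

print("random forest accuracy:", rf.score(X_test, y_test))
print("neural network accuracy:", ann.score(X_test, y_test))
```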

Besides histological, clinical and demographic information, physicians must integrate a wide range of data, from genomics, proteomics and immunohistochemistry to imaging, when developing personalized treatment plans for patients. This has led to interest in developing computational approaches that improve medical management by providing insights to enhance patient outcomes and workflow throughout a patient's journey.

Given the increased use of AI in cancer care, in this systematic literature review, papers on AI applications for NPC management were compiled and studied in order to provide an overview of current trends. Furthermore, possible limitations discussed within the articles were explored.

A systematic literature search was conducted to retrieve all studies that used AI or its subfields in NPC management. Keywords were developed and combined using Boolean logic to produce the resulting search phrase: (artificial intelligence OR machine learning OR deep learning OR neural network) AND (nasopharyngeal carcinoma OR nasopharyngeal cancer). Using this search phrase, a search for research articles from the 15 years up to March 2021 was performed on PubMed, Scopus and Embase. The results from the three databases were consolidated, and duplicates were removed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance was followed where possible, and the PRISMA flow diagram and checklist were used as guidelines covering the key aspects of a systematic literature review.6
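For illustration only, the same Boolean phrase could be issued against PubMed programmatically, for example through NCBI's E-utilities via Biopython; this is not the authors' actual retrieval code, the contact e-mail and result limit are placeholders, and Scopus and Embase use their own interfaces.

```python
# Hypothetical sketch of running the review's Boolean search phrase against PubMed
# via NCBI E-utilities (Biopython).
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI asks for a contact address

query = (
    '("artificial intelligence" OR "machine learning" OR "deep learning" OR "neural network") '
    'AND ("nasopharyngeal carcinoma" OR "nasopharyngeal cancer")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print("hits:", record["Count"])
print("first PMIDs:", record["IdList"][:10])
```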

Exclusion and inclusion criteria were determined to assess the eligibility of the retrieved publications. The articles were first screened to remove those that fell within the exclusion criteria: book chapters, conference reports, literature reviews, editorials, letters to the editor and case reports. In addition, articles in languages other than English or Chinese and papers with inaccessible full texts were also excluded.

The remaining studies were then filtered by reading the title and abstract to remove any articles that did not meet the inclusion criteria (application of AI or one of its subfields, and experiments on NPC). A full-text review was then performed to confirm the eligibility of the articles against both criteria. The process was conducted by two independent reviewers (B.B & H.C.).

Essential information from each article was extracted and placed in a data extraction table (Table 1). This included the author(s), year of publication, country, sample type, sample size, AI algorithm used, application type, study aim, performance metrics reported, results, conclusion, and limitations. The AI model with the best performance metrics from each study was selected and included. Moreover, for models trained on a training cohort, the performance results were taken from evaluation on the test cohort rather than the training cohort, to avoid the inflated results that arise when models are trained and tested on the same dataset (overfitting).

The selected articles were assessed for risk of bias and applicability using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool in Table 2.7 Studies with more than one section rated high or unclear were eliminated. A further quality assessment was also completed to ensure that the papers met the required standard. This was performed using the guidelines for developing and reporting ML predictive models from Luo et al and Alabi et al (Table 3).8,9 The guidelines were summarised, and a mark was given for each guideline topic followed. The threshold was set at half of the maximum marks, and the scores are presented in Table 4.

Table 2 Quality Assessment via the QUADAS-2 Tool

Table 3 Quality Assessment Guidelines
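A minimal sketch of how such a checklist score and pass threshold can be tallied is given below; the guideline items listed are hypothetical stand-ins, not the actual items from Luo et al or Alabi et al.

```python
# Hypothetical tally of guideline adherence: one mark per topic followed,
# pass if the total reaches half of the maximum marks (the threshold described above).
checklist = {
    "state prediction problem": True,
    "describe data source": True,
    "report preprocessing": False,
    "describe model and tuning": True,
    "report validation strategy": True,
    "report performance metrics": True,
    "discuss limitations": False,
}

score = sum(checklist.values())
threshold = len(checklist) / 2
print(f"score {score}/{len(checklist)} -> {'pass' if score >= threshold else 'fail'}")
```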

The selection process was performed using the PRISMA flow diagram (Figure 1). In total, 304 papers were retrieved from the three databases. After 148 duplicates were removed, one inaccessible article was rejected. Papers that did not meet the inclusion criteria (n=59) or that fell within the exclusion criteria (n=20) were also filtered out. Moreover, two additional studies found in literature reviews were included, after one further candidate was removed for being a duplicate and another for falling within the exclusion criteria. Finally, 78 papers were assessed for quality (Figure 1).

Figure 1 PRISMA flow diagram 2020.

Notes: Adapted from Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. Creative Commons license and disclaimer available from: http://creativecommons.org/licenses/by/4.0/legalcode.6

Eighteen papers failed due to having more than one section with a high or unclear rating, leaving 60 studies for further evaluation. The QUADAS-2 tool showed that 48.3% of the articles had an overall low risk of bias, while 98.3% had low concern regarding applicability (Table 2).

An additional evaluation was performed based on Table 3, which was adapted from the guidelines by Luo et al and the modified version from Alabi et al.8,9 Of the 60 relevant studies, 52 scored greater than 70% (Table 4). It should also be noted that 23 papers included the evaluation criteria items but did not fully follow the structure of the proposed guidelines.10–32 However, this only affects the ease of reading and extracting information from the articles, not their content and quality.

The characteristics of the 60 articles finally included in the current study are shown in Table 1. The articles were published in either English (n=57)10–66 or Chinese (n=3);67–69 three studies examined sites other than the nasopharynx.10,17,34

Looking at the origins of the studies, 45 were published in Asia, while Morocco and France contributed one study each. Furthermore, 13 papers were collaborative work from multiple countries. The majority of the studies were from the endemic regions.

The articles used various types of data to train the models. 66.7% (n=40) used only imaging data such as magnetic resonance imaging, computed tomography or endoscopic images.15,16,18,19,21–24,26–28,30,32,34,37–39,41–43,45–56,58–63,67,69 There were also four studies that included clinicopathological data as well as images for training models,25,31,36,40 while three other studies developed models using images, clinicopathological data, and plasma Epstein-Barr virus (EBV) DNA.29,33,35 Furthermore, four studies used treatment plans,64–66,68 while protein and microRNA expression data were each used by one study.10,44 There were also four articles that trained with both clinicopathological and plasma EBV DNA/serology data,12–14,17 while one article trained its model with clinicopathological and dosimetric data.57 Risk factors (n=2), such as demographic, medical history, familial cancer history, dietary, social and environmental factors, were also used to develop AI models.11,20

The studies could be categorized into four domains, which were auto-contouring (n=21),15,16,18,22,24,30–32,45–55,67,69 diagnosis (n=17),10,15,16,23,26,27,49,52,54,56–63 prognosis (n=20)12–14,17,19,25,28,29,33–44 and miscellaneous applications (n=7),11,20,21,64–66,68 which included risk factor identification, image registration and radiotherapy planning (Figure 2A). Five studies examined both diagnosis and auto-contouring simultaneously.15,16,49,52,54

Figure 2 Comparison of studies on AI application for NPC management. (A) Application types of AI and its subfields on NPC; (B) Main performance metrics of application types on NPC.

Abbreviations: AI, artificial intelligence; AUC, area under the receiver operating characteristic curve; DSC, dice similarity coefficient; ASSD, average symmetric surface distance; NPC, nasopharyngeal carcinoma.

Notes: (a) More than one AI subfield (artificial intelligence, machine learning and deep learning) was used in the same study. (b) Auto-contouring and diagnosis accuracy values were found in the same study.54

Analysis of the purpose of each application showed that auto-contouring is the only category in which DL is the most heavily used technique (19 out of 22 instances). For the remaining categories (NPC diagnosis, prognosis and miscellaneous applications), ML is the most common technique, accounting for more than half of the publications in each category (Figure 2A). In addition, the studies applying DL models selected in this literature review were published from 2017 to 2021, when there was a heavier focus on experimenting with DL. It was observed that the majority of the papers applying DL models used various forms of CNN (n=30),15,18,19,21–24,28–34,36,45–53,55,56,60,65,67,69 while the main ML method used was the ANN (n=12).13,16,26,42–44,54,61–64,68

The primary metrics reported were the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, dice similarity coefficient (DSC) and average symmetric surface distance (ASSD), as shown in Figure 2B.
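For readers less familiar with these metrics, the toy sketch below shows how they are typically computed, with AUC, sensitivity and specificity for a classification task and DSC and ASSD for a pair of segmentation masks; the data are synthetic and the implementation is illustrative rather than taken from any of the reviewed studies.

```python
# Illustrative computation of the review's main metrics on toy data.
import numpy as np
from scipy import ndimage
from sklearn.metrics import confusion_matrix, roc_auc_score

# --- Classification metrics (diagnosis / prognosis studies) ---
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.2, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", roc_auc_score(y_true, y_score))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))

# --- Segmentation metrics (auto-contouring studies) ---
def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def assd(a, b, spacing=1.0):
    """Average symmetric surface distance between two binary masks (pixels * spacing)."""
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b) * spacing
    dist_to_a = ndimage.distance_transform_edt(~surf_a) * spacing
    return np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]).mean()

truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True           # "manual" contour
pred = np.zeros((64, 64), dtype=bool)
pred[22:42, 21:41] = True            # "automatic" contour

print("DSC :", dice(pred, truth))
print("ASSD:", assd(pred, truth))
```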

AUC was used to evaluate the models' capabilities in 25 papers, with the majority measuring prognostic (n=13)12–14,19,28,33–35,37,39,40,42,44 and diagnostic abilities (n=10).15,23,26,27,49,56–60 Similarly, accuracy was the parameter most frequently reported in the diagnosis and prognosis applications: 11 and 5 out of 20 articles, respectively.10,12,15,26–28,35,43,44,49,54,56,60–63 Sensitivity was the most commonly studied parameter for diagnostic performance: 15 out of 23 papers.10,15,16,23,26,27,49,52,54,56,59–63 Specificity was only reported for prognosis (n=7)12,14,28,34,39,40,43 and diagnosis (n=15).10,15,16,23,26,27,49,52,54,56,59–63 In addition, the DSC (n=20)15,18,22,24,30–32,45–53,55,65,67,69 and ASSD (n=10)18,22,24,31,32,45,46,48,51,69 were the primary metrics reported in studies on auto-contouring (Figure 2B).

Performance metrics with five or more instances per application type are presented as boxplots (Figure 3). The median AUC, accuracy, sensitivity and specificity for prognosis were 0.8000, 0.8300, 0.8003 and 0.8070, respectively, while their ranges were 0.6330–0.9510, 0.7559–0.9090, 0.3440–0.9200 and 0.5200–1.000, respectively. For diagnosis, the median AUC was 0.9300, while the median accuracy was 0.9150. In addition, the median sensitivity and specificity were 0.9307 and 0.9413, respectively. The ranges for diagnosis AUC, accuracy, sensitivity and specificity were 0.6900–0.9900, 0.6500–0.9777, 0.0215–1.000 and 0.8000–1.000, respectively. The median DSC value for auto-contouring was 0.7530, while the range was 0.6200–0.9340. Furthermore, the median ASSD for auto-contouring was 1.7350 mm, and the minimum and maximum values found in the studies were 0.5330 mm and 3.4000 mm, respectively.

Figure 3 Performance metric boxplots of AI application types on NPC. (A) Prognosis and diagnosis: accuracy, AUC, sensitivity and specificity metric; (B) Auto-contouring: DSC metric; (C) Auto-contouring: ASSD metric.

Abbreviations: AI, artificial intelligence; ASSD, average symmetric surface distance; AUC, area under the receiver operating characteristic curve; DSC, dice similarity coefficient; NPC, nasopharyngeal carcinoma.

Publications on auto-contouring experimented with segmenting gross tumor volumes, clinical target volumes, OARs and primary tumor volumes. The target most often delineated was the gross tumor volume (n=7),30,48,49,51,53,55,69 followed by the OARs (n=3).50,52,67 The clinical target volume and the primary tumor volume were studied in two and one articles, respectively.46,55,56 However, nine articles did not mention the specific target volume contoured.15,16,18,22,24,31,32,47,54 Two out of three articles reported that the DSC for delineating the optic nerves was substantially lower than for the other OARs.52,67 In the remaining paper, although segmentation of the optic nerve was not the worst, the three OARs it highlighted, which included the optic nerves, were specifically more challenging to contour.50 This is because of the low soft-tissue contrast in computed tomography images and their diverse morphological characteristics. When analyzing the OARs, automatic delineation of the eyes yielded the best DSC. Furthermore, apart from the spinal cord, optic nerve and optic chiasm, the AI models achieved DSC values greater than 0.8 when contouring OARs.50,52,67

As for the detection of NPC, six papers compared the performance of AI and humans. Two of them found that the AI had better diagnostic capabilities than humans (oncologists and experienced radiologists),15,49 while another two reported that the AI performed similarly to ear, nose and throat specialists.16,62 However, the last two papers found that the comparison depends on the experience of the clinician: senior-level clinicians performed better than the AI, while junior-level clinicians performed worse.23,60 This is because the variation in possible sizes, shapes, locations and image intensities of NPC makes the diagnosis difficult to determine. These factors are particularly challenging for clinicians with less experience, and the findings suggest that AI diagnostic tools could support junior-level clinicians.

On the other hand, within the 17 papers examining diagnostic applications of AI, three articles analyzed the diagnosis of radiation-induced injury.27,57,58 Two of these were concerned with radiation-induced temporal lobe injury,57,58 while the remaining one predicted the fibrosis level of neck muscles after radiotherapy.27 It was suggested that, through early detection and prediction of radiation-induced injuries, preventive measures could be taken to minimize the side effects.

For studies on NPC prognosis, 11 out of 20 publications focused on predicting treatment outcomes, with the majority including disease-free survival as one of the study objectives.12,13,17,19,29,33,36,39–42 The rest studied treatment response prediction (n=2),35,43 prediction of patients' risk of survival (n=5),14,25,37,38,44 and T-staging prediction and prediction of distant metastasis (n=2).28,34 This demonstrates the versatility of AI across different functions. The performances of the models are reported in Table 1, and the main metric analyzed was the AUC, with 13 out of 25 articles (Figure 2B).

In addition to the above aspects, AI was also used for risk factor identification (n=2),11,20 image registration (n=1)21 and dose/dose-volume histogram (DVH) distribution prediction (n=4).64–66,68 In particular, dose/DVH distribution prediction was frequently used for treatment planning. A better understanding of the doses delivered to the target and OARs can help clinicians produce more individualized treatment plans with better consistency and a shorter planning duration. However, further development is required to reach plan qualities similar to those created by people: one paper's model produced plans of the same quality as manual planning by an experienced physicist,64 but another study using a different model was unable to achieve a plan quality comparable to that designed by even a junior physicist.68

As evident in this systematic review, there is exponential growth in interest in applying AI to the clinical management of NPC. A large proportion of the articles collected were published from 2019 to 2021 (n=45), compared with 2010 to 2018 (n=15).

A heavier focus is also placed on specific subfields of AI, namely ML and DL: there are only three reports on general AI, compared with 31 studies on ML and 37 on DL. The choice of AI subfield sometimes depends on the task. For example, 86% of the papers on NPC auto-contouring used DL (n=19), whereas the other applications mostly used ML, with the techniques more evenly distributed (Figure 2A). The reason for this marked difference in the type of AI used for auto-contouring may be the capability of the algorithms and the nature of the data. The medical images acquired have many factors affecting auto-contouring quality, including varying tumour sizes and shapes, image resolution, contrast between regions, noise, and a lack of consistency when data are acquired at different institutions.70 Because of these challenges, ML-based algorithms have difficulty performing automated segmentation of NPC, as time-consuming image processing is required before training. Furthermore, handcrafted features are necessary to precisely contour each organ or tumour, as there are significant variations in size and shape for NPC. DL does not have this issue, as it can process the raw data directly without the need for handcrafted features.70
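To make the contrast concrete, the following is a deliberately tiny, hypothetical encoder-decoder CNN for binary contouring; unlike an ML pipeline built on handcrafted features, it consumes the raw image tensor directly. It is a sketch only, far smaller than the U-Net and DenseNet variants used in the reviewed studies.

```python
# Minimal, hypothetical encoder-decoder CNN for binary segmentation (auto-contouring).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # upsample by 2
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),                      # one logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
mri_slice = torch.randn(1, 1, 128, 128)       # a raw image slice, no handcrafted features
logits = model(mri_slice)
mask = torch.sigmoid(logits) > 0.5            # predicted binary contour
print(logits.shape, mask.shape)               # torch.Size([1, 1, 128, 128]) twice
```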

The ANN is the backbone of DL, as DL algorithms are ANNs with multiple (two or more) hidden layers. In the development of AI applications for NPC, 80% of the studied articles incorporated either an ANN or a DL technique in their models,12,13,15–19,21–26,28–34,36,38,39,42–56,60–69 because neural networks are generally better for image recognition. However, one study cautioned that ANNs were not necessarily better than other ML models in NPC identification.61 Hence, even though DL-based models and ANNs should be considered the primary development focus, other ML techniques should still not be neglected.

Based on the literature collected, the integration of AI applications in each category is beneficial to the practitioner. Automated contouring by AI not only makes contouring less time-consuming for clinicians,46,51,53,64 it can also help to improve the user's accuracy.51 Similarly, AI can be used to reduce the treatment planning time for radiotherapy,64 thus improving the efficiency and effectiveness of the radiotherapy planning process.

For some NPC studies, additional features from images and parameters were extracted to further improve the performance of the models. However, it should be noted that not all features are suitable, as some features have a more significant impact on a model's performance than others.40,57,58,61 Therefore, feature selection should be considered where possible.

In its current state, AI cannot yet replace humans in the most complex and time-consuming tasks. Multiple articles that compared the performance of their developed models with that of medical professionals showed conflicting results, because the experience of the clinician is an important factor in the comparison. The models developed by Chuang et al and Diao et al performed better than junior-level professionals, but worse than more experienced clinicians.23,60 One article even showed that an AI model had a lower capability than a junior physicist.68 Furthermore, the quality of the training data and the experience of the AI developers are critical.

The review revealed that AI in its current state still has several limitations. The first concern is the uncertainty regarding the generalizability of the models, because the datasets of many studies were retrospective and single-institutional in nature.15,19,28,33,35–38,41,48,57–59 Such a dataset may not represent the true population and may only represent a population subgroup or a region, which reduces the applicability of the models and affects their performance when applied to other datasets. Another reason is the difference in scan protocols between institutions: variations in tissue contrast or field of view may affect performance because the model was not trained under the same conditions.45,56 Therefore, consistency of scan protocols among different institutions is important to facilitate AI model training and validation.

Another limitation was the small amount of data used to train the models: 33% (n=20) of the articles chosen had 150 or fewer total samples for both training and testing the model. This is not only because the articles were usually based on single-centre data, but also because NPC is less common than other cancers. This particularly affects DL-based models, as they rely on much larger datasets than ML models to achieve their potential; overfitting is likely when only limited data are available, so data augmentation is often used to increase the dataset size. In addition, some studies had patient selection bias, while others raised concerns about not incorporating multi-modality inputs into the training model (Table 1).
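As a simple illustration of the augmentation mentioned above, basic geometric transforms can multiply each labelled case several-fold; the transforms below are generic examples, not the specific augmentation strategies of the reviewed papers.

```python
# Hypothetical data augmentation for a small imaging dataset: simple flips and
# 90-degree rotations multiply each labelled sample several-fold.
import numpy as np

def augment(image, mask):
    """Yield geometric variants of an image and its contour mask, transformed together."""
    for k in range(4):                            # 0, 90, 180, 270 degree rotations
        img_r = np.rot90(image, k)
        msk_r = np.rot90(mask, k)
        yield img_r, msk_r
        yield np.fliplr(img_r), np.fliplr(msk_r)  # plus a horizontal flip of each

image = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 50:90] = True

augmented = list(augment(image, mask))
print(len(augmented))  # 8 variants from one original sample
```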

Future work should address these issues when developing new models. Possible solutions include incorporating other datasets or cooperating with other institutions for external validation or to expand the dataset, both of which were lacking in most of the analysed papers in this review. The former suggestion can boost generalizability and avoid patient selection bias, while the latter can increase the capability of AI models by providing more training samples. Other methods of expanding datasets have also been explored, one of which is the use of big data, which can be done at a much larger scale. Big data can be defined as the vast data generated by technology and the Internet of Things, allowing easier access to information.71 In the healthcare sector, it will allow easier access to an abundance of medical data, which will facilitate AI model training. However, with such large collections of data, privacy protection becomes a serious challenge. Therefore, future studies are required to investigate how to implement it.

The performance of the AI models could also be improved by increasing the amount of data and diversifying it with the data augmentation techniques that were performed in some of the studies. However, it should be noted that with an increase in training samples, more data labelling will be required, making the process more time-consuming. Hence, one study proposed the use of continual learning, which it found boosted the model's performance while reducing the labelling effort.47 However, continual learning is susceptible to catastrophic forgetting, which is a long-standing and highly challenging issue.72 Thus, further investigation into methods to resolve this problem would be required to make it easier to implement in other research settings.

There are several limitations to this literature review. The metric performance results extracted from the publications were insufficient to perform a meta-analysis; hence, the insight obtained from this review is not fully comprehensive. The quality of the included studies was also not consistent, which may affect the analyses performed.

There is growing evidence that AI can be applied in various situations, particularly as a supporting tool in prognostic, diagnostic and auto-contouring applications and in providing patients with more individualized treatment plans. DL-based algorithms were found to be the most frequently used AI subfield and usually obtained good results compared with other methods. However, limited datasets and generalizability are key challenges that need to be overcome to further improve the performance and accessibility of AI models. Nevertheless, studies on AI demonstrate highly promising potential in supporting medical professionals in the management of NPC; therefore, more concerted effort towards swift development is warranted.

Dr Nabil F Saba reports personal fees from Merck, GSK, Pfizer, UpToDate, and Springer, outside the submitted work, and research funding from BMS and Exelixis. Professor Raymond KY Tsang reports non-financial support from Atos Medical Inc., outside the submitted work. The authors report no other conflicts of interest in this work.

1. Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71(3):209–249. doi:10.3322/caac.21660

2. Ferlay J, Ervik M, Lam F, et al. Global cancer observatory: cancer today; 2020. Available from: https://gco.iarc.fr/today. Accessed June 4, 2021.

3. Lee AWM, Ma BBY, Ng WT, Chan ATC. Management of nasopharyngeal carcinoma: current practice and future perspective. J Clin Oncol. 2015;33(29):3356–3364. doi:10.1200/JCO.2015.60.9347

4. Chan JW, Parvathaneni U, Yom SS. Reducing radiation-related morbidity in the treatment of nasopharyngeal carcinoma. Future Oncol. 2017;13(5):425–431. doi:10.2217/fon-2016-0410

5. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111(5):1452–1460. doi:10.1111/cas.14377

6. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi:10.1136/bmj.n71

7. Whiting PF, Rutjes AWS, Westwood ME. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–536. doi:10.7326/0003-4819-155-8-201110180-00009

8. Luo W, Phung D, Tran T, et al. Guidelines for developing and reporting machine learning predictive models in biomedical research: a multidisciplinary view. J Med Internet Res. 2016;18(12):e323. doi:10.2196/jmir.5870

9. Alabi RO, Youssef O, Pirinen M, et al. Machine learning in oral squamous cell carcinoma: current status, clinical concerns and prospects for futureA systematic review. Artif Intell Med. 2021;115:102060. doi:10.1016/j.artmed.2021.102060

10. Wang HQ, Zhu HL, Cho WCS, Yip TTC, Ngan RKC, Law SCK. Method of regulatory network that can explore protein regulations for disease classification. Artif Intell Med. 2010;48(2):119–127. doi:10.1016/j.artmed.2009.07.011

11. Aussem A, de Morais SR, Corbex M. Analysis of nasopharyngeal carcinoma risk factors with Bayesian networks. Artif Intell Med. 2012;54(1):53–62. doi:10.1016/j.artmed.2011.09.002

12. Kumdee O, Bhongmakapat T, Ritthipravat P. Prediction of nasopharyngeal carcinoma recurrence by neuro-fuzzy techniques. Fuzzy Sets Syst. 2012;203:95–111. doi:10.1016/j.fss.2012.03.004

13. Ritthipravat P, Kumdee O, Bhongmakapat T. Efficient missing data technique for prediction of nasopharyngeal carcinoma recurrence. Inf Technol J. 2013;12:1125–1133. doi:10.3923/itj.2013.1125.1133

14. Jiang R, You R, Pei X-Q, et al. Development of a ten-signature classifier using a support vector machine integrated approach to subdivide the M1 stage into M1a and M1b stages of nasopharyngeal carcinoma with synchronous metastases to better predict patients' survival. Oncotarget. 2016;7(3):3645–3657. doi:10.18632/oncotarget.6436

15. Li C, Jing B, Ke L, et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018;38(1):59. doi:10.1186/s40880-018-0325-9

16. Mohammed MA, Abd Ghani MK, Arunkumar N, Mostafa SA, Abdullah MK, Burhanuddin MA. Trainable model for segmenting and identifying nasopharyngeal carcinoma. Comput Electr Eng. 2018;71:372–387. doi:10.1016/j.compeleceng.2018.07.044

17. Jing B, Zhang T, Wang Z, et al. A deep survival analysis method based on ranking. Artif Intell Med. 2019;98:1–9. doi:10.1016/j.artmed.2019.06.001

18. Ma Z, Zhou S, Wu X, et al. Nasopharyngeal carcinoma segmentation based on enhanced convolutional neural networks using multi-modal metric learning. Phys Med Biol. 2019;64(2):025005. doi:10.1088/1361-6560/aaf5da

19. Peng H, Dong D, Fang M-J, et al. Prognostic value of deep learning PET/CT-based radiomics: potential role for future individual induction chemotherapy in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2019;25(14):4271–4279. doi:10.1158/1078-0432.CCR-18-3065

20. Rehioui H, Idrissi A. On the use of clustering algorithms in medical domain. Int J Artifi Intell. 2019;17:236.

21. Zou M, Hu J, Zhang H, et al. Rigid medical image registration using learning-based interest points and features. Comput Mater Continua. 2019;60(2):511–525. doi:10.32604/cmc.2019.05912

22. Chen H, Qi Y, Yin Y, et al. MMFNet: a multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing. 2020;394:27–40. doi:10.1016/j.neucom.2020.02.002

23. Chuang W-Y, Chang S-H, Yu W-H, et al. Successful identification of nasopharyngeal carcinoma in nasopharyngeal biopsies using deep learning. Cancers (Basel). 2020;12(2):507. doi:10.3390/cancers12020507

24. Guo F, Shi C, Li X, Wu X, Zhou J, Lv J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020;24(16):12671–12680. doi:10.1007/s00500-020-04708-y

25. Jing B, Deng Y, Zhang T, et al. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput Methods Programs Biomed. 2020;197:105684. doi:10.1016/j.cmpb.2020.105684

26. Mohammed MA, Abd Ghani MK, Arunkumar N, et al. Decision support system for nasopharyngeal carcinoma discrimination from endoscopic images using artificial neural network. J Supercomput. 2020;76(2):1086–1104. doi:10.1007/s11227-018-2587-z

27. Wang J, Liu R, Zhao Y, et al. A predictive model of radiation-related fibrosis based on the radiomic features of magnetic resonance imaging and computed tomography. Transl Cancer Res. 2020;9(8):4726–4738. doi:10.21037/tcr-20-751

28. Yang Q, Guo Y, Ou X, Wang J, Hu C. Automatic T staging using weakly supervised deep learning for nasopharyngeal carcinoma on MR images. J Magn Reson Imaging. 2020;52(4):1074–1082. doi:10.1002/jmri.27202

29. Zhong L-Z, Fang X-L, Dong D, et al. A deep learning MR-based radiomic nomogram may predict survival for nasopharyngeal carcinoma patients with stage T3N1M0. Radiother Oncol. 2020;151:1–9. doi:10.1016/j.radonc.2020.06.050

30. Bai X, Hu Y, Gong G, Yin Y, Xia Y. A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed Signal Process. 2021;64:102246. doi:10.1016/j.bspc.2020.102246

31. Cai M, Wang J, Yang Q, et al. Combining images and t-staging information to improve the automatic segmentation of nasopharyngeal carcinoma tumors in MR images. IEEE Access. 2021;9:21323–21331. doi:10.1109/ACCESS.2021.3056130

32. Tang P, Zu C, Hong M, et al. DA-DSUnet: dual attention-based dense SU-net for automatic head-and-neck tumor segmentation in MRI images. Neurocomputing. 2021;435:103–113. doi:10.1016/j.neucom.2020.12.085

33. Zhang L, Wu X, Liu J, et al. MRI-based deep-learning model for distant metastasis-free survival in locoregionally advanced nasopharyngeal carcinoma. J Magn Reson Imaging. 2021;53(1):167–178. doi:10.1002/jmri.27308

34. Wu X, Dong D, Zhang L, et al. Exploring the predictive value of additional peritumoral regions based on deep learning and radiomics: a multicenter study. Med Phys. 2021;48(5):2374–2385. doi:10.1002/mp.14767

35. Zhao L, Gong J, Xi Y, et al. MRI-based radiomics nomogram may predict the response to induction chemotherapy and survival in locally advanced nasopharyngeal carcinoma. Eur Radiol. 2020;30(1):537–546. doi:10.1007/s00330-019-06211-x

36. Zhang F, Zhong L-Z, Zhao X, et al. A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: a multi-cohort study. Ther Adv Med Oncol. 2020;12:1758835920971416. doi:10.1177/1758835920971416

37. Xie C, Du R, Ho JWK, et al. Effect of machine learning re-sampling techniques for imbalanced datasets in 18F-FDG PET-based radiomics model on prognostication performance in cohorts of head and neck cancer patients. Eur J Nucl Med Mol Imaging. 2020;47(12):2826–2835. doi:10.1007/s00259-020-04756-4

38. Liu K, Xia W, Qiang M, et al. Deep learning pathological microscopic features in endemic nasopharyngeal cancer: prognostic value and potential role for individual induction chemotherapy. Cancer Med. 2020;9(4):1298–1306. doi:10.1002/cam4.2802

39. Cui C, Wang S, Zhou J, et al. Machine learning analysis of image data based on detailed MR image reports for nasopharyngeal carcinoma prognosis. Biomed Res Int. 2020;2020:8068913. doi:10.1155/2020/8068913

40. Du R, Lee VH, Yuan H, et al. Radiomics model to predict early progression of nonmetastatic nasopharyngeal carcinoma after intensity modulation radiation therapy: a multicenter study. Radiol Artif Intell. 2019;1(4):e180075. doi:10.1148/ryai.2019180075

41. Zhang B, Tian J, Dong D, et al. Radiomics features of multiparametric MRI as novel prognostic factors in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2017;23(15):4259–4269. doi:10.1158/1078-0432.CCR-16-2910

42. Zhang B, He X, Ouyang F, et al. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma. Cancer Lett. 2017;403:21–27. doi:10.1016/j.canlet.2017.06.004

43. Liu J, Mao Y, Li Z, et al. Use of texture analysis based on contrast-enhanced MRI to predict treatment response to chemoradiotherapy in nasopharyngeal carcinoma. J Magn Reson Imaging. 2016;44(2):445–455.

44. Zhu W, Kan X, Calogero RA. Neural network cascade optimizes MicroRNA biomarker selection for nasopharyngeal cancer prognosis. PLoS One. 2014;9(10):e110537. doi:10.1371/journal.pone.0110537

45. Wong LM, Ai QYH, Mo FKF, Poon DMC, King AD. Convolutional neural network in nasopharyngeal carcinoma: how good is automatic delineation for primary tumor on a non-contrast-enhanced fat-suppressed T2-weighted MRI? Jpn J Radiol. 2021;39(6):571–579. doi:10.1007/s11604-021-01092-x

46. Xue X, Qin N, Hao X, et al. Sequential and iterative auto-segmentation of high-risk clinical target volume for radiotherapy of nasopharyngeal carcinoma in planning CT images. Front Oncol. 2020;10:1134. doi:10.3389/fonc.2020.01134

47. Men K, Chen X, Zhu J, et al. Continual improvement of nasopharyngeal carcinoma segmentation with less labeling effort. Phys Med. 2020;80:347–351. doi:10.1016/j.ejmp.2020.11.005

48. Wang X, Yang G, Zhang Y, et al. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J Radiat Res Appl Sci. 2020;13(1):568–577. doi:10.1080/16878507.2020.1795565

49. Ke L, Deng Y, Xia W, et al. Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images. Oral Oncol. 2020;110:104862. doi:10.1016/j.oraloncology.2020.104862

50. Zhong T, Huang X, Tang F, Liang S, Deng X, Zhang Y. Boosting-based cascaded convolutional neural networks for the segmentation of CT organs-at-risk in nasopharyngeal carcinoma. Med Phys. 2019;46(12):5602–5611. doi:10.1002/mp.13825

51. Lin L, Dou Q, Jin Y-M, et al. Deep learning for automated contouring of primary tumor volumes by MRI for nasopharyngeal carcinoma. Radiology. 2019;291(3):677–686. doi:10.1148/radiol.2019182012

52. Liang S, Tang F, Huang X, et al. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol. 2019;29(4):1961–1967. doi:10.1007/s00330-018-5748-9


Frightening Reality of Meta-Built Artificial Intelligence That Can Think ‘the Way We Do’ – Newsweek

Meta founder Mark Zuckerberg was met with a mixed reaction after touting his company's "exciting breakthrough" towards creating an artificial intelligence system that thinks "the way we do."

In a Facebook post on Thursday, Zuckerberg hailed the development of Meta's data2vec, a new artificial intelligence algorithm that is capable of learning about several different types of information without supervision. Zuckerberg predicted that the development could eventually be used to more effectively help people perform common tasks like cooking.

"Exciting breakthrough: Meta AI research built a system that learns from speech, vision and text without needing labeled training data," Zuckerberg wrote in the post. "People experience the world through a combination of sight, sound and words, and systems like this could one day understand the world the way we do."

"This will all eventually get built into AR glasses with an AI assistant so, for example, it could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks," he added.


Although previous artificial intelligence systems have also used self-supervised learning, they have not been able to learn more than one type of information effectively: for example, a system that is able to decipher text effectively may be unable to interpret information from images.

A blog post from Meta developers described data2vec as "the first high-performance self-supervised algorithm that works for multiple modalities." The developers noted that the data2vec performed better than multiple single-use algorithms and said that it "brings us closer to building machines that learn seamlessly about different aspects of the world around them."
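Meta describes data2vec as a student-teacher scheme in which the model predicts latent representations of the full input from a masked view, with the teacher's weights tracking the student as an exponential moving average. The sketch below is a heavily simplified, hypothetical rendering of that masked latent-prediction idea for a single generic modality; it is not Meta's code or architecture.

```python
# Heavily simplified sketch of masked latent prediction in a data2vec-style setup:
# a student sees masked input and regresses the latent targets produced by an EMA
# "teacher" that sees the full input. Not Meta's implementation.
import copy
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))
teacher = copy.deepcopy(encoder)           # teacher weights track the student via EMA
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
ema_decay = 0.999

for step in range(100):
    x = torch.randn(32, 64)                # a batch of unlabeled inputs (any modality, once embedded)
    mask = torch.rand(32, 64) < 0.5        # randomly mask half of each input

    with torch.no_grad():
        targets = teacher(x)               # latent targets from the full, unmasked input

    preds = encoder(x * (~mask))           # the student only sees the unmasked part
    loss = nn.functional.mse_loss(preds, targets)

    opt.zero_grad()
    loss.backward()
    opt.step()

    # update the teacher as an exponential moving average of the student
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), encoder.parameters()):
            pt.mul_(ema_decay).add_(ps, alpha=1 - ema_decay)
```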

"The core idea of this approach is to enable AI to learn more generally: AI should be able to learn to do many different tasks, including ones that are entirely unfamiliar," a Meta spokesperson said in a statement to Newsweek. "We want a machine to not only recognize animals shown in its training data but also adapt to recognize new creatures in an environment if we tell it what they look like."

"The hope is that algorithms like this one will lead to powerful multi-modal, self-learning AI models," the spokesperson continued. "Meaning AI that can make sense of the physical and virtual worlds around us using all the senses that humans do simultaneously."

While many responded to Zuckerberg's post by congratulating him and sharing in his excitement about the potential applications of data2vec, others responded to his post by expressing fears that the "creepy" development could lead to a "nightmarish dystopia."

"The potential benefits of this are far outweighed by the unimaginable nightmarish dystopia it will create," Facebook user Brendon Shapiro wrote in response to Zuckerberg's post. "If I forget the lemon zest, well, it'll just have to be ok."

"I've said this loads of times before.. I'm getting Skynet vibes.. we've all seen The Terminator," wrote actor Ritchi Edwards, referring to the film franchise's fictional artificial intelligence network that becomes sentient and begins to attack humanity.

"Ummmm... this kinda sounds creepy," Facebook user Rachel Miller wrote. "I am literally one of Facebooks most vocal fans... but this... an intuitive Alexa?? I don't know if I want it telling me to pick up the socks on the floor or telling me to add more cinnamon..."

Meta's new development does bring artificial intelligence closer to the goal of replicating human-like learning and thinking. However, the algorithm is still far removed from the creation of an autonomous system that could represent any kind of realistic threat to people if left unchecked.

Update 01/21/22, 6:50 p.m. ET: This article has been updated to include a statement from a Meta spokesperson.


Worldwide Open-source Intelligence Industry to 2028 – Integration of Artificial Intelligence with Open-Source Intelligence Presents Opportunities -…

DUBLIN--(BUSINESS WIRE)--The "Global Open-Source Intelligence Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Technique (Text Analytics, Video Analytics, Social Media Analytics, Geospatial Analytics, Security Analytics, and Others) and End-User" report has been added to ResearchAndMarkets.com's offering.

The global open-source intelligence market was valued at Euro 3,422.74 million in 2021 and is projected to reach Euro 10,858.24 million by 2028; it is expected to grow at a CAGR of 17.9% from 2021 to 2028.

Over the past few years, social media has gained momentum across the world. More than 80% of the global population has at least one social media account. Apart from using social media accounts for communication, individuals are using them to earn income. Social media has also become a marketing platform for both individuals and businesses. Social media networks provide several options for internet investigations because a large amount of important information is available in one location.

For example, personal information can be obtained from anywhere in the world by checking a person's Facebook profile. The information gathered from social media websites is referred to as social media intelligence (SOCMINT), a sub-branch of open-source intelligence (OSINT). Social media platforms can host both public and private posts. Without the creator's consent, private information, such as material shared only with friend circles, cannot be accessed. However, with the rise in the use of social media, content and data theft have also surged over the years.

Therefore, the adoption of social media intelligence is increasing across businesses seeking to protect the data published on their social media pages. For instance, MEDUSA offers a platform to analyze digital data from social media, the dark web, forums, and closed databases to help organizations fight serious crimes. The above-mentioned factors are expected to fuel the growth of the open-source intelligence market in the future.

The global open-source intelligence market is segmented on the basis of technique, end user, and geography. Based on technique, the market is segmented into text analytics, video analytics, social media analytics, geospatial analytics, security analytics, and others. Based on end user, the open-source intelligence market is segmented into government intelligence agencies, military and defense intelligence agencies, cyber security organizations, law enforcement agencies, private specialized business, financial services, and others. Geographically, the market is segmented into North America (the US, Canada, and Mexico), Europe (France, Germany, Italy, the UK, Russia, and the Rest of Europe), Asia Pacific (Australia, China, India, Japan, South Korea, and the Rest of APAC), Middle East & Africa (Saudi Arabia, South Africa, the UAE, and the Rest of MEA), and South America (Brazil, Argentina, and the Rest of SAM).

Reasons to buy

Market Dynamics

Drivers

Restraint

Opportunities

Future Trends

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/pjo203


Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology – MIT News

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT's Rosalind Picard and Massachusetts General Hospital's Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says "it's been very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care." Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining financial resources and transportation to attend appointments.

Pedrelli is an assistant professor in psychology at Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI technology in which, when the machine is given lots of data and examples of good behavior (i.e., what output to produce when it sees a particular input), it can get quite good at autonomously performing a task. It can also help identify patterns that are meaningful, which humans may not have been able to find as quickly without the machine's help. Using wearable devices and smartphones of study participants, Picard and Pedrelli can gather detailed data on participants' skin conductance and temperature, heart rate, activity levels, socialization, personal assessment of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can take in this tremendous amount of data and make it meaningful, identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectory and effective treatment.

"We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual's life," Picard says. "We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health."

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. Three years later, she published a book, Affective Computing, which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people's emotions.

While early research focused on determining whether machine learning could use data to identify a participant's current emotion, Picard and Pedrelli's current work at MIT's Jameel Clinic goes several steps further. They want to know whether machine learning can estimate disorder trajectory, identify changes in an individual's behavior, and provide data that informs personalized medical care.

Picard and Szymon Fedor, a research scientist in Picard's affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study.

To conduct the study, the researchers recruited MGH participants with major depressive disorder who had recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, can pick up information on biometric data, like electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and that prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms.

"We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors," Picard says. "Right now, we are quite good at predicting those labels."
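To illustrate what predicting clinician-assigned labels from passively collected signals can look like in code, here is a hypothetical sketch on synthetic weekly feature summaries; the feature names, model and data are placeholders, not the study's actual pipeline.

```python
# Hypothetical sketch: predict a weekly clinician-rated depression score from
# wearable/phone feature summaries. Synthetic data; not the MIT/MGH pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_weeks = 200  # pooled participant-weeks

features = pd.DataFrame({
    "skin_conductance_mean": rng.normal(5, 1, n_weeks),
    "heart_rate_mean": rng.normal(70, 8, n_weeks),
    "sleep_hours_mean": rng.normal(7, 1.2, n_weeks),
    "steps_per_day": rng.normal(6000, 2000, n_weeks),
    "outgoing_texts": rng.poisson(20, n_weeks),
    "minutes_at_home": rng.normal(900, 120, n_weeks),
})

# Synthetic clinician score that loosely depends on sleep and activity, plus noise.
clinician_score = (
    20 - 1.5 * (features["sleep_hours_mean"] - 7)
    - 0.001 * (features["steps_per_day"] - 6000)
    + rng.normal(0, 2, n_weeks)
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, clinician_score, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```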

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, "The question we're really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?"

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user.

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual's past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician.

If implemented incorrectly, it's possible that this type of technology could have adverse effects. If an app alerts someone that they're headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that's helpful, not harmful.

"What could be effective is a tool that could tell an individual, 'The reason you're feeling down might be that the data related to your sleep has changed, and the data related to your social activity, and you haven't had any time with your friends, your physical activity has been cut down. The recommendation is that you find a way to increase those things,'" Picard says. The team is also prioritizing data privacy and informed consent.

Artificial intelligence and machine-learning algorithms can make connections and identify patterns in large datasets that humans aren't as good at noticing, Picard says. "I think there's a real compelling case to be made for technology helping people be smarter about people."


Assistant/Associate Professor in Data Science / Artificial Intelligence job with COLLEGE OF THE NORTH ATLANTIC- QATAR | 279203 – Times Higher…

The beautiful and culturally progressive State of Qatar is home to the world-class post-secondary institution, College of the North Atlantic-Qatar (CNA-Q). Internationally recognized as a comprehensive technical college, CNA-Q is committed to high quality, student-centered education. This commitment is reflected through state-of-the-art facilities, accessible and responsive technology programs and strong partnerships with industry. CNA-Q will soon be transformed into a National University.

With more than 600 staff and over 5,000 students, CNA-Q is one of Qatar's largest post-secondary institutions, offering over 50 programs at different levels (Diploma, Bachelor's, and Master's) through student-centred learning. By providing training in a range of technical areas including Engineering Technology, Health Sciences, Industrial Trades, Business Studies and Computing & Information Technology, CNA-Q brings the State closer to the goals of Qatar National Vision 2030.

The School of Computing and Information Technology (SCIT) invites applications for positions at the level of Assistant/Associate Professor in Data Science and Artificial Intelligence. The School offers several in-demand programs, such as the Bachelor of Applied Science in Data Science and AI (AI & Data Analytics, AI & IoT), the Bachelor of Applied Science in Data & Cyber Security (Ethical Hacking, Cyber Defense, Industrial Control Systems Security, Cyber Security Policy and Governance), the Bachelor of Applied Science in Information Systems (Software Development, Mobile & Web Development, and Database Design & Administration), and the Bachelor of Applied Science in Information Technology (Computer Systems, Network Systems, and Cloud Computing & Big Data).

Duties & Responsibilities:

The primary role of faculty members at the School of Computing and IT is to promote high-quality, innovative teaching, applied research, and service. In addition, faculty members collaborate with the Head of Department, the School's Dean, and their colleagues to achieve the department's and School's missions; mentor junior colleagues and teaching assistants; and support the department and the School with administrative and academic service.

Reporting to the Department Chair, the successful candidate will be responsible for the development, delivery and evaluation of a broad range of courses within Data Science and Artificial Intelligence. Particular areas of interest include Machine Learning, Deep Learning, Visualization and Intelligent Interaction, Industrial and Business Analytics, IoT Software and Systems, and IoT Intelligence and Automation, but candidates with strong expertise in other areas of Data Science and Artificial Intelligence will also be considered. Other duties include evaluation of student progress and management of the resources of the learning environment. The successful candidate will liaise with industry and other educational institutions; participate in industry advisory committees; and coordinate, manage and control projects within the specified program area. Faculty members will keep the course portfolio documents required for accreditation processes and engage in instructional development/improvement plans. All employees are expected to contribute to professional and community life within the College and beyond.

Required Qualifications:

For Assistant Professor

A PhD degree in Data Science and Artificial Intelligence or a closely related field. Three years of teaching experience in a higher-education environment, along with three years of employment experience as a practitioner/professional within a relevant discipline, is preferred. Candidates should also be recognized in the following criteria:

For Associate Professor

A PhD degree in Data Science and Artificial Intelligence or a closely related field. A minimum of five years of teaching experience in a higher-education environment is required; five years of industrial experience as a practitioner/professional within a relevant discipline is also preferred. Candidates should also be recognized in the following criteria:

Preferred Qualifications:

Other Required Skills:

How to Apply:

Applications should be submitted via our online application portal.

You must meet all essential qualifications in order to be appointed to the position. Other qualifications may be a deciding factor in choosing the person to be appointed. Some essential and other qualifications will be assessed through your application, which may include (but need not be limited to) curricula vitae, cover letters, references, teaching dossiers, and sample publications. It is your responsibility to provide appropriate examples that illustrate how you meet each qualification. Failing to do so could result in your application being rejected.

We thank all those who apply. Only those selected for further consideration will be contacted.

See the article here:
Assistant/Associate Professor in Data Science / Artificial Intelligence job with COLLEGE OF THE NORTH ATLANTIC- QATAR | 279203 - Times Higher...

Read More..

Artificial Intelligence in the Transportation Market overview, With the Best Scope, Trends, Benefits, Opportunities to 2030 – Taiwan News

The Artificial Intelligence in the Transportation Market report contains detailed information on the factors influencing demand, growth, opportunities, challenges, and restraints. It provides detailed information about the structure and prospects of global and regional industries. In addition, the report includes data on research & development, new product launches, and product responses from the global and local markets by leading players. The structured analysis offers a graphical representation and a diagrammatic breakdown of the Artificial Intelligence in the Transportation Market by region.

Request To Download a Sample of This Strategic Report: https://reportocean.com/industry-verticals/sample-request?report_id=Pol31

The global artificial intelligence in transportation market was valued at US$ 1.45 billion in 2020 and is forecast to reach US$ 17.9 billion by 2030, growing at a compound annual growth rate (CAGR) of 18.5% during the forecast period from 2021 to 2030.
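
As a generic illustration of the compound-growth arithmetic behind projections of this kind (the values below are hypothetical placeholders, not a recalculation of the report's model):

```python
# Generic illustration of CAGR arithmetic; the inputs below are hypothetical
# placeholders, not figures taken from or validating the report above.
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

def project(start, rate, years):
    """Value after compounding `start` at `rate` for `years` years."""
    return start * (1.0 + rate) ** years

print(f"Implied CAGR for 1.0 -> 5.0 over 10 years: {cagr(1.0, 5.0, 10):.1%}")
print(f"1.0 compounded at 18% for 10 years: {project(1.0, 0.18, 10):.2f}")
```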

COVID-19 Impact Analysis

The COVID-19 outbreak significantly affected the transportation industry, mainly because of shortages of labor and raw materials and a decline in trade activities. Artificial intelligence (AI) nonetheless witnessed significant growth across various verticals; for example, it has helped the healthcare sector and scientists track vaccine-related patterns. The transportation sector, however, witnessed a significant decline, which hampered the growth of the global artificial intelligence in transportation market.

Factors Influencing

Stringent government regulations, aimed mainly at enhancing vehicle safety and security, are expected to be a primary contributor to market growth. Moreover, the increasing adoption of and demand for advanced driver assistance systems are forecast to drive market growth.

The global artificial intelligence in transportation market is expected to gain traction owing to growing demand for traffic management and the increasing deployment of self-driving vehicles.

Rising demand for enhanced logistics is forecast to create various favorable opportunities for market players.

Advancements in autonomous vehicles and the implementation of safety features, including collision warning, adaptive cruise control (ACC), advanced driver assistance systems (ADAS), and lane-keep assist, are forecast to fuel market growth. These features reduce the risk associated with drug-impaired drivers.

The high cost associated with implementing artificial intelligence systems may hamper the growth of the global artificial intelligence in transportation market.

Geographic Analysis

Geographically, North America dominates the global artificial intelligence in transportation market and is forecast to remain dominant in terms of revenue during the forecast period, owing to the growing integration of self-driving vehicles and to government funding aimed at improving vehicle safety. In addition, the presence of prominent companies in the region is forecast to fuel industry expansion in the coming years. Furthermore, the shortage of truck drivers and growing investment in autonomous trucks may create significant growth opportunities for market players in the region.

The Asia-Pacific region is forecast to emerge as a rapidly growing market owing to its increasing population and the growing adoption of self-driving vehicles. Moreover, government policies supporting robust economic growth are propelling the Asia-Pacific artificial intelligence in transportation market.

Competitors in the Market

Volvo Group

Scania Group

MAN SE

Daimler AG

PACCAR Inc.

Magna

Robert Bosch GmbH

Continental AG

Valeo SA

Alphabet Inc.

NVIDIA

Microsoft Corporation

ZF Friedrichshafen AG

Intel Corporation

Other prominent players

Market Segmentation

By Application

Autonomous Trucks

HMI in Trucks

Semi-Autonomous Trucks

By Offering

Hardware

Software

By Machine Learning Technology

Deep Learning

Computer Vision

Context Awareness

Natural Language Processing

By Process

Signal Recognition

Object Recognition

Data Mining

Get a Sample PDF copy of the report: https://reportocean.com/industry-verticals/sample-request?report_id=Pol31

By Region

North America

The U.S.

Canada

Mexico

Europe

Western Europe

The UK

Germany

France

Italy

Spain

Rest of Western Europe

Eastern Europe

Poland

Russia

Rest of Eastern Europe

Asia Pacific

China

India

Japan

Australia & New Zealand

ASEAN

Rest of Asia Pacific

Middle East & Africa (MEA)

UAE

Saudi Arabia

South Africa

Rest of MEA

South America

Brazil

Argentina

Rest of South America

What is the goal of the report?

The market report presents the estimated size of the ICT market at the end of the forecast period. The report also examines historical and current market sizes. During the forecast period, the report analyzes the growth rate, market size, and market valuation. The report presents current trends in the industry and the future potential of the North America, Asia Pacific, Europe, Latin America, and Middle East and Africa markets. The report offers a comprehensive view of the market based on geographic scope, market segmentation, and key player financial performance.

Access the full Report Description, TOC, Table of Figures, Charts, etc.: https://reportocean.com/industry-verticals/sample-request?report_id=Pol31

About Report Ocean:

We are the best market research reports provider in the industry. Report Ocean believes in providing quality reports to clients to meet their top-line and bottom-line goals, which will boost their market share in today's competitive environment. Report Ocean is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us: Report Ocean
Email: sales@reportocean.com
Address: 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.reportocean.com/

View original post here:
Artificial Intelligence in the Transportation Market overview, With the Best Scope, Trends, Benefits, Opportunities to 2030 - Taiwan News

Read More..

Univfy and PracticeHwy Announce Partnership to Expand Access to Artificial Intelligence/Machine Learning Platform for Fertility Providers and Patients…

SAN FRANCISCO BAY AREA, Calif. and IRVING, Texas, Jan. 25, 2022 /PRNewswire/ -- Univfy, a global leader in using artificial intelligence (AI) and machine learning (ML) to personalize counseling and increase access to fertility care, announced today a strategic partnership with PracticeHwy's electronic medical records solution, eIVF, a pioneer in the fertility industry and a portfolio company of Atlantic Street Capital ("ASC"). The goal of the partnership is to bring a higher level of insight to fertility care by making fertility treatments more affordable, predictable, and successful, while supporting growth of the fertility market.

Univfy's highly scalable AI/ML platform delivers validated IVF success prediction models using each fertility center's own data to support more effective patient counseling. Patients counseled with the Univfy Report are 2-3 times more likely to use their doctor's recommended treatment, maximizing their chances of having a healthy baby. The partnership with eIVF will support data transmission between the Univfy AI Platform and the EMR, enabling easy and automated generation of the Univfy PreIVF Report and suite of Univfy AI/ML-assisted tools.

"This partnership with eIVF will accelerate and automate generation of Univfy reports at the point of care, making the reports accessible to many more fertility patients," said Mylene Yao, CEO and Cofounder of Univfy. "Providers have been asking for the Univfy AI/ML products to become a standard part of patient care. Patients and providers value having transparency, personalization, and accuracy regarding their IVF success rates, so that they can make the best decisions about their treatment."

"PracticeHwy's collaboration with Univfy furthers our commitment to clients to grow eIVF's suite of offerings with innovative partners," said Nimesh Shah, PracticeHwy CEO. "We are excited that our partnership will create a streamlined experience for eIVF's growing network of fertility centers by providing an easier and more efficient workflow for physicians and their teams."

One in eight reproductive-age couples in the US (about 7 million couples), and more than 80 million couples worldwide, struggle to conceive on their own. IVF is the most effective treatment for most women and couples, but it is vastly underutilized, most likely due to financial risk and physical and emotional stress. The Univfy AI Platform is designed to meet the medical, emotional, and financial needs of fertility patients.

Last month, Univfy announced that it had raised $6 million of its Series B funding round, which will enable expanded access to Univfy AI/ML services for more patients through fertility centers, employers, benefits programs and health plans.

About Univfy

Univfy makes fertility care more affordable, successful, and predictable for women and couples navigating their family-building options. Combining technology and expert client support, we provide our scientifically validated AI platform, actionable business insights and financial modeling to help fertility centers grow. Univfy's transparent and personalized counseling reports empower patients with personalized information and counseling to inform smarter spending and decision-making. Based on proprietary technology developed at Stanford University, Univfy uses a rigorous scientific process to develop and validate prediction models to support patient counseling at fertility centers. Our methods have been published in top, peer-reviewed research publications. Univfy's personalized prognostics enable fertility centers to offer patients value-based pricing programs that cap financial risk for patients and make multiple IVF cycles more affordable. Univfy is commercialized in the US, Canada, and EU. Univfy's compliance includes HIPAA and GDPR. We have a global IP portfolio. The Univfy PreIVF Report is CE-marked as medical device software in the EU. Visit www.univfy.com for more information.

About PracticeHwy

PracticeHwy is a leader in healthcare software, serving over 140 fertility practices and clinics worldwide. In 2002, it launched eIVF, one of the first electronic medical record (EMR) platforms focused on Reproductive Endocrinology and Infertility (REI) practices. Since then, eIVF has supported data entry for over a million cycles. Known for its pursuit of excellence in the fertility industry, PracticeHwy.com has seen a consistent growth trajectory throughout the years by continuously developing innovative solutions which support all aspects of a fertility center's operations. For more information on eIVF, visit http://www.eivf.org.

About Atlantic Street Capital

ASC is a private equity firm that invests in lower middle market companies poised for the next level of growth. The firm targets entrepreneurial management partners and fundamentally sound companies with between $4 million and $25 million of EBITDA that will benefit from capital investment and ASC's value-added strategic and operational support. As a result, ASC works closely with management to unlock their business' underlying value and help them succeed. For more information, visit http://www.atlanticstreetcapital.com.

Contact Univfy
Investors: [email protected]
Media: Heather Holland, Director of Communications, Tel: +1 (646) 400-2745, [email protected]

Contact eIVF
Media: Jera Sangworn, Director of Marketing and Communications, Tel: +1 (469) 472-9891, [email protected]

SOURCE Univfy Inc.

Originally posted here:
Univfy and PracticeHwy Announce Partnership to Expand Access to Artificial Intelligence/Machine Learning Platform for Fertility Providers and Patients...

Read More..

Worldwide Edge Artificial Intelligence Industry to 2026 – Emergence of the 5G Network to Bring IT and Telecom Together Presents Opportunities -…

Read more from the original source:
Worldwide Edge Artificial Intelligence Industry to 2026 - Emergence of the 5G Network to Bring IT and Telecom Together Presents Opportunities -...

Read More..

Artificial intelligence M&A deals grew by 33.5% in the past 12 months – GlobalData – Creamer Media's Engineering News

Artificial intelligence (AI) sector deals have grown by 33.5% in the past 12 months, with the average deal valued at $159-million, says data and analytics company GlobalData.

It adds that the number of AI deals is likely increasing as the technology becomes more advanced and more companies see market opportunities.

The current market growth in deals has largely occurred in the Asia-Pacific region but, in the last 12 months, deals in the South and Central American market increased by 200%.

This increase in mergers and acquisitions (M&As) further signals that defence primes are advancing their AI capabilities organically and through acquisitions, says GlobalData associate analyst William Davies.

Historically AI has been the purview of the technology sector; however, these [defence] companies are increasingly attempting to bring operations in-house and develop their own capabilities without relying on larger tech companies, he says.

AI will play an integral role on and off the battlefield. Applications range from autonomous weapons, drone swarms and manned-unmanned teaming to other functions, such as intelligence, surveillance and reconnaissance, logistics and cyber operations.

The amount of information being created by modern militaries is often referred to as a data deluge, says GlobalData, noting that the problem is significant, vexing and, given the current pace of acceleration, technologically intimidating.

AI-assisted intelligence analysis can help to ease this pressure by accurately analysing and providing insights from the information contained within large datasets.

AI is a transformative technology in defence because of its ability to enable militaries to gather and make use of large amounts of data, potentially providing a competitive edge over their adversaries. AI will be of particular use in the development of unmanned vehicles, which have the potential to reduce operations costs while exposing personnel in the field to less risk, says Davies.

AI technologies are rapidly evolving. The US and China are developing their AI capabilities for a range of military functions that will have a significant impact on the defence sector. GlobalData's April 2021 report 'AI in Aerospace and Defense' highlights the transformative potential of AI and its ability to create more efficient and effective military operations.

Potential AI applications in the defence industry are numerous and appealing. AI is not only about speed, but also the precision and efficiency of military decision-making. It is a race to develop, procure and field AI solutions faster than the competition, the company says in the report.

The report also signals that a race for AI dominance between the US, Russia and China will lead to significant government investment in the technology, providing more contract opportunities and more of a necessity for defence primes to integrate AI into future platforms.

The impact of AI in defence is enormous. Those looking to get ahead must recognise not only the benefits it will bring, but the challenges it will create and, perhaps more importantly, how to adapt to overcome these challenges. As AI in defence increases, so does the number of ethical questions, particularly around autonomous weapon systems, says GlobalData.

Additionally, the complexity of the defence acquisition process is a deterrent for some commercial companies to partner with governments, and cooperation on both sides is vital for technology procurement.

The Chinese and Russian governments have detailed their plans to dominate AI, and AI's rapid progress makes it a powerful tool from economic, political and military standpoints, GlobalData says.

As with any military technology, the prospect of falling behind may put those who do not recognise the potential that AI offers at a clear disadvantage. Finding the right structural shift to accelerate AI adoption is crucial for governments, the company says.

View post:
Artificial intelligence M&A deals grew by 33.5% in the past 12 months GlobalData - Creamer Media's Engineering News

Read More..