
Google supercharges Chrome’s omnibox address bar with machine learning – TechSpot

Why it matters: Google is supercharging the address bar of its popular web browser with machine-learning capabilities. Known as the "omnibox" since it pulls double duty as both a URL entry field and search box, this unassuming text field is about to get a major upgrade.

The omnibox has evolved well beyond its humble beginnings as a place to type website addresses. It can now handle all sorts of queries and tasks by leveraging Google's vast search prowess. However, the suggestions and results it surfaces have been driven by a relatively rigid "set of hand-built and hand-tuned formulas." That's all about to change.

In a recent post on the Chromium blog, Justin Donnelly, the engineering lead for Chrome's omnibox, revealed that his team has been hard at work adapting machine learning models to drastically improve the omnibox's "relevance scoring" capabilities. In other words, omnibox will get much better at understanding the context behind your queries and providing more useful suggestions tailored to your needs.

According to Donnelly, when he surveyed colleagues on how to enhance the omnibox experience, improving the scoring system topped the wishlist. While the current rule-based approach works for a vast number of cases, it lacks flexibility and struggles to adapt to new scenarios organically. Enter machine learning.

By analyzing massive datasets of user interactions, browsing patterns, and historical data points like how frequently you visit certain sites, the new AI models can generate far more nuanced relevance scores. For instance, the models learned that if you swiftly leave a webpage, chances are it wasn't what you were looking for, so suggestions for that URL get demoted.

As you use the smarter omnibox over time across Windows, Mac, and ChromeOS, it will continue refining and personalizing its suggestions based on your evolving interests and habits. Donnelly's team also plans to explore incorporating time-of-day awareness, specialized models for different user groups like mobile or enterprise, and other contextual signals.

Of course, enabling such deep personalization requires handing over more personal browsing data to Google's machine-learning models. How comfortable you are with that trade-off is a personal decision.

Google has been gradually rolling out these omnibox improvements over recent Chrome updates, with the machine learning models really flexing their muscles starting with version M124 expected in the coming months. And while not mentioned in the blog post, it's safe to assume the update would trickle down to mobile as well eventually.

See the original post:
Google supercharges Chrome's omnibox address bar with machine learning - TechSpot

Read More..

Navigating the black box AI debate in healthcare – HealthITAnalytics.com

May 01, 2024 - Artificial intelligence (AI) is taking the healthcare industry by storm as researchers share breakthroughs and vendors rush to commercialize advanced algorithms across various use cases.

Terms like machine learning, deep learning and generative AI are becoming part of the everyday vocabulary for providers and payers exploring how these tools can help them meet their goals; however, understanding how these tools come to their conclusions remains a challenge for healthcare stakeholders.

Black box software, in which an AI's decision-making process remains hidden from users, is not new. In some cases, the application of these models may not be an issue, but in healthcare, where trust is paramount, black box tools could present a major hurdle for AI deployment.

Many believe that if providers cannot determine how an AI generates its outputs, they cannot determine if the model is biased or inaccurate, making them less likely to trust and accept its conclusions.

This assertion has led stakeholders to question how to build trust when adopting AI in diagnostics, medical imaging and clinical decision support. Doing so requires the healthcare industry to explore the nuances of the black box debate.

In this primer, HealthITAnalytics will outline black box AI in healthcare, alternatives to the black box approach and the current AI transparency landscape in the industry.

One of the major appeals of healthcare AI is its potential to augment clinician performance and improve care, but the black box problem significantly inhibits how well these tools can deliver on those fronts.

Research published in the February 2024 edition of Intelligent Medicine explores black box AI within the context of the do no harm principle laid out in the Hippocratic Oath. This fundamental ethical rule reflects a moral obligation clinicians undertake to prevent unnecessary harm to patients, but black box AI can present a host of harms unbeknownst to both physicians and patients.

"[Black box AI] is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies," the authors wrote, indicating that the possible harm caused by the lack of explainability in these tools is underestimated in the existing literature.

In the study, the researchers asserted that the harm resulting from medical AI's misdiagnoses may be more serious, in some cases, than that caused by human doctors' misdiagnoses, noting that the unexplainability of such systems limits patient autonomy in shared decision-making, and that black box tools can create significant psychological and financial burdens for patients.

Questions of accountability and liability that come from adopting black box solutions may also hinder the proliferation of healthcare AI.

To tackle these concerns, many stakeholders across the healthcare industry are calling for the development and adoption of explainable AI algorithms.

"Explainable AI (XAI) refers to a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms," according to IBM. "[Explainability] is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making."

Having insights into these aspects of an AI algorithm, particularly in healthcare, can help ensure that these solutions meet the industry's standards.
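To make this concrete, one widely used explainability technique is SHAP, which attributes each individual prediction of a model to its input features. The sketch below applies it to a toy tabular classifier; the clinical feature names, the synthetic data, and the model choice are assumptions made for illustration and are not taken from the article or from IBM's definition.

```python
# Minimal sketch: SHAP feature attributions for a toy tabular "clinical" model.
# Dataset, feature names, and labels are synthetic and purely illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "age": rng.integers(30, 90, 500),
    "systolic_bp": rng.normal(130, 20, 500),
    "troponin": rng.exponential(0.05, 500),
})
y = (X["troponin"] > 0.1).astype(int)  # synthetic label for demonstration only

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer returns per-feature contributions for each prediction,
# which a reviewer could inspect instead of a bare probability.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)
```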

Explainability can be incorporated into AI in a variety of ways, but clinicians and researchers have outlined a few critical approaches to XAI in healthcare in recent years.

A January 2023 analysis published in Sensors indicates that XAI techniques can be divided into categories based on form, interpretation type, model specificity and scope. Each methodology has pros and cons depending on the healthcare use case, but applications of these approaches have seen success in existing research.

A research team from the University of Illinois Urbana-Champaign's Beckman Institute for Advanced Science and Technology, writing in IEEE Transactions on Medical Imaging, demonstrated that a deep learning framework could help address the black box problem in medical imaging.

The researchers' approach involved a model for identifying disease and flagging tumors in medical images like X-rays, mammograms and optical coherence tomography (OCT). From there, the tool generates a value between zero and one to denote the presence of an anomaly, which can be used in clinical decision-making.

However, alongside these values, the model also provides an equivalency map (E-map), a transformed version of the original medical image that highlights medically interesting regions, which helps the tool explain its reasoning and enables clinicians to check for accuracy and explain diagnostic findings to patients.

Other approaches to shed light on AI's decision-making have also been proposed.

In a December 2023 Nature Biomedical Engineering study, researchers from Stanford University and the University of Washington outlined how an auditing framework could be applied to healthcare AI tools to enhance their explainability.

The approach utilizes a combination of generative AI and human expertise to assess classifiers, algorithms used to categorize data inputs.

When applied to a set of dermatology classifiers, the framework helped researchers identify which image features had the most significant impact on the classifiers' decision-making. This revealed that the tools relied on both undesirable features and features leveraged by human clinicians.

These insights could aid developers looking to determine whether an AI relies too heavily on spurious data correlations and correct those issues before deployment in a healthcare setting.

Despite these successes in XAI, there is still debate over whether these tools effectively solve the black box problem or whether black box algorithms are a problem.

While many in the healthcare industry maintain that black box algorithms are a major concern and discourage their use, some have raised questions about the nuances of these assertions. Others posit that the black box problem is an issue but indicate that XAI is not a one-size-fits-all solution.

One central talking point in these debates revolves around the use of other tools and technologies in healthcare that could be conceptualized as black box solutions.

"Although [the black box AI] discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as Panadol, is poorly understood and that the majority [of] doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography," explained experts writing in Biomedical Materials & Devices.

While not all healthcare tools are necessarily well-understood, such solutions can be contentious in evidence-based medicine, which prioritizes the use of scientific evidence, clinical expertise and patient values to guide care.

"Some have suggested that the black-box problem is less of a concern for algorithms used in lower-stakes applications, such as those that aren't medical and instead prioritize efficiency or betterment of operations," the authors noted.

However, AI is already being used for various tasks, including decision support and risk stratification, in clinical settings, raising questions about who is responsible in the event of a system failure or error associated with using these technologies.

Explainability has been presented as a potential method to ease concerns about responsibility, but some researchers have pointed out the limitations of XAI in recent years.

In a November 2021 viewpoint published in the Lancet Digital Health, researchers from Harvard, the Massachusetts Institute of Technology (MIT) and the University of Adelaide argued that assertions about XAI's potential to improve trust and transparency represent "false hope" for current explainability methods.

The research team asserted that black box approaches are unlikely to achieve these goals for patient-level decision support due to issues like interpretability gaps, which characterize an aspect of human-computer interaction wherein a model presents its explanation and the human user must interpret said explanation.

"[This method] relies on humans to decide what a given explanation might mean. Unfortunately, the human tendency is to ascribe a positive interpretation: we assume that the feature we would find important is the one that was used," the authors explained.

This is not necessarily the case, as there can be many features, some invisible to humans, that a model may rely on, which could lead users to form an incomplete or inaccurate interpretation.

The research team further indicated that model explanations have no performance guarantees, opening the door for other issues.

"[These explanations] are only approximations to the model's decision procedure and therefore do not fully capture how the underlying model will behave. As such, using post-hoc explanations to assess the quality of model decisions adds another source of error: not only can the model be right or wrong, but so can the explanation," the researchers stated.

A 2021 article published in Science echoes these sentiments, asserting that the current hype around XAI in healthcare both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

The authors underscored that for many applications in medicine, developers must use complicated machine learning models that require massive datasets with highly engineered features. In these cases, a simpler, interpretable AI (IAI) model couldn't be used as a substitute. XAI provides a secondary alternative, as these models can approach the high level of accuracy achieved by black box tools.

But here, users still face the issue of post-hoc explanations that may make them feel as though they understand the model's reasoning without actually shedding light on the tool's inner workings.

In light of these and other concerns, some have proposed guidelines to help healthcare stakeholders determine when it is appropriate to use black box models with explanations rather than IAI, such as when there is no meaningful difference in accuracy between an interpretable model and black box AI.

The debate around the use of black box solutions and the role of XAI is not likely to be resolved soon, but understanding the nuances in these conversations is vital as stakeholders seek to navigate the rapidly evolving landscape of AI in healthcare.

The rest is here:
Navigating the black box AI debate in healthcare - HealthITAnalytics.com

Read More..

Google adds Machine Learning to power up the Chrome URL bar – Chrome Unboxed

The Chrome URL bar, also known as the Omnibox, is an absolute centerpiece of most people's web browsing experience. Used quite literally billions of times a day, Chrome's URL bar helps users quickly find tabs and bookmarks, revisit websites, and discover new information. With the latest release of Chrome (M124), Google has integrated machine learning (ML) models to make the Omnibox even more helpful, delivering precise and relevant web page suggestions. Soon, these same models will enhance the relevance of search suggestions too.

In a recent post on the Chromium Blog, the engineering lead for the Chrome Omnibox team shared some insider perspectives on the project. For years, the team wanted to improve the Omnibox's scoring system, the mechanism that ranks suggested websites. While the Omnibox often seemed to magically know what users wanted, its underlying system was a bit rigid. Hand-crafted formulas made it difficult to improve or adapt to new usage patterns.

Machine learning promised a better way, but integrating it into such a core, heavily-used feature was obviously a complex task. The team faced numerous challenges, yet their belief in the potential benefits for users kept them driven.

Machine learning models analyze data at a scale humans simply can't. This led to some unexpected discoveries during the project. One key signal the model analyzes is the time since a user last visited a particular website. The assumption was: the more recent the visit, the more likely the user wants to go there again.

While this proved generally true, the model also detected a surprising pattern. When the time since navigation was extremely short (think seconds), the relevance score decreased. The model was essentially learning that users sometimes immediately revisit the omnibox after going to the wrong page, indicating the first suggestion wasn't what they intended. This insight, while obvious in hindsight, wasn't something the team had considered before.
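Neither the blog post nor this article publishes the actual model, but the behavior described can be illustrated with a small learned scorer. Everything in the sketch below (the features, the synthetic data, and the gradient-boosted model) is an assumption for illustration, not Chrome's implementation.

```python
# Toy sketch of a learned relevance scorer for address-bar suggestions.
# Features, data, and model choice are illustrative assumptions, not Chrome's design.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
seconds_since_visit = rng.exponential(scale=3600 * 24, size=n)
visit_count = rng.poisson(lam=5, size=n)
prefix_match = rng.integers(0, 2, size=n)

# Simulated "true" relevance: higher for recent, frequently visited URLs,
# but lower when the last visit was only seconds ago (an immediate bounce).
relevance = (
    0.5 * prefix_match
    + 0.3 * np.log1p(visit_count)
    - 0.2 * np.log1p(seconds_since_visit / 3600)
    - 0.6 * (seconds_since_visit < 30)   # bounce penalty
    + rng.normal(scale=0.05, size=n)
)

X = np.column_stack([seconds_since_visit, visit_count, prefix_match])
model = GradientBoostingRegressor().fit(X, relevance)

# A URL visited 10 seconds ago scores lower than one visited an hour ago,
# mirroring the pattern the Chrome team describes.
print(model.predict([[10, 5, 1], [3600, 5, 1]]))
```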

With ML models now in place, Chrome can better understand user behavior and deliver increasingly tailored suggestions as time goes on for users. Google plans to explore specialized models for different use contexts, such as mobile browsing or enterprise environments, too.

Most importantly, the new system allows for constant evolution. As people's browsing habits change, Google can retrain the models on fresh data, ensuring the Omnibox remains as helpful and intuitive as possible moving forward. It's a big step up from the earlier, rigid approach, and it will be increasingly interesting to keep an eye on the new suggestions and tricks that we'll see in the Omnibox as these ML models find their stride.

See the rest here:
Google adds Machine Learning to power up the Chrome URL bar - Chrome Unboxed

Read More..

Machine learning vs deep learning vs neural networks: What’s the difference? – ITPro

The terms machine learning and deep learning can seem interchangeable to most people, but they aren't. Both are considered subdivisions within the world of artificial intelligence (AI), but the two have many differences, especially in their architecture and use cases.

Machine learning, for instance, uses structured data and algorithms to train models, with more data at its disposal generally equating to more accurate, better-trained models. The idea is to eliminate the need for human intervention. Deep learning, on the other hand, is a subset of machine learning and uses neural networks to imitate the way humans think, meaning the systems designed require even less human intervention.

Differentiating the two, in this way, is crucial to AI research and practical application of both, particularly as businesses attempt to integrate such technologies into their core processes, and recruit for skilled individuals to fill technical roles.

The chances are that you've already used an application or system built on machine learning. Whether you've interacted with a chatbot, utilized predictive text, or gone to watch a show after Netflix recommended it to you, machine learning was likely at the core of these systems. Machine learning is a subset of AI, and a blanket term used to define machines that learn from datasets.

Using structured data that comes in the form of text, images, numbers, financial transactions, and many other things, machine learning can replicate the process of human learning. Collected data is used as training material to direct the machine learning model. Theoretically, the greater the volume of data that is used, the higher the quality of the model. Machine learning is all about allowing computers to self-program via training datasets and infrequent human interventions.

Supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning are all differing strands of machine learning processes.

The first of these techniques, supervised learning, involves machine learning scientists feeding labeled training data into algorithms to clearly define variables, so that the algorithm can start to understand connections. In contrast, unsupervised learning uses unlabelled data and allows the algorithms to actively search for relationships and connections. Acting as the logical midpoint between these processes, semi-supervised learning aids the model's own comprehension of the data. Reinforcement learning, on the other hand, works by letting a machine complete a series of decisions to achieve an objective in an unknown environment.
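As a minimal illustration of the first two strands, the sketch below fits a supervised classifier on labelled data and an unsupervised clustering algorithm on the same data with the labels withheld; the dataset is synthetic and purely illustrative.

```python
# Supervised vs. unsupervised learning on the same synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)   # labels exist only for the supervised case

# Supervised: the algorithm sees labelled examples and learns the mapping.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: the algorithm sees only X and must find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```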


A subset of machine learning, deep learning deploys systems of artificial neural networks to mimic the cognitive operations of the human mind.

A lack of sufficient compute power has, until now, created barriers to neural network learning capabilities. Huge strides in big data analytics have changed the landscape significantly, with larger and more complex neural networks now able to take form. This means that machines can now understand, learn, and react to complex scenarios quicker than human beings.

These neural networks are constructed in layers and designed to enable the transmission of data from node to node, much like neurons in the brain. Vast datasets are required to build these models but, once they've been constructed, they can give users instant results with little needed in the way of human intervention. There are many, varied ways in which deep learning can be performed.

Convolutional Neural Networks (CNNs): These comprise multiple layers and are mostly used for image processing and object detection.

Recurrent Neural Networks (RNNs): These are types of artificial neural network that use sequential data or time series data. They are frequently used in problems, such as language translation, natural language processing (NLP), speech recognition, and image captioning.

Long Short-Term Memory Networks (LSTMs): These are types of Recurrent Neural Network (RNN) that can learn and remember long-term dependencies. They can be useful for complex problem domains like machine translation, speech recognition, and more.

Generative Adversarial Networks (GANs): These are generative deep learning algorithms that produce new data instances that look like the training data. A GAN comprises two parts: a generator, which learns to generate fake data, and a discriminator, which learns to distinguish that fake data from real data. These networks have been used to produce fake images of people who have never existed as well as new and unique music.

Radial Basis Function Networks (RBFNs): These networks have an input layer, a hidden layer, and an output layer and are typically used for classification, regression, and time-series predictions.

Multilayer Perceptrons (MLPs): These are a type of feedforward (this means information moves only forward in the network) neural networks. These have an input layer and an output layer that are fully connected. There may also be hidden layers. These are used in speech-recognition, image-recognition, and machine-translation software.

Deep Belief Networks (DBNs): These look like another feedforward neural network with hidden layers, but aren't. They are sequences of restricted Boltzmann machines connected one after another, and are used to identify, gather and generate images, video sequences and motion-capture data.
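To ground one of these architectures in code, here is a minimal multilayer perceptron in Keras with an input layer, one hidden layer, and an output layer. The layer sizes and the random training data are arbitrary placeholders chosen only to show the shape of the API.

```python
# Minimal multilayer perceptron (MLP): fully connected input, hidden, and output layers.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),              # 20 input features
    keras.layers.Dense(64, activation="relu"),    # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # output layer for binary classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on random data just to demonstrate the API; real tasks need real datasets.
X = np.random.rand(256, 20)
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```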

Despite the frequent confusion about their similarities, deep learning is very much a subset of machine learning. Deep learning, however, is differentiated from its counterpart by the data types it interacts with and the ways in which it can learn.

Machine learning uses structured, labelled data to predict outcomes. This means a machine learning model's input data defines specific features and is organised into tables. While the model gets progressively better at carrying out the task at hand, a human still needs to intervene at points to ensure it is working in the required way. In other words, if the predictions are not accurate, an engineer will make any adjustments needed to get back on track.


That being said, deep learning systems involve algorithms that can autonomously decide on the accuracy of their predictions. This works via the presence of neural networks in deep learning models.

Another difference is that where machine learning can use small amounts of data to make predictions, deep learning needs much, much more data to make more accurate predictions.

While machine learning needs little time to train, typically a few seconds to a few hours, deep learning takes far longer, as the algorithms used involve many more layers.

Outputs also differ between the two. Machine learning tends to output numerical values, such as a score or classification, while deep learning can output in multiple formats, such as text, scores, or even sounds and images.

Machine learning is already in use in a variety of areas that are considered part of day-to-day life, including on social media, on email platforms and, as mentioned, on streaming services like Netflix. These types of applications lend themselves well to machine learning because they're relatively simple and don't require vast amounts of computational power to process complicated decision-making.

Among the more complex uses of machine learning is computer vision, such as facial recognition, where technology can be used to recognise people in crowded areas. Handwriting recognition, too, can be used to identify an individual from documents that are scanned en masse, for example in academic examinations, police records, and so on. Speech recognition, meanwhile, such as that used in voice assistants, is another application of machine learning.

Because of the nature of deep learning, on the other hand, this technology allows for far more complex decision-making, and near-fully autonomous systems, including robotics and autonomous vehicles.

Deep learning also has its uses in image recognition, where massive amounts of data are ingested and used to help the model tag, index, and annotate images. Such models are currently in use for generating art, in systems like DALL-E. Similarly to machine learning, deep learning can be used in virtual assistants, in chatbots, and even in image colorisation. Deep learning has also had a particularly exciting impact in the field of medicine, such as in the development of personalised medicines created for somebody's unique genome.

Read more here:
Machine learning vs deep learning vs neural networks: What's the difference? - ITPro

Read More..

Preclinical identification of acute coronary syndrome without high sensitivity troponin assays using machine learning … – Nature.com

In the present study, we developed multiple machine learning models to predict major adverse cardiac events and acute coronary artery occlusion with preclinically obtained data. We then compared the performance of these models to a modified established risk score. We found that all of the ML models were superior to the TropOut score, with the LR and the VC demonstrating the best performance in identifying MACE (AUROC 0.78). For ACAO, the VC also showed the best performance (AUROC 0.81). This is not surprising, since it combines and weights the output of multiple models to optimize predictive performance.
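The study's exact features, preprocessing and hyperparameters are not reproduced in this summary, but a soft-voting ensemble of the kind described, one that combines the probability outputs of several base learners, looks roughly like the following in scikit-learn. The base models and the synthetic data are placeholders, not the paper's setup.

```python
# Sketch of a soft-voting ensemble (VC) combining two base models.
# Base learners and synthetic data are placeholders, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=15, weights=[0.8], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

vc = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of the base models
)
vc.fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, vc.predict_proba(X_test)[:, 1]))
```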

Quick decision-making is of utmost importance in the preclinical diagnosis and treatment of patients with suspected ACS. Not every medical facility is equipped with a 24-h catheter laboratory. Therefore, a qualified assessment of the early need for coronary revascularization is important in order to decide which hospital to admit the patient to, thereby guaranteeing optimal patient care and improving prognosis2,3,4.

Several studies have been undertaken to evaluate the predictive value of the established HEART score in an emergency setting5,13. Sagel et al.6 even modified the score to predict MACE in a preclinical setting, thus creating the preHEART score. However, one of the HEART score components is the analysis of troponin levels. Even though the authors of the preHEART score used rapid, visual point-of-care immunoassays, these are unfortunately not available to emergency care providers in most areas. In order to test the performance of this score without troponin, we retrospectively calculated the TropOut score, a version of the preHEART score comprising medical history, ECG, age and risk factors but without troponin analysis. Unfortunately, the TropOut score showed poor discriminatory power to identify MACE and ACAO in preclinical patients with chest pain within our study cohort.

With the use of ML algorithms, we were able to create models with vastly improved performance. As mentioned above, the VC model showed an AUROC value of 0.78 for prediction of MACE and 0.81 for ACAO. Even though this performance cannot quite hold up to the original preHEART score (AUROC = 0.85) for predicting MACE, the performance is remarkable, especially when considering that the key driving biomarker, troponin, was excluded from our proposed model. Since cardiac troponin has a high sensitivity for myocardial cell loss, it is very likely that its addition would have significantly improved our model's performance. Therefore, the addition of troponin assays in the preclinical setting would likely help identify patients with ACAO or at risk for MACE even further.

We noted a significantly higher specificity compared to sensitivity for predicting both MACE and ACAO. Apparently, the model makes very reliable predictions the majority of the time but there seem to be cases which are wrongly classified as non-MACE and non-ACAO. This might be due to unspecific symptoms or atypical ECG findings which do not meet the established STEMI criteria14,15.

Multiple authors have used ML models for risk stratification in cardiology9,16. ML has been shown to identify and safely rule out MI in an in-hospital cohort suspected of NSTEMI using multiple variables including cardiac troponin17,18,19. However, ML algorithms display limited ability to predict mortality in patients with MI20. To our knowledge, there have been two studies which used machine learning models to predict ACS in a purely preclinical setting. Al-Zaiti et al. tried to predict ACS only using data from a preclinical 12-lead ECG, whereas Takeda et al. used vital signs, history and a 3-lead ECG to predict ACS and myocardial infarction21,22. Our approach is novel and different in that we chose a different secondary endpoint. MACE was chosen in order to directly compare our model to established, non-ML scores. For preclinical management, our secondary endpoint, acute coronary artery occlusion, could be even more relevant. Myocardial infarction can be caused by different underlying pathophysiologies. Myocardial cell loss secondary to a demand-supply mismatch in oxygen not related to atherosclerotic plaque instability is known as a type II myocardial infarction3. However, those patients do not necessarily need immediate interventional revascularization, and the broad definition of myocardial infarction therefore might be an improper endpoint. In the 2022 Expert Consensus Decision Pathway on the Evaluation and Disposition of Acute Chest Pain, the American College of Cardiology also notes that up to 40% of patients with ACAO are not correctly identified by using the STEMI criteria14,23. Therefore, ACAO could be a superior parameter to help decide where to admit the patient and whether or not to preclinically administer antiplatelet drugs. Patients with NSTEMI, but especially those with acute coronary artery occlusion without ST elevations on ECG, have been shown to receive delayed PCI when compared to patients suffering from ST-elevation myocardial infarction, and have worse outcomes24,25. As mentioned above, our model showed especially good predictive capabilities for ACAO.

Even though ML algorithms clearly have high potential to support decision making, our model heavily relies on medical expertise by healthcare providers. As seen in Fig. 5, the feature ST-Elevation as assessed by the emergency physician is still paramount for predicting both endpoints in our models. Not surprisingly, similar findings have been reported by Takeda et al.21.

SHAP analysis provides interesting insights into the predictive value of symptoms, patient history and vital signs. While some features like ECG changes, age, sex and risk factors are easy to interpret, others seem more complex. In our model, diaphoresis was associated with both high and low risk for MACE and ACAO. This might be in part explained by our retrospective study design. Even though notes from the emergency protocol provide clear, dichotomous information, we cannot say whether the treating physician associated the symptom diaphoresis with an ACS, since the symptom can have a vastly different Clinical Gestalt. This could explain why our model performed worse when compared to Takeda et al. An alternative, provocative explanation could be a higher diagnostic skill level (such as ECG interpretation and history taking) of paramedics when compared to physicians in a preclinical setting. Also, the patient collective could be different, since the study by Takeda et al. was carried out in Japan.

Sensitivities for our model ranged from 0.70 to 0.77 for predicting MACE and 0.76 to 0.88 for predicting ACAO. In comparison, a meta-analysis including over 44,000 patients demonstrated a sensitivity of 0.96 for prediction of MACE when a cutoff of ≥4 points of the HEART score was used. As expected, this resulted in a rather poor specificity of 0.45 (ref. 26).

The ideal model would demonstrate both high sensitivity and high specificity. Unfortunately, in a condition like ACS and a setting where laboratory diagnostics like troponin are not available, this seems difficult to achieve. However, we have to admit that in a life-threatening condition like ACS, false positives (i.e. poor specificity) are more acceptable than false negatives (i.e. poor sensitivity). In our models, patients were classified as positive if the predicted probability was greater than or equal to 0.5, and negative otherwise. In order to enhance sensitivity, the decision threshold in our models could be adapted. Naturally, this would result in a decline in specificity. Most importantly, clinicians using tools like the one developed in our study need to be aware of the model's strengths and limitations. As of right now, our model is not suitable for excluding ACAO or patients at risk of MACE in a preclinical collective suspected of ACS. However, it could increase emergency physicians' confidence in preclinically activating the coronary catheter laboratory for suspected ACAO.
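The trade-off discussed here comes down to where the probability cutoff is placed. The sketch below uses entirely synthetic predictions, not the study's data, to show how lowering the threshold from 0.5 raises sensitivity at the cost of specificity.

```python
# Moving the decision threshold below 0.5 to favour sensitivity over specificity.
# All predictions here are synthetic; this is not the study's data or model.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_prob = np.clip(0.35 * y_true + 0.65 * rng.beta(2, 2, 1000), 0, 1)  # weakly informative

def sens_spec(threshold):
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)

for t in (0.5, 0.3):
    sens, spec = sens_spec(t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```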

In our district, preclinical documentation is carried out digitally with the use of tablets. Since patient history, vitals and ECG interpretation need to be inputted for documentation anyways, it would be feasible to integrate ML models. This way, the software could automatically calculate variables like sensitivities and specificities for endpoints like ACAO and MACE. Furthermore, ML has been used in ECG interpretation in a preclinical setting22,27. Combining those ML algorithms could potentially show a better performance and present a powerful tool in aiding preclinical health care providers on site even further.

Even in the absence of direct integration of our models into preclinical ACS diagnostics, our study has important clinical implications. Unsupervised analyses show that preclinical ACS patients are a heterogeneous collective and that the desired endpoints are not easily identified. Even when using supervised machine learning, a high level of diagnostic skill will always be necessary, since the models rely on high-quality data. As mentioned before, SHAP analysis shows that out of all investigated parameters, ST-elevation is still the most important marker for properly identifying ACAO and patients at risk of MACE. This highlights the necessity for high clinical expertise and ECG interpretation skills in professionals diagnosing and treating patients with suspected ACS in a preclinical setting.

Our study has several limitations. For ECG interpretation, we had to rely on the emergency physicians' documentation and were not able to manually interpret the preclinical 12-lead ECG ourselves. Therefore, the quality and accuracy of this documentation might vary. Our study design relied on retrospective data collection. A predetermined questionnaire would likely improve the quality of the data and also the model's predictive power.

Since patients could present to the emergency department on their own or in rare cases might be transferred by other providers than the cooperating rescue stations, we cannot exclude missing some cases of ACS in our study. Therefore, selection bias cannot be fully excluded.

In line with common machine learning methodology, we did validate our findings on the validation cohort. However, our algorithm has not yet been validated on external data. The lack of a prospective validation cohort in particular is the biggest limitation of our study, and further analysis is needed. To our knowledge, the only comparable study which used prospectively recorded data was carried out by Takeda et al. and achieved a slightly better AUROC for the endpoint ACS than our study did for MACE and ACAO (0.86 versus 0.78 and 0.81, respectively)21. However, because of the different preclinical emergency systems in Japan and Germany (paramedics versus emergency medicine physicians), the studies are only partially comparable. Since most countries rely on paramedics for preclinical emergency medicine, our findings might not be directly transferable to other settings. At the moment, our study can only be viewed as hypothesis generating until the algorithms are prospectively validated on another patient cohort.

Excerpt from:
Preclinical identification of acute coronary syndrome without high sensitivity troponin assays using machine learning ... - Nature.com

Read More..

Improving inclusion and accessibility through automated document translation with an open source app using Amazon … – AWS Blog

Organizations often offer support in multiple languages, saying "contact us for translations." However, customers who don't speak the predominant language often don't know that translations are available or how to request them. This can lead to poor customer experience and lost business. A better approach is proactively providing information in multiple languages so customers can access it directly. This leads to more informed, satisfied, and included customers.

In this post, we share how we identified these challenges and overcame them through our work with Swindon Borough Council. We developed the Document Translation app, which uses Amazon Translate, to address these issues. The app is a business user app for self-serve translations. The app is created in partnership with Swindon Council and released as open source code freely available for your organization to use.

We identified three key challenges:

Translation accuracy and quality are critical, because the results must be accurate and understood. As quoted in the Swindon Borough Council case study:

"The council ran small-scale trials with the main digital translation providers that can support the different languages spoken by Swindon's citizens. It recruited local bilingual volunteers to assess the quality of the machine translations against their first languages, and Amazon Translate came out on top."

The Document Translation app uses Amazon Translate for performing translations. Amazon Translate provides high-quality document translations for contextual, accurate, and fluent translations. It supports many languages and dialects, providing broad coverage for customers worldwide. Custom terminology, a feature of Amazon Translate, is dynamically utilized by the app workflow when a language has matching custom terminology available.
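For readers who want to see the underlying service call outside the app, a minimal direct invocation of Amazon Translate with a custom terminology looks like the sketch below. The region, terminology name, language codes, and text are placeholders, and the app itself drives Amazon Translate through Step Functions workflows rather than a standalone script.

```python
# Minimal Amazon Translate call with a custom terminology.
# Region, terminology name, language codes, and text are placeholders.
import boto3

translate = boto3.client("translate", region_name="eu-west-2")

response = translate.translate_text(
    Text="Your council tax bill is now available online.",
    SourceLanguageCode="en",
    TargetLanguageCode="pl",
    TerminologyNames=["council-terms"],  # name of a pre-created custom terminology
)
print(response["TranslatedText"])
```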

High costs of manual translation can prohibit organizations from supporting multiple languages, straining already tight budgets. Balancing language inclusivity and budget limitations poses a significant challenge when relying solely on traditional translation methods.

Swindon Borough Council paid around £159.81 ($194.32 USD) per single-page document, limiting them to providing translation only where legally required. As discussed in the case study, Swindon Borough Council slashed 99.96% of translation costs using Amazon Translate:

"Such dramatic savings mean that it's no longer limited to translating only documents it is legally required to provide; it can offer citizens wider access to content for minimal extra cost."

Customers report third-party translation services fees as a major cost. The neural machine translation technology of Amazon Translate dramatically lowers these costs.

Following the Cost Optimization pillar of the AWS Well-Architected Framework further led to implementing an AWS Graviton architecture using AWS Lambda and an infrequently accessed Amazon DynamoDB table class. With no server management overhead or continually running systems, this helps keep costs low.

Manual translation delays that lower customer satisfaction also stem from the internal processes, approvals, and logistics arrangements put in place to control costs and protect sensitive and private content. Swindon Borough Council stated that turnaround times could take up to 17 days:

"First, it was slow. The internal process required manual inputs from many different people. On average, that process took up to 12 days, and the time required by the translation agency was 3 to 5 days. That meant total translation time for a document was up to 17 days."

This app offers a business user self-serve portal for document translations. Users can upload documents and download translations for sharing without slow manual intervention. Amazon Translate can perform translations in about 10 minutes.

The app's business user portal is a browser-based UI that has been translated into all languages and dialects supported by Amazon Translate. The dynamic React UI doesn't require server software. To accelerate development, UI components such as buttons and input boxes come from the AWS Cloudscape Design library. For interacting with AWS services, the AWS Amplify JS library for React simplifies authentication, security, and API requests.

The backend uses several serverless and event-driven AWS services, including AWS Step Functions for low-code workflows, AWS AppSync for a GraphQL API, and Amazon Translate. This architecture enables fast development and reduces ongoing management overhead, as shown in the following diagram.

The app is built with Infrastructure as Code (IaC) using the AWS Cloud Development Kit (AWS CDK). The AWS CDK is an open source software development framework used to model and provision cloud applications. Using the TypeScript CDK provides a reliable, repeatable, and extensible foundation for deployments. Paired with a consistent continuous integration and delivery (CI/CD) pipeline, deployments are predictable. Reusable components are extracted into constructs and imported where needed, providing consistency and best practices such as AWS Identity and Access Management (IAM) roles, Amazon CloudWatch logging, and AWS X-Ray tracing for all Lambda functions.
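The app itself is written with the TypeScript CDK; purely as an illustration, the Python CDK fragment below sketches the cost-oriented choices mentioned earlier, an ARM_64 (Graviton) Lambda function and an infrequently accessed DynamoDB table class. Resource names and the handler path are placeholders, not taken from the app's code.

```python
# Illustrative AWS CDK (Python) fragment; the real app uses the TypeScript CDK.
# Resource names and the asset path are placeholders.
from aws_cdk import Stack, aws_dynamodb as dynamodb, aws_lambda as _lambda
from constructs import Construct

class TranslationStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Infrequently accessed table class keeps storage costs low for job metadata.
        dynamodb.Table(
            self, "JobsTable",
            partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
            table_class=dynamodb.TableClass.STANDARD_INFREQUENT_ACCESS,
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        # ARM_64 selects Graviton processors for the Lambda function.
        _lambda.Function(
            self, "TranslateHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            architecture=_lambda.Architecture.ARM_64,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),
        )
```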

The app is effortless to deploy using the AWS CDK. The AWS CDK allows modeling of the entire stack, including frontend React code, backend functions and workflows, and cloud infrastructure definitions packaged together.

Before deployment, review any prerequisites you may want to use, such as connecting the app to your organization's single sign-on via a SAML provider.

The installation wizard provides the necessary commands. AWS CloudShell allows you to run these commands without installing anything locally. The app documentation covers all advanced options available. Installation takes 30 to 60 minutes and is monitored from AWS CodePipeline.

A self-paced Immersion Day is available for your technical teams to get hands-on experience with the services and build core components. Alternatively, your AWS account team can provide personalized guidance through the workshop.

This app is designed with multiple features (as of this writing, Document Translation and Simply Readable). Simply Readable enables you to create Easy Read documents with generative artificial intelligence (AI) using Amazon Bedrock. The app can be installed with or without this feature.

The Document Translation app provides translations in your customers native languages. Amazon Translate enables accurate translation at scale. Communicating in customers languages shows respect, improves understanding, and builds trust.

Translation capabilities should be core to any growth strategy, building loyalty and revenue through superior localized experiences.

Business leaders should evaluate solutions like Amazon Translate to overcome language barriers and share their brand. Enabling multilingual communication conveys "We value you, we hear you, and we want your experience with us to be positive."

To learn more about the app, see the FAQ.

Philip Whiteside is a Solutions Architect (SA) at Amazon Web Services. Philip is passionate about overcoming barriers by utilizing technology.

Read the original:
Improving inclusion and accessibility through automated document translation with an open source app using Amazon ... - AWS Blog

Read More..

Chrome’s Omnibox address bar is now powered by machine learning – 9to5Google

With Chrome 124 on Mac, Windows, and ChromeOS, Google has updated the address bar, or Omnibox, with ML models to offer better suggestions.

Previously, Chrome leveraged a set of hand-built and hand-tuned formulas that were difficult to improve or to adapt to new scenarios. For example, one signal is time since last navigation:

"The expectation with this signal is that the smaller it is (the more recently you've navigated to a particular URL), the bigger the contribution that signal should make towards a higher relevance score."

Google says the scoring system responsible for showing/ranking URLs and suggested queries went largely untouched for a long time.

"For most of that time, an ML-trained scoring model was the obvious path forward. But it took many false starts to finally get here. Our inability to tackle this challenge for so long was due to the difficulty of replacing the core mechanism of a feature used literally billions of times every day."

This new ML system should result in the Chrome address bar returning page suggestions that are more precise and relevant to you. It will allow Google to collect fresher signals, re-train, evaluate, and deploy new models periodically over time. One improvement the model made with time since last navigation was:

"when the time since navigation was very low (seconds instead of hours, days or weeks), the model was decreasing the relevance score. It turns out that the training data reflected a pattern where users sometimes navigate to a URL that was not what they really wanted and then immediately return to the Chrome omnibox and try again. In that case, the URL they just navigated to is almost certainly not what they want, so it should receive a low relevance score during this second attempt."

Looking ahead, Google is looking at incorporating new signals, like differentiating between time of the day to improve relevance.

The team is also exploring training specialized versions of the model for particular environments. This new approach is currently live on desktop, but future iterations could target mobile, enterprise, and education usage.


Go here to read the rest:
Chrome's Omnibox address bar is now powered by machine learning - 9to5Google

Read More..

Optimized model architectures for deep learning on genomic data | Communications Biology – Nature.com

Hyperparameter search space

The hyperparameter space used for optimization is listed in Table 1 and described in more detail here.

The first part of the model constructed by GenomeNet-Architect consists of a sequence of convolutional blocks (Fig. 1), each of which consists of convolutional layers. The number of blocks (Ncb) and the number of layers in each block (scb) are determined by the HPs ncb and nc in the following way: Ncb is directly set to ncb unless nc (which relates to the total number of convolutional layers) is less than that. Their relation is therefore

$$N_{cb}=\begin{cases}n_c, & \text{if } n_c \le n_{cb}\\ n_{cb}, & \text{otherwise.}\end{cases}$$

scb is calculated by rounding the ratio of the nc hyperparameter to the actual number of convolutional blocks Ncb:

$$s_{cb}=\operatorname{round}\left(\frac{n_c}{N_{cb}}\right).$$

This results in nc determining the approximate total number of convolutional layers while satisfying the constraint that each convolutional block has the same (integer) number of layers. The total number of convolutional layers is then given by

$$N_c = N_{cb}\times s_{cb}.$$

f0 and fend determine the number of filters in the first or last convolutional layers, respectively. The number of filters in intermediate layers is interpolated exponentially. If residual blocks are used, the number of filters within each convolutional block needs to be the same, in which case the number of filters changes block-wise. Otherwise, each convolutional layer can have a different number of filters. If there is only one convolutional layer, f0 is used as the number of filters in this layer. Thus, the number of filters for the ith convolutional layer is:

$$f_i=\left\lceil f_0\times\left(\frac{f_{end}}{f_0}\right)^{j(i)}\right\rceil,\qquad j(i)=\begin{cases}\left\lfloor\frac{i}{s_{cb}}\right\rfloor\times\frac{1}{N_{cb}-1}, & \text{if res\_block}\\ \frac{i}{N_c-1}, & \text{otherwise.}\end{cases}$$

The kernel size of the convolutional layers is also exponentially interpolated between k0 and kend. If the model has only one convolutional layer, the kernel size is set to k0. The kernel size of the convolutional layer i is:

$$k_i=\left\lceil k_0\times\left(\frac{k_{end}}{k_0}\right)^{\frac{i}{N_c-1}}\right\rceil.$$

The convolutional layers can use dilated convolutions, where the dilation factor increases exponentially from 1 to dend within each convolutional block. Using rem as the remainder operation, the dilation factor for each layer is then:

$$d_i=\left\lceil d_{end}^{\,\left(\lfloor i\ \mathrm{rem}\ s_{cb}\rfloor\right)/\left(s_{cb}-1\right)}\right\rceil.$$

We apply max-pooling after the convolutional layers, depending on the total max-pooling factor pend. Max-pooling layers with a stride and kernel size of 2, or a power of 2, are inserted between convolutional layers so that the sequence length is reduced exponentially along the model. pend represents the approximate total reduction in sequence length before the output of the convolutional part is fed into the final GAP layer or into the RNN layers, depending on the model type.

For CNN-GAP, outputs from multiple convolutional blocks can be pooled, concatenated, and fed into a fully connected network. Out of Ncb outputs, the last min(1, (1 − rs) · Ncb) of them are fed into global average pooling layers, where rs is the skip ratio hyperparameter.
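Collecting the layout rules above into code, the function below re-implements the published formulas for the number of blocks, layers per block, filter counts, kernel sizes and dilation factors. It is written for illustration from the equations in this section and is not the authors' implementation.

```python
# Re-implementation of the published layer-layout formulas (not the authors' code).
import math

def conv_layout(n_cb, n_c, f0, f_end, k0, k_end, d_end, res_block=False):
    """Return per-layer (filters, kernel_size, dilation) for the convolutional part."""
    N_cb = n_c if n_c <= n_cb else n_cb      # number of convolutional blocks
    s_cb = round(n_c / N_cb)                 # layers per block
    N_c = N_cb * s_cb                        # total number of convolutional layers

    layers = []
    for i in range(N_c):
        if res_block:                        # filters change block-wise
            j = (i // s_cb) / (N_cb - 1) if N_cb > 1 else 0.0
        else:                                # filters change layer-wise
            j = i / (N_c - 1) if N_c > 1 else 0.0
        f_i = math.ceil(f0 * (f_end / f0) ** j)

        k_exp = i / (N_c - 1) if N_c > 1 else 0.0
        k_i = math.ceil(k0 * (k_end / k0) ** k_exp)

        # dilation grows exponentially from 1 to d_end within each block
        d_i = math.ceil(d_end ** ((i % s_cb) / (s_cb - 1))) if s_cb > 1 else 1
        layers.append((f_i, k_i, d_i))
    return layers

print(conv_layout(n_cb=3, n_c=6, f0=32, f_end=128, k0=3, k_end=9, d_end=4))
```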

GenomeNet-Architect uses the mlrMBO software38 with a Gaussian process model from the DiceKriging R package39 configured with a Matérn-3/2 kernel40 for optimization. It uses the UCB31 infill criterion, sampling from an exponential distribution as a batch proposal method32. In our experiment, we proposed three different configurations simultaneously in each iteration.
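The search itself is run with mlrMBO in R; purely to illustrate the same idea (a Gaussian-process surrogate with a UCB/LCB-style acquisition function), a rough Python analogue with scikit-optimize might look like the following. The search space, the objective stub, and all values are placeholders, not the authors' configuration.

```python
# Rough Python analogue of the Bayesian-optimization loop (the authors use mlrMBO in R).
# Search space and objective are placeholders; gp_minimize's default surrogate is a
# Gaussian process with a Matern kernel, and acq_func="LCB" plays the role of UCB here.
from skopt import gp_minimize
from skopt.space import Integer, Real

space = [
    Integer(1, 8, name="n_cb"),                               # convolutional blocks
    Integer(1, 24, name="n_c"),                               # ~total convolutional layers
    Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate"),
]

def dummy_train_and_validate(n_cb, n_c, lr):
    # Stand-in for training a model for a fixed time budget; returns a fake accuracy.
    return 0.5 + 0.01 * n_cb + 0.005 * n_c - abs(lr - 1e-3)

def objective(params):
    n_cb, n_c, lr = params
    return -dummy_train_and_validate(n_cb, n_c, lr)           # gp_minimize minimizes

result = gp_minimize(objective, space, acq_func="LCB", n_calls=30, random_state=0)
print(result.x, -result.fun)
```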

For both tasks, we trained the proposed model configurations for a given amount of time and then evaluated them afterwards on the validation set. For each architecture (CNN-GAP and CNN-RNN) and for each sequence length of the viral classification task (150nt and 10,000nt), the best-performing model configuration found within the optimization setting (2h, 6h) was saved and considered for further evaluation. For the pathogenicity detection task, we only evaluated the 2h optimization. For each task and sequence length value, the first t = t1 (2h) optimization evaluated a total of 788 configurations, parallelized on 24 GPUs, and ran for 2.8 days (wall time). For the viral classification task, the warm-started t = t2 (6h) optimization evaluated 408 more configurations and ran for 7.0 days for each sequence length value.

During HPO, the number of samples between model validation evaluations was set dynamically, depending on the time taken for a single model training step. It was chosen so that approximately 20 validation evaluations were performed for each model in the first phase (t = 2h), and approximately 100 validation evaluations were performed in the second phase (t = 6 hours). In the first phase, the highest validation accuracy found during model training was used as the objective value to be optimized. In the second phase, the second-highest validation accuracy found in the last 20 validation evaluations was used as the objective value. This was done to avoid rewarding models with a very noisy training process with performance outliers.

The batch size of each model architecture is chosen to be as large as possible while still fitting into GPU memory. To do this, GenomeNet-Architect performs a binary search to find the largest batch size that still fits on the GPU and subtracts a 10% safety margin to avoid potential training failures.
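A minimal sketch of such a search, assuming a callable that runs a single training step at a given batch size and raises a runtime error when the GPU runs out of memory; the callable and the bounds are placeholders, not the GenomeNet-Architect implementation.

```python
# Binary-search the largest batch size that fits on the GPU, then keep a 10% margin.
# `try_one_training_step` is a placeholder for one forward/backward pass.
def largest_fitting_batch_size(try_one_training_step, low=1, high=4096):
    best = low
    while low <= high:
        mid = (low + high) // 2
        try:
            try_one_training_step(batch_size=mid)
            best = mid           # fits: try something larger
            low = mid + 1
        except RuntimeError:     # e.g. a CUDA out-of-memory error
            high = mid - 1       # does not fit: try something smaller
    return max(1, int(best * 0.9))   # subtract a 10% safety margin
```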

For the viral classification task, the training and validation samples are generated by randomly sampling FASTA genome files and splitting them into disjoint consecutive subsequences from a random starting point. A batch size that is a multiple of 3 (the number of target classes) is used, and each batch contains the same number of samples from each class. Since we work with datasets that have different quantities of data for each class, this effectively oversamples the minor classes compared to the largest class. The validation set performance was evaluated at regular intervals after training on a predetermined number of samples, set to 6,000,000 for the 150 nt models and 600,000 for the 10,000 nt models. The evaluation used a subsample of the validation set equal to 50% of the training samples seen between each validation. During the model training, the typical batch size was 1200 for the 150 nt models, and either 120, 60, or 30 for the 10,000 nt models. Unlike during training and validation, the test set samples were not randomly generated by selecting random FASTA files. Instead, test samples were generated by iterating through all individual files, and using consecutive subsequences starting from the first position. For the pathogenicity detection task, the validation performance was evaluated at regular intervals on the complete set, specifically once after training on 5,000,000 samples. The batch size of 1000 was used for all models, except for GAP-RNN, as it was not possible with the memory of our GPU. For this model, a batch size of 500 was used.

For both tasks, we chose a learning rate schedule that automatically reduced the learning rate by half if the balanced accuracy did not increase for 3 consecutive evaluations on the validation set. We stopped the training when the balanced accuracy did not increase for 10 consecutive evaluations. This typically corresponds to stopping the training after 40–50 evaluations for the 150 nt models and 25–35 evaluations for the 10,000 nt models in the viral classification task, and after 5–15 evaluations for the pathogenicity detection task.
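In Keras terms, this schedule corresponds roughly to pairing ReduceLROnPlateau with EarlyStopping, as sketched below. The monitored metric name assumes a custom balanced-accuracy metric has been registered on the model (Keras does not ship one), and the commented-out fit call is only indicative.

```python
# Keras-style sketch of the schedule described: halve the learning rate after 3
# evaluations without improvement and stop after 10. "val_balanced_accuracy" is
# assumed to be a custom metric added to the model; Keras has no built-in equivalent.
from tensorflow import keras

callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_balanced_accuracy", mode="max", factor=0.5, patience=3),
    keras.callbacks.EarlyStopping(
        monitor="val_balanced_accuracy", mode="max", patience=10),
]
# model.fit(train_data, validation_data=val_data, epochs=100, callbacks=callbacks)
```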

To evaluate the performance of the architectures and HP configurations, the models proposed by GenomeNet-Architect were trained until convergence on the training set; convergence was checked on the validation set. The resulting models were then evaluated on a test set that was not seen during optimization.

For the viral classification task, we downloaded all complete bacterial and viral genomes from GenBank and RefSeq using the genome_updater script (https://github.com/pirovc/genome_updater) on 04-11-2020 with the arguments -d genbank,refseq -g bacteria/viral -c all and -l Complete Genome. To filter out possible contamination consisting of plasmids and bacteriophages, we removed all genomes from the bacteria set with more than one chromosome. Filtering out plasmids, due to their inconsistent and poor annotation in databases, avoids introducing substantial noise in sequence and annotation, since they can be incorrectly included or excluded in genomes. We used the taxonomic metadata to split the viral set into eukaryotic or prokaryotic viruses. Overall, this resulted in three subgroups: bacteria, prokaryotic bacteriophages, and eukaryotic viruses (referred to as non-phage viruses, Table 2). To assess the models' generalization performance, we split the genomes into training, validation, and test subsets. We used the date-of-publishing metadata to split the data by publication time, with the training data consisting mostly of genomes published before 2020, and the validation and test data consisting of more recently published genomes. Thus, when applied to newly sequenced DNA, the classification performance of the models on yet unknown data is estimated. For smaller datasets, using average nucleotide identity (ANI) information generated with tools such as Mashtree41 to perform the splits can alternatively be used to avoid overlap between training and test data.

The training data was used for model fitting, the validation data was used to estimate generalization performance during HPO and to check for convergence during final model training, and the test data was used to compare final model performance and draw conclusions. The test data was not seen by the optimization process. The training, validation and test sets represent approximately 70%, 20%, and 10% of the total data, respectively.

The number of FASTA files and the number of non-overlapping samples in each set of the viral classification task are listed in Table 2. Listed is the number of distinct non-overlapping sequences that could theoretically be extracted from the datasets if they were split into consecutive subsequences. However, whenever the training process reads a file again, e.g. in a different epoch, the starting point of the sampled sequence is randomized, resulting in a much larger number of possible distinct (though overlapping) samples. Because the class distribution of the test set is imbalanced, we report class-balanced measures, i.e. measures calculated for each class individually and then averaged over all classes.
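Class-balanced evaluation of this kind amounts to computing per-class recall and averaging it over classes; a minimal sketch with an invented three-class toy example:

from sklearn.metrics import recall_score, confusion_matrix

def balanced_metrics(y_true, y_pred, labels):
    """Compute per-class recall and average it over classes, so each class
    contributes equally regardless of how many test samples it has."""
    per_class_recall = recall_score(y_true, y_pred, labels=labels, average=None)
    balanced_accuracy = per_class_recall.mean()   # macro-averaged recall
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    return balanced_accuracy, dict(zip(labels, per_class_recall)), cm

# Toy example: 10 test samples with imbalanced class counts.
labels = ["bacteria", "phage", "non_phage_virus"]
y_true = ["bacteria"] * 6 + ["phage"] * 2 + ["non_phage_virus"] * 2
y_pred = ["bacteria"] * 5 + ["phage"] + ["phage"] * 2 + ["non_phage_virus", "bacteria"]
print(balanced_metrics(y_true, y_pred, labels)[0])  # ~0.78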

For the pathogenicity classification task, we downloaded the dataset from https://zenodo.org/records/3678563 [13]. Specifically, the training files used are nonpathogenic_train.fasta.gz and pathogenic_train.fasta.gz; the validation files are pathogenic_val.fasta.gz and nonpathogenic_val.fasta.gz; and the test files are nonpathogenic_test_1.fasta.gz, nonpathogenic_test_2.fasta.gz, pathogenic_test_1.fasta.gz, and pathogenic_test_2.fasta.gz.

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Go here to see the original:
Optimized model architectures for deep learning on genomic data | Communications Biology - Nature.com

Read More..

Agora Introduces Adaptive Video Optimization Technology | TV Tech – TV Technology

SANTA CLARA, Calif. - Agora today unveiled its Adaptive Video Optimization (AVO) technology, which uses machine learning to adjust parameters dynamically at every step from capture to playback to deliver an enhanced live video streaming experience.

AVO, which includes support for the AV1 video codec, uses a series of advanced machine learning algorithms to address common issues, such as unstable network conditions, packet loss and limited bandwidth, and their impact on streaming video like freezes, stutters, dropped connections and grainy images, the company said.

By optimizing video quality in real time based on network conditions, device capabilities and available bandwidth, AVO ensures the highest video quality possible, it said.

"Reliable and high-quality live streaming is essential in todays video-dominated media landscape," stated Tony Zhao, CEO and co-founder of Agora. "Our Adaptive Video Optimization technology enables smooth delivery of every call and livestreamdespite network variability, users location or device. These improvements increase engagement and empower Agoras customers to provide the highest-quality live video user experience.

Machine learning is used to ensure video streaming is optimized from pre-processing, encoding, and transmission to decoding and post-processing, the company said.

AVO supports advanced video codecs like AV1 and VP9 and dynamically switches to the codec most suitable for an exceptional video experience despite device limitations or streaming constraints. By employing advanced techniques and compression methods, the technology adapts, configuring parameters dynamically to ensure crisp visuals, efficient bandwidth use and a consistent, high-quality experience, it said.
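Agora has not published implementation details, so as a purely hypothetical illustration of the kind of decision such a system makes, the sketch below picks a codec and streaming profile from measured network conditions and device capability. All thresholds and values are invented for the example and are not Agora's parameters.

def pick_streaming_profile(bandwidth_kbps, packet_loss, device_supports_av1):
    """Hypothetical codec/bitrate selection driven by measured network
    conditions and device capability. Thresholds are invented for the
    example and do not reflect Agora's actual logic."""
    codec = "AV1" if device_supports_av1 else "VP9"
    if packet_loss > 0.10 or bandwidth_kbps < 500:
        return {"codec": codec, "resolution": "360p", "bitrate_kbps": 400}
    if bandwidth_kbps < 2000:
        return {"codec": codec, "resolution": "720p", "bitrate_kbps": 1500}
    return {"codec": codec, "resolution": "1080p", "bitrate_kbps": 3000}

print(pick_streaming_profile(bandwidth_kbps=1200, packet_loss=0.02,
                             device_supports_av1=True))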

More information is available on the companys website.


View post:
Agora Introduces Adaptive Video Optimization Technology | TV Tech - TV Technology

Read More..

Bridging the Gap: Keys to Embracing AI in 2024 – AiThority

The emergence of ChatGPT in November of 2022 caused a sonic boom across every tech industry in the world. Since then, companies of all sizes have explored use cases for implementing AI into their operations. However, some sectors have been slower with adoption, namely compliance.


In fact, a Moody's report that surveyed compliance and risk leaders and professionals shows that only 9% of firms actively use AI, and 21% aren't considering using it at all.

This isn't shocking news. A recent E&Y report said, "Historically, compliance professionals have treated technological innovation with skepticism." It's easy to understand why an industry rooted in transparency, honesty, and ethics is reluctant to adopt an early-stage technology that is particularly riddled with biases and ethical dilemmas. Plus, the regulatory and legal outlook for AI is still quite murky, making it harder to implement it confidently.

AI is uncharted territory for most companies, and some in compliance might prefer to avoid it while the technology is still new. However, the more it evolves, the more it becomes clear that AI adds critical business value when used responsibly for certain tasks, especially within professional services firms that have struggled to implement digital transformation and continue to rely on manual, outdated processes.

Let's explore how auditors can embrace AI, enabling them to create process efficiencies and free up time to add value to customers.

Several compliance practices can be streamlined with AI. Whether auditors are using compliance software that has recently rolled out AI functionalities or general tools like ChatGPT, there's value in automating certain processes.

For starters, there's surface-level automation like virtual meeting summaries and transcription on demand. This is a simple yet significant first step for firms to dip their toes into AI without fearing biased outcomes. Tools like Fathom AI Notetaker simply process the information and present it in a summarized and organized way. They can do this accurately and quickly, with a built-in human approval layer to eliminate the occasional mistranslation, and the details are searchable to find the exact moment in a call you want to remember.

Dropping the need to feverishly take notes (and the accompanying anxiety of forgetting a follow-up) clears your head to identify opportunities to add value and build stronger relationships with your clients.

Many of the mundane and frustrating aspects of compliance, such as performing vendor reviews, inspecting policies, and completing security questionnaires, are coming into the AI fold to speed up and simplify these processes. These tasks may give security professionals nightmares, but they're not the risks that keep the CISO up at night.

By taking advantage of AI tools, companies are able to get rid of repetitive checklist procedures and focus on the areas of their security program that truly matter.


When it's done right, the time saved from this technology is re-invested in adding value and strengthening other areas of the business. Speed and turnaround time are the obvious benefits, but firms can also drive improvements in quality and general engagement amongst their team. The energy wasted trying to write that perfect paragraph in a client deliverable that may or may not get read at all is suddenly accessible to deploy toward your passions.

Advocating for AI in the industry doesnt mean blindly trusting it.

Firms must carefully vet the tools they use and those of their business partners so they are comfortable with their safety and accuracy. The technology is breathtaking, but it still makes many mistakes, meaning auditors still need to participate in the inputs and outputs.

Moody's report also revealed that 9 out of 10 companies using AI for compliance have seen a positive impact on risk management. In today's digital age, staying competitive is made possible through emerging technologies.

Besides saving time by streamlining paperwork, chatting with a bot to answer compliance questions, and automating document classification, AI tools make it easy to delegate client-facing tasks like simpler form completion. Time and accuracy are of the essence. So, it's evident why clients would prefer a firm that uses and values these new technologies over one with more traditional approaches.

Moreover, the key for firms in adopting AI and other new technologies is education. Employees need to understand the why and how behind each technology, how to use it, and the expected results. Once stakeholders decide AI is right for them, they can roll out dedicated training sessions for employees to get acquainted with the specific tools they need and maximize their daily operations.

Human input is critical to quality AI output, so this step cant be overlooked.

Compliance software companies are well aware of the high level of transparency their industry requires. As a result, they build trust with their customers by addressing data privacy concerns and being open about the inner workings of their AI tools, avoiding the infamous "black box" aspect of many AI-powered services.

The compliance management platforms that have implemented AI tools have delivered comprehensive information explaining their AI philosophy in tandem with the release of their AI tools. They discuss processes like encryption to ensure sensitive data stays secure, monitoring outcomes to reduce biases, and extensive testing to provide clients with a reliable and secure tool.

The keys to taking the leap into AI include understanding the technology behind each tool, how and where the information comes from, and how it is protected. If a software company isn't transparent about this information, it should be a red flag. Making informed decisions on which tools to implement will set companies up for success: they must know they're not cutting corners with this technology but rather incorporating more efficient, agile, and precise methodologies into their compliance examinations.

Feeling comfortable experimenting with new technologies gives compliance firms a competitive advantage in an industry that is slow to change. AI tools aren't the immediate end-all-be-all for the profession, but it wouldn't be surprising to see use cases expand in the near future. Given the current trajectory, this technology may assist in report writing and evidence review in the same way a calculator assists in preparing a tax return.

From what we've seen with AI in the past year, it's safe to say it is an ever-changing technology that will probably never be fully developed; it will take on new forms as technology advances. Those who choose to stay on the sidelines, waiting for innovations to develop into their final form, might never find the right time to engage with AI tools. The truth is, the time is now.

Read this article:
Bridging the Gap: Keys to Embracing AI in 2024 - AiThority

Read More..