
AI and Big Data Governance: Challenges and Top Benefits – AiThority

Artificial intelligence (AI) and big data share a symbiotic relationship. One of the primary challenges in implementing big data governance is ensuring data awareness and understanding across the organization. Data governance initiatives often fail when stakeholders are not aware of the importance of data governance or lack the knowledge to implement it effectively. Automation plays a pivotal role in modern data governance, significantly enhancing cost-effectiveness. By automating processes, organizations can streamline governance efforts and allocate resources more efficiently. Machine learning further advances these efforts by accelerating metadata collection and improving categorization accuracy, highlighting its critical role in optimizing data governance practices.


AI relies heavily on vast datasets for enhancing model training, enabling more precise predictions. Concurrently, big data leverages AI tools to bolster its analytical capabilities. AI's effectiveness hinges on data availability. Without sufficient data, AI functions merely as a theoretical concept. This interplay becomes increasingly crucial as data accessibility expands, facilitating machine learning and iterative processes that drive improved accuracy and operational efficiency autonomously.

A recent report from Drexel University's LeBow Center for Business Analytics reveals that 77% of data and analytics professionals prioritize data-driven decision-making within their data programs. However, less than half of survey respondents express high or very high levels of trust in their data. This lack of confidence is largely attributed to poor data quality, which not only obstructs the success of data programs but also undermines data integration efforts and compromises data integrity, presenting significant challenges for big data governance.

Big data governance refers to the management framework implemented within an organization to ensure the proper handling, integrity, usability, and security of big data sets. This framework includes policies, procedures, and standards that govern data access, data quality, compliance with data-related regulations, and data protection.

The AIGA AI Governance and Auditing project, led by the University of Turku in Finland, collaborates with academic and industry partners, including companies such as Google, to offer guidance on the responsible development and deployment of AI.

The AIGA AI Governance Framework serves as a practical manual for organizations aiming to implement ethical and responsible AI systems.

At the core of the discussion lies the overlap of ethical AI, data policies, and existing data governance. Beyond mere technical familiarity with algorithms, ethical AI encompasses efforts to embed fairness, transparency, and accountability in the design and deployment of AI. Data governance, in turn, provides the scaffolding for responsibly managing, protecting, and using data assets.

Fairness ensures that AI systems do not reflect bias or result in discrimination. Transparency assures stakeholders about how AI algorithms operate. Accountability holds developers and operators liable for the decisions and results of AI systems. Together, these principles lower the ethical risks of AI applications and increase trust from users and society.

The framework addresses the interplay between ethical concerns in AI and data governance by identifying a series of challenges and opportunities. It emphasizes the need to establish a culture of conscientiousness and responsibility around ethical AI. Companies that engage with these ethical matters will be able to maximize AI-driven transformation while safeguarding individual rights and aspirations.


Organizations are increasingly using AI to strengthen their data analytics capabilities and maintain a competitive advantage. When AI is combined with data governance rules, companies can maximize ROI by identifying ineffective practices and boosting successful strategies.

AI's use varies across organizational departments, reflecting the data sources of their respective industries; sales departments, for example, analyze consumer trends. Predictive analytics has proven especially popular in this role, increasing operational efficiency.

Manufacturing departments base their AI investments on analytics tailored to industry needs, targeting improvements in production processes. Root causes of quality issues are identified, equipping management to make informed decisions and, through predictive maintenance strategies, prevent those issues from recurring.

AI is important for anomaly detection and cybersecurity. Machine learning enables AI to detect and respond to threats in a timely manner, especially those involving data breaches. This proactive approach ensures data integrity and compliance through continuous monitoring and rapid response capabilities.

AI greatly advances the democratization of data governance by providing secure data access that resists interception by cybercriminals employing sophisticated tactics such as man-in-the-middle attacks or ransomware. By automating privacy, compliance, and security measures, AI acts as a 24/7 safeguard against cyber threats, thus enhancing data protection.

Moreover, AI enables automated process discovery, analyzing behavioral data and building digital records with ease, effectively streamlining data management processes.

AI systems heavily rely on extensive datasets for learning and operational tasks. However, ensuring data accuracy and fairness poses challenges when dealing with incomplete, outdated, inconsistent, or biased data. Organizations must establish stringent data standards, validate sources rigorously, and continually monitor and audit data quality throughout the AI lifecycle to mitigate these issues effectively.

The processing of sensitive data by AI systems, such as health records or financial information, exposes organizations to significant risks like breaches and misuse. Securing data through robust encryption, access controls, and anonymization techniques is crucial. Moreover, compliance with data protection regulations and ethical principles is essential to safeguard against unauthorized access and ensure data privacy.

Integrating diverse data types (structured, unstructured, streaming) from various sources (internal, external, cloud-based) presents significant challenges in data consistency and compatibility. Adopting standardized data models, schemas, and formats, along with leveraging integration tools and platforms, helps organizations achieve seamless data exchange and interoperability across systems.
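The standardized-schema approach described above can be sketched as a simple validation gate at an integration boundary. The field names, types, and sample batch below are hypothetical, chosen only to illustrate the idea:

```python
# A toy illustration of enforcing a standardized schema at an
# integration boundary; field names and rules are hypothetical.
SCHEMA = {"id": int, "email": str, "signup_year": int}

def conforms(record: dict) -> bool:
    """Accept only records with exactly the expected fields and types."""
    return (record.keys() == SCHEMA.keys()
            and all(isinstance(record[k], t) for k, t in SCHEMA.items()))

batch = [
    {"id": 1, "email": "a@example.com", "signup_year": 2021},
    {"id": "2", "email": "b@example.com", "signup_year": 2022},  # id is a string
    {"id": 3, "email": "c@example.com"},                         # missing field
]
valid = [r for r in batch if conforms(r)]
print(len(valid))
```

Rejected records would, in practice, be routed to a quarantine queue for repair rather than silently dropped.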

Effective utilization of AI requires a workforce equipped with strong data literacy skills and a supportive data-driven organizational culture. Enhancing data literacy involves enabling employees to understand, analyze, and effectively utilize data. Fostering a data-driven culture encourages informed decision-making and innovation. Organizations should invest in comprehensive data education, training, and collaborative initiatives to build trust and maximize the adoption of AI technologies among stakeholders.

Improve Data Quality

Data quality is fundamental to any effective data strategy. AI enhances data quality by automating error detection and correction within datasets, thereby reducing inconsistencies and inaccuracies. AI algorithms also standardize data structures, facilitating easier comparison and analysis while uncovering hidden trends and patterns.
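The kind of automated cleanup described here, standardizing formats and then flagging the duplicates that standardization exposes, can be sketched in a few lines. The records and the alias table below are invented for illustration:

```python
# A minimal sketch (with made-up records) of automated data cleaning:
# standardize formats, then detect and drop the duplicates this exposes.
records = [
    {"email": " Ana@Example.com", "country": "us"},
    {"email": "bob@example.com", "country": "US "},
    {"email": "Bob@Example.com ", "country": "US"},
    {"email": "carol@EXAMPLE.com", "country": "United States"},
]

ALIASES = {"UNITED STATES": "US"}  # hypothetical canonical mapping

def standardize(rec):
    """Trim whitespace, normalize casing, and map country aliases."""
    email = rec["email"].strip().lower()
    country = rec["country"].strip().upper()
    return {"email": email, "country": ALIASES.get(country, country)}

seen, clean, dupes = set(), [], 0
for rec in map(standardize, records):
    key = tuple(sorted(rec.items()))
    if key in seen:
        dupes += 1  # count the inconsistency instead of silently keeping it
    else:
        seen.add(key)
        clean.append(rec)

print(dupes, len(clean))
```

Note that the two "bob" rows only become recognizable as duplicates after standardization, which is precisely why the two steps are combined.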

Automate Data Compliance

In today's landscape of escalating cyber threats, maintaining data compliance is crucial. AI plays a pivotal role in ensuring continuous compliance by monitoring data flows in real-time. It detects anomalies, unauthorized access attempts, and potential violations of data regulations, triggering alerts and recommendations for corrective actions. Additionally, AI automates the classification and labeling of sensitive data and generates compliance reports, thereby reducing administrative burdens.
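The classification-and-labeling step can be illustrated with a toy rule-based detector. The patterns and sample record below are illustrative only; production systems layer many more signals (checksums, context, ML classifiers) on top of pattern matching:

```python
import re

# Hypothetical tagging rules for sensitive fields; real deployments
# would use far richer detectors than these simple patterns.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(record: str) -> set[str]:
    """Return the set of sensitive-data labels found in a free-text record."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(record)}

labels = classify("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789")
print(sorted(labels))
```

Labels produced this way can then drive downstream handling, such as masking tagged fields or routing records to a restricted store.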

Strengthen Data Security

AI enhances data security by proactively analyzing data access patterns to detect suspicious activities such as intrusions or unauthorized access attempts. Leveraging machine-learning-based malware detection systems, AI identifies and mitigates both known and unknown threats by analyzing behavioral patterns. Moreover, AI automates security patch management and monitors adherence to security policies, bolstering overall cybersecurity measures.
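The access-pattern analysis described above can be reduced to a toy statistical baseline: flag any reading far outside the historical norm. The counts below are invented, and a real system would use richer ML models rather than a single z-score threshold:

```python
from statistics import mean, stdev

# Hypothetical hourly access counts for one service account; the last
# reading simulates a burst typical of credential abuse.
counts = [12, 9, 11, 10, 13, 8, 12, 11, 10, 95]

# Baseline statistics from the historical window (all but the newest reading).
mu, sigma = mean(counts[:-1]), stdev(counts[:-1])

def is_suspicious(x: float, threshold: float = 3.0) -> bool:
    """Flag readings more than `threshold` standard deviations above baseline."""
    return (x - mu) / sigma > threshold

flags = [is_suspicious(c) for c in counts]
print(flags.count(True))
```

Only the final burst exceeds the threshold; ordinary fluctuation in the baseline window does not trigger an alert.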

Democratize Data

Central to effective data strategy is fostering a data-driven culture within organizations. AI facilitates this by simplifying data access and analysis. AI-powered search engines swiftly extract relevant information from extensive datasets, enabling employees to efficiently retrieve necessary data. Furthermore, AI automates data aggregation and presentation through interactive dashboards, enhancing data accessibility and facilitating seamless information sharing across teams.

The volume of data is growing exponentially, projected to reach 180 zettabytes by 2025. To navigate this vast landscape effectively, artificial intelligence (AI) plays a pivotal role in extracting actionable insights.

AI utilizes machine learning and deep learning tools that leverage big data to learn and evolve over time. These algorithms iteratively refine models to optimize solutions and generate valuable insights for informed decision-making.
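The iterative refinement described here can be illustrated with a minimal, self-contained sketch: a one-parameter gradient-descent loop fit to toy data. The dataset and learning rate are invented for illustration:

```python
# Minimal sketch of iterative model refinement: one-parameter
# gradient descent fitting y = w * x to toy data (true w = 2).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.05
for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # each iteration refines the model slightly

print(round(w, 3))
```

Each pass nudges the parameter toward the value that best explains the data, which is the "iterative refinement" the article describes at much larger scale.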

Traditionally, data analysis provided a snapshot of current conditions: "This is what has occurred." With AI and machine learning, predictive capabilities extend to forecasting future scenarios and prescribing optimal strategies for sustainable outcomes.

Moreover, AI has revolutionized data analysis by automating complex tasks that were once labor-intensive. Previously, analysts relied on SQL queries and manual statistical modeling, which could take weeks to yield insights. Today, AI-driven analytics processes data swiftly, reducing analysis times to just one or two days.

This section illustrates how AI enhances data insights by harnessing advanced technologies to derive deeper, faster, and more accurate business intelligence from expansive datasets.

The future of data governance is intricately intertwined with the evolution of Artificial Intelligence (AI). In response to escalating data complexity and volume, AI is poised to become an indispensable tool, elevating data governance to a more sophisticated, agile, and proactive level.

AI's capacity for learning, adaptation, and prediction will revolutionize compliance, security processes, and policy adjustments in real time, introducing a forward-thinking approach to governance. By leveraging predictive capabilities, organizations can anticipate challenges and capitalize on opportunities, ensuring that data remains a secure and reliable asset for informed decision-making.

Looking ahead, the integration of AI into data governance transcends mere enhancement; it is essential for unlocking the full potential of data while upholding compliance and strategic integrity. This transformation towards AI-enhanced governance represents a crucial adaptation to a digital landscape where data plays a pivotal role in driving business operations forward.



GovCon Expert Joe Paiva Finds AI at a Crossroads: Amplifying Biases or Empowering All – GovCon Wire

By Joe Paiva, Chief Operating Officer at Fearless

The digital divide of the 1990s exacerbated long-standing inequities in our society.

As broadband internet and personal computers proliferated, they reached affluent neighborhoods and households first. This left economically disadvantaged communities, disproportionately communities of color, on the wrong side of the divide. The impacts on education, job skills development and economic opportunity further widened existing disparities.

Today, we face an even more dangerous new digital divide, one fueled by the rapid rise of artificial intelligence and machine learning.

Algorithms are increasingly used to make high-stakes decisions that impact people's livelihoods and quality of life, from college admissions and job candidate screening to home mortgage approvals and the allocation of government services.

The fundamental problem is this: most of these AI and ML models are trained on historical datasets that reflect centuries of systemic bias and discrimination. There's redlining in housing, legacy admissions in higher education and underinvestment in schools and businesses in minority neighborhoods. These and countless other inequities are baked into the data from which AI-based applications learn how to make predictions.

For example, a 2017 study by researchers at the University of Virginia and the University of Washington found that AI algorithms used by major online platforms to target job ads were significantly less likely to show opportunities in engineering, computing and other high-paying fields to women compared to men. The algorithms had learned to optimize ad placement based on past engagement data, perpetuating long-standing gender disparities in STEM careers. Research articles have found similar issues in AI used for hiring, where models trained on historical employment records can entrench racial and gender biases in selection processes. Equally insidious examples, though more difficult to document, permeate other domains.

Without intentional effort to identify and mitigate these biases, AI will continue to amplify past inequities and erect new barriers to opportunity for underrepresented groups.

And because of the digital divide that began in the 90s, underserved communities and people of color have faced significant barriers to developing digital skills, pursuing education and job opportunities, and participating in the digital economy. As a result, these groups are less likely to be involved in developing and implementing the AI tools and practices that threaten to widen the divide further.

A 2020 study by the National Skills Coalition, Applying a racial equity lens to digital literacy, reveals stark disparities in digital skill attainment between white workers and their Black, Latino and Asian American and Pacific Islander peers.

The study found that while 41 percent of white workers have advanced digital skills, only 13 percent of Black workers, 17 percent of Latino workers and 28 percent of AAPI workers have attained this level. These gaps in advanced digital skills are the product of structural inequities deeply rooted in our society, from uneven access to quality education and training to biased hiring practices and lack of diversity in the tech sector.

As a result, rather than being the great equalizer we once hoped for, AI threatens to systematize and amplify the biases of the past, affecting access to opportunity for generations to come.


There are promising examples of AI being deployed thoughtfully to identify bias and the social factors underlying disparities. The Veterans Administration is utilizing AI in many ways. The Social Determinants of Health Extractor, or SDOH, is an AI-powered tool that analyzes clinical notes in electronic health records to identify key social factors, such as a patient's economic status, education, housing situation and social support networks, that may influence their health outcomes.

By using natural language processing and deep learning techniques, the system can automatically surface SDOH information. The extracted SDOH variables can then be used by researchers to examine how these social factors contribute to health disparities and impact clinical outcomes for veterans from minority or underserved communities.

Understanding these relationships is a critical step toward designing more targeted interventions and equitable care delivery practices that address the root social drivers of health.

In the criminal justice system, AI is being leveraged to address racial disparities in sentencing. Researchers at the Stanford Computational Policy Lab developed a machine learning model to identify bias in risk assessment tools used by judges to inform sentencing decisions.

By analyzing data from over 100,000 criminal cases in Broward County, Florida, the team found that Black defendants were nearly twice as likely as white defendants to be misclassified as high risk of recidivism.

Armed with this insight, policymakers and judges can take steps to mitigate the bias, such as adjusting risk thresholds or supplementing the algorithms with additional contextual information.

While AI alone cannot solve systemic inequities, these examples demonstrate its potential as a tool for diagnosing and beginning to address bias in high-stakes government decisions and actions.

To disrupt the cycle and close the digital divide, diversity and inclusion must become a strategic imperative. Not only within government agencies, but also the contracting community that serves them and the technology sector as a whole. Only by building teams as diverse as the public we serve can we design AI and digital services that work for all.

Failing to act will allow the new digital divide to calcify, further concentrating wealth and power in the hands of the few at the expense of the many.

The call to action is clear. As leaders in government and the technology ecosystem, we must act.

The path ahead is clear. By embracing diversity, equity and inclusion as core values in the development and deployment of AI, we have the power to create a future where technology truly serves all.

When we harness the talents and perspectives of our nation's full diversity, we can create AI systems that are more innovative, more equitable and more impactful. Realizing this vision will require sustained commitment and collaboration across government, industry, academia and communities. It will demand courageous leadership, honest introspection and a willingness to break from the status quo. But the potential rewards, a society where AI narrows opportunity gaps instead of widening them and where technology is a source of empowerment rather than exclusion, are too great to ignore.

So let us seize this moment, and work together to build a future where the power of AI lifts up the full diversity of the American people. In this future, the digital divide gives way to digital dignity and innovation drives not just prosperity, but justice. This is the future we must build, and the future we will build, together.


Advancing transparency, fairness in AI to boost health equity – TechTarget

The use of race in clinical algorithms has increasingly come under fire as healthcare organizations have begun to pursue health equity initiatives and push back against the practice of race-based medicine. While the recognition that race is a social construct, rather than a biological one, is not new, the move toward race-conscious medicine has gained traction in recent years.

At the same time, evidence pointing to the real and potential harms of race-based algorithms has created significant concerns about how these tools -- many of which are currently in use -- will widen existing health disparities and perpetuate harm.

These worries are exacerbated by the rising use of AI and machine learning (ML), as these technologies are often black box models that remain inscrutable to human users despite the potential for bias.

At the recent "Together to Catalyze Change for Racial Equity in Clinical Algorithms" event -- hosted by the Doris Duke Foundation, the Council of Medical Specialty Societies and the National Academy of Medicine -- healthcare leaders came together to discuss how the industry can embrace the shift away from the use of race as a biological construct in clinical algorithms.

A selection of featured panelists gathered to detail race's use in clinical algorithms to date, with an eye toward addressing its harmful use and advancing health equity. To that end, multiple speakers presented ongoing work to mitigate potential harms from AI and ML tools by prioritizing transparency and fairness.

The pursuit of health equity has led many to question the transparency and fairness strategies needed to ensure that clinical algorithms reduce, rather than promote, disparities.

Rapid advances in AI technology have made these considerations critical across the industry, with public and private stakeholders rushing to catch up, as evidenced by guiding principles for ML-enabled devices recently issued by the FDA's Center for Devices and Radiological Health (CDRH).

"The FDA put out a call for transparency for machine learning-enabled medical devices," explained Tina Hernandez-Boussard, MD, Ph.D., MPH, associate dean of research and associate professor of biomedical informatics at Stanford University. "They're looking at the who, the why, the what, the where and the how for machine learning practices, so when we talk about transparency: transparency for who? Why do we need it to be transparent? What needs to be transparent?"

Much of this work, she indicated, is centered on how transparency can be embedded into clinical algorithms via automated methods to produce information on a tool's training data, the metrics used to validate it and the population to which it is designed to be applied.

However, Hernandez-Boussard emphasized that integrating transparency in this way requires the development of rigorous standards.

"We need standards and tools for transparency because when I say transparency, my definition might be completely different from somebody else's," she noted. "Industry has a different definition of transparency than other entities. So, we need to think about standards and tools for systematically generating this [transparency] information."

She also underscored the need for distributed accountability in order to drive responsible data and model use. Under such a framework, model developers would be responsible for reporting information about the tools they are building, while model implementers would be responsible for determining how to set up continuous monitoring for their clinical AI.

Further, Hernandez-Boussard indicated that assessing the role of patient outcomes in this accountability framework is essential. She also pointed out a need to require participation in the framework to systematically ensure that algorithms are transparent.

She explained that the recently issued final rule under Section 1557 of the Affordable Care Act (ACA) -- which "prohibits discrimination on the basis of race, color, national origin, age, disability, or sex (including pregnancy, sexual orientation, gender identity, and sex characteristics), in covered health programs or activities," per the U.S. Department of Health and Human Services (HHS) -- is key to these efforts, as its mandates require covered entities to identify and mitigate discrimination related to the use of AI or clinical decision support algorithms.

Hernandez-Boussard highlighted that the ongoing efforts to promote transparency and tackle discrimination are crucial for not only creating accountability but also spreading it across multiple stakeholders rather than just model developers.

"Broad scoping rules on discrimination set the stage for where we're going and how we think about these clinical decision support tools, how we need to evaluate them and how we think about deploying them across populations," she stated. "We need to be promoting health."

Sharing the responsibility of AI transparency also creates an environment in which industry stakeholders can collaborate, instead of compete, to advance the use of equitable clinical tools.

Currently, experts pursuing transparency and accountability efforts for clinical algorithms are challenged by a lack of consensus around what responsible AI looks like in healthcare.

The Coalition for Health AI (CHAI) is working to develop this consensus by bringing together roughly 2,500 clinical and nonclinical member organizations from across the industry, according to its president and CEO, Brian Anderson, MD.

"There's a lot of good work being done behind closed doors in individual organizations [to develop] responsible AI best practices and processes, but not at a consensus level across organizations," Anderson stated. "In a consequential space like healthcare, where people's lives are on the line that's a real problem."

He explained that the health systems that founded CHAI saw this as an opportunity to bring collaborators from every corner of the industry to develop a definition for responsible healthcare AI. However, willingness to collaborate on a responsible AI framework does not mean that defining concepts like fairness, bias and transparency are straightforward.

While there is agreement on metrics like area under the curve, for example, it's not easy to come to full consensus. This is because the stakes are high, Anderson said. Not only do providers, payers and model developers need to come together, he said, but patients' perspectives must also be part of the conversation, adding another layer of complexity.

As part of these consensus-building efforts, CHAI is homing in on a technical framework to help inform developers about what responsible AI looks like throughout the development, deployment, maintenance and monitoring steps of a model's life cycle.

Alongside these technical standards, the coalition is pursuing a national network of AI assurance labs. These labs would serve to bridge the gap between the development of clinical AI evaluation metrics and the application of such metrics to assess current and future tools, Anderson noted. The results of these evaluations would then be added to a national registry that anyone could use to gauge the fairness and performance of a clinical AI tool.

"I am a Native American, I live in the Boston area, I go to [Massachusetts General Hospital (MGH)], and I want to be able to go to this registry and look at the models that are deployed at MGH and see how they perform on Native Americans," Anderson said. "I want to be empowered to have a conversation with my provider and say, 'Maybe you shouldn't use that model because look at its AUC score on people like me.' That's what we're trying to enable with this kind of transparency."

He indicated that being able to engage with such a national registry could help overcome the lack of education for both healthcare stakeholders and the public around the industry's use of AI.

When asked how a patient could take advantage of CHAI's registry without being aware of what specific models were being applied to them by their healthcare provider, Anderson explained that part of CHAI's work to build its assurance labs involves requiring that each model's entry in the national registry lists the health systems at which the tool is deployed.

CHAI recently sought public feedback on a draft framework presenting assurance standards to evaluate AI tools across the lifecycle in the wake of Congressional criticism regarding the coalition's relationship with the FDA.

These efforts might be further hampered by additional challenges posed by efforts to measure AI fairness.

Despite the rapid development of AI and work to build consensus around fairness in algorithms, Shyam Visweswaran, MD, Ph.D., vice chair of clinical informatics and director of the Center for Clinical Artificial Intelligence at the University of Pittsburgh, warned that it might be premature to focus on AI tools -- many of which won't be ready for clinical use for some time -- rather than existing statistical algorithms used for clinical decision-making.

He asserted that performance metrics must be developed for both current statistical algorithms and future AI tools, particularly those that utilize race variables in their calculations. Visweswaran stated that efforts like CHAI's move the needle, but the struggle to define algorithmic fairness goes beyond agreeing on a one-size-fits-all approach.

He emphasized that the main difference between a statistical algorithm and an AI tool is the number of data points and variables used to develop each. AI and ML tools typically require vast amounts of data, whereas statistical models can be developed using a significantly smaller pool of information.

Further, derivation and performance data for statistical algorithms are typically published, and the tools themselves are in extensive clinical use. With AI, information about the model might be largely unavailable.

"There are over 500 FDA-certified health AI algorithms out there, and I don't think I can get my hands on any one of them in terms of their performance metrics," Visweswaran said. "So, as a core tenet of transparency, we have to be able to fix that going forward. [AI tools] are currently not in extensive clinical use, but they will be as we go forward, and the efforts to evaluate bias in them are just beginning."

He further underscored that currently, it's unclear how many existing healthcare algorithms are racially biased, aside from the handful that have been researched recently. To address this, Visweswaran and colleagues developed an online database to catalog information about currently deployed race-based algorithms.

He noted that when looking at which of these tools might be biased, starting with those that already incorporate race or ethnicity as an input variable is a good first step, as these explicitly produce different outputs for different racial categories.

However, he indicated that continually updating the online database and evaluating algorithms that don't explicitly incorporate race is necessary to reduce disparities and improve outcomes.

"There are devices which are biased in terms of racial categories, [like] pulse oximetry it was noticed that for darker-skinned people, the tool was not well-calibrated," Visweswaran stated. "By the time patients came to the hospital, they were actually pretty sick."

The same is true for devices like infrared thermometers and electroencephalograms (EEGs), which he noted do not work as well on patients with thick hair. This causes a disproportionate number of poor-quality readings for Black patients, which often leads to diagnostic issues down the line.

Further, poor-quality EEG readings cannot be used to develop algorithms, meaning that marginalized patient data might not be incorporated into a clinical decision support tool.

"Almost all the EEG data sets out there for research purposes don't have African-American data in them because it gets thrown out," Visweswaran explained, leading to the potential development of biased models.

This problem is exacerbated by the fact that the version history of an algorithm typically isn't available for researchers looking to assess a model's performance and fairness over time.

"When a new version of an algorithm comes, the old version disappears, [but] we need to track all these versions as we go along," he asserted. "We need a story for each of these algorithms -- which is freely available -- so that when researchers or developers go in, they don't have to start from scratch: they can go and look at versions of the algorithm, see the problems with a previous version and why the new version was developed. Sometimes, it's not quite clear that the newer version is actually better than the older version."

Alongside the need to track information about clinical algorithms, Visweswaran stated that stakeholders need to be mindful of how they conceptualize fairness. As part of the ongoing work to enhance its algorithm-tracking database, his team is developing "fairness profiles," which use fairness metrics -- like differences in sensitivity between groups -- found in the literature to assess each tool.

However, these are group fairness metrics, which evaluate measures across groups or populations.

"These are statistical measures, and they're in common use, but they don't guarantee that for a particular person, the algorithm actually is doing a good job," Visweswaran said. "All it guarantees is that for that particular group, on average, it does okay."
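As a rough, self-contained sketch of the group fairness metric described above -- the difference in sensitivity between groups -- using toy labels and predictions (illustrative only, not data from the article):

```python
def sensitivity(y_true, y_pred):
    # true-positive rate: P(prediction = 1 | truth = 1)
    preds_on_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_on_positives) / len(preds_on_positives)  # assumes >= 1 positive

def sensitivity_by_group(y_true, y_pred, group):
    # group fairness metric: sensitivity computed separately per group
    out = {}
    for g in sorted(set(group)):
        yt = [t for t, gg in zip(y_true, group) if gg == g]
        yp = [p for p, gg in zip(y_pred, group) if gg == g]
        out[g] = sensitivity(yt, yp)
    return out

# toy labels and predictions for two groups
y_true = [1, 1, 1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = sensitivity_by_group(y_true, y_pred, group)
gap = abs(rates["A"] - rates["B"])   # difference in sensitivity between groups
```

Note that this is exactly a statistical average per group: a small gap says the model does "okay" for each group on average, not that any particular patient was scored well.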

This knowledge has contributed to growing conversations around the role of individual fairness, which posits that similar individuals should receive similar treatments, and in turn, experience similar outcomes.

"The problem is that defining similarity between individuals is tricky, and right now, we don't have any standard measures which are available to measure individual fairness. The key challenge is to derive the appropriate similarity metric by which to decide who is the peer group that we are going to use for this particular person," Visweswaran noted.
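One common formalization of individual fairness is a Lipschitz-style condition: the difference between two patients' scores should not exceed some multiple of the distance between them. A minimal sketch under that assumption follows; the Euclidean distance and the constant L are illustrative modelling choices, which is exactly the unsolved part Visweswaran points to:

```python
def euclidean(a, b):
    # one possible similarity metric; choosing the right one is the hard part
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fairness_violations(features, scores, distance=euclidean, L=1.0):
    # flag pairs where |score_i - score_j| > L * distance(x_i, x_j),
    # i.e. similar individuals who received dissimilar scores
    violations = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if abs(scores[i] - scores[j]) > L * distance(features[i], features[j]):
                violations.append((i, j))
    return violations

# toy data: patients 0 and 1 are near-identical but scored very differently
features = [(0.0, 0.0), (0.0, 0.1), (3.0, 3.0)]
scores = [0.2, 0.9, 0.5]
flagged = fairness_violations(features, scores)
```

Every choice of `distance` and `L` implicitly defines who counts as a "peer" of a given patient, so two reasonable metrics can flag entirely different pairs.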

A push to settle on one fairness approach that everyone can agree on risks overlooking the possibility that no single set of fairness metrics will work well for every patient.

"Having this grand idea of getting to a fairer situation is great, but some of the devil is going to be in the details, and there might be math out there which says you can't do some of these things that you actually want to do," Visweswaran cautioned.

Shania Kennedy has been covering news related to health IT and analytics since 2022.

See more here:
Advancing transparency, fairness in AI to boost health equity - TechTarget

Ancoris named Leader for Data Analytics and Machine Learning in ISG Provider Lens Google Cloud Partner … – PR Newswire UK

LONDON, July 9, 2024 /PRNewswire/ -- Ancoris, a UK-based Google Cloud services provider, has been named a Leader for Data Analytics and Machine Learning in the ISG Provider Lens Google Cloud Partner Ecosystem 2024 report. The report, released by ISG today, provides a comprehensive independent overview of the Google partner landscape, alongside analysis of the strengths and capabilities of each individual provider.

"The Data Analytics and Machine Learning quadrant represents the most dynamic, innovative and competitive part of the Google Ecosystem", where only 9 of the 43 companies assessed for this quadrant were awarded Leader positions. "Ancoris is making significant investments in GenAI skills and assets" as it pivots to help Enterprise and Public Sector organisations embed AI across the organisation to help solve their biggest challenges.

"We are so thrilled to have been recognised as a Leader in the Data Analytics and Machine Learning quadrant this year," says Andre Azevedo, Ancoris CEO. "When we made the decision to invest in Generative AI and launch a dedicated practice in June 2023, we knew we had the capability to be successful, but how the market and our customers would respond was unknown. Ancoris has had a robust and mature data practice for a long time, but the introduction of Generative AI last year opened up customers' imaginations to how AI could transform their organisation," Azevedo continues. "As a result, the last 12 months have been transformative for our business - we've started valuable relationships with many new Enterprise and Public Sector customers, expanded existing relationships, and are doing more innovative work than ever before."

ISG recognises Ancoris' innovative AI-Native approach to helping customers overcome business challenges and its rapid prototyping capability as an accelerator in helping customers see tangible benefit from AI quickly. "The reality of Generative AI adoption is there's still a lot of hype, limited public references, and very little benchmark data to help customers build financial cases for Generative AI investment," says Matt Frank, Ancoris Chief AI and Innovation Officer.

"It's why we focus on meeting customers wherever they are on their AI adoption journey," Frank continues. "It's important that we get something tangible and actionable in front of the customer as soon as possible, whether that's developing and prioritising use cases through our Actionable AI Framework consulting services, or taking use cases from prototype to production with our Simple methodology and rapid prototyping. We find demonstrating value quickly and aligning with strategic outcomes accelerates adoption and sets customers on a more meaningful AI adoption path."

Ancoris is also recognised as a Product Challenger across the three other quadrants it responded to: Implementation and Integration, Managed Services, and Workspace. "To be the only Google-dedicated Partner to feature across these 4 key quadrants is a testament to our focus and our methodology for solving customer problems," Azevedo comments. "It's the combination of our capabilities across data and AI, software engineering, and cloud infrastructure that make us different. Data & AI capabilities are a key skill, but to bring AI-native solutions to life you need to build the user experience and integrate it across systems and processes. Our ability to embed AI into existing or new applications, business systems, or processes - and do it all on secure and robust Google infrastructure - is a true differentiator against the other pure data players in the ecosystem."

To download the full report, visit https://www.ancoris.com/isg-leader-data-2024

About Ancoris
Ancoris is a leading Google Cloud Services Provider, headquartered in the UK, on a mission to become the most innovative Google Cloud partner in the ecosystem. Ancoris leverages its strong problem-solving skills and continuous improvement approach to help customers become AI Native and stay ahead of their competition. Ancoris has extensive experience in Google Cloud technologies helping enterprises integrate AI-native solutions into their business through expertise in Data & AI, Application and Infrastructure Modernisation, Workspace, and Maps. Ancoris was recognized as a Leader for Data, Analytics, and Machine Learning in the ISG Provider Lens for Google Cloud Partner Ecosystem in 2024, and a Rising Star in 2022 and 2023. Ancoris was awarded Google Cloud's 2024 EMEA Public Sector Partner of the Year award. Ancoris employs the best in the business and was named in the Top 10 Sunday Times Best Places to Work 2023, and a Top Place to Work in 2024.

Contact
Holly Norman
hnorman@ancoris.com

Image - https://mma.prnewswire.com/media/2457182/Ancoris.jpg Logo - https://mma.prnewswire.com/media/2099949/4802382/Ancoris_Logo.jpg


Synergetics.ai: UnifyGPT Announces New Brand Name to Reflect Synergistic AI Mission – AiThority

UnifyGPT Inc., a leading innovator in artificial intelligence solutions, is thrilled to announce its rebranding to Synergetics.ai. This strategic name change reflects the company's dedication to providing AI technologies that harmoniously integrate with the unique needs and goals of enterprise customers.

The name Synergetics.ai was chosen to encapsulate the company's mission: to utilize AI in ways that are synergistically aligned with the operational, safety, and privacy requirements of users, enterprises, and organizations. This rebranding underscores the company's commitment to developing responsible AI solutions that prioritize the safety and privacy concerns of all stakeholders.

"Our new brand name, Synergetics.ai, perfectly aligns with our corporate mission to create AI solutions that not only enhance but also integrate seamlessly with our clients' operations," said Raghu Bala, Founder and CEO of Synergetics.ai. "We believe in the power of AI to drive innovation and efficiency, but we are equally committed to ensuring that these technologies are used responsibly and ethically."

The rebranding includes a new logo, website, and overall visual identity that reflects the company's forward-thinking approach and its core values of integrity, responsibility, and innovation.

Synergetics stands at the forefront of agentic AI platforms, transforming enterprise operations across diverse verticals including financial services, healthcare, e-commerce, and more. By managing both AI bots and autonomous agents, Synergetics seamlessly integrates advanced machine learning and robust automation capabilities to optimize processes, enhance decision-making, and foster innovation. Its intuitive interface and scalable solutions ensure easy adoption and significant impact across industries. Trusted by leading enterprises, Synergetics redefines efficiency and productivity, setting new standards for the future of AI in business.



Machine learning-based decision support model for selecting intra-arterial therapies for unresectable hepatocellular … – Nature.com



60 Growing AI Companies & Startups (July 2024) – Exploding Topics

Artificial intelligence has the potential to transform industries ranging from medicine to sales to software development. And this potential is finally being realized.

The AI industry is poised to grow to an estimated $305.9 billion in 2024. Today, AI has become essential for an increasing number of businesses as remote work and reliance on technology are the new daily norm.

Read below for our picks for some of the most promising AI startups with a broad range of use cases across different industries.

5-year search growth: 469%

Search growth status: Exploding

Year founded: 2009

Location: Cologne, Germany

Funding: $400M (Series Unknown)

What they do: DeepL is a neural machine translation platform that uses advanced algorithms to translate text from one language to another with exceptional accuracy and fluency. With support for over 30 languages, DeepL's technology combines neural network models, deep learning techniques, and natural language processing (NLP) to provide high-quality translations for a wide range of content types, including websites, documents, and emails.

With its intuitive interface and powerful API, DeepL enables businesses and individuals to communicate and collaborate across different languages and cultures with ease. In May 2024, DeepL raised $300 million at a $2 billion valuation.

5-year search growth: 2,250%

Search growth status: Exploding

Year founded: 2016

Location: New York City, New York

Funding: $17.9M (Series B)

What they do: Frame is building one of the leading customer success platforms by providing leading artificial intelligence software around a robust solutions framework aimed at solving numerous customer challenges.

By building The Voice of the Customer engine, teams can use Frame to detect themes among customers, identify patterns for retention or acquisition of customers, and turn qualitative feedback into quantitative data for leadership.

5-year search growth: 2,233%

Search growth status: Regular

Year founded: 2018

Location: Copenhagen, Denmark

Funding: $18.6M (Series A)

What they do: Uizard is an AI-powered platform that helps users create professional-looking designs for websites and mobile apps with minimal coding or design experience. Uizard's proprietary technology uses machine learning algorithms to translate sketches and wireframes into functional code and designs, reducing the time and effort required to create a prototype.

Users can also create responsive and customizable designs that can be shared and tested with stakeholders.

5-year search growth: 476%

Search growth status: Exploding

Year founded: 2016

Location: Mountain View, CA

Funding: $305M (Series C)

What they do: Moveworks is an AI platform that helps employers create a better workplace. By using natural language understanding (NLU), conversational AI, and probabilistic machine learning, the platform is able to resolve employees' issues end-to-end. Examples of AI in action include troubleshooting common questions, such as getting access to software, and routing document approvals to the correct person.

5-year search growth: 614%

Search growth status: Exploding

Year founded: 2013

Location: San Francisco, California

Funding: $4B (Series Unknown)

What they do: Databricks is a data and AI company offering a unified analytics platform for integrating AI and machine learning. Customers can use the platform to analyze large-scale data, generate real-time analytics, build and deploy ML applications, and more. In September 2023, Databricks raised $500 million in Series I funding at a valuation of $43 billion.

5-year search growth: 4,400%

Search growth status: Regular

Year founded: 2017

Location: London, England

Funding: $156.6M (Series C)

What they do: Synthesia is an AI-powered platform that enables businesses to create and personalize video content at scale. The platform can generate realistic and engaging videos with human-like avatars, making it ideal for a variety of applications, from e-learning and marketing to news reporting and virtual events.

Synthesia's customization options, including language support, voiceover selection, and scene creation, enable users to create and deploy video content quickly and efficiently.

5-year search growth: 99x+

Search growth status: Exploding

Year founded: 2021

Location: San Jose, California

Funding: $93M (Series B)

What they do: Codeium is an AI-powered coding assistant that provides users with real-time code suggestions, code search, IDE integration, and more. The tool includes a Codeium Chat feature, which acts as a chatbot to write new code and answer coding-related questions. In January 2024, the startup raised $65 million in Series B funding at a $500m valuation.

5-year search growth: 1,138%

Search growth status: Exploding

Year founded:

Location:

Funding:

What they do: Cohere develops advanced AI and large language models for businesses to understand and generate human-like text. The company's three primary models are Command (generative AI text generation), Embed (text embeddings for analysis), and Rerank (improving search relevance). Cohere recently raised $450 million at a $5 billion valuation.

5-year search growth: 4,600%

Search growth status: Regular

Year founded: 2019

Location: San Diego, California

Funding: $4.5M (Seed)

What they do: Soundful is an AI-powered platform that enables businesses to create and customize high-quality soundtracks for their digital content, such as videos, podcasts, and advertisements. Soundful's technology uses deep learning algorithms to analyze the emotions, tone, and context of the content and generate soundtracks that complement and enhance the viewer's experience.

The platform allows users to easily adjust the mood, tempo, and style of the soundtracks to align with their brand identity and message.

5-year search growth: 9,400%

Search growth status: Exploding

Year founded: 2011

Location: San Francisco, California

Funding: $450M (Undisclosed)

What they do: Dialpad is a customer intelligence platform that offers various AI tools for customer engagement, sales intelligence, and team collaboration. The platform provides a central communications hub for businesses that includes video meetings, contact syncing, call recording, automated speech recognition, conversational chatbots, instant call summaries, and more. Dialpad has over 30,000 customers, including big brands like WeWork and Xero.

5-year search growth: 4,200%

Search growth status: Regular

Year founded: 2021

Location: San Francisco, California

Funding: $2.6M (Seed)

What they do: Writesonic enables users to generate content (such as blog posts, ad copy, and product descriptions) in a fraction of the time it would take to do so manually. Using natural language processing (NLP) and machine learning (ML) algorithms, Writesonic's technology can analyze a user's prompts and generate human-like text that matches the desired tone, style, and structure.

The platform also allows users to fine-tune and edit the generated text to meet their specific needs, while also providing suggestions for improvements.

5-year search growth: 689%

Search growth status: Exploding

Year founded: 2020

Location: San Francisco, California

Funding: $42M (Series A)

What they do: Atomic AI operates an AI-driven drug discovery platform that leverages machine learning and structural biology to facilitate the efficient development of new molecules and medicines.

By providing users with new strategies to target RNA structure and treat previously undruggable diseases, Atomic AI is working to revolutionize the field of medicine in a unique way.

5-year search growth: 6,200%

Search growth status: Regular

Year founded: 2017

Location: Berkeley, California


Predictive modeling of lower extreme deep vein thrombosis following radical gastrectomy for gastric cancer: based on … – Nature.com

The significance of this study in addressing the risk of lower extremity DVT in postoperative GC patients is underscored by the substantial morbidity and potential mortality associated with VTE in this patient population [13]. Notably, GC surgery is linked to a heightened risk of postoperative VTE, including DVT and PE [14,15]. Compared with the air-wave pressure therapy instrument, rivaroxaban has a better preventive effect on lower extremity DVT after GC operations [16]. A systematic review and meta-analysis involving 111,936 patients indicated that the 1-month incidence of VTE after GC surgery was 1.8%, and specifically for DVT, 1.2% [11]. Among 666 Korean patients after gastrectomy, the overall incidence of VTE was 2.1% [17]. These figures highlight the critical importance of focusing on DVT in GC patients postsurgery. Moreover, this study aims to fill a significant gap in the current research: while the incidence of VTE in GC patients is known, less attention has been paid to predicting lower extremity DVT specifically in the postoperative phase of GC. A retrospective cohort study revealed that age, preoperative blood glucose level, postoperative anemia, and tumor malignancy were independent risk factors for postgastrectomy VTE in GC patients [18]. However, compared with previous studies, our study focused on predictive modeling using a comprehensive set of clinical indicators, including age and calcium ion levels, and provided a more detailed risk assessment tool; this underscores the need for predictive models that can accurately identify patients at higher risk for DVT following GC surgery, enabling targeted prophylactic strategies.

The predictive model developed in this study demonstrated high accuracy, as reflected by the area under the curve (AUC) values in both the training and validation sets. This finding indicates the strong predictive capability of the NRS-2002, which is essential in clinical settings for risk stratification and management of DVT in postoperative GC patients. The importance of such predictive models is highlighted by the varying risk factors identified across different studies, including age and tumor-related factors. Age has been consistently identified as a significant risk factor for postoperative VTE [18], and the role of calcium in coagulation processes further substantiates its relevance as a predictive marker in the developed model. These factors provide critical insights into patient-specific risk profiles and can guide clinicians in the prophylaxis and management of DVT after GC surgery.

According to our univariate analysis, age emerged as a significant independent variable influencing DVT occurrence following gastrectomy in GC patients. Furthermore, multivariate analysis highlighted age as a contributing factor to the development of postoperative DVT in these patients. Age is also a risk factor for VTE in patients with GC [19]. Here, we found that calcium ions were a significant clinical factor in our model. The role of calcium ions in the coagulation process and thrombosis is complex and multifaceted; one key aspect is their involvement in platelet activation. Platelets play a critical role in maintaining hemostasis and vessel integrity under normal conditions and in thrombosis under pathological conditions. The activation of platelets strongly depends on an increase in the intracellular calcium (Ca2+) concentration. This increase results from the release of Ca2+ by the dense tubular system and the entry of Ca2+ from the extracellular space [20]. In the context of fibrinogen clotting, calcium ions are also known to be necessary for the normal polymerization of fibrin monomers [21]. In the activation of coagulation factor XIII, an important player in the final stages of the coagulation cascade, calcium also plays a crucial role [22]. Therefore, calcium ions are integral to the coagulation process and influence various stages, from platelet activation to stabilization of the fibrin clot.

LDL plays a significant role in the pathogenesis of atherothrombotic processes. It can modify the antithrombotic properties of the vascular endothelium and influence vessel contractility, partly by reducing the availability of endothelial nitric oxide and activating proinflammatory signaling pathways. These modified intravascular LDLs promote the formation of foam cells from smooth muscle cells and macrophages, increasing the vulnerability of atherosclerotic plaques and enhancing the thrombogenicity of both plaques and blood [23].

Several research findings indicate that a reduction in hemoglobin levels may serve as an indicator of increased VTE risk and poorer prognosis in cancer patients [5]. One study demonstrated that low hemoglobin levels at baseline correlated with an increased likelihood of symptomatic VTE, symptomatic DVT, and nonfatal PE [24]. Another investigated the influence of anemia on the risk of bleeding in patients receiving anticoagulant therapy for VTE [25]. These findings underscore the importance of considering anemia as a factor in the management of VTE, particularly in populations at high risk, such as acutely ill patients and those with cancer.

Unlike previous studies, we collected a rich and comprehensive set of clinical indicators, totaling 47 baseline, preoperative, surgical, and pathological variables; to our knowledge, this is the largest number of clinical variables included in such a study to date. Most importantly, we applied a broad set of machine learning algorithms. Machine learning methods have been successfully applied in various fields of medicine and have shown great potential in predictive data analytics [26]. Compared with conventional prediction models such as logistic regression, machine learning models generally perform at least as well, and some methods exhibit exceptional performance [27]. One study developed machine learning models (LightGBM) to predict VTE diagnosis and 1-year risk using electronic health record data from diverse populations; these tools outperformed existing risk assessment tools, showing robust performance across various VTE types and patient demographics [28]. In our study, we used several machine learning algorithms, including logistic regression, decision trees, random forests, SVM, XGBoost, and LightGBM. By applying these methods, we can anticipate a more robust and precise model for predicting lower extremity DVT risk in postoperative GC patients, potentially leading to better patient outcomes.
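A comparison of this kind can be sketched with scikit-learn. The study's 47 real clinical variables are not public, so synthetic data stands in for them here, and gradient boosting stands in for the third-party XGBoost/LightGBM libraries; this is an illustrative sketch of the workflow, not the authors' actual pipeline.

```python
# Illustrative sketch: comparing classifiers on a binary DVT-risk task.
# Synthetic data stands in for the study's 47 clinical variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for 47 clinical features (age, calcium, LDL, hemoglobin, ...),
# with class imbalance mimicking a rare postoperative outcome.
X, y = make_classification(n_samples=1000, n_features=47, n_informative=10,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(probability=True, random_state=0),
    # Stand-in for XGBoost/LightGBM, which share the boosted-trees design.
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.3f}")
```

In practice each model would also be tuned by cross-validation and assessed with calibration and decision-curve analysis before any clinical use.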

In a real-world setting, the model could be integrated into clinical decision-making processes, perhaps through electronic health record systems. By inputting patient-specific data, health care providers could receive immediate risk assessments, guiding them in choosing the most appropriate prophylactic measures. This approach aligns with the growing trend of personalized medicine, where treatment and preventive strategies are tailored to individual patient characteristics and risk profiles.

Despite its contributions, one potential limitation of this study is its retrospective nature, which may introduce biases such as selection bias or information bias. The data used in the study might also have limitations in terms of their scope or the accuracy of the recorded information. Another limitation is the generalizability of the findings: the study's results are based on a specific patient population and may not be directly applicable to other populations or settings. Additionally, this study developed a population-specific predictive model, yet the selected predictors were not unique to any specific population, as they appear applicable to patients undergoing gastrointestinal, liver, and pancreatic surgeries. This raises the question of whether a postoperative lower limb thrombosis prediction model specific to patients undergoing radical gastrectomy is necessary.

Future research should focus on validating the predictive model in diverse patient populations and clinical settings to enhance its generalizability. Future studies could also explore the integration of the model into clinical workflows and its impact on patient outcomes in a real-world setting. Further research is also needed to understand the biological mechanisms underlying the identified risk factors for DVT in GC patients; this could lead to more targeted therapeutic interventions. Additionally, incorporating new types of data, such as genetic or molecular marker data, could improve the model's predictive accuracy.

In summary, the development of a predictive model for lower extremity DVT in postoperative GC patients addresses a vital clinical need. The model's accuracy and ability to identify significant predictive factors make it a valuable tool for enhancing postoperative care and patient outcomes in patients with GC.

See the rest here:
Predictive modeling of lower extreme deep vein thrombosis following radical gastrectomy for gastric cancer: based on ... - Nature.com


AI Is Cracking a Hard Problem: Giving Computers a Sense of Smell – The Good Men Project

Over 100 years ago, Alexander Graham Bell asked the readers of National Geographic to do something bold and fresh: to found a new science. He pointed out that sciences based on the measurements of sound and light already existed. But there was no science of odor. Bell asked his readers to measure a smell.

Today, smartphones in most people's pockets provide impressive built-in capabilities based on the sciences of sound and light: voice assistants, facial recognition and photo enhancement. The science of odor does not offer anything comparable. But that situation is changing, as advances in machine olfaction, also called digitized smell, are finally answering Bell's call to action.

Research on machine olfaction faces a formidable challenge due to the complexity of the human sense of smell. Whereas human vision mainly relies on receptor cells in the retina (rods and three types of cones), smell is experienced through about 400 types of receptor cells in the nose.

Machine olfaction starts with sensors that detect and identify molecules in the air. These sensors serve the same purpose as the receptors in your nose.

But to be useful to people, machine olfaction needs to go a step further. The system needs to know what a certain molecule or a set of molecules smells like to a human. For that, machine olfaction needs machine learning.

Machine learning, and particularly a kind of machine learning called deep learning, is at the core of remarkable advances such as voice assistants and facial recognition apps.

Machine learning is also key to digitizing smells because it can learn to map the molecular structure of an odor-causing compound to textual odor descriptors. The machine learning model learns the words humans tend to use (for example, "sweet" and "dessert") to describe what they experience when they encounter specific odor-causing compounds, such as vanillin.

However, machine learning needs large datasets. The web has an unimaginably huge amount of audio, image and video content that can be used to train artificial intelligence systems that recognize sounds and pictures. But machine olfaction has long faced a data shortage problem, partly because most people cannot verbally describe smells as effortlessly and recognizably as they can describe sights and sounds. Without access to web-scale datasets, researchers weren't able to train really powerful machine learning models.

However, things started to change in 2015 when researchers launched the DREAM Olfaction Prediction Challenge. The competition released data collected by Andreas Keller and Leslie Vosshall, biologists who study olfaction, and invited teams from around the world to submit their machine learning models. The models had to predict odor labels like sweet, flower or fruit for odor-causing compounds based on their molecular structure.

The top performing models were published in a paper in the journal Science in 2017. A classic machine learning technique called random forest, which combines the output of multiple decision tree flow charts, turned out to be the winner.
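The DREAM-style task can be sketched as multi-label classification: predict which descriptors a human panel applies to a molecule, given numerical features of its structure. The sketch below uses random bit-vectors as a stand-in for real chemical descriptors (the actual challenge used computed physicochemical features), with a scikit-learn random forest, which handles multi-label targets natively.

```python
# Sketch of odor-descriptor prediction with a random forest.
# Random bit-vectors stand in for real molecular descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_molecules, n_bits = 500, 128
X = rng.integers(0, 2, size=(n_molecules, n_bits))  # stand-in fingerprints

# Multi-label targets: does a panel call the smell sweet / flower / fruit?
# Here the labels are synthesized from a hidden linear rule for illustration.
labels = ["sweet", "flower", "fruit"]
hidden_w = rng.normal(size=(n_bits, len(labels)))
scores = X @ hidden_w
Y = (scores > np.median(scores, axis=0)).astype(int)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# A forest of decision trees, each voting on every descriptor label.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_tr, Y_tr)
accuracy = (forest.predict(X_te) == Y_te).mean()
print(f"mean per-label accuracy: {accuracy:.2f}")
```

The averaging over many trees is what made the random forest robust enough to win: individual trees overfit the small odor datasets, but their combined vote generalizes better.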

I am a machine learning researcher with a longstanding interest in applying machine learning to chemistry and psychiatry. The DREAM challenge piqued my interest. I also felt a personal connection to olfaction. My family traces its roots to the small town of Kannauj in northern India, which is India's perfume capital. Moreover, my father is a chemist who spent most of his career analyzing geological samples. Machine olfaction thus offered an irresistible opportunity at the intersection of perfumery, culture, chemistry and machine learning.

Progress in machine olfaction started picking up steam after the DREAM challenge concluded. During the COVID-19 pandemic, many cases of smell blindness, or anosmia, were reported. The sense of smell, which usually takes a back seat, rose in public consciousness. Additionally, a research project, the Pyrfume Project, made more and larger datasets publicly available.

By 2019, the largest datasets had grown from less than 500 molecules in the DREAM challenge to about 5,000 molecules. A Google Research team led by Alexander Wiltschko was finally able to bring the deep learning revolution to machine olfaction. Their model, based on a type of deep learning called graph neural networks, established state-of-the-art results in machine olfaction. Wiltschko is now the founder and CEO of Osmo, whose mission is giving computers a sense of smell.

Recently, Wiltschko and his team used a graph neural network to create a principal odor map, where perceptually similar odors are placed closer to each other than dissimilar ones. This was not easy: Small changes in molecular structure can lead to large changes in olfactory perception. Conversely, two molecules with very different molecular structures can nonetheless smell almost the same.
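The idea of a principal odor map can be illustrated in miniature: once a model (a graph neural network in the real system) has embedded molecules as coordinates, perceptual similarity reduces to distance. The coordinates and molecule set below are invented purely for illustration; the real map is high-dimensional and learned from data.

```python
# Toy "odor map": perceptual neighbors via distance in an embedding space.
# All coordinates are hypothetical, chosen only to illustrate the idea.
import numpy as np

embedding = {
    "vanillin":       np.array([0.90, 0.10]),
    "ethyl_vanillin": np.array([0.85, 0.15]),  # smells very close to vanillin
    "limonene":       np.array([-0.70, 0.60]), # citrus: far from vanilla notes
}

def nearest(name):
    """Return the perceptually closest other molecule in the map."""
    others = {k: v for k, v in embedding.items() if k != name}
    return min(others, key=lambda k: np.linalg.norm(embedding[name] - others[k]))

print(nearest("vanillin"))
```

The hard part, as the article notes, is learning an embedding where this distance tracks perception despite structurally similar molecules sometimes smelling very different, and vice versa.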

Such progress in cracking the code of smell is not only intellectually exciting but also has highly promising applications, including personalized perfumes and fragrances, better insect repellents, novel chemical sensors, early detection of disease, and more realistic augmented reality experiences. The future of machine olfaction looks bright. It also promises to smell good.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More:
AI Is Cracking a Hard Problem Giving Computers a Sense of Smell - The Good Men Project


iCIMS Wins AI Breakthrough Award for "Best Overall AI Solution" – PR Newswire

Prestigious international award program recognizes iCIMS Talent Cloud AI as a trusted, powerful technology to simplify and accelerate hiring while driving quantifiable business outcomes

HOLMDEL, N.J., July 9, 2024 /PRNewswire/ -- iCIMS, a leading provider of talent acquisition (TA) technology, today announced that iCIMS Talent Cloud AI was selected as the "Best Overall AI Solution" in the seventh annual AI Breakthrough Awards, a program run by AI Breakthrough, a prominent market intelligence organization that recognizes the top companies, technologies and products in the global artificial intelligence (AI) market.

iCIMS Talent Cloud AI empowers organizations to simplify recruiting and dynamically engage with talent through job matching and search experiences. The award-winning technology enables TA teams to provide better and more personalized candidate experiences at scale, find best-fit candidates, hire faster and accelerate employee growth. iCIMS customers using its AI-powered solutions fill open roles twice as fast as recruiting teams not using iCIMS Talent Cloud AI.

"iCIMS Talent Cloud AI gives customers a competitive hiring edge to build and scale winning teams, smarter and faster."

Native to the iCIMS platform, its AI is purpose-built and embedded across the entire experience, with no integration required. iCIMS' AI has been trained on billions of data points across hundreds of millions of candidate profiles and activity from thousands of organizations that receive more than 200M applications and make more than 5.5M hires annually.

The company has a longstanding journey of innovation with AI, accelerated by its acquisition of Opening.io in 2020. Earlier this year, iCIMS advanced its program with the launch of its GenAI-powered recruiting assistant to help teams hire smarter and with greater efficiency. Most recently, iCIMS announced its next-generation CRM technology, iCIMS Candidate Experience Management (CXM), to help teams find and nurture talent that converts to quality hires through a combination of advanced marketing automation, engagement scoring and artificial intelligence.

iCIMS is committed to helping organizations hire and scale their teams with reliable, responsible AI, leveraging best practices, third-party audits and global regulations to help foster ethical and responsible recruiting. Its award-winning AI is grounded in six core principles: human-led, technically robust and safe, inclusive and fair, private and secure, transparent, and accountable.

"CHROs are feeling the pressure to implement AI into business processes, yet it's one of the top priorities keeping them up at night, according to our new research," said Andreea Wade, VP of AI at iCIMS. "There's no doubt that AI provides a massive swath of opportunities, but it's so important to get right. It requires working with the right tech vendors, training and upskilling employees and level-setting on expectations. iCIMS is driving that technological innovation in TA forward, without exacerbating risk for our customers, their candidates and our own employees."

The mission of the AI Breakthrough Awards is to honor excellence and recognize the innovation, hard work and success in a range of AI and machine learning related categories, including Generative AI, Computer Vision, AIOps, Deep Learning, Robotics, Natural Language Processing, industry specific AI applications and many more. This year's program attracted more than 5,000 nominations from over 20 different countries throughout the world.

"HR and business leaders are always looking for new ways to improve the experience and create more efficiency and iCIMS does just that across the talent journey," said Steve Johansson, managing director, AI Breakthrough. "iCIMS Talent Cloud AI gives customers a competitive hiring edge to build and scale winning teams, smarter and faster, with reduced complexity and cost. After reviewing thousands of submissions across categories, we are proud to announce iCIMS as the 2024 winner of our 'Best Overall AI Solution' in our prestigious award program."

iCIMS will reveal the latest product innovations in its summer product release later this month. Request a demo today to see why leading employers like Microsoft, Target and Ford Motors use iCIMS to hire great teams. UK-based leaders and recruiters can see iCIMS in action at RecFest on 11 July in Knebworth Park.

About iCIMS, Inc.
iCIMS is a leading provider of talent acquisition technology that enables organizations everywhere to build winning workforces. For over 20 years, iCIMS has been at the forefront of talent acquisition transformation. iCIMS empowers thousands of organizations worldwide with the right tools to meet their evolving needs across the talent journey and drive business success. Its AI-powered hiring platform is designed to improve efficiency, cut recruiting costs and build exceptional experiences for candidates and recruiters. For more information, visit http://www.icims.com.

Contact: Carlee Capawana, Director of Corporate Communications, iCIMS, [emailprotected], 908-947-6572

SOURCE iCIMS, Inc.

Read more:
iCIMS Wins AI Breakthrough Award for "Best Overall AI Solution" - PR Newswire
