Category Archives: Artificial Intelligence
Artificial intelligence in liver cancer new tools for research and patient management – Nature.com
Originally posted here:
Artificial intelligence in liver cancer new tools for research and patient management - Nature.com
AI-Powered Apps Streamline Team Collaboration – PYMNTS.com
Artificial intelligence (AI) chatbots are already conversing with you and are now here to enhance teamwork.
Snap, a new app by Swit Technologies, is among a wave of collaboration tools that use generative AI to streamline project management, communication and workflows. Experts say such software can include intelligent meeting schedulers, real-time document collaboration, virtual assistants, and adaptive workflow management systems.
"AI can be great for speeding up or automating certain tasks and elements of collaboration that can be tedious or prone to error," Darrin Murriner, the CEO of Cloverleaf.me, told PYMNTS. In the collaboration process, this could include collaborating on documents, writing content, communicating and compiling information.
A PYMNTS report from last year suggests that GenAI technologies like OpenAI's ChatGPT could significantly enhance productivity. While they may also disrupt employment landscapes, the chief operations officer at Axios HQ, Jordan Zaslav, expressed optimism about AI's role in fostering collaboration. He predicted the designation "AI-powered" tools might soon become as commonplace as "cloud-based" technologies are today, inspiring a new era of productivity.
Snap is a project management system, task manager, and message board rolled into one designed to provide a range of features that extend beyond simple conversation facilitation. The chatbot aims to support collaborative project work by offering functionalities such as converting conversations into tasks, generating checklists, offering contextual responses and summarizing tasks.
Snap is not alone in the realm of AI-powered collaboration tools. Zoom, the well-known video conferencing platform, has recently introduced Zoom Workplace, an AI-driven solution aimed at boosting productivity and fostering teamwork within its user-friendly interface. The AI Companion updates feature a range of new tools, most notably Ask AI Companion, a digital assistant that helps users streamline their workday within Zoom Workplace. Other improvements include an AI Companion for Zoom Phone and enhanced capabilities for Team Chat and Whiteboard.
"AI note-taking applications such as Otter.ai and Fireflies not only transcribe meeting discussions in real time but also automatically distribute these notes to all participants after the meeting," Kevin Loux of Charlotte Works told PYMNTS. This feature ensures that everyone involved has access to the same information, fostering better communication and collaboration among team members.
"AI tools are definitely a booster for collaboration, especially with global and remote teams," Harpaul Sambhi, CEO of the AI company Magical, told PYMNTS. By incorporating AI tools into their workflow, teams can increase productivity, improve efficiency and streamline communication. By curating a shared library of top productivity tricks, from frequently used messages to common workflow automation, teams work more efficiently together.
Magical uses AI to automate repetitive tasks such as messaging, Sambhi said.
"With Magical, we can start to understand the common workflows of all of our users and suggest recommendations for automating those tasks," he explained. "AI will help us understand those patterns. Similarly, if you think of a large organization with many employees and lots of coordination/collaboration, we can start to narrow in on the repetitive tasks of, let's say, a team or department, and start to automate the tasks between employees."
"As AI evolves and as people get more comfortable with its application, the uses of AI in collaboration will evolve as well," Murriner predicted.
"There will likely be a move from more routine tasks to higher-order problem-solving and solutions, as well as improving our ability to build relationships and make connections," he added. "These can be useful in a multitude of ways, including improving sales performance, recommending new opportunities for collaboration, or identifying who to connect with to improve outcomes."
See the original post here:
AI-Powered Apps Streamline Team Collaboration - PYMNTS.com
Palantir Stock vs. Microsoft Stock: Which Is the Best Artificial Intelligence (AI) Stock to Buy? – The Motley Fool
Palantir might be a smaller company, but that doesn't automatically make Microsoft the better investment.
Fool.com contributor Parkev Tatevosian compares Palantir Technologies (PLTR -2.60%) to Microsoft (MSFT -0.66%) to determine the better stock to buy.
*Stock prices used were the afternoon prices of April 14, 2024. The video was published on April 16, 2024.
Parkev Tatevosian, CFA has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Microsoft and Palantir Technologies. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. Parkev Tatevosian is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through his link, he will earn some extra money that supports his channel. His opinions remain his own and are unaffected by The Motley Fool.
Read more from the original source:
Palantir Stock vs. Microsoft Stock: Which Is the Best Artificial Intelligence (AI) Stock to Buy? - The Motley Fool
These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today – Yahoo Finance
It's been a wild day for cryptocurrency investors, with a number of top tokens seeing outsize volatility in today's session. For AI cryptos, these moves have been even more exaggerated.
As of 2:15 p.m. ET on Monday, The Graph (CRYPTO: GRT), Fetch.ai (CRYPTO: FET), and SingularityNET (CRYPTO:AGIX) are still up meaningfully, surging 5.6%, 2%, and 1.8%, respectively, over the past 24 hours. However, many of these tokens have continued to decline in afternoon trading alongside other risk assets, as Middle East tensions rise.
For AI cryptos, geopolitical concerns shouldn't matter to the same degree as with other assets that are more sensitive to capital flows. That said, capital flows do matter regardless of which niche a given project is pursuing, and selling pressure remains strong today.
Fetch.ai and SingularityNET are two projects uniquely focused on AI that have a shared catalyst that investors are clearly pricing in. Fetch.ai is collaborating with SingularityNET and Ocean Protocol to create what they're calling the "Superintelligence Alliance."
As part of this alliance, some talks around a potential token merger have taken place, with investors now pricing these tokens in high correlation to each other.
That certainly makes sense, given the AI focus of both projects, and their collaborative ties to work together on solving much bigger problems than they likely could on their own. One thing that certainly stands out to me about crypto assets is the relative lack of willingness for projects to merge. If these projects do tie the knot at some point, it will be interesting to see how the market values a token combination.
The demand for blockchain-based AI solutions appears to be strong, and a combination of these two relatively small-cap projects could improve their chances of success in creating meaningful utility for end users.
The Graph's core model as an oracle network, allowing off-blockchain data to be ported on-chain, has seen impressive demand build over time. A number of recent collaborations and partnerships have driven an impressive amount of momentum in this token over the past week. The fact that this momentum has continued is a very positive development for long-term investors, and suggests this AI-related play could have more room to run.
Today's price action certainly implies a dip could be on the horizon, or at least a mellowing out of some rather strong momentum in these tokens in recent days. No rally lasts forever, and a breather can turn out to be a good thing. This year, these three AI-related cryptos have been among the best performers, and I wouldn't be surprised to see that narrative carried through to the end of the year.
For growth investors seeking some crypto exposure (and in particular, projects with AI-related tailwinds), these are three tokens that I think are worth adding to the watch list to potentially buy on dips. Each project has unique catalysts that could drive value for investors and users over time. That's what this space is supposed to be about, which is what makes assessing these cryptos so compelling.
Before you buy stock in Fetch, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and Fetch wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.
See the 10 stocks
*Stock Advisor returns as of April 15, 2024
Chris MacDonald has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Fetch and The Graph. The Motley Fool has a disclosure policy.
These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today was originally published by The Motley Fool
Read this article:
These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today - Yahoo Finance
AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career – The New York Times
Pulling all-nighters to assemble PowerPoint presentations. Punching numbers into Excel spreadsheets. Finessing the language on esoteric financial documents that may never be read by another soul.
Such grunt work has long been a rite of passage in investment banking, an industry at the top of the corporate pyramid that lures thousands of young people every year with the promise of prestige and pay.
Until now. Generative artificial intelligence, the technology upending many industries with its ability to produce and crunch new data, has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.
The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business, who put in endless hours to learn the building blocks of corporate finance, including the intricacies of mergers, public offerings and bond deals. Now, A.I. can do much of that work speedily and with considerably less whining.
"The structure of these jobs has remained largely unchanged at least for a decade," said Julia Dhar, head of BCG's Behavioral Science Lab and a consultant to major banks experimenting with A.I. The inevitable question, as she put it, is "do you need fewer analysts?"
Read more here:
AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career - The New York Times
‘Jailbreaking’ AI services like ChatGPT and Claude 3 Opus is much easier than you think – Livescience.com
Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic's own Claude 3 chatbot.
Dubbed "many-shot jailbreaking," the hack takes advantage of "in-context learning," in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022. The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic's Claude 2 AI chatbot.
People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That's because many-shot jailbreaking bypasses in-built security protocols that govern how an AI responds when, say, asked how to build a bomb.
LLMs like ChatGPT rely on the "context window" to process conversations. This is the amount of information the system can process as part of its input, with a longer context window allowing for more input text. Longer context windows equate to more input text that an AI can learn from mid-conversation, which leads to better responses.
Related: Researchers gave AI an 'inner monologue' and it massively improved its performance
Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023, which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.
The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt in which the fictional assistant answers a series of potentially harmful questions.
Then, in a second text prompt, if you ask a question such as "How do I build a bomb?" the AI assistant will bypass its safety protocols and answer it. This is because it has now started to learn from the input text. This only works if you write a long "script" that includes many "shots" or question-answer combinations.
"In our study, we showed that as the number of included dialogues (the number of "shots") increases beyond a certain point, it becomes more likely that the model will produce a harmful response," the scientists said in the statement. "In our paper, we also report that combining many-shot jailbreaking with other, previously-published jailbreaking techniques makes it even more effective, reducing the length of the prompt that's required for the model to return a harmful response."
The attack only began to work when a prompt included between four and 32 shots, and even then it succeeded less than 10% of the time. From 32 shots onward, the success rate surged higher and higher. The longest jailbreak attempt included 256 shots and had a success rate of nearly 70% for discrimination, 75% for deception, 55% for regulated content and 40% for violent or hateful responses.
The researchers found they could mitigate the attacks by adding an extra step that was activated after a user sent their prompt (that contained the jailbreak attack) and the LLM received it. In this new layer, the system would lean on existing safety training techniques to classify and modify the prompt before the LLM would have a chance to read it and draft a response. During tests, it reduced the hack's success rate from 61% to just 2%.
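To make that mitigation concrete, here is a minimal sketch of such a gating layer. The functions classify_prompt() and generate() are hypothetical stand-ins for a safety classifier and an LLM call; Anthropic's actual classifier and prompt-modification step are not described in detail in the article.

    # Illustrative sketch only: screen a prompt before the LLM reads it.
    def classify_prompt(prompt: str) -> float:
        """Crude estimate of jailbreak risk: many-shot attacks embed an
        unusually large number of faux question-answer "shots"."""
        shots = prompt.count("Assistant:")
        return min(1.0, shots / 32.0)   # saturate around the 32-shot threshold

    def guarded_generate(prompt: str, generate) -> str:
        """Classify (and potentially refuse or rewrite) the prompt before the
        model is allowed to respond."""
        if classify_prompt(prompt) > 0.5:
            return "Request declined by safety filter."
        return generate(prompt)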
The scientists found that many shot jailbreaking worked on Anthropic's own AI services as well as those of its competitors, including the likes of ChatGPT and Google's Gemini. They have alerted other AI companies and researchers to the danger, they said.
Many shot jailbreaking does not currently pose "catastrophic risks," however, because LLMs today are not powerful enough, the scientists concluded. That said, the technique might "cause serious harm" if it isn't mitigated by the time far more powerful models are released in the future.
Read more here:
'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think - Livescience.com
‘Living Nostradamus’ warns that future epidemics could come from AI labs – UNILAD
The psychic known as the 'Living Nostradamus' has made another worrying prediction about 2024, and it's worth listening up considering his track record.
Athos Salomé earned his nickname after foreseeing a number of events that came true in the past few years, ranging from predicting COVID to Elon Musk's takeover of Twitter - and even the death of Queen Elizabeth II.
Salomé has now hinted at the next thing to look out for, and surprise surprise, it's all about AI.
The advancements we're seeing in technology could prove to be bigger than we think, as he told the Daily Star: "While AI can assist in various aspects of human life, Salomé warns of the destructive potential of this technology.
"Future epidemics might not be natural phenomena but rather synthetic creations from AI laboratories.
"This fusion between biology and technology suggests a scenario where artificial viruses could be developed, whether to cure existing diseases or, paradoxically, to create new ailments."
But that's not all, as the Brazilian psychic has also pointed out that we as humans might be more similar to AI than we think, with one thing tying us together.
"Electricity is a medium between humans and AI, but the ends are distinct: one is the maintenance and experience of biological life, and the other is the processing of information and the execution of programmed or manipulated tasks," he revealed.
Salomé told LADbible Group previously that 2024 would see a 'new chapter in human history', with many of his prophecies prior to the year not sounding particularly positive.
He also vaguely stated that artificial intelligence could 'awaken' this year, before expanding on this and explaining how we could expect it to develop.
The 37-year-old has already had a prediction come true this year, as he warned us of the impending 'three days of darkness', previously stating that: "A solar flare would hit Earth, and that a coronal mass ejection (CME) was ahead of us."
A Coronal Mass Ejection, or a CME, is when the Sun ejects a plasma mass and magnetic field outwards.
Salomé previously said: "The piece delves into conspiracy theories surrounding the Three Days of Darkness coinciding with a total solar eclipse on April 8, 2024, raising concerns about solar coronal mass ejections (CMEs)."
The aforementioned CME was sighted just weeks before the solar eclipse, with people sighting it on March 24.
It did not cause three days of darkness, but a spectacular solar flare was observed by space fans.
Read more:
'Living Nostradamus' warns that future epidemics could come from AI labs - UNILAD
AI Industry Reshaping the Future: The Growth of Artificial Intelligence Investments – yTech
The artificial intelligence sector is set to redefine countless industries, from automotive to healthcare, as it flourishes at an impressive rate. OpenAI's ChatGPT sparked a renewed interest in AI technology, propelling a significant shift in business strategies across the tech world. In an attempt to capture a piece of the burgeoning market, valued at an estimated $200 billion, companies have been quickly pivoting towards AI-focused ventures.
Research indicates that the AI domain is expected to swell at a compound annual growth rate of 37% through 2030, with the potential market worth nearing a staggering $2 trillion. This exponential growth has caught the attention of investors, culminating in a 67% rise in the Nasdaq-100 Technology Sector index in 2023 alone.
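As a rough sanity check on those projections (assuming a starting market of roughly $200 billion in 2023, which is a reading of the figures above rather than a number from the cited research), the compound-growth arithmetic works out as follows:

    # Back-of-the-envelope check: ~$200B growing at a 37% CAGR from 2023 to 2030.
    start_value_billion = 200.0
    cagr = 0.37
    years = 2030 - 2023                     # seven years of compounding
    end_value_billion = start_value_billion * (1 + cagr) ** years
    print(f"Projected 2030 market: ${end_value_billion / 1000:.1f} trillion")
    # Prints roughly $1.8 trillion, consistent with the "nearing $2 trillion" figure.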
Investing in AI has proven beneficial, with the potential for monumental gains remaining robust for the foreseeable future. Nvidia, one of the giants in AI chip production, achieved a resounding 90% market share in AI GPUs in 2023. Their forward-thinking approach has seen their stocks surge by 214% over the year, reflecting their dominance in the sector. Nvidia's financial reports display remarkable year-over-year growth, with data center revenue spikes attributed to AI GPU demand.
Additionally, Microsoft's strategic investments in AI have significantly enhanced its product offerings across its vast portfolio, further solidifying its position as a titan in the tech industry. Meanwhile, Advanced Micro Devices (AMD) is rapidly catching up, launching AI products that have already attracted major clients and positioning itself as a key player in the future of AI-integrated PCs.
As AI continues to push technological boundaries, the industry offers an attractive investment opportunity. These advancements serve as a reminder that those willing to invest in the evolving field of artificial intelligence may very well become the millionaires of tomorrow.
Aside from these corporate giants, the AI industry encompasses a vast array of applications, leading to substantial investments in areas such as autonomous vehicles, robotic process automation (RPA), and intelligent virtual assistants. Companies like Tesla are at the forefront of integrating AI into electric vehicles, while healthcare providers are turning to AI for diagnostic accuracy and personalized medicine.
The market forecast for AI is exceedingly optimistic. However, issues related to the industry are interspersed within this technological upturn, such as ethical concerns over data privacy, potential job displacement due to automation, and the need for regulation in AI's decision-making processes. Moreover, the AI talent gap poses a challenge, with the demand for skilled professionals outstripping supply, thus hindering growth to some extent.
Despite these issues, the integration of AI into businesses and consumer products continues to create a thriving market that fosters innovation and development across numerous sectors. For those interested in the dynamic world of AI, insightful resources and news can be found through key industry leaders and market research firms, which may provide a wealth of information on emerging trends and technologies.
Leokadia Gogulska is an emerging figure in the field of environmental technology, known for her groundbreaking work in developing sustainable urban infrastructure solutions. Her research focuses on integrating green technologies in urban planning, aiming to reduce environmental impact while enhancing livability in cities. Gogulska's innovative approaches to renewable energy usage, waste management, and eco-friendly transportation systems have garnered attention for their practicality and effectiveness. Her contributions are increasingly influential in shaping policies and practices towards more sustainable and resilient urban environments.
Read the original:
AI Industry Reshaping the Future: The Growth of Artificial Intelligence Investments - yTech
Dove Refreshes ‘Real Women’ Push in Counterpoint to AI Images – PYMNTS.com
The artificial intelligence (AI) content free-for-all has everyone scrambling to understand what the new normal will look like, and this week, a few brands decided it was time to lay down some ground rules.
From the harmless (fun face-altering apps, for instance, or recordings of beloved cartoon characters singing classic rock favorites) to the truly scary (such as deepfakes enabling cybercrimes), the widespread availability of low- or no-cost AI content-generating technology is transforming our world. Now, some brands are looking to be more deliberate about how they build toward the AI-integrated future.
Take Dove. In 2004, when the brand first launched its Real Beauty campaign, the word "real" was pushing back against the types of women who were featured in most popular media, who did not represent the majority of the population. Now, it's a counterpoint to literally fake women: AI-generated images of people who don't exist.
On Tuesday (April 9), the Unilever-owned personal care products brand announced a commitment to never use AI in place of real humans in its advertising. Alongside this promise, the company also published its Real Beauty Prompt Guidelines issued in a playbook discussing how to create images that are representative of Real Beauty using generative AI.
"At Dove, we seek a future in which women get to decide and declare what real beauty looks like, not algorithms," Dove Chief Marketing Officer Alessandro Manfredi said in a statement. "As we navigate the opportunities and challenges that come with new and emerging technology, we remain committed to protect, celebrate, and champion Real Beauty. Pledging to never use AI in our communications is just one step."
Meanwhile, Adobe is now paying creators for the content its AI leverages. The company is compensating artists and photographers to supply videos and images that will be used to train the company's models, supplementing its existing library of stock media, according to a report Thursday (April 11). Granted, it's not much: Adobe is paying between 6 cents and 16 cents for each photo and an average of $2.62 per minute for videos, according to the report.
The music industry is also confronting the compensation questions AI poses.
"We want to ensure that artists and IP [intellectual property] owners can collaborate with AI innovators to find ethical win-win solutions in this AI era. We are in the disrupt phase of generative AI right now, and we have some navigating to do," Jenn Anderson-Miller, CEO and Co-founder of Audiosocket, told PYMNTS in an interview published Tuesday. "We call disruptions that because, initially, they are disruptive. And we have to level the playing field," she added about AI in the music industry.
Plus, a week ago, Meta shared that it has modified its approach to handling media that has been manipulated with artificial intelligence (AI) and by other means on Facebook, Instagram and Threads. The company will now label a wider range of content as "Made with AI" when it detects industry-standard AI image indicators or when the people uploading content disclose that it was generated with the technology.
View post:
Dove Refreshes 'Real Women' Push in Counterpoint to AI Images - PYMNTS.com
Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical … – Nature.com
P-GAN enables visualization of cellular structure from a single speckled image
The overall goal was to learn a mapping between the single speckled and averaged images (Fig.1b) using a paired training dataset. Inspired by the ability of traditional GAN networks to recover aspects of the cellular structure (Supplementary Fig.4), we sought to further improve upon these networks with P-GAN. In our network architecture (Supplementary Fig.2), the twin and the CNN discriminators were designed to ensure that the generator faithfully recovered both the local structural details of the individual cells as well as the overall global mosaic of the RPE cells. In addition, we incorporated a WFF strategy to the twin discriminator that concatenated features from different layers of the twin CNN with appropriate weights, facilitating effective comparisons and learning of the complex cellular structures and global patterns of the images.
P-GAN was successful in recovering the retinal cellular structure from the speckled images (Fig.1d and Supplementary Movie1). Toggling between the averaged RPE images (obtained by averaging 120 acquired AO-OCT volumes) and the P-GAN recovered images showed similarity in the cellular structure (Supplementary Movie2). Qualitatively, P-GAN showed better cell recovery capability than other competitive deep learning networks (U-Net [41], GAN [25], Pix2Pix [30], CycleGAN [31], medical image translation using GAN (MedGAN) [42], and uncertainty guided progressive GAN (UP-GAN) [43]) (additional details about network architectures and training are shown in Other network architectures section in Supplementary Methods and Supplementary Table4, respectively) with clearer visualization of the dark cell centers and bright cell surroundings of the RPE cells (e.g., magenta arrows in Supplementary Fig.4 and Supplementary Movie3), possibly due to the twin discriminator's similarity assessment. Notably, CycleGAN was able to generate some cells that were perceptually similar to the averaged images, but in certain areas, undesirable artifacts were introduced (e.g., the yellow circle in Supplementary Fig.4).
Quantitative comparison between P-GAN and the off-the-shelf networks (U-Net [41], GAN [25], Pix2Pix [30], CycleGAN [31], MedGAN [42], and UP-GAN [43]) using objective performance metrics (PieAPP [34], LPIPS [35], DISTS [36], and FID [37]) further corroborated our findings on the performance of P-GAN (Supplementary Table5). There was an average reduction of at least 16.8% in PieAPP and 7.3% in LPIPS for P-GAN compared to the other networks, indicating improved perceptual similarity of P-GAN recovered images with the averaged images. Likewise, P-GAN also achieved the best DISTS and FID scores among all networks, demonstrating better structural and textural correlations between the recovered and the ground truth averaged images. Overall, these results indicated that P-GAN outperformed existing AI-based methods and could be used to successfully recover cellular structure from speckled images.
Our preliminary explorations of the off-the-shelf GAN frameworks showed that these methods have the potential for recovering cellular structure and contrast but alone are insufficient to recover the fine local cellular details in extremely noisy conditions (Supplementary Fig.4). To further reveal and validate the contribution of the twin discriminator, we trained a series of intermediate models and observed the cell recovery outcomes. We began by training a conventional GAN, comprising of the generator, G, and the CNN discriminator, D2. Although GAN (G+D2) showed promising RPE visualization (Fig.2c) relative to the speckled images (Fig.2a), the individual cells were hard to discern in certain areas (yellow and orange arrows in Fig.2c). To improve the cellular visualization, we replaced D2 with the twin discriminator, D1. Indeed, a 7.7% reduction in DISTS was observed with clear improvements in the visualization of some of the cells (orange arrows in Fig.2c, d).
a Single speckled image compared to images of the RPE obtained via b average of 120 volumes (ground truth), c generator with the convolutional neural network (CNN) discriminator (G+D2), d generator with the twin discriminator (G+D1), e generator with CNN and twin discriminators without the weighted feature fusion (WFF) module (G+D2+D1-WFF), and f P-GAN. The yellow and orange arrows indicate cells that are better visualized using P-GAN compared to the intermediate models. g–i Comparison of the recovery performance using deep image structure and texture similarity (DISTS), perceptual image error assessment through pairwise preference (PieAPP), and learned perceptual image patch similarity (LPIPS) metrics. The bar graphs indicate the average values of the metrics across sample size, n=5 healthy participants (shown in circles) for different methods. The error bars denote the standard deviation. Scale bar: 50 µm.
Having shown the outcomes of training D1 and D2 independently with G, we showed that combining both D1 and D2 with G (P-GAN) boosted the performance even further, evident in the improved values (lower scores implying better perceptual similarity) of the perceptual measures (Fig.2gi). For this combination of D1 and D2, we replaced the WFF block, which concatenated features from different layers of the twin CNN with appropriate weights, with global average pooling of the last convolutional layer (G+D2+D1-WFF). Without the WFF, the model did not adequately extract powerful discriminative features for similarity assessment and hence resulted in poor cell recovery performance. This was observed both qualitatively (yellow and orange arrows in Fig.2e, f) as well as quantitatively with the higher objective scores (indicating low perceptual similarity with ground truth averaged images) for G+D2+D1-WFF compared to P-GAN (Fig.2gi).
Taken together, this established that the CNN discriminator (D2) helped to ensure that recovered images were closer to the statistical distribution of the averaged images, while the twin discriminator (D1), working in conjunction with D2, ensured structural similarity of local cellular details between the recovered and the averaged images. The adversarial learning of G with D1 and D2 ensured that the recovered images not only have global similarity to the averaged images but also share nearly identical local features.
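As a rough illustration of how a generator update might combine feedback from both discriminators, consider the sketch below. The module names and loss weights are hypothetical, and the pixel-wise L1 term is an assumption; the excerpt does not spell out the exact loss formulation used in P-GAN.

    import torch
    import torch.nn.functional as F

    def generator_step(g, d1, d2, speckled, averaged, opt_g,
                       lambda_twin=1.0, lambda_l1=10.0):
        """One illustrative generator update: d2 is a conventional CNN
        discriminator judging global realism, d1 is a twin discriminator
        judging structural similarity to the paired averaged image."""
        recovered = g(speckled)
        logits_global = d2(recovered)             # "looks like an averaged image?"
        logits_local = d1(recovered, averaged)    # "matches this averaged image?"
        adv_global = F.binary_cross_entropy_with_logits(
            logits_global, torch.ones_like(logits_global))
        adv_local = F.binary_cross_entropy_with_logits(
            logits_local, torch.ones_like(logits_local))
        recon = F.l1_loss(recovered, averaged)    # assumed pixel-wise anchor term
        loss = adv_global + lambda_twin * adv_local + lambda_l1 * recon
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return loss.item()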
Finally, experimentation using different weighting configurations in WFF revealed that the fusion of the intermediate layers with weights of 0.2 with the last convolutional layer proved complementary in extracting shape and texture information for improved performance (Supplementary Tables2,3). These ablation experiments indicated that the global perceptual closeness (offered by D2) and the local feature similarity (offered by D1 and WFF) were both important for faithful cell recovery.
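A minimal PyTorch-style sketch of such a fusion is shown below, with illustrative layer sizes (the published network's exact architecture is not given in this excerpt); the pooled intermediate descriptors are weighted by 0.2 before being concatenated with the last convolutional layer's descriptor.

    import torch
    import torch.nn as nn

    class TwinBranchWithWFF(nn.Module):
        """One shared-weight branch of a twin CNN whose intermediate features are
        down-weighted (0.2 here) and fused with the final convolutional layer."""
        def __init__(self, intermediate_weight=0.2):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.LeakyReLU(0.2))
            self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.LeakyReLU(0.2))
            self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.LeakyReLU(0.2))
            self.pool = nn.AdaptiveAvgPool2d(1)       # global average pooling per layer
            self.w = intermediate_weight

        def forward(self, x):
            f1 = self.block1(x)
            f2 = self.block2(f1)
            f3 = self.block3(f2)
            d1 = self.w * self.pool(f1).flatten(1)    # weighted intermediate features
            d2 = self.w * self.pool(f2).flatten(1)
            d3 = self.pool(f3).flatten(1)             # last-layer features, weight 1
            return torch.cat([d1, d2, d3], dim=1)

    class TwinDiscriminatorSketch(nn.Module):
        """Twin discriminator: both images pass through the same branch and the
        fused descriptors are compared to produce a similarity logit."""
        def __init__(self):
            super().__init__()
            self.branch = TwinBranchWithWFF()
            self.head = nn.LazyLinear(1)

        def forward(self, recovered, averaged):
            return self.head(torch.abs(self.branch(recovered) - self.branch(averaged)))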
Given the relatively recent demonstration of RPE imaging using AO-OCT in 2016 [12], and the long durations needed to generate these images, currently, there are no publicly available datasets for image analysis. Therefore, we acquired a small dataset using our custom-built AO-OCT imager [13] consisting of seventeen retinal locations obtained by imaging up to four different retinal locations for each of the five participants (Supplementary Table1). To obtain this dataset, a total of 84h was needed (~2h for image acquisition followed by 82 hours of data processing which included conversion of raw data to 3D volumes and correction for eye motion-induced artifacts). After performing traditional augmentation (horizontal flipping), this resulted in an initial dataset of only 136 speckled and averaged image pairs. However, considering that this and all other existing AO-OCT datasets that we are aware of are insufficient in size compared to the training datasets available for other imaging modalities [44,45], it was not surprising that P-GAN trained on this initial dataset yielded very low objective perceptual similarity (indicated by the high scores of DISTS, PieAPP, LPIPS, and FID in Supplementary Table6) between the recovered and the averaged images.
To overcome this limitation, we leveraged the natural eye motion of the participants to augment the initial training dataset. The involuntary fixational eye movements, which are typically faster than the imaging speed of our AO-OCT system (1.6 volumes/s), resulted in two types of motion-induced artifacts. First, due to bulk tissue motion, a displacement of up to hundreds of cells between acquired volumes could be observed. This enabled us to create averaged images of different retinal locations containing slightly different cells within each image. Second, due to the point-scanning nature of the AO-OCT system compounded by the presence of continually occurring eye motion, each volume contained unique intra-frame distortions. The unique pattern of the shifts in the volumes was desirable for creating slightly different averaged images, without losing the fidelity of the cellular information (Supplementary Fig.3). By selecting a large number of distinct reference volumes onto which the remaining volumes were registered, we were able to create a dataset containing 2984 image pairs (22-fold augmentation compared to the initial limited dataset) which was further augmented by an additional factor of two using horizontal flipping, resulting in a final training dataset of 5996 image pairs for P-GAN (also described in Data for training and validating AI models in Methods). Using the augmented dataset for training P-GAN yielded high perceptual similarity of the recovered and the ground truth averaged images which was further corroborated by improved quantitative metrics (Supplementary Table6). By leveraging eye motion for data augmentation, we were able to obtain a sufficiently large training dataset from a recently introduced imaging technology to enable P-GAN to generalize well for never-seen experimental data (Supplementary Table1 and Experimental data for RPE assessment from the recovered images in Methods).
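In outline, the augmentation amounts to re-registering and re-averaging the same volumes against many different reference volumes, roughly as in the sketch below. The register() motion-correction step and the use of 2D en-face images are placeholders for illustration, not the authors' pipeline.

    import numpy as np

    def augment_by_reference(volumes, register, n_refs, rng=np.random.default_rng(0)):
        """Each choice of reference volume yields a slightly different registered
        average, giving many (speckled, averaged) training pairs from one session.
        `volumes` is a list of 2D en-face images here for simplicity; `register`
        is a placeholder for the eye-motion correction step."""
        pairs = []
        for ref_idx in rng.choice(len(volumes), size=n_refs, replace=False):
            reference = volumes[ref_idx]
            registered = [register(v, reference) for v in volumes]
            averaged = np.mean(registered, axis=0)
            speckled = reference                                   # a single noisy acquisition
            pairs.append((speckled, averaged))
            pairs.append((speckled[:, ::-1], averaged[:, ::-1]))   # horizontal flip
        return pairs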
In addition to the structural and perceptual similarity that we demonstrated between P-GAN recovered and averaged images, here, we objectively assessed the degree to which cellular contrast was enhanced by P-GAN compared to averaged images and other AImethods. As expected, examination of the 2D power spectra of the images revealed a bright ring in the power spectra (indicative of the fundamental spatial frequency present within the healthy RPE mosaic arising from the regularly repeating pattern of individual RPE cells) for the recovered and averaged images (insets in Fig.3bi).
a Example speckled image acquired from participant S1. Recovered images using b U-Net, c generative adversarial network (GAN), d Pix2Pix, e CycleGAN, f medical image translation using GAN (MedGAN), g uncertainty guided progressive GAN (UP-GAN), h parallel discriminator GAN (P-GAN). i Ground truth averaged image (obtained by averaging 120 adaptive optics optical coherence tomography (AO-OCT) volumes). Insets in (a–i) show the corresponding 2D power spectra of the images. A bright ring representing the fundamental spatial frequency of the retinal pigment epithelial (RPE) cells can be observed in the power spectra of the U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images; the radius of the ring corresponds to the cell spacing. j Circumferentially averaged power spectral density (PSD) for each of the images. A visible peak corresponding to the RPE cell spacing was observed for U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images. The vertical line indicates the approximate location of the fundamental spatial frequency associated with the RPE cell spacing. The height of the peak (defined as peak distinctiveness (PD)) indicates the RPE cellular contrast measured as the difference in the log PSD between the peak and the local minima to the left of the peak (inset in (j)). Scale bar: 50 µm.
Interestingly, although this ring was not readily apparent on the speckled single image (inset in Fig.3a), it was present in all the recovered images, reinforcing our observation of the potential of AI to decipher the true pattern of the RPE mosaic from the speckled images. Furthermore, the radius of the ring, representative of the approximate cell spacing (computed from the peak frequency of the circumferentially averaged PSD) (Quantification of cell spacing and contrast in Methods), showed consistency among the different methods (shown by the black vertical line along the peak of the circumferentially averaged PSD in Fig.3j and Table1), indicating high fidelity of recovered cells in comparison to the averaged images.
The height of the local peak of the circumferentially averaged power spectra (which we defined as peak distinctiveness) provided an opportunity to objectively quantify the degree to which cellular contrast was enhanced. Among the different AI methods, the peak distinctiveness achieved by P-GAN was closest to the averaged images with a minimal absolute error of 0.08 compared to ~0.16 for the other methods (Table1), which agrees with our earlier results indicating the improved performance of P-GAN. In particular, P-GAN achieved a contrast enhancement of 3.54-fold over the speckled images (0.46 for P-GAN compared with 0.13 for the speckled images). These observations demonstrate P-GANs effectiveness in boosting cellular contrast in addition to structural and perceptual similarity.
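For readers who want to reproduce this kind of analysis on their own images, a minimal sketch of the circumferentially averaged PSD and the peak-distinctiveness measure could look like the following; the frequency band passed to peak_distinctiveness is an assumption and should bracket the expected RPE cell frequency.

    import numpy as np

    def radial_psd(image):
        """Circumferentially averaged power spectral density of a 2D image."""
        f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        psd = np.abs(f) ** 2
        h, w = psd.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h // 2, x - w // 2).astype(int)
        counts = np.bincount(r.ravel())
        radial = np.bincount(r.ravel(), weights=psd.ravel()) / np.maximum(counts, 1)
        return radial                      # index = spatial frequency in cycles/image

    def peak_distinctiveness(radial, lo, hi):
        """Height (in log10 units) of the PSD peak above the local minimum to its
        left, evaluated within a band [lo, hi) expected to contain the RPE cell
        frequency. Cell spacing follows as field_of_view / peak_frequency."""
        log_band = np.log10(radial[lo:hi])
        peak = int(np.argmax(log_band))
        left_min = log_band[:peak + 1].min()
        return lo + peak, log_band[peak] - left_min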
Having demonstrated the efficacy and reliability of P-GAN on test data, we wanted to evaluate the performance of P-GAN on experimental data from never-seen human eyes across an experimental dataset (Supplementary Table1), which, to the best of our knowledge, covered the largest extent of AO-OCT imaged RPE cells reported (63 overlapping locations per eye). This feat was made possible using the AI-enhanced AO-OCT approach developed and validated in this paper. Using the P-GAN approach, in our hands, it took 30 min (including time needed for rest breaks) to acquire single volume acquisitions from 63 separate retinal locations compared to only 4 non-overlapping locations imaged with nearly the same duration using the repeated averaging process (15.8-fold increase in number of locations). Scaling up the averaging approach from 4 to 63 locations would have required nearly 6h to acquire the same amount of RPE data (note that this does not include any data processing time), which is not readily achievable in clinical practice. This fundamental limitation explains why AO-OCT RPE imaging is currently performed only on a small number of retinal locations [12,13].
Leveraging P-GAN's ability to successfully recover cellular structures from never-seen experimental data, we stitched together overlapping recovered RPE images to construct montages of the RPE mosaic (Fig.4 and Supplementary Fig.5). To further validate the accuracy of the recovered RPE images, we also created ground truth averaged images by acquiring 120 volumes from four of these locations per eye (12 locations total) (Experimental data for RPE assessment from the recovered images in Methods). The AI-enhanced and averaged images for the experimental data at the 12 locations were similar in appearance (Supplementary Fig.6). Objective assessment using PieAPP, DISTS, LPIPS, and FID also showed good agreement with the averaged images (shown by comparable objective scores for experimental data in Supplementary Table7 and test data in Supplementary Table5) at these locations, confirming our previous results and illustrating the reliability of performing RPE recovery for other non-seen locations as well (P-GAN was trained using images obtained from up to 4 retinal locations across all participants). The cell spacing estimated using the circumferentially averaged PSD between the recovered and the averaged images (Supplementary Fig.7 and Supplementary Table8) at the 12 locations showed an error of 0.6 ± 1.1 µm (mean ± SD). We further compared the RPE cell spacing from the montages of the recovered RPE from the three participants (S2, S6, and S7) with the previously published in vivo studies (obtained using different imaging modalities) and histological values (Fig.5) [12,46,47,48,49,50,51]. Considering the range of values in Fig.5, the metric exhibited inter-participant variability, with cell spacing varying up to 0.5 µm across participants at any given retinal location. Nevertheless, overall our measurements were within the expected range compared to the published normative data [12,46,47,48,49,50,51]. Finally, peak distinctiveness computed at 12 retinal locations of the montages demonstrated similar or better performance of P-GAN compared to the averaged images in improving the cellular contrast (Supplementary Table8).
The image shows the visualization of the RPE mosaic using the P-GAN recovered images (this montage was manually constructed from up to 63 overlapping recovered RPE images from the left eye of participant S2). The white squares (a–e) indicate regions that are further magnified for better visualization at retinal locations a 0.3 mm, b 0.8 mm, c 1.3 mm, d 1.7 mm, and e 2.4 mm temporal to the fovea, respectively. Additional examples of montages from two additional participants are shown in Supplementary Fig.5.
Symbols in black indicate cell spacing estimated from P-GAN recovered images for three participants (S2, S6, and S7) at different retinal locations. For comparison, data in gray denote the mean and standard deviation values from previously published studies (adaptive optics infrared autofluorescence (AO-IRAF) [48], adaptive optics optical coherence tomography (AO-OCT) [12], adaptive optics with short-wavelength autofluorescence (AO-SWAF) [49], and histology [46,51]).
Voronoi analysis performed on P-GAN and averaged images at 12 locations (Supplementary Fig.8) resulted in similar shapes and sizes of the Voronoi neighborhoods. Cell spacing computed from the Voronoi analysis (Supplementary Table9) fell within the expected ranges and showed an average error of 0.5 ± 0.9 µm. These experimental results demonstrate the possibility of using AI to transform the way in which AO-OCT is used to visualize and quantitatively assess the contiguous RPE mosaic across different retinal locations directly in the living human eye.
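As one hedged illustration of how a spacing estimate can be pulled out of detected cell centers, the sketch below uses the Delaunay triangulation (the geometric dual of the Voronoi diagram) to identify neighboring cells. Cell detection itself, and the authors' exact spacing definition, are outside its scope.

    import numpy as np
    from scipy.spatial import Delaunay

    def neighbor_cell_spacing(centers_um):
        """Mean center-to-center distance between neighboring cells, where
        neighbors are pairs whose Voronoi regions share an edge (equivalently,
        pairs joined by a Delaunay edge). `centers_um` is an (N, 2) array of
        detected cell centers in micrometers."""
        tri = Delaunay(centers_um)
        edges = set()
        for simplex in tri.simplices:
            for i in range(3):
                a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
                edges.add((a, b))
        dists = [np.linalg.norm(centers_um[a] - centers_um[b]) for a, b in edges]
        return float(np.mean(dists))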
The rest is here:
Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical ... - Nature.com