Category Archives: Machine Learning
Novel wearable system improves balance evaluation – Research & Development World
Researchers at Florida Atlantic University, some of whom are pictured, have developed a novel method using wearable sensors and AI that could reshape balance assessment practices. Credit: Alex Dolce, Florida Atlantic University
Traditionally, physicians have relied on subjective observations and specialized equipment to gauge balance in individuals with conditions such as Parkinson's disease, neurological injuries, and age-related decline. Such methods, especially subjective ones, can lack precision, are difficult to administer remotely, and can be inconsistent. To address these limitations, researchers from Florida Atlantic University have developed a novel approach using wearable sensors and advanced machine learning algorithms that could redefine balance assessment practices.
The research is published in Frontiers in Digital Health.
The researchers used wearable Inertial Measurement Unit (IMU) sensors placed on five body locations: ankle, lumbar, sternum, wrist, and arm. Data collection followed the Modified Clinical Test of Sensory Interaction on Balance (m-CTSIB) protocol, testing four sensory conditions: eyes open and closed on stable and foam surfaces. Each test lasted roughly 11 seconds, simulating continuous balance scenarios.
The scientists preprocessed the raw sensor data and extracted features from it, then applied three machine learning algorithms to estimate m-CTSIB scores: multiple linear regression, support vector regression, and XGBoost, an open-source gradient-boosting library.
The researchers trained and validated the models with wearable sensor data as input and corresponding m-CTSIB scores from Falltrak II as ground truth labels.
They evaluated model performance using cross-validation, correlation with ground-truth scores, and mean absolute error (MAE).
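As a hedged sketch of this kind of pipeline (not the study's actual code or data), the snippet below trains an XGBoost regressor on placeholder IMU-derived features and evaluates it with cross-validation, MAE, and correlation against ground-truth scores, mirroring the evaluation described above.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import mean_absolute_error

# Illustrative placeholders: rows are trials, columns are features
# engineered from an IMU (e.g., sway range, RMS acceleration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))      # 200 trials, 24 engineered features
y = rng.uniform(0, 4, size=200)     # m-CTSIB-style balance scores (ground truth)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)

# Cross-validated predictions, then MAE and correlation with ground truth.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)
print("MAE:", mean_absolute_error(y, y_pred))
print("Pearson r:", np.corrcoef(y, y_pred)[0, 1])
```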
The XGBOOST model using lumbar sensor data yielded the best results, demonstrating high accuracy and strong correlation with ground truth balance scores. The lumbar and dominant ankle sensors produced the highest performance in balance score estimation.
In Frontiers in Digital Health, the researchers concluded that the findings pave the way for more precise and convenient balance assessments. They state the approach has immense potential to enhance balance performance assessment and management in various settings, including clinical environments, rehabilitation, and remote monitoring.
Read the original here:
Novel wearable system improves balance evaluation - Research & Development World
Artificial Intelligence in GIS: Promise, Progress, and Possibilities | Summer 2024 | ArcNews – Esri
Imagine completing an ArcGIS project from start to finish without needing to click a user interface, open a tool, load a spreadsheet, or adjust symbols and colors. Rather than manually creating a map, users would simply communicate their requirements in natural language inside the software. A few prompts later, the user would have a map with their desired appearance and specifications.
These are real possibilities being investigated and evaluated by research and development teams building generative AI capabilities into ArcGIS. Early prototypes have shown promise in making this vision a reality.
In GIS, AI assistants offer a compelling opportunity to democratize what is already a powerful technology. They stand to make geospatial understanding more accessible to a wider audience and empower users of all skill levels to tackle complex challenges.
A different type of AI is already in use in ArcGIS.
Geospatial artificial intelligence, or GeoAI, accelerates GIS outcomes by leveraging AI subfields like pattern recognition, computer vision, and machine and deep learning methods. GIS professionals use it to automate feature extraction and similar repetitive tasks and to perform advanced analyses.
The development of AI assistants and GeoAI demands careful navigation, given the sensitive nature of GIS work and the important decisions that follow from it.
Esri is embracing the power of AI and the promise it brings. While it is tempting to move quickly, doing things right is more important than doing them fast.
With GeoAI, artificial intelligence is already delivering on its promise to dramatically improve how organizations solve spatial problems. It enables ArcGIS users to automate tasks that once required extensive manual efforts.
GeoAI tools are especially good at extracting meaningful geospatial features from a variety of data sources, including text documents and images. ArcGIS, with any of the 70-plus ready-to-use pretrained deep learning packages from Esri, can help users automate the extraction of features such as buildings, land-use polygons, swimming pools, solar panels, or trees from imagery or 3D point clouds.
Many different types of organizations use GeoAI capabilities to enhance their geographic approach.
A highway maintenance department can use GeoAI to identify cracks in roads based on drone imagery. Then staff can integrate this with data on traffic patterns to prioritize repair work.
Aid organizations can use GeoAI to make quick damage assessments. Using ArcGIS and a deep learning model, they can compare before-and-after satellite images and identify damaged buildings on a map.
In regions of the world where people live in informal settlements, local governments can use GeoAI to take a more accurate census. The process involves capturing aerial imagery and then, with a deep learning model, extracting building footprints to estimate population.
Each of these scenarios would have required tedious digitization that, in the past, was done manually. Now, users can apply out-of-the-box deep learning models to accelerate the job.
GeoAI also enables predictive analysis of vector data through machine learning algorithms. For example, a machine learning model can be used to estimate flash flood risk in an area based on factors related to precipitation, topography, hydrology, policies, and population demographics.
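As a generic illustration of this kind of vector-data prediction (a sketch under assumed inputs, not Esri's GeoAI implementation), the snippet below fits a gradient-boosted classifier on placeholder tabular attributes such as precipitation, slope, and distance to streams, producing a flood-risk probability that could be written back to a feature layer for mapping.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder attribute table; in practice these columns would come from
# joining precipitation, terrain, hydrology, and census layers in a GIS.
train = pd.DataFrame({
    "annual_precip_mm":    [900, 1200, 650, 1500, 800, 1100, 700, 1300],
    "mean_slope_deg":      [2.0, 0.5, 8.0, 1.0, 4.5, 0.8, 6.0, 0.7],
    "dist_to_stream_m":    [50, 10, 400, 5, 120, 30, 300, 15],
    "impervious_pct":      [60, 85, 10, 90, 35, 70, 20, 80],
    "flooded_last_decade": [1, 1, 0, 1, 0, 1, 0, 1],   # training label
})

X = train.drop(columns="flooded_last_decade")
y = train["flooded_last_decade"]

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score new grid cells or parcels; the probabilities can be stored as a
# flash-flood risk attribute on each feature.
new_cells = X.iloc[:2]
risk = clf.predict_proba(new_cells)[:, 1]
print(risk)
```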
All this allows for better decision-making and planning by incorporating data-driven insight into GIS workflows.
As everyone knows, the GIS community does important work that informs impactful decisions. It is, therefore, imperative that the data involved is accurate and up-to-date.
This is a fundamental GIS concept that has been true for decades. AI raises the stakes, especially where decisions from AI models affect people and communities.
GeoAI in ArcGIS is built by following the highest standards for trustworthy AI, including well-documented models and instrumentation to help users measure accuracy and bias in analysis.
As has always been the case, GIS professionals must ask the right questions of the data.
Recent advancements in language models have opened exciting new possibilities for building generative AI capabilities into the ArcGIS user experience. These assistants are still in early development, but several prototypes have shown promising potential.
Broadly, two types of AI assistants are being evaluated inside ArcGIS.
The first type, embedded assistants, is designed to boost productivity on everyday tasks. These assistants provide suggestions and automate repetitive actions inside regularly used ArcGIS tools.
Furthest along in development is a beta feature in ArcGIS Survey123. This assistant simplifies the survey design process by providing a conversational approach to building surveys. Prompting the assistant just as they might with ChatGPT, users can quickly create a survey draft without needing to navigate menus or interfaces in the tool.
Other embedded AI assistants are in the early stages of research and development at Esri.
One of these AI assistants aims to help ArcGIS users author SQL, Python, Cypher, and Arcade expressions in ArcGIS Pro. Another is an ArcGIS help system chatbot, trained on volumes of ArcGIS documentation, that can quickly answer how-to questions. A third assistant would help users conduct market planning and site selection inside ArcGIS Business Analyst.
Apart from the embedded assistants, the second type of assistant being evaluated for use in ArcGIS technology is a broader general AI assistant that might someday encompass the entire ArcGIS experience. Think of this as a sophisticated chatbot that understands GIS data and tools and can answer geospatial questions.
As a simple example, a municipality using ArcGIS Hub could build a site with a public-facing AI assistant that interprets a query about trash pickups. The assistant would reference authoritative open data about the pickup schedule from within the public works department's hub site and use a geocoding service to discern the user's location.
Accuracy is paramount in the design. This assistant would invite the user to confirm their location by creating a map showing the geocoded address. For transparency, the assistant would cite its source, a public works database.
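A rough sketch of that request flow, with every function, dataset, and value hypothetical (this is not an ArcGIS Hub API): geocode the user's address, look up the pickup day in the authoritative open dataset, and return an answer that cites its source and shows the location for the user to confirm.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str               # cited for transparency
    map_location: tuple       # (lon, lat) shown on a map for the user to confirm

def geocode(address: str) -> tuple:
    """Hypothetical wrapper around a geocoding service."""
    return (-84.39, 33.75)    # placeholder coordinates

def lookup_pickup_day(lon: float, lat: float) -> str:
    """Hypothetical query against the public works department's open schedule dataset."""
    return "Tuesday"          # placeholder schedule result

def answer_trash_question(address: str) -> Answer:
    lon, lat = geocode(address)            # discern the user's location
    day = lookup_pickup_day(lon, lat)      # authoritative open data
    return Answer(
        text=f"Trash at {address} is collected on {day}. Is this location correct?",
        source="Public works pickup-schedule dataset",
        map_location=(lon, lat),
    )

print(answer_trash_question("123 Main St"))
```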
The development of AI technology is moving at an astounding pace. We have only scratched the surface of what AI can do in GIS.
Users are already doing the foundational work. They are publishing data as services and adding metadata. High-quality data forms the backbone of how AI systems learn and reason.
In developing the AI tools for ArcGIS, much of the work involves mitigating risks. This means constraining inputs to authoritative sources and building configurable guardrails.
The development process demands responsible implementation. An Esri AI advisory board, a cross-functional team of technology, product, legal, security, and privacy officers, provides guidelines for responsibly implementing AI in ArcGIS.
Through a commitment to responsible implementation and continuous learning, Esri is helping organizations apply the promise of geography and AI to solve the most challenging problems.
Ismael Chivite is Esri's senior principal product manager for AI assistants in ArcGIS. A geographer by training, Chivite loves helping people leverage GIS to improve the way they work. He has been with Esri since 2002 and is always looking for ideas to create and enhance Esri products. Outside of working hours, he likes LEGOs, rock climbing, Romanesque architecture, and Jamón Ibérico.
See more here:
Artificial Intelligence in GIS: Promise, Progress, and Possibilities | Summer 2024 | ArcNews - Esri
AI and Social Media: What Users Must Understand – AiThority
AI and social media have become inseparable in today's digital landscape, revolutionizing the way we connect, communicate, and consume information. Artificial intelligence, with its advanced algorithms and machine learning capabilities, has transformed social media platforms into powerful tools for personalization, engagement, and targeted advertising.
AI-driven recommendation systems analyze user preferences and behaviors to curate personalized content feeds, enhancing user experience and maximizing platform engagement. Chatbots powered by AI are being used for customer service, providing instant and efficient responses to user inquiries. Moreover, AI algorithms help with content moderation, flagging and removing inappropriate content to maintain a safe online environment.
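As a toy illustration of content-based recommendation (real platforms combine far richer signals and models), the sketch below ranks candidate posts by cosine similarity to a user profile built from past engagement; all vectors are placeholders.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings: each post is a vector of topic weights.
posts = {
    "post_a": np.array([0.9, 0.1, 0.0]),   # mostly sports
    "post_b": np.array([0.1, 0.8, 0.1]),   # mostly tech
    "post_c": np.array([0.0, 0.2, 0.8]),   # mostly travel
}

# A simple user profile averaged from posts the user previously engaged with.
user_profile = np.mean([posts["post_b"], posts["post_c"]], axis=0)

# Rank the candidate feed by similarity to the profile, highest first.
ranked = sorted(posts, key=lambda p: cosine(user_profile, posts[p]), reverse=True)
print(ranked)
```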
However, the proliferation of AI in social media also raises concerns about privacy, data security, and algorithmic bias. As AI continues to evolve, the intersection of AI and social media will shape the future of digital interactions, influencing social dynamics, information dissemination, and the boundaries of online privacy.
As AI continues to shape social media platforms, it is crucial for users to understand the inner workings of the algorithms that curate their feeds.
Interesting developments are also likely to emerge in AI-driven social media in the years ahead.
As AI continues to shape social media, understanding algorithms, anticipating future advancements, and advocating for ethical practices will empower users to navigate the digital landscape responsibly and confidently.
Excerpt from:
AI and Social Media: What Users Must Understand - AiThority
Latest Research on VQA part1(Machine Learning 2024) | by Monodeep Mukherjee | Jun, 2024 – Medium
Tackling VQA with Pretrained Foundation Models without Further Training
Authors: Alvin De Jun Tan, Bingquan Shen
Abstract: Large language models (LLMs) have achieved state-of-the-art results in many natural language processing tasks. They have also demonstrated the ability to adapt well to different tasks through zero-shot or few-shot settings. Given these capabilities, researchers have looked into how to adopt LLMs for Visual Question Answering (VQA). Many methods require further training to align the image and text embeddings, but such training is computationally expensive and requires large-scale image-text datasets. In this paper, we explore a method of combining pretrained LLMs and other foundation models without further training to solve the VQA problem. The general idea is to use natural language to represent the images so that the LLM can understand them. We explore different decoding strategies for generating textual representations of the image and evaluate their performance on the VQAv2 dataset.
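As a hedged illustration of the general idea (represent the image as text, then ask a pretrained LLM), not the authors' exact pipeline, a sketch using the Hugging Face transformers library might look like the following; the model names and question are examples only, and the paper would use larger models and different decoding strategies.

```python
from transformers import pipeline

# Step 1: represent the image in natural language with a pretrained captioner.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("example.jpg")[0]["generated_text"]

# Step 2: hand the textual description and the question to a pretrained LLM,
# with no further training of either model.
llm = pipeline("text-generation", model="gpt2")  # small stand-in for a larger LLM
prompt = (
    f"Image description: {caption}\n"
    "Question: What is the person in the image holding?\n"
    "Answer:"
)
answer = llm(prompt, max_new_tokens=20)[0]["generated_text"]
print(answer)
```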
Originally posted here:
Latest Research on VQA part1(Machine Learning 2024) | by Monodeep Mukherjee | Jun, 2024 - Medium
Helping nonexperts build advanced generative AI models | MIT News | Massachusetts Institute of Technology – MIT News
The impact of artificial intelligence will never be equitable if there's only one company that builds and controls the models (not to mention the data that go into them). Unfortunately, today's AI models are made up of billions of parameters that must be trained and tuned to maximize performance for each use case, putting the most powerful AI models out of reach for most people and companies.
MosaicML started with a mission to make those models more accessible. The company, which counts Jonathan Frankle PhD '23 and MIT Associate Professor Michael Carbin as co-founders, developed a platform that let users train, improve, and monitor open-source models using their own data. The company also built its own open-source models using graphical processing units (GPUs) from Nvidia.
The approach made deep learning, a nascent field when MosaicML first began, accessible to far more organizations as excitement around generative AI and large language models (LLMs) exploded following the release of ChatGPT. It also made MosaicML a powerful complementary tool for data management companies that were also committed to helping organizations make use of their data without giving it to AI companies.
Last year, that reasoning led to the acquisition of MosaicML by Databricks, a global data storage, analytics, and AI company that works with some of the largest organizations in the world. Since the acquisition, the combined companies have released one of the highest performing open-source, general-purpose LLMs yet built. Known as DBRX, this model has set new benchmarks in tasks like reading comprehension, general knowledge questions, and logic puzzles.
Since then, DBRX has gained a reputation for being one of the fastest open-source LLMs available and has proven especially useful at large enterprises.
More than the model, though, Frankle says DBRX is significant because it was built using Databricks tools, meaning any of the company's customers can achieve similar performance with their own models, which will accelerate the impact of generative AI.
"Honestly, it's just exciting to see the community doing cool things with it," Frankle says. "For me as a scientist, that's the best part. It's not the model, it's all the amazing stuff the community is doing on top of it. That's where the magic happens."
Making algorithms efficient
Frankle earned bachelor's and master's degrees in computer science at Princeton University before coming to MIT to pursue his PhD in 2016. Early on at MIT, he wasn't sure what area of computing he wanted to study. His eventual choice would change the course of his life.
Frankle ultimately decided to focus on a form of artificial intelligence known as deep learning. At the time, deep learning and artificial intelligence did not inspire the same broad excitement as they do today. Deep learning was a decades-old area of study that had yet to bear much fruit.
"I don't think anyone at the time anticipated deep learning was going to blow up in the way that it did," Frankle says. "People in the know thought it was a really neat area and there were a lot of unsolved problems, but phrases like 'large language model' (LLM) and 'generative AI' weren't really used at that time. It was early days."
Things began to get interesting with the 2017 release of a now-famous paper by Google researchers, in which they showed a new deep-learning architecture known as the transformer was surprisingly effective at language translation and held promise across a number of other applications, including content generation.
In 2020, eventual Mosaic co-founder and tech executive Naveen Rao emailed Frankle and Carbin out of the blue. Rao had read a paper the two had co-authored, in which the researchers showed a way to shrink deep-learning models without sacrificing performance. Rao pitched the pair on starting a company. They were joined by Hanlin Tang, who had worked with Rao on a previous AI startup that had been acquired by Intel.
The founders started by reading up on different techniques used to speed up the training of AI models, eventually combining several of them to show they could train a model to perform image classification four times faster than what had been achieved before.
"The trick was that there was no trick," Frankle says. "I think we had to make 17 different changes to how we trained the model in order to figure that out. It was just a little bit here and a little bit there, but it turns out that was enough to get incredible speed-ups. That's really been the story of Mosaic."
The team showed their techniques could make models more efficient, and they released an open-source large language model in 2023 along with an open-source library of their methods. They also developed visualization tools to let developers map out different experimental options for training and running models.
MIT's E14 Fund invested in Mosaic's Series A funding round, and Frankle says E14's team offered helpful guidance early on. Mosaic's progress enabled a new class of companies to train their own generative AI models.
"There was a democratization and an open-source angle to Mosaic's mission," Frankle says. "That's something that has always been very close to my heart. Ever since I was a PhD student and had no GPUs because I wasn't in a machine learning lab and all my friends had GPUs. I still feel that way. Why can't we all participate? Why can't we all get to do this stuff and get to do science?"
Open sourcing innovation
Databricks had also been working to give its customers access to AI models. The company finalized its acquisition of MosaicML in 2023 for a reported $1.3 billion.
"At Databricks, we saw a founding team of academics just like us," Frankle says. "We also saw a team of scientists who understand technology. Databricks has the data, we have the machine learning. You can't do one without the other, and vice versa. It just ended up being a really good match."
In March, Databricks released DBRX, which gave the open-source community and enterprises building their own LLMs capabilities that were previously limited to closed models.
"The thing that DBRX showed is you can build the best open-source LLM in the world with Databricks," Frankle says. "If you're an enterprise, the sky's the limit today."
Frankle says the Databricks team has been encouraged by the results of using DBRX internally across a wide variety of tasks.
"It's already great, and with a little fine-tuning it's better than the closed models," he says. "You're not going to be better than GPT for everything. That's not how this works. But nobody wants to solve every problem. Everybody wants to solve one problem. And we can customize this model to make it really great for specific scenarios."
As Databricks continues pushing the frontiers of AI, and as competitors continue to invest huge sums into AI more broadly, Frankle hopes the industry comes to see open source as the best path forward.
"I'm a believer in science and I'm a believer in progress and I'm excited that we're doing such exciting science as a field right now," Frankle says. "I'm also a believer in openness, and I hope that everybody else embraces openness the way we have. That's how we got here, through good science and good sharing."
Harness AIs potential and navigate disruption with Digital Realty – CIO
It may seem like artificial intelligence (AI) became a media buzzword overnight, but this disruptive technology has been at the forefront of our agenda for several years at Digital Realty. We've seen how the advent of cloud technology significantly transformed the landscape of digital business, and AI is set to disrupt industries in ways we are only beginning to understand. The key, as always, is to be on the right side of disruption by embracing change and leveraging it to your advantage.
Getting AI right is a raceenterprises are feeling the pressure to harness AI to build unique value ahead of their competitors.
Digital Realty anticipated how AI would disrupt IT infrastructure and began planning a roadmap to support our customers over six years ago, working with groundbreaking early adopters and learning along the way. Here's what we've learned is necessary to successfully navigate the inevitable disruption and come out ahead by harnessing AI's potential.
AI's evolution: Machine learning, deep learning, GenAI
AI encompasses a suite of rapidly evolving technologies. It's a journey that started in earnest during the early 2000s with machine learning (ML). ML crunches vast amounts of data to learn from results, discover patterns, make predictions, and even automate some tasks.
Then came deep learning in the 2010s, further enhancing perception capabilities in computer vision. This enabled object classification and detection, voice recognition, and even partly autonomous vehicles.
Now, we are witnessing the rise of generative AI in the 2020s, which emphasizes language mastery. Its implications are profound, given how language permeates every facet of an organization's activities: institutional knowledge, communication, and processes.
The potential benefits are enormous: Accenture estimates that 40% of all working hours can be augmented by large language models like GPT-4, and that 65% of language tasks can be transformed into more productive activities through augmentation and automation.
Crucially, all these AI technologies hinge on data. That's why our focus at Digital Realty has always been on data and on managing Data Gravity challenges, to help ensure our customers can efficiently store, analyze, and extract value from their data by providing the meeting place where companies, technologies, and data come together.
Cloud as a case study: What we learned
The cloud journey is a good case study for thinking about disruption. I remember its inception and the initial debates about whether the cloud was friend or foe, and many enterprises are still navigating through its profound impact on digital transformation.
Your data oceans feed your cloud workloads and applications, which then create even more data. The big question now is: how do you optimize this relationship to create maximum value?
Initially, cloud was accessed over the public internet, often with little thought to proximity and security. Many enterprises now understand that, in practice, proximity and security matter immensely, and businesses can lose their competitive edge if they don't optimize both. In fact, I've built my career on pioneering private cloud consumption and enabling businesses to optimize their digital transformations.
Digital Realty has been instrumental in transforming the cloud into a safe and efficient environment where businesses can drive unique value. Today, we manage over 2.4 GW (gigawatts) of power and enable connected campuses across the globe. We're working to lower barriers to optimize hybrid multi-cloud with ServiceFabric Connect, a private, software-defined interconnection solution.
Having assisted many of our 5,000 customers in their cloud journey, we're poised to do the same for your AI journey.
Unlock the value from your data with AI
Falling behind in AI could mean getting disrupted. It's a land rush to build unique value over competitors and to fend off new entrants like digital disruptors that aren't contending with legacy infrastructure.
At Digital Realty, we've been tracking the evolution of AI since before our Investor Day in 2017, where we identified AI as a primary driver of next-generation data center requirements. Digital Realty has been aligning our offerings to meet these emerging demands. We understood that our customers would need an AI-ready global data center platform purpose-built to deploy and scale innovation and drive business value.
Source: Digital Realty Investor Day presentation, Slide 18, 2017
Why does AI require an AI-ready data center platform?
AI, especially analytics, requires a specialized environment due to specific hardware and data processing requirements. Power density requirements for AI can be 5 to 10 times more than traditional data center functions, and the need for liquid cooling is fast approaching.
Digital Realty's solution? A range of state-of-the-art tools to build optimized AI architectures and the ability to digitally engineer deployments in virtual environments. Digital Realty's data center designs contain modularity and large capacity blocks to support legacy and high-density AI deployments, all interconnected with ServiceFabric, a global, purpose-built network fabric.
We're also committed to sustainable growth. We can support your sustainable data needs of today and tomorrow with 400 MW of space expected to come online in the next 18 months, 1 GW of renewable energy under contract, and our entire European portfolio and our US colocation portfolio 100% renewable powered.
Digital Realty has supported the cloud providers globally for years and we developed core competencies along the way that enable us to do the same for our customers who need a home for AI.
Stay innovative, reach out to us, and let's deploy AI in a way that transforms your organization.
As of March 31, 2023, and represents consolidated portfolio plus our managed portfolio of unconsolidated joint ventures based on our ownership percentage.
View original post here:
Harness AIs potential and navigate disruption with Digital Realty - CIO
Coordinate-based neural representations for computational adaptive optics in widefield microscopy – Nature.com
See more here:
Coordinate-based neural representations for computational adaptive optics in widefield microscopy - Nature.com
Prediction of hepatic metastasis in esophageal cancer based on machine learning | Scientific Reports – Nature.com
Esophageal cancer is a remarkably fatal malignancy, with distant metastases present in up to 42% of newly diagnosed patients and the liver the most frequently involved organ26,27,28. Effective treatment and comprehensive management of metastatic esophageal cancer require a multimodal strategy, which continues to pose significant challenges. It is therefore of crucial significance for clinical decision-making to identify high-risk factors for esophageal cancer and to accurately predict whether patients will develop liver metastasis based on their individual clinical and pathological characteristics.
Currently, hepatic metastasis (HM) of advanced esophageal cancer remains understudied in the scientific literature, and existing prognostic research in this domain falls short in two key respects. First, there is a conspicuous paucity of exploratory investigations into the high-risk prognostic factors associated with esophageal cancer, and further exploration of the interrelationships among these independent prognostic factors is noticeably lacking. Second, there is a dearth of research on HM models for advanced esophageal cancer that leverage the immense potential of big data. Consequently, there is an urgent need for comprehensive studies in these areas to contribute to an improved understanding and accurate prognostication of advanced esophageal cancer.
Some studies hold that smoking and drinking are the most common risk factors for male esophageal cancer29. Previous studies30 have also shown that the degree of tissue differentiation, pathological N stage, vascular invasion, and neural invasion are recognized factors affecting the prognosis of patients with esophageal cancer31,32,33,34. However, the conclusions of these studies lacked the support of big data and did not address the prediction of HM in advanced esophageal cancer. Based on big data analysis of the SEER database, our study screened out independent high-risk factors associated with HM by logistic regression analysis. The study included 15 clinically common factors relevant to advanced esophageal cancer with liver metastasis: age, sex, marital status, race, primary site, tumor histology, tumor grade, T stage, N stage, surgery, radiation, chemotherapy, brain metastasis, bone metastasis, and lung metastasis. To assess the independence of these features, we obtained a correlation heat map by Spearman correlation analysis; as shown in Fig. 2A, there was no strong correlation among the 15 features. Logistic regression analysis then identified 11 independent high-risk factors related to liver metastasis: age, primary site, tumor histology, tumor grade, T stage, N stage, surgery, radiation, chemotherapy, bone metastasis, and lung metastasis.
Undoubtedly, the construction of prediction models for HM of advanced esophageal cancer is equally significant to the exploration of independent high risk factors in this context. Presently, there is a notable dearth of studies focused on risk factors in esophageal cancer patients with distant organ metastases35. For instance, Tang et al. previously constructed a nomogram to predict the survival of patients with metastatic esophageal cancer; however, this study encompassed metastases to all anatomical sites, without specifically exploring a prediction model for predicting the risk of distant metastasis36. Similarly, Cheng et al. established models for predicting both the risk and survival of esophageal cancer patients, albeit those specifically tailored to brain metastasis37. Furthermore, Guo et al. provided detailed characteristics and explored risk and prognostic factors for patients with liver metastasis, yet they did not develop any predictive tools38. Considering that liver metastasis represents the most common site of distant spread, conducting a comprehensive investigation specifically targeting esophageal cancer patients with liver metastasis assumes paramount clinical importance.
Previous studies have constructed nomograms to predict EC metastasis based on traditional logistic models. However, the limitations of this method in prediction accuracy and in handling big data have made it difficult to achieve major breakthroughs in precision medicine9,10, and traditional approaches cannot explore the interactions between different independent high-risk factors18,19. In contrast, our study can better capture complex associations between different independent high-risk factors, thereby improving the accuracy of the model20. Previous studies have used nomogram methods to build models for predicting metastasis in patients with esophageal cancer based on SEER data, but these studies did not involve building an ML model to predict HM in advanced metastatic esophageal cancer21.
We then constructed six prediction models using ML. Internal ten-fold cross-validation (Fig. 3A) showed that the GBM model performed best among the six. Leveraging these findings, we have devised an openly accessible online calculator (https://project2-dngisws9d7xkygjcvnue8u.streamlit.app/) based on the GBM model. The model accurately predicts a patient's risk of HM from various clinical indicators; clinicians can access it through the provided website, input patient information, and obtain corresponding predictions of hepatic metastasis, thereby facilitating clinical decision-making.
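As a hedged sketch of this kind of gradient-boosting (GBM) workflow with ten-fold cross-validation, the snippet below uses scikit-learn's GradientBoostingClassifier as a stand-in and entirely synthetic placeholder data; it is not the authors' released code or their actual feature encoding.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder design matrix: rows are patients, columns are the 11 encoded
# predictors (age, primary site, histology, grade, T/N stage, treatments,
# other metastases).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 11))
y = rng.integers(0, 2, size=500)        # 1 = hepatic metastasis, 0 = none

gbm = GradientBoostingClassifier(random_state=0)

# Ten-fold cross-validated AUC, mirroring the internal validation described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(gbm, X, y, cv=cv, scoring="roc_auc")
print(f"Mean AUC: {auc.mean():.3f}")
```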
Our research has the following advantages. First, this study established a machine-learning model that can predict HM in patients with EC; to the best of our knowledge, we are the first to use ML to construct such a prediction model, and it is more reliable than the traditional nomogram prediction model. This work also expands our knowledge of advanced EC. Second, our study further explores the relationships between different independent high-risk factors, which provides a new direction for future clinical research. In other words, clinical research should not only examine whether patients develop metastasis, but also explore the correlations between different independent high-risk factors, so as to better understand the relationships among them and address, during the perioperative period, the factors that predispose patients to metastasis.
Meanwhile, this study has some limitations. First, current machine learning is almost entirely statistical or black-box, bringing severe theoretical limitations to its performance23. Second, this is a single-center study with a limited number of patients; applying machine learning models to larger datasets would yield more stable results22. Therefore, in subsequent studies, multi-center data can be added for training and external validation to obtain a more reliable prediction model. Third, this study did not include neoadjuvant therapy, surgical methods, circulating tumor DNA, and other factors that may affect the long-term prognosis of patients with esophageal cancer. In the future, as the database continues to improve, we will incorporate more parameters associated with HM of EC into the web predictor to improve its adaptability.
Read more here:
Prediction of hepatic metastasis in esophageal cancer based on machine learning | Scientific Reports - Nature.com
Deep learning links lung shape differences and COVID-19 severity – HealthITAnalytics.com
June 24, 2024 - A research team from Emory AI.Health used deep learning to determine that COVID-19 patients experience significant lung damage and deformities associated with the disease's severity, according to a study published in the Journal of Computers in Medicine and Biology.
The researchers emphasized that severe cases of COVID-19 can result in extensive lung disease and deformities, but the impact of disease severity on lung structure has not been well-studied to date. Studying the disease's effects on internal organs is critical to informing care strategies for COVID-19 patients.
"COVID-19 can cause serious complications such as pneumonia, severe lung damage, and blood infections, which can lead to lasting lung problems like scarring and chronic breathing issues. While some people recover fully, others may suffer permanent lung damage," explained first author Amogh Hiremath, an AI scientist at Picture Health, in a news release. "Understanding how COVID-19 affects the lungs during its early onset can help us better understand and treat the disease."
To investigate the relationship between COVID-19 severity and lung structure, the research team turned to deep learning.
Chest computed tomography (CT) scans were gathered from a cohort of 3,230 patients. These participants were then split into three groups based on COVID-19 presence and severity: healthy, mild COVID-19, and severe COVID-19.
This information was used to explore lung shape differences among the groups with baseline CT imaging. The researchers fed this data to a 3D residual convolutional network to segment and analyze each image.
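As a hedged illustration only (not the study's architecture), the snippet below is a minimal PyTorch sketch of the kind of 3D residual convolutional block such a segmentation network is built from, applied to a toy-sized CT volume.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """A basic 3D residual block: two 3D conv layers plus a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # residual (skip) connection

# One toy volume: batch=1, 16 feature channels, 32x64x64 voxels.
volume = torch.randn(1, 16, 32, 64, 64)
print(ResBlock3D(16)(volume).shape)        # torch.Size([1, 16, 32, 64, 64])
```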
In doing so, the deep learning approach allowed the team to build a map of lung shape changes. This analysis revealed that lung deformations were prominent in those infected with COVID-19, regardless of severity.
Across both mild and severe cases, differences along the mediastinal surfaces of the lungs were observed, and significant differences in the basal surfaces were found when the healthy and severe COVID-19 cohorts were compared.
The researchers indicated that these deformities are likely to impair lung function, leading to potential adverse outcomes in overall health, quality of life and mortality for affected patients. The team further noted that their findings could help shed light on the lingering effects of COVID-19 on lung function, especially as health systems and patients grapple with long COVID.
"Although the acute phase of COVID-19 has largely been mitigated, the persistence and impact of long COVID remains a concern. Our AI analysis identified specific areas of lung damage that could have enduring consequences," said Anant Madabhushi, PhD, executive director of Emory AI.Health and principal investigator of the study. "While we have not yet examined long COVID patients explicitly, it's crucial to investigate whether these individuals exhibit residual lung deformation, which could provide valuable insights into the long-term effects of this disease."
Visit link:
Deep learning links lung shape differences and COVID-19 severity - HealthITAnalytics.com
Assessing the risk of HCC with machine learning – Drug Target Review
A novel screening tool may increase the five-year survival rate of hepatocellular carcinoma patients to 90 percent.
Researchers at the University of Pittsburgh School of Medicine have developed a serum fusion-gene machine-learning (ML) model. Thanks to its enhanced accuracy in the early diagnosis of hepatocellular carcinoma (HCC), the most common form of liver cancer, this screening tool could increase the five-year survival rate of HCC patients from 20 percent to 90 percent.
The most common screening test searches for the HCC biomarker serum alpha-fetoprotein. However, it is not always accurate, and up to 60 percent of liver cancers are diagnosed at advanced stages, meaning poor survival rates for patients. Lead investigator Dr Jian-Hua Luo, Department of Pathology, High Throughput Genome Center, and Pittsburgh Liver Research Center, University of Pittsburgh School of Medicine, commented: "What we need is a cost-effective, accurate, and convenient test to screen early-stage liver cancer in human populations. We wanted to explore if a machine-learning approach could be used to increase the accuracy of screening for HCC based on the status of the fusion genes."
The team analysed nine fusion transcripts in serum samples from 61 patients with HCC and 75 patients with non-HCC conditions using real-time quantitative reverse transcription PCR (RT-PCR). Seven of the nine fusions were frequently found in HCC patients. ML models were then built to predict HCC in the training cohort based on the serum fusion-gene levels.
A four-fusion-gene logistic regression model achieved an accuracy of 83 percent to 91 percent in predicting the occurrence of HCC. When combined with serum alpha-fetoprotein, the two-fusion-gene plus alpha-fetoprotein logistic regression model achieved 95 percent accuracy across all cohorts. Additionally, quantification of fusion-gene transcripts in the serum samples accurately evaluated the impact of treatment and could be used to monitor for recurrence of the cancer.
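As a hedged sketch only (not the study's code), the snippet below shows how a logistic regression combining fusion-transcript levels with alpha-fetoprotein might be trained and cross-validated; all data are synthetic placeholders, with the cohort size loosely matching the 136 patients described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder inputs: per-patient RT-PCR levels for two fusion transcripts
# plus serum alpha-fetoprotein, and a binary HCC label.
rng = np.random.default_rng(1)
X = rng.lognormal(size=(136, 3))          # columns: [fusion_1, fusion_2, AFP]
y = rng.integers(0, 2, size=136)          # 1 = HCC, 0 = non-HCC condition

model = make_pipeline(StandardScaler(), LogisticRegression())

# Cross-validated accuracy, analogous to the model evaluation described above.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"Cross-validated accuracy: {acc.mean():.2f}")
```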
Dr Luo explained: "The fusion-gene machine-learning model significantly improves the early detection rate of HCC over serum alpha-fetoprotein alone. It may serve as an important tool in screening for HCC and in monitoring the impact of HCC treatment. This test will find patients who are likely to have HCC."
The study was published in The American Journal of Pathology.
See more here:
Assessing the risk of HCC with machine learning - Drug Target Review