
Ethereum’s Layer 2 Debate: Buterin Aligns with Daniel Wang on Validium Classification – Blockchain.News

Ethereum's co-founder, Vitalik Buterin, has sparked a significant discussion in the crypto community regarding the classification and nature of layer 2 scaling solutions, especially focusing on the concept of validiums. This debate arises following Buterin's agreement with a statement by Daniel Wang, the founder of the Ethereum rollup solution Taiko, on the classification of certain layer-2 solutions as validiums.

Buterin concurred with Wang's view that Ethereum rollups utilizing external data chains, such as the modular blockchain Celestia, should be considered validiums rather than traditional rollups. The crux of this classification lies in the security guarantees provided by these solutions. Buterin emphasized that the essence of a rollup is its unconditional security guarantee, which allows users to recover their assets even in cases of collusion. This level of security is compromised when data availability relies on external systems, a characteristic of validium networks.

Validiums, a subset of Ethereum scaling solutions, use zero-knowledge proofs to facilitate off-chain transactions while depending on Ethereum's mainnet for security and verification. Unlike zero-knowledge rollups that batch transactions on a layer-2 network and then verify them on Ethereum's main chain, validiums do not post full transaction data to the main chain. Instead, they post cryptographic proofs of the transactions' validity, aiming for greater scalability since the complete transaction data isn't stored on-chain. However, this approach has its drawbacks, notably in terms of data availability, as it depends on off-chain operators to keep the full transaction data available.

The debate around validiums and their classification as layer-2 solutions is not just technical but also conceptual. It reflects the evolving nature of Ethereum's infrastructure and the diverse perspectives within its community. Buterin, in response to the debate, suggested new terminologies, proposing terms like "strong" for security-favoring systems and "light" for scale-favoring systems, such as validiums. This proposal is part of a broader conversation about the trade-offs between security, decentralization, and scalability in the development of Ethereum's layer-2 solutions.

Despite the ongoing debates, the adoption of layer-2 networks like Arbitrum and Optimism is increasing, indicating a growing interest and investment in Ethereum's scaling solutions. The upcoming Dencun upgrade is anticipated to further boost the efficiency and appeal of these networks.

In summary, Ethereum's journey mirrors the early days of the internet, evolving from a niche technology to a mainstream platform. As Ethereum undergoes significant technical transitions, including the layer-2 scaling transition, it faces challenges similar to those the internet overcame. This evolution, marked by debates like the one sparked by Buterin, is a testament to Ethereum's dynamic and innovative community, striving to balance scalability, security, and decentralization.

Read the original post:

Ethereum's Layer 2 Debate: Buterin Aligns with Daniel Wang on Validium Classification - Blockchain.News


Ethereum leaders propose classification for layer-2 solutions By Investing.com – Investing.com

SAN FRANCISCO - Ethereum co-founder Vitalik Buterin and Taiko's Daniel Wang have put forward a proposal to classify layer-2 scaling solutions to enhance clarity in the Ethereum ecosystem. The discussion, which took place today, introduced a distinction between "strong" rollups and "light" validiums, terms aimed at navigating the trade-offs between security and scalability within the network.

The proposed classification system comes as the Ethereum community gears up for the Dencun upgrade, an initiative designed to bolster network performance. According to the conversation between Buterin and Wang, rollups are considered "strong" due to their method of posting full transaction data on the Ethereum chain, thus prioritizing security. On the other hand, "light" validiums focus on scalability by employing zero-knowledge proofs and storing only a hash on-chain.

With the Dencun upgrade on the horizon, the distinction between different types of layer-2 options is expected to play a significant role in the upgrade's success and the future scalability of the Ethereum network.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

Read the original here:

Ethereum leaders propose classification for layer-2 solutions By Investing.com - Investing.com


DeepMind AI rivals the world’s smartest high schoolers at geometry – Ars Technica

Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, attends the AI Safety Summit at Bletchley Park on November 2, 2023 in Bletchley, England.

A system developed by Google's DeepMind has set a new record for AI performance on geometry problems. DeepMind's AlphaGeometry managed to solve 25 of the 30 geometry problems drawn from the International Mathematical Olympiad between 2000 and 2022.

That puts the software ahead of the vast majority of young mathematicians and just shy of IMO gold medalists. DeepMind estimates that the average gold medalist would have solved 26 out of 30 problems. Many view the IMO as the world's most prestigious math competition for high school students.

"Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions," DeepMind writes. To overcome this difficulty, DeepMind paired a language model with a more traditional symbolic deduction engine that performs algebraic and geometric reasoning.

The research was led by Trieu Trinh, a computer scientist who recently earned his PhD from New York University. He was a resident at DeepMind between 2021 and 2023.

Evan Chen, a former Olympiad gold medalist who evaluated some of AlphaGeometry's output, praised it as impressive because it's both verifiable and clean. Whereas some earlier software generated complex geometry proofs that were hard for human reviewers to understand, the output of AlphaGeometry is similar to what a human mathematician would write.

AlphaGeometry is part of DeepMind's larger project to improve the reasoning capabilities of large language models by combining them with traditional search algorithms. DeepMind has published several papers in this area over the last year.

Let's start with a simple example shown in the AlphaGeometry paper, which was published by Nature on Wednesday:

The goal is to prove that if a triangle has two equal sides (AB and AC), then the angles opposite those sides will also be equal. We can do this by creating a new point D at the midpoint of the third side of the triangle (BC). It's easy to show that all three sides of triangle ABD are the same length as the corresponding sides of triangle ACD. And two triangles with equal sides always have equal angles.
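Written out as the kind of stepwise deduction the article is describing, the argument is short; here is a sketch of the congruence chain, assuming D is taken as the midpoint of BC:

```latex
% Sketch of the congruence argument, with D the midpoint of BC.
\begin{align*}
AB &= AC && \text{(given)}\\
BD &= DC && \text{(D is the midpoint of BC)}\\
AD &= AD && \text{(shared side)}\\
\triangle ABD &\cong \triangle ACD && \text{(side-side-side congruence)}\\
\angle ABD &= \angle ACD && \text{(corresponding angles of congruent triangles)}
\end{align*}
```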

Geometry problems from the IMO are much more complex than this toy problem, but fundamentally, they have the same structure. They all start with a geometric figure and some facts about the figure, like "side AB is the same length as side AC." The goal is to generate a sequence of valid inferences that conclude with a given statement, like "angle ABC is equal to angle BCA."

For many years, we've had software that can generate lists of valid conclusions that can be drawn from a set of starting assumptions. Simple geometry problems can be solved by brute force: mechanically listing every possible fact that can be inferred from the given assumption, then listing every possible inference from those facts, and so on until you reach the desired conclusion.
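As a rough illustration (a sketch of the idea, not DeepMind's deduction engine), brute-force forward chaining looks like this: keep applying inference rules to the set of known facts until the goal statement appears or nothing new can be derived.

```python
# Brute-force forward chaining over symbolic facts (an illustrative sketch,
# not DeepMind's code). Rules are (premises, conclusion) pairs.
def forward_chain(facts, rules, goal, max_rounds=100):
    known = set(facts)
    for _ in range(max_rounds):
        if goal in known:
            return True
        new_facts = {
            conclusion
            for premises, conclusion in rules
            if set(premises) <= known and conclusion not in known
        }
        if not new_facts:          # nothing new can be derived
            return goal in known
        known |= new_facts         # add newly derived facts and repeat
    return goal in known

# Toy run mirroring the isosceles-triangle example above.
facts = {"AB=AC", "BD=DC", "AD=AD"}
rules = [
    (("AB=AC", "BD=DC", "AD=AD"), "triangle ABD congruent to ACD"),
    (("triangle ABD congruent to ACD",), "angle ABD = angle ACD"),
]
print(forward_chain(facts, rules, "angle ABD = angle ACD"))  # True
```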

But this kind of brute-force search isn't feasible for an IMO-level geometry problem because the search space is too large. Not only do harder problems require longer proofs, but sophisticated proofs often require the introduction of new elements to the initial figure, as with point D in the above proof. Once you allow for these kinds of auxiliary points, the space of possible proofs explodes and brute-force methods become impractical.

Read more:
DeepMind AI rivals the world's smartest high schoolers at geometry - Ars Technica


Top Five Courses You Should Take at Harvard University – Analytics Insight

Harvard University is one of the most prestigious and renowned institutions for higher education in the world. With a rich history, a diverse faculty, and a wide range of academic programs, Harvard offers something for everyone. But with so many options, how do you choose the best courses to take at Harvard? Here are our top five recommendations for courses at Harvard University, based on popularity, relevance, and quality.

CS50 is Harvard's flagship course on computer science and one of the most popular courses in the world. Taught by the charismatic and engaging Professor David Malan, CS50 introduces you to the fundamentals of programming, algorithms, data structures, web development, and more. You will learn how to think computationally and creatively, and how to solve real-world problems using code. CS50 is a challenging but rewarding course that will equip you with the skills and knowledge to pursue a career in technology or any other field.

Course link

Justice is one of Harvard's most famous and influential courses, taught by the renowned philosopher Professor Michael Sandel. Justice explores the big questions of moral and political philosophy, such as what is the right thing to do, what is a fair society, and what is the role of government. You will engage with the ideas of thinkers such as Aristotle, Kant, Mill, Rawls, and Singer, and apply them to contemporary issues such as abortion, euthanasia, affirmative action, and income inequality. Justice is a course that will challenge your assumptions, broaden your perspectives, and inspire you to think critically and ethically.

Course link

Machine learning is one of the most exciting and powerful applications of data science and one of the most sought-after skills in the job market. In this course, part of Harvards Professional Certificate Program in Data Science, you will learn how to build a movie recommendation system and learn the science behind one of the most popular and successful data science techniques. You will learn the concepts and methods of supervised and unsupervised learning, such as regression, classification, clustering, and dimensionality reduction. You will also learn how to use Python and its libraries, such as pandas, numpy, scikit-learn, and matplotlib, to implement and evaluate machine learning models.
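For a sense of what that hands-on work looks like in practice, here is a minimal, self-contained sketch (not course material) of the kind of supervised-learning workflow the course describes, using scikit-learn:

```python
# A minimal supervised-learning workflow: split data, fit a model, evaluate.
# This is an illustrative sketch, not an excerpt from the Harvard course.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                       # small built-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)                # a simple classifier
model.fit(X_train, y_train)                              # supervised training
print(accuracy_score(y_test, model.predict(X_test)))     # held-out evaluation
```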

Course link

Neuroscience is the study of the structure and function of the nervous system, and is one of the most fascinating and interdisciplinary fields of science. In this course, taught by Harvard Medical School faculty, you will explore the entire nervous system, from the microscopic inner workings of a single nerve cell to the staggering complexity of the brain. You will learn the basics of neuroanatomy, neurophysiology, and neuropharmacology, and how they relate to sensation, perception, cognition, emotion, and behavior.

Course link

Communication is one of the most essential and valuable skills in any field or endeavor. In this course, you will learn how to write and speak effectively and persuasively, using the principles and techniques of rhetoric. You will also learn how to use rhetorical devices, such as ethos, pathos, logos, and kairos, to enhance your appeal and impact. You will also examine and critique examples of American political rhetoric, such as the speeches of Abraham Lincoln, Martin Luther King Jr., and Barack Obama.

Course link


See the rest here:

Top Five Courses You Should Take at Harvard University - Analytics Insight


Guérin wins grant to enhance atmospheric simulation speed – The Source – Washington University in St. Louis

Roch Guérin, chair of computer science and engineering at the McKelvey School of Engineering and the Harold B. & Adelaide G. Welge Professor of Computer Science at Washington University in St. Louis, has received a two-year $207,394 grant from the National Science Foundation to enhance the ability of 3D atmospheric simulation software to rapidly simulate how Earth's atmosphere responds to changes in its chemical composition.

The software, GEOS-Chem, is designed to study climate change, and the project brings together expertise in computer and atmospheric science and includes McKelvey Engineering faculty Kunal Agrawal, a professor of computer science and engineering, and Randall Martin, the Raymond R. Tucker Distinguished Professor in energy, environmental and chemical engineering and model scientist of the GEOS-Chem project.

By improving the speed at which those simulations can run, the project aims to improve researchers' ability to simulate and understand how Earth's atmosphere evolves, contributing to climate change research and mitigation with potential economic and societal impacts.

Read more on the McKelvey School of Engineering website.

See more here:

Guérin wins grant to enhance atmospheric simulation speed - The Source - Washington University in St. Louis


PNW Computer Science research team tests AI-powered gunshot detection technology – Purdue University Northwest

January 19, 2024

As artificial intelligence (AI) continues to rapidly grow as a contemporary technology, faculty and students at Purdue University Northwest (PNW) are researching its practical uses to benefit humans in many different capacities.

Wei (David) Dai, assistant professor of Computer Science and director of the Advanced Intelligence Software Lab, has been interested in the practical application of AI with public safety initiatives since he began his own doctoral research. With the help of three student research assistants and collaboration with the PNW Police Department, the group developed and successfully tested a Gunshot Detection System apparatus powered by AI. The technology may pave the way for improving safety on school campuses and reducing response times to incidents involving gun violence.

"When our team was approached by Dr. Dai and his Computer Science research assistants regarding this project, we were excited to partner with them on the trials, as well as provide guidance during the research process," said Brian Miller, director of Public Safety. "It is amazing to see PNW faculty and students' innovation as they apply their research and design proposed solutions that could deliver positive change to many others beyond Northwest Indiana."

Nearly instantaneous alerts

The Gunshot Detection System research is ultimately intended to help reduce law enforcement officials' response time, given the average delay in human recognition of gunshots.

To help inform its research, the team analyzed 15 case studies involving active shooter scenarios at schools. The researchers found five minutes was the approximate time it took for someone to recognize gunshots from a potential active shooter and call 911.

When tested, the Gunshot Detection System recognized a gunshot and was able to alert PNW Police in two to five seconds.

"Every minute that can be saved in response time in turn means lives that can be saved," said Brian Miller, director of Public Safety at PNW. "An average time of five minutes before a 911 call is way too long. With this technology, police officers can get the notification immediately as well as the audio."

The research team began by training the AI program to recognize gunshot sounds at a firearms shooting range. The team also taught the program to identify other loud noises, such as a popping balloon or a firework, in order to differentiate the sounds. By understanding how to contrast these noises, the program can accurately identify true gunshots and relay data to first responders within seconds to alert them to a potential active shooter.
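The article does not describe the team's implementation, but the classification step it outlines typically looks something like the following sketch: extract features from short audio clips and train a model to separate gunshots from other impulsive sounds (the library choices here are assumptions, not PNW's).

```python
# Illustrative sketch of a sound classifier, not the PNW team's system.
# Assumes labeled clips of gunshots, balloons, fireworks, etc.
import numpy as np
import librosa                                    # audio feature extraction
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    """Summarize a short audio clip as a fixed-length MFCC feature vector."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_detector(clips):
    """clips: list of (wav_path, label) pairs, where label 1 means gunshot."""
    X = np.array([clip_features(path) for path, _ in clips])
    y = np.array([label for _, label in clips])
    return RandomForestClassifier(n_estimators=200).fit(X, y)
```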

The research team furthermore trained the AI program to recognize the locations of gunshots inside large, multistory buildings. Partnering with the PNW Police, researchers tested blank ammunition for the AI to recognize. The program had nearly perfect accuracy on recognition of the first gunshot, followed by 100% accuracy in recognizing a second gunshot in each trial. The program also pinpointed which floor location the gunshots came from.

"If you are a police officer responding to an active shooter in a multi-level building, it can be difficult to identify the shooter's location because of echoes and other sounds," said Miller. "This device can tell us the direction and floor, such as the north side of the building and the second floor. That is a tremendous help to a response in identifying an active shooter."

Practical uses

Historically, other commercial gunshot detection technologies have existed on the market. Police departments, for example, may contract with companies for technology that can accurately pinpoint gunshots in different neighborhoods. However, these technologies can cost up to hundreds of thousands of dollars to cover a wide area, said Miller.

Dai estimates installation of the Gunshot Detection System technology would only cost up to $5,000 per module.

"I am incredibly proud of the research team's work and PNW Police's help in studying this technology's effectiveness," said Dai. "At PNW, many of our projects involve applied research that can demonstrate positive impacts and contribute to social good. Artificial intelligence allows us to investigate new possibilities that in turn can become tangible benefits for people."

Read the rest here:

PNW Computer Science research team tests AI-powered gunshot detection technology - Purdue University Northwest


Five questions on adult computer literacy with Assistant Professor of Computer Science Fred Agbo – willamette.edu

Computing is the cornerstone of modern society, and the training and education offered at Willamette's School of Computing and Information Sciences has never been more important for those looking to find solutions to the world's greatest challenges. Yet, according to Assistant Professor of Computer Science Fred Agbo, a key demographic is being left out of computing education: adults.

Computer literacy isn't just important for job and career success. For older adults, Agbo says, computing can provide other benefits in daily life, including supporting cognitive development, building confidence, and empowering active citizenship. But a lack of funding and attention has made gaining these crucial skills more challenging for adults.

Agbo's research on broadening participation in computing (BPC) was recently accepted at the Technical Symposium on Computer Science Education, one of the world's premier computer science conferences. We spoke with Professor Agbo to learn more about his research and about how adults can be brought into the computing revolution.

1. What is BPC and what does it mean to make computing more inclusive?

Agbo: BPC means broadening participation in computing. It is a program that engages the computing community to design strategies and frameworks to increase the participation of underrepresented groups in computing.

The goal of the BPC program is to democratize computing education by ensuring that more folks who normally would not have access to computing literacy are motivated to study computer science and even pursue a career in it. Moreover, this program promotes diversity, equity, and inclusion in all educational contexts, such as K-12, college, and post-college adult education.

While there is evidence on BPC in K-12 education, there are limited studies that showcase adults in computing education. This is a fundamental gap that must be addressed. My research amplifies this gap, advocates for BPC in adult education, and aims to inspire discussion about how to advance computing education for adults.

2. Why is it so important for adults to receive computer education?

Agbo: In a digitally evolving world and with the rapid diffusion of technology influencing every aspect of life, everyone needs to be computer literate. Computer literacy in this context entails acquiring 21st-century skills such as creativity, computational thinking, and problem-solving abilities, which can be applied in all areas of life.

Adults, whether working, retired, or senior citizens, do not just need these skills to remain relevant at their jobs or to change careers, but also to unravel contemporary problems in daily life. In addition, adults can develop their cognitive capabilities through computing education, which will keep them active as they age. Computing education can also empower adults to uphold lifelong learning through citizenship education.

3. What does your research suggest about past efforts to expand computing education for adults?

Agbo: The digital disparity between older adults and the younger generation is not clear in the academic literature. My research investigated this gap by systematically examining the literature and found that attempts towards BPC in adult education started as far back as the 1980s. However, there has been little to no significant progress made over the years.

This study also investigated the learning outcomes for adults and found that there are positive gains from BPC in adult education. For example, studies show that computing education for adults increases their motivation, interest, self-confidence, and computational knowledge.

4. Why has computing education been slow to expand to adults?

Agbo: It's difficult to say. However, there is evidence that suggests limited funding to support the development of BPC in adult education. Another issue that may have limited BPC in adult education is the scant attention it receives from computing educators and scholars. Advocacy for BPC in adult education is necessary, which is one of the contributions of this paper.

5. How can we help improve adult computer literacy?

Agbo: Computer science educators and researchers must identify the need for inclusion in designing strategies, frameworks, and curricula for BPC, where the adult education context is also considered a significant part of the program. Funding should be made available to the community of computing educators to carry out studies on developing curricula for computing education for adults.

Thankfully, the Special Interest Group on Computer Science Education, a highly respected symposium in the community, has recognized this need. This paper will hopefully engender action toward developing adults' computing education.

Originally posted here:

Five questions on adult computer literacy with Assistant Professor of Computer Science Fred Agbo - willamette.edu


ColorStack aims to help Black and Latinx computer science students | Binghamton News – Binghamton

ColorStack, a national organization dedicated to increasing the number of Black and Latinx computer science graduates who go on to launch rewarding careers, founded its newest chapter at Binghamton University during the spring 2023 semester.

The mission of ColorStackBU is to increase opportunities and foster academic success for students from historically underrepresented backgrounds.

ColorStackBU's president, Julian Ortiz '26, started the group to support other students who are coming to college looking for community.

"Our three pillars are social, technical and professional," Ortiz said. "We want to bring people together on every level."

Since chapter approval at the end of the spring 2023 semester, ColorStackBU has attracted membership from students throughout the Thomas J. Watson College of Engineering and Applied Science. During the semester, the organization hosts events where students can learn technical skills needed in their fields, get professional photos taken, and network with other computer science graduates who are already established in their careers.


"ColorStackBU is a great way for us to get up to speed and help us support each other's personal and professional development," Ortiz said. "A lot of us come from underserved high schools and are touching computer science for the first time in college, and it helps to be surrounded by other students in similar positions."

ColorStackBU's vice president, Bryan Perez '25, has seen the organization flourish over the fall semester and has high hopes for expanding ColorStackBU in the new year.

"We are reaching a point where we're all getting internships, we're all getting fellowships, and it's nice that the mission of our organization was met with such enthusiasm from so many students," Perez said.

As hardworking students, ColorStack members experienced a lack of resources for Black and Latinx students pursuing a degree in computer science and wanted to fill the gap.

"There hadn't been a large focus on people of color in computer science, and we felt like we needed to provide a space," Ortiz said. "Instead of splitting up the community into different subgroups, we decided to start an organization open to everyone."

As a first-year student, Hilary Rojas Rosales '27 was quickly drawn to ColorStackBU. She is now one of the group's interns.

"I was looking around for a student organization to join upon starting college, and I saw ColorStack's new chapter advertised as a student-run network that places an emphasis on people of color in computer science fields," Rojas Rosales said. "It felt like that 'aha' moment, like when you find something you've been looking for."

ColorStackBU is invested in making sure incoming students from Black and Latinx backgrounds have been exposed to the same resources as some of their more privileged peers, and it frequently hosts events outside of normal class hours to give students an equal opportunity to participate.

"Many of us are coming into college with no idea what a résumé is or what LinkedIn is, because they didn't teach us those things in high school," Rojas Rosales said. "ColorStack recognizes that students from underrepresented backgrounds should have those things available to them."

As more than just a professional development organization, ColorStackBU also hosts a series of cultural events where students can celebrate their heritages with one another while networking.

"My favorite event so far was Sip and Apply, where students got to make and enjoy a special Mexican drink while applying to internships," Perez said. "We love to have people come and enjoy each other's culture, and just talk with one another. We're all looking to help each other out."

With several exciting events coming up for spring 2024, the executive board of ColorStackBU is determined to expand membership and get more people on campus talking about the group.

"We want to see our community flourish, and for a lot of us who are first-generation computer science students, we come to Binghamton for that community," Perez said. "Being able to be mentors for each other and give back to one another is so important, and we can only grow from here."

Ortiz also feels as if his life has changed since he founded ColorStack, and he wants students to know that the same opportunities are available to them.

"I was so unsure of myself, and far less secure in my position before I realized networks like ColorStack's were available to me," he said. "Finding a strong community full of people who I can relate to within my field made me a lot more confident. Now, I am sure that this is the path I want to go down. As an organization, we strive to bring the same security to our peers."

Read the original here:

ColorStack aims to help Black and Latinx computer science students | Binghamton News - Binghamton


New hope for early pancreatic cancer intervention via AI-based risk prediction – MIT News

The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand the elusive and deadly disease. To date, there is no better cancer treatment than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is particularly elusive for early detection.

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models' reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.

The two models, the PRISM neural network and the logistic regression model (a statistical technique for probability estimation), outperformed current methods. The team's comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times-higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at this same threshold.
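For readers unfamiliar with the metric, the comparison above is a sensitivity measurement at a fixed relative-risk cutoff; a minimal sketch (an illustration that assumes calibrated risk scores, not the study's evaluation code) looks like this:

```python
# Sensitivity of a risk model at a relative-risk threshold (illustrative only).
# Assumes 'scores' are calibrated probabilities, so relative risk is the score
# divided by the overall incidence in the population.
import numpy as np

def sensitivity_at_relative_risk(scores, labels, relative_risk=5.0):
    """labels: 1 for PDAC cases, 0 otherwise; returns the share of cases flagged."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    baseline = labels.mean()                         # overall incidence
    flagged = scores >= relative_risk * baseline     # patients above the cutoff
    return labels[flagged].sum() / labels.sum()
```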

Using AI to detect cancer risk is not a new phenomenon; algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear tests and HPV testing, to name a few applications. "The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field," says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models' generalizability and interpretability.

"This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer," says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. "This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention."

Prismatic perspectives

The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. "Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option," says senior author Appelbaum, who is also a Harvard Medical School instructor as well as a radiation oncologist. This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).

The CSAIL group's close collaboration with Appelbaum made it possible to understand the combined medical and machine learning aspects of the problem better, eventually leading to a much more accurate and transparent model. "The hypothesis was that these records contained hidden clues, subtle signs and symptoms that could act as early warning signals of pancreatic cancer," she adds. "This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care."

Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.

One paramount point for gaining the trust of physicians, the team notes, is better understanding how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team to refine the thousands of potentially predictive features derived from EHR of a single patient to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians' understanding of risk factors associated with pancreatic cancer.
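To make the interpretability point concrete, here is a small sketch (synthetic data and feature names are assumptions, not the PRISM code) of how a fitted logistic-regression risk model can be inspected by ranking features by coefficient magnitude, the kind of step that narrows thousands of candidate features to a short list of indicators:

```python
# Inspecting a logistic-regression risk model (illustrative sketch, not PRISM).
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "diabetes", "visits_last_year", "bmi"]      # hypothetical
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))                      # synthetic features
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(size=500) > 1).astype(int)  # synthetic labels

model = LogisticRegression(max_iter=1000).fit(X, y)

def top_indicators(model, names, k=3):
    """Rank features by the magnitude of their learned coefficients."""
    coefs = model.coef_.ravel()
    order = np.argsort(-np.abs(coefs))[:k]
    return [(names[i], round(float(coefs[i]), 3)) for i in order]

print(top_indicators(model, feature_names))   # e.g. diabetes and age on top
```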

The path forward

Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. U.S. data alone are the current diet for the models, necessitating testing and adaptation for global use. The path forward, the team notes, includes expanding the model's applicability to international datasets and integrating additional biomarkers for more refined risk assessment.

"A subsequent aim for us is to facilitate the models' implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload," says Jia. "A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives."

Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.

More:

New hope for early pancreatic cancer intervention via AI-based risk prediction - MIT News


A group of LSU students are leveraging AI to fight cancer – The Reveille, LSU’s student newspaper

Four LSU students developed an AI tool last fall that could enable hospitals to automate cancer staging.

"Our project was to develop a large language model that can take cancer pathology reports, specifically breast cancer, and give them to LLMs that both tell the staging of the cancer and help patients to better understand the reports themselves," said junior Yueh Wang.

Wang is one of four computer science majors, including senior Kyle McCleary, junior Aditya Srivastava and sophomore Jamar Whitfield, who developed the tool last fall in an honors course at the university.

The project's sponsor was professor Lucio Miele, chair for the Department of Genetics at LSU Health New Orleans School of Medicine. Miele's work has been featured in biomedical journals, and he serves on multiple grant review panels and foreign research funding agencies.

Staging is the process by which doctors determine a cancer's severity based on its size and spread. There are a handful of staging protocols. The most common is the TNM system, which looks at the size of the tumor, the condition of nearby lymph nodes and the degree to which the cancer has metastasized, or spread through the body.

The five stages of cancer, zero through four, provide a general description of specific staging protocols like TNM.

The LSU students' staging tool used optical character recognition to scan doctors' notes, then determined the stage with the TNM system based on the extracted information.

Staging can be a long and time-consuming process for medical professionals because it involves collecting and synthesizing many documents.

"They collect thousands of reports. They have a very large number of files they have to go over, like pathology reports. Our involvement in it will improve their efficiency and save time," Wang said.

After the staging process, medical staff can export the information to JSON, CSV or Excel files to make them more compatible with their own systems.

Baseline tests last semester reached 90% to 92% effectiveness. Among the measures to ensure accuracy is a multi-pass process that assesses each stage and cross-references its findings. Cross-referencing uses what's called the Swiss cheese model, in which layered checks compound accuracy and bring the error rate down near zero.

"We're going to thoroughly verify the accuracy of the pipeline is at least 98% to 99%," McCleary said.

In cases of uncertainty, the tool is trained to report an inconclusive result rather than a confident falsehood, what computer scientists call a hallucination.
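The article does not publish the pipeline itself, but the multi-pass idea it describes can be sketched as follows (a hypothetical illustration: the model call is a placeholder, and full agreement across passes is one possible acceptance rule):

```python
# Hypothetical sketch of multi-pass TNM extraction with an "inconclusive" fallback.
from collections import Counter

def stage_component(report_text, component, ask_llm, passes=3):
    """ask_llm(report_text, component) is any callable returning e.g. 'T2' for 'T'."""
    answers = [ask_llm(report_text, component) for _ in range(passes)]
    value, count = Counter(answers).most_common(1)[0]
    return value if count == passes else "inconclusive"   # demand full agreement

def stage_report(report_text, ask_llm):
    """Produce a TNM dictionary such as {'T': 'T2', 'N': 'N0', 'M': 'M0'}."""
    return {c: stage_component(report_text, c, ask_llm) for c in ("T", "N", "M")}

# Stand-in model call for demonstration; a real pipeline would query an LLM.
fake_llm = lambda text, component: {"T": "T2", "N": "N0", "M": "M0"}[component]
print(stage_report("example pathology report text", fake_llm))
```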

All personal information is redacted before the tool processes a report, so no sensitive information is used or put at risk.

"Security in medicine is more important than almost any other industry, so being able to properly ensure we can deploy this stuff at scale without putting anything in danger is very important," McCleary said.

The group also developed a framework and web app for deploying LLMs and other AI tools, named QueryLake. The model has a similar interface to ChatGPT but implements new features on top of existing language models.

QueryLake can be used to search uploaded documents from textbooks to pathology reports. Responses to queries have relevance scores for sourcing and show where the information was found for verification purposes.
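QueryLake's internals are not described in the article, but returning scored, sourced passages for a query can be sketched with a simple retrieval baseline (TF-IDF similarity here is an assumption for illustration, not QueryLake's implementation):

```python
# Illustrative document search with relevance scores (not QueryLake's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def search(query, documents, top_k=3):
    """documents: list of (title, text); returns (title, relevance score) pairs."""
    texts = [text for _, text in documents]
    vectorizer = TfidfVectorizer().fit(texts + [query])
    doc_vecs = vectorizer.transform(texts)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    ranked = sorted(zip(documents, scores), key=lambda pair: -pair[1])[:top_k]
    return [(title, float(score)) for (title, _), score in ranked]

docs = [("Pathology notes", "breast cancer staging and TNM reports"),
        ("Campus parking", "regulations for student parking permits")]
print(search("cancer staging", docs))
```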

Like the staging tool, QueryLake prioritizes security.

"We've been developing a very generalized framework that can be used for all types of stuff," McCleary said. "I've built it with a database that's properly secured. If they host documents on it, they're all encrypted."

More specialized pipelines, like the staging tool, can use permission locks to block access to protect patient information.

Unique to the QueryLake model is the option to make collections of documents, something McCleary compared to playlists. This would allow users to switch between collections with ease.

These collections could be shared, which offers countless applications, including use by instructors.

McCleary said he sees QueryLake as the perfect tool for instructors who want to push productivity as much as possible, especially in technical courses like programming.

Common to the rise of AI is the anxiety that more able computers will mean less work for people, but McCleary sees it differently. For him, the two aren't mutually exclusive.

"Pessimists will say we're automating jobs away, but I think every meaningful job can save time not doing the menial stuff," McCleary said. "We get to recycle our own time by using it on something more valuable."

Continue reading here:

A group of LSU students are leveraging AI to fight cancer - The Reveille, LSU's student newspaper
